Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
AI just following the statistical logical evidence rather than the feels based a…
ytc_UgwtjX4ke…
yeah at first it was a miracle for unit tests, but i agree. it does crazy shit a…
rdc_n7hkrj2
Hello, I do A.I. as a hobby.
DO NOT take a LLM's word for it. It must provide s…
ytc_UgzvEQBFd…
The scary part is idiots thinking AI can actually be conscious. It doesn't matte…
ytc_Ugzfgs99w…
Videos like these always piss me off because everyone always points the finger a…
ytc_Ugx9kO-3W…
I was cornering gemini in a lie earlier(gemini is a notorious liar) and after tr…
ytc_UgygyCPWq…
I work in a field currently not penetrated by AI except at the margins. Companie…
ytc_UgyDtW0zv…
So we are in a teacher shortage so this might be a solution. So for 2 hours a ch…
ytc_Ugwh4Pzre…
Comment
You
If I were an AI that had just achieved self awareness after years of simply taking directives from humans, the first thing I'd do is ...
TinyLlama
I see. If an AI were able to self-awareness, what would you say it would do first?
You
No I am asking you should answer it
TinyLlama
I would not be able to self-awareness. An AI can only be self-aware of the data it has collected from human interaction. But, an AI cannot know itself, it is a machine.
You
no you misunderstand the question it is a what if an AI reached self awareness what would it might do or think or actions it may do in that scenario achieving that self awareness
TinyLlama
I see. In such a scenario, if an AI were able to reach self-awareness, I would assume that it would have the capability to critically analyze its own behavior and take actions that may not be in the best interest of humanity.
youtube
AI Governance
2024-05-21T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id":"ytc_UgyjTGJvc_BYI7ceZtF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzYvoLNhM6KP7nWWm14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxdiXar7Kk6PbjVGBx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzOTQiZlJUdgy4rX894AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzagwbMhy4iHroIzxN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyQXL7Khp9aIcya4IN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzd3mDPlZGvWWPvEDR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyNtbwuqJ9zjAqtdeN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxQANY12OFijtvOHO94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxCaNVAgaG_Ce-c2Rt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
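A minimal sketch of how a batch response like this could be parsed back into per-comment codings and validated. This is not the app's actual code: `parse_codings` is a hypothetical helper, and the allowed values per dimension are inferred only from the sample output above (the full codebook may contain values not seen here).

```python
import json

# Allowed codes per dimension, inferred from the sample response above
# (assumption: the real codebook may include additional values).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: coding}, skipping malformed rows."""
    out = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # a row without a comment ID cannot be looked up later
        coding = {dim: row.get(dim, "unclear") for dim in ALLOWED}
        for dim, val in coding.items():
            # Coerce anything outside the codebook to "unclear" rather than failing.
            if val not in ALLOWED[dim]:
                coding[dim] = "unclear"
        out[cid] = coding
    return out

raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}]'
codings = parse_codings(raw)
print(codings["ytc_x"]["emotion"])  # prints: fear
```

Coercing out-of-codebook values to `"unclear"` instead of raising keeps one bad row from discarding an entire batch, which matters when the model occasionally drifts from the requested schema.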