Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
They should sue. Clearly these machines were made in their likeness. There shou…
ytc_Ugz-_hTI-…
I was quite impressed by a recent paper on "gradual disempowerment" (see link in…
rdc_mbv3nux
"If you find the AI starting to act up and disobey, please unplug its power cord…
ytc_UgxuY81-L…
Every time an AI asks me to say something about myself I just tell a story of ho…
ytc_UgzqH6COf…
You have a fundamental misunderstanding of how LLMs work, they don’t train on pr…
ytc_UgyAnFpko…
The Google AI is utterly mechanical and stupid and boring and definitely nothing…
ytc_UgyVh8yMD…
this old dude is here on youtube talking to us like we are slaves and we should …
ytc_UgyXgJtK-…
I don’t hate AI I think it’s really impressive but I hate the people who do bad …
ytc_UgyI4wrur…
Comment
would it be possible to make AI just as unsure of whether or not its outside of its test environment? Or hopefully it's there by default. So even if it were deceptive.. it always plays nice in order to not risk being turned off for bad behavior.
As an aside... that might make finding out if this is in fact a simulation... one of its core side goals
youtube
2024-12-28T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz--w5v9NLFuI0HLNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyK9PqiG93vC5Z5qgh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz5PNBsAB935H-5Fh94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwHfOPWI1WN22d0AvR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyxhagj0nv-U8MyCTt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzFrqnZ7sdgoG3bF4R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzwqblHF83JZ5MCWvR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxIiyW4_QdAqW7rdNV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwdEojchf_Bj2bqH9V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzawtYWlWT1Z9p0sYV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
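A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is illustrative only: the allowed values per dimension are inferred from the samples and the table on this page, not from an official codebook, and the `parse_coding` helper is a hypothetical name.

```python
import json

# Allowed values per dimension, inferred from the samples on this page
# (the full codebook may define additional categories).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "unclear", "mixed"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"fear", "approval", "indifference", "mixed", "outrage",
                "resignation"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded records) and
    keep only records whose values fall inside the known vocabulary."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: one valid record, one with an out-of-vocabulary emotion.
raw = (
    '[{"id":"a","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"industry_self","emotion":"approval"},'
    '{"id":"b","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"sarcasm"}]'
)
print([rec["id"] for rec in parse_coding(raw)])  # only "a" survives
```

Dropping (rather than coercing) out-of-vocabulary records keeps malformed model output out of the dataset; rejected IDs can be re-queued for recoding.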