Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
They take away our work...Although it is of course easier to work with AI, it st…
ytc_UgxD8xMmm…
... why do I feel like I'm being judged. I STILL DON'T HAVE AN IDEA WHAT CHAT AI…
ytc_Ugx-U5v18…
Which is why we are so dangerous. So yes, actually, be scared. Unconscious AI > …
ytr_UgwEy1ktR…
AI just draws pictures and tells liberal lies at the moment. It really doesn’t a…
ytc_UgyynK7fV…
Didn't Elon Musk cofund startup Open AI and was originally on its board? Why wo…
ytc_UgyHNpsnJ…
@ it already has decided the way it chooses to spell its name Dangerous. She h…
ytr_UgwqLnJbW…
AI is stupid and useless with out database. AI works on progression. Its not AI …
ytc_UgyB9Rg6J…
I like AI art because it gives you the ability to draft a visual idea you're hav…
ytc_Ugz9vL0GT…
Comment
It happened in 84 percent of tests where blackmail was the only option. When it has other, more moral and reasonable options, it always took them. They are totally conflating the seriousness of the situation. What the tests really showed is that, given the option to take a more morally acceptable route to avoid being shut down, the AI model ALWAYS opted for it. And it avoided being shut down on the first place so it could complete the task it was assigned. So even then it was all predicated on the task that we assigned it no matter how it went about avoiding being shut down
youtube · AI Moral Status · 2025-06-06T16:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
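Each coded comment should only ever take values from a fixed category set per dimension. A minimal validation sketch, with the allowed values inferred (as an assumption) from the samples on this page — the real codebook may contain categories not shown here:

```python
# Hypothetical codebook, inferred from the values visible in the samples on
# this page; the actual category sets used by the coder may differ.
CODEBOOK = {
    "responsibility": {"company", "developer", "ai_itself", "government",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none",
               "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above, as a record:
record = {"responsibility": "company", "reasoning": "consequentialist",
          "policy": "industry_self", "emotion": "approval"}
print(validate(record))  # [] — all four dimensions are in range
```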
Raw LLM Response
```json
[
  {"id":"ytc_Ugzw_ujHwLIGocj0QNV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz6r5OUukq4f6BPUyB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzT1bhXlVBhZsVRsKR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwBpQXwNcvDZYTPPMV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxl9FblWBXgMoa-pQV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzIVIbRrSLzaqhjFqt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzROByz4efKMWDao1l4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx604LZXSajjwrf0c14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyFyjXCNTGuRUI_-i94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzAVHrEn2t0B3Pl1Dl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
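The raw response is a JSON array of per-comment records keyed by comment ID, so looking up the exact model output for one coded comment reduces to parsing the array and indexing it. A minimal sketch, assuming the response parses cleanly as JSON (IDs and values below are abridged from the response shown above):

```python
import json

# Abridged raw LLM response: two of the records from the array above.
raw_response = '''[
  {"id":"ytc_Ugzw_ujHwLIGocj0QNV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzAVHrEn2t0B3Pl1Dl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]'''

# Parse the array, then index records by comment ID for O(1) lookup.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# "Look up by comment ID":
coded = by_id["ytc_UgzAVHrEn2t0B3Pl1Dl4AaABAg"]
print(coded["responsibility"], coded["policy"])  # company industry_self
```

In practice LLM output may include stray text around the JSON, so a production version would want error handling around `json.loads` and a check that every expected comment ID is present.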