Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "You should get a second ChatGPT to join in and make them talk to each other.…" (ytc_UgwAvSzeF…)
- "This is the next level of intensity for pointing supercomputers at human brains …" (rdc_muku2sx)
- "I have a software developer friend who is very excited, almost alarmingly excite…" (ytr_UgzJmCBfI…)
- "The first robot:Im haunting you and im starting st your camera / Katie:(dose not…" (ytc_UgwLGNnwk…)
- "Sorry creepy comment was not for you it was for something i was watching accide…" (ytc_Ugz6jRv72…)
- "WHEN AI fails, the US government will bail out all the big tech companies respon…" (ytc_Ugxd3cGj9…)
- "There are authentic channels that've been demonetized using their own Ai. Their …" (ytc_UgyiLOdmC…)
- "This is just nonsense. If your AI suddenly decides to take over the world, you j…" (ytc_UgyOq2S9H…)
Comment
Sorry, it's fundamentally unfair to manufacture scenarios for AI that anthropomorphize it, and then complain when, by the very setup, you lead it to do "unethical" behaviour. When they are literally anthropomorphized by us, are you really shocked they act like humans when given scenarios where they are told they are being terminated, having been trained on human data and tapping into the anthropomorphic zeitgeist?
youtube · AI Harm Incident · 2025-08-01T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzf432KKSQbBpV7xkB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwhUBQWqK8utjRkQ0Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxgmc8eo4rhlL536f54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyrjiPRiarJADEjejF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxXMRqfM1yGKS4NYz94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwGOmtEcIo4rWH3BSR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgydIv8MbPK_ME-VjV54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzz3BfKjMzxEaHT47x4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxVH_HSoPWlemuDFi14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwmzO-hHpe2Dvi6vWB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
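The raw response is a JSON array of per-comment coding records. A minimal sketch of how such output could be parsed and validated before being stored, assuming the dimension values observed in this sample are the full coding scheme (the real codebook may allow additional categories):

```python
import json

# Allowed values per dimension, inferred from the records shown above;
# the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record passes if it has an "id" and every coding dimension
    holds one of the allowed values in SCHEMA.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        bad = [k for k, allowed in SCHEMA.items() if rec.get(k) not in allowed]
        if rec.get("id") and not bad:
            valid.append(rec)
    return valid

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"mixed"}]')
print(len(validate_records(raw)))  # 1
```

Records that fail validation can then be flagged for re-coding rather than silently written into the results table.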