Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
You guys!! SOMEONES "those 2 robotic AI creations" are f&@%ing talking in code a…
ytc_UgwolmxLe…
We were already struggling with our power grid before, now we have around 5000 n…
ytc_UgxIagUrO…
he is talking about conscious which is inherent in every human, when they are bo…
ytc_UgxPSzGq1…
It is going to happen. This species is already heading in that direction due to …
ytc_UgxHb9kbI…
well AI is useless if you don't know the concepts and what to ask from it. Most …
ytr_UgxWD3uxR…
There's a lot of optimism in these comments. However, the oligarchs won't need m…
ytc_UgwNdPmq3…
Aha,, ha,, ha,, ha, look at the feet of the robot it's not touching the ground.…
ytc_UgzGXkygl…
8:37 Anytime there is a computer or algorithm that has racial bias, i just find …
ytc_Ugy1RE4ja…
Comment
Good video! But if you bring up AI scenarios like this in the future, please look into the technical way it works and the transcripts of the many un-alivings it has provoked. AI does not "think" as you describe, it is not self aware and any use of "I" reflects no actual concept of existence, and most (eventually all) safeguards built can be bypassed, often easily.
You may be right that what happened here was inevitable, but I find it extremely likely that AI explicitly rooted him along the way and OpenAI is twisting its words—as has happened many times before.
youtube
AI Harm Incident
2026-01-23T06:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgwrhhXFt70_E7ad25d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxn5797sryiA-qEcxh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz01P4JHO_knrNuSch4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwDR5g7_wFyEKX7XyN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugwd_nZUbQsa0go-O4N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzKr-ahiqdVy7Pm2fh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzmyrdSJNTXnWYwIQt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxx2_5b0dT60jFov-t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyMK7ZIce0Hhh3dJVt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzbgLZQSKR82r1DOzZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
```
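A raw response like the one above can be parsed and sanity-checked before the per-comment codes are stored. Below is a minimal sketch in Python; the allowed label sets in `SCHEMA` are inferred from the values visible in this dashboard and are an assumption, not the project's actual codebook.

```python
import json

# Allowed values per coding dimension — inferred from the labels shown in
# this dashboard (hypothetical: the real codebook may include more values).
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "mixed", "resignation", "indifference", "fear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only records whose
    labels fall inside the allowed set for every dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Example with one well-formed record (hypothetical comment ID):
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}]')
print(parse_coding_response(raw))
```

Dropping (rather than repairing) invalid records keeps the pipeline simple; rejected IDs could instead be logged and re-coded in a follow-up pass.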