Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or inspect one of the random samples below.

Random samples:

| Comment (truncated) | ID |
|---|---|
| This argument is not correct because you can ask ChatGPT to not use your life fo… | ytc_UgzRjYDCv… |
| "AI is the future" mfs when the robot apocalypse wipes out humanity because peop… | ytc_UgwdcQsFj… |
| You will need oil to oil the robot machines. We could always make sure they nev… | ytc_UgymKok4I… |
| We have become so arrogant in our pursuit of intellect we have tossed aside our … | ytc_UgyGYVW78… |
| Agree with the sentiment, but the genie is out of the bottle. AI is the new chea… | ytc_UgyQfIAhJ… |
| I think the timeline is off. As of today AI cannot reason and depending on who y… | ytc_UgwabpL54… |
| So, what NaNoWriMo are saying, is that their position on AI is... missionary. … | ytc_Ugx2K8tVh… |
| When a judge uses "bogus" in response to your lawsuit, that can't be good. When … | ytc_UgzjN0_Yi… |
Comment (youtube · AI Harm Incident · 2025-09-22T11:2…)

9:22 does that imply, that when agents meet for the first time, they check if their goals align? What if they don't? Won't they try to end each other to ensure, that there won't be interference in trying to reach their goals?
WWIII could be AI robots from meta fighting X robots and OpenAI. That would be a hell of a tie to be alive, tho.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzGNARMuqDogRF6fUh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyBelMTfB6p9ug2Bx14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzFb0SHkBh5HLNUrzF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugxumbh0zt-rHabnmzB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyUK8ayVsVwZSQPhXh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyvN8gS0KC81gULdSl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzcq5LzC9QmfYm0Usl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzQOS0qOQ_qZTc_art4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxxVwOc5K0Y6OrK1a94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy8-4MtYJgFuMY9h994AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
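A raw response like the one above has to be parsed and sanity-checked before the codes are stored against comment IDs. Below is a minimal sketch of that step; the allowed values per dimension are inferred from the codes visible in this dump (the actual codebook may define more categories), and the function name is illustrative, not part of the tool.

```python
import json

# Allowed codes per dimension, inferred from the values seen in this dump.
# Assumption: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "company", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"ban", "none", "industry_self", "regulate", "unclear", "liability"},
    "emotion": {"outrage", "resignation", "indifference", "mixed", "fear", "approval"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment ID.

    Raises ValueError if a record is missing a dimension or uses an
    unknown code, so malformed model output is caught before storage.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {value!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example: look up the coding shown in the table above by its comment ID.
raw = ('[{"id":"ytc_UgyvN8gS0KC81gULdSl4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"fear"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgyvN8gS0KC81gULdSl4AaABAg"]["emotion"])  # fear
```

Validating at parse time (rather than at query time) means a single hallucinated code in a batch fails fast instead of silently skewing the coded dataset.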