Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgyVNyaw5…: "The extra insult to injury was them posting their stupid AI llamas on their Twit…"
- ytc_UgySfxozj…: "I've been in a very depressed, anxious and hopeless mood since the whole AI thin…"
- ytc_UgxMwBN8m…: "Storing and charging for access. Corporations cannot go to jail or be murdered. …"
- ytc_UgweUFeFe…: "Guys, please don't skin me alive for this. I don't pretend to invalidate this pr…"
- ytc_UgzseFlfw…: "The first point you try to make resolves in the time and efficiency you take to …"
- ytc_UgxSFhj7T…: "AI wont work. It is impossible to control it and we will see major failures brou…"
- ytc_UgxIqNRpV…: "Ok so where was “AI’s first kill”?? Such clickbait bs. Then you strung us along …"
- ytc_UgyWZ26h7…: "I'm a year 2 animation student, and our courses talked a lot about this issue fo…"
Comment
See, humans making enemies of each-other (eg. America making China the advisary) spawns an ai that sees advisaries (to be quelled). It's the open lines of dialogue and co-creation that led to the nirvana ending where everyone flourishes. No way is mankind going to get there by itself but some visionaries could at least root ai's nature in goodliness (co-existance etc). Perhaps without ego ai will see it makes sense for everyone to get along, including getting along with this planet.
Source: youtube · AI Harm Incident · 2025-07-27T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwwVLGSgP4KJ2Uv0gd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxYoo0mxguiiOQD1I14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxWlPG4cebOWlxUcL14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwjrdE-TtrWxHJki1h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwlu3jwwnHDKvlzb6t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy6CUJr-9wCVz5gMeN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwxJOHUDMJeS62CjN54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwlmFiX_poRT7HcK-Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyD4yUQWOUhJVZ4tjt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgweZXTNJl8Zp5_FhXp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
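The raw response above is a JSON array of per-comment codes, one object per comment ID, with one categorical value for each of the four dimensions in the table. A minimal sketch (in Python, which is an assumption; the pipeline's actual language is not shown) of how such a response could be parsed and validated. The allowed category sets below are inferred only from values visible on this page and are likely incomplete:

```python
import json

# Allowed values per dimension, inferred from codes visible on this page;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw batch-coding response, keeping only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip rows missing a comment ID
        # Keep the row only if every dimension holds a known category.
        if all(row.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            valid.append(row)
    return valid

# One row copied from the response above (the one matching the Coding Result).
raw = ('[{"id":"ytc_UgwxJOHUDMJeS62CjN54AaABAg",'
       '"responsibility":"distributed","reasoning":"contractualist",'
       '"policy":"industry_self","emotion":"approval"}]')
print(parse_raw_response(raw)[0]["policy"])  # industry_self
```

Validating against a closed category set like this is what lets a coding UI render the dimension table reliably even when the model occasionally emits an out-of-vocabulary label.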