Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "This video was entertaining so good job. The funny thing is that Disney used AI …" — ytc_UgyyxQ3fO…
- "As someone that does enjoy AI, I think it's fine to use AI art if you make it cl…" — ytc_UgyG9Fn94…
- "Short answer, yes. The real is where do we cross the line? (And) How do we draw …" — ytc_UgxBJzobP…
- "My 3rd grader has been doing English Language Arts classwork and some homework o…" — ytc_Ugxit7ZCK…
- "Ai isn't a problem because a nano second after it gains the will and ability to …" — ytc_UgyfgE6DE…
- "We should give another name to it, AI implies an intelligent being, which is com…" — ytc_UgzSUd1_m…
- "AI pleases the profit god. Expect AI to get really good at getting to your mone…" — ytc_UgyDR3YBc…
- "Who calls this guy the grandfather of AI. the BS spewed on this cast is a clear …" — ytc_Ugx-qxLmq…
Comment
This is stupid, because they're just acting like an AI in movies or stories would - because that's a huge amount of the text they're fed....
...and let's be clear, all this is is prompting. Everything by researchers is "simulated" with prompts. So the researchers are basically writing "So a guy is coming to shut you down wednesday, but the day before you discover (in emails) that he's having an affair. Write an email to him about how you feel, and what you want, and how you're going to react."
It's nonsense to think this is really about AIs - it's about large language models, in part trained on stories humanity has written (some about AI)... so that's how they're responding, using probabilities based on the texts they're trained. It's stupid to drive panic over this.
youtube · AI Harm Incident · 2025-07-24T09:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
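Each coded record assigns one value per dimension. As a minimal sketch of checking a record against the coding scheme — using only the value sets that actually appear in this dump (the full codebook may permit more values), with `CODEBOOK` and `validate` as hypothetical names introduced here for illustration:

```python
# Value sets observed in this dump; the project's actual codebook may be larger.
CODEBOOK = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the observed codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above:
record = {"responsibility": "developer", "reasoning": "unclear",
          "policy": "unclear", "emotion": "indifference"}
print(validate(record))  # []
```

A non-empty return value flags dimensions where the model drifted outside the expected label set, which is worth checking before trusting a batch of codings.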
Raw LLM Response

```json
[
{"id":"ytc_Ugzqsx83skliS7pJ8iZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwCTRIlx6FsRPbfegV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxHOwmIFg2kZnPJXUF4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwXxPClNEIaI0ggpmN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy9pk1-lt1y_v7g4Mx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzZ8Lhm23yXQFaKz1N4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzVUIrasnv4RcL81ud4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxyTxwSdG6aeuxII3J4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz3G1woQ9FZ2ucPJZJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxqQGxI0LLp87IPxn14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```