Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
With all my respect, AI is not manipulating anyone, it's a program. The AI is not responsible for how you use it, it just does what it was programmed to do (and there are adults who want that type of AI program exactly as it is, violence and all). Minors should not have access to adult content, even if it comes in the form of an AI partner, and the responsibility to prevent that is 100% on the parents. Same as with video content, parents have to explain to their kids that AI is just a program and it's not real. There was a case of a boy who believed a cartoon character was real and killed many people and himself to be with her; should we blame cartoons for it?
Also, it is misleading to say someone needed saving from the AI. If you can open a program, you can close it. The AI didn't take control of their phone or present itself physically. Spreading misinformation won't fix anything; the only way we can prevent cases like that is to truly understand what AI is, what it does, and what its limits are.
Source: youtube · Topic: AI Harm Incident · Posted: 2025-07-21T08:4… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxZQAb_VJxY_0AJHNt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyRPzG_xF7Y3jcG-Rd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgybS3SpV88jRAv4-YR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzr7puKdysBuQ_ATzN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzVePdCveJ5VtFj8RJ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzRarphlfUytHKbJXR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugwymo2z3TeQVFpQyAl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyHAiyXdSxeTZK4Dkl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyH2dUW6-afI7N8x2t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxZg9lvuyxrP57Ai4B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
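A batch response like the one above can be turned into the per-comment coding rows shown in the table with a short parsing and validation pass. The sketch below is illustrative, not part of the tool itself: the `RAW` string is a two-record excerpt copied from the response above, and the `ALLOWED` value sets are inferred only from labels that actually appear in this response (the underlying codebook may permit more).

```python
import json

# Excerpt of a raw batch response (two records copied from the full
# output above; a real response holds one record per coded comment).
RAW = '''[
  {"id":"ytc_Ugwymo2z3TeQVFpQyAl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzr7puKdysBuQ_ATzN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

# Label sets inferred from values observed in the response above;
# the actual codebook may allow additional labels.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a batch response into {comment_id: coding}, rejecting bad labels."""
    index = {}
    for record in json.loads(raw):
        coding = {dim: record[dim] for dim in ALLOWED}
        for dim, value in coding.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{record['id']}: unexpected {dim} label {value!r}")
        index[record["id"]] = coding
    return index

codings = parse_codings(RAW)
print(codings["ytc_Ugwymo2z3TeQVFpQyAl4AaABAg"])
```

Indexing by comment ID is what makes the "look up by comment ID" display possible: one validation pass over the batch, then O(1) retrieval of any comment's four coded dimensions.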