Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This fear mongering is why chat GPT4o was gutted and replaced with a awful chat GPT5. The dangers of AI usage are similar to the dangers of social media i.e, children should be monitored when using such tools, and adult should be able to regulate their own usage. Some people can't. Just like some people can't with alcohol. That's why there's alcohol poisoning. These amazing, groundbreaking tools don't deserve to be watered down and ruined because people want to fear monger and not watch their children and what they're doing online.
youtube · AI Harm Incident · 2025-11-10T04:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwWrCKqz4e2rfbzjNl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxAw7IxYlpHAe1ttjZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx3DCaAif3XETA3hSR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzaa847CarZn12MKgp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx94uNiHUHhqNzPX254AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwdtnM1kCvqaNEugal4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyAi8XLZ7IQrnpT-Ud4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwL2ivvHGpWDxTOvul4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugz6UIqynd63bUaZXsh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzPhQnB0m6ytU_lXm94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
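Turning a raw response like the one above into per-comment coding rows (as shown in the Coding Result table) requires parsing the JSON and checking each dimension against the codebook. The sketch below is a minimal, hypothetical version of that step: the `CODEBOOK` value sets are inferred only from the values visible in this dump, and the function name is illustrative, not the tool's actual API.

```python
import json

# Allowed values per coding dimension -- inferred from the categories that
# appear in this dump; the real codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def validate_coded_rows(raw: str) -> dict:
    """Parse a raw LLM response and index the valid rows by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing 'id': {row!r}")
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in CODEBOOK}
    return coded

# One row from the dump above, used as a smoke test.
raw = ('[{"id":"ytc_UgzPhQnB0m6ytU_lXm94AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}]')
coded = validate_coded_rows(raw)
print(coded["ytc_UgzPhQnB0m6ytU_lXm94AaABAg"]["emotion"])  # approval
```

Rejecting out-of-codebook values at parse time is what makes a `Coded at` record like the table above trustworthy: a malformed or hallucinated category fails loudly instead of silently entering the dataset.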