Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples
- "This feels like something that aliens who's never interacted with other humans … (ytc_UgzmGQkoA…)
- scishow 5 years ago: "How AI Can Save Lives" / scishow today: "AI is too dangerous… (ytc_UgxeWPMCz…)
- The only way any of you are getting anything, is if the AI becomes sentient and … (ytc_UgzmW3BXS…)
- Actually, It would be better not to steal artwork to feed it to the AI... Unless… (ytc_UgzqSx214…)
- Fun fact: the ai was probably coded to make this, so stop commenting "the ai is … (ytc_UgxQo-Q8c…)
- This idea that AI will become more intelligent than people is the issue here. He… (ytc_UgwBJALJu…)
- Nah I would be someone’s personal slave if they saw my ai chats. I my mom saw th… (ytc_UgxK8WC5_…)
- This video is BS. AI bot farms will make up stories about how some brother or si… (ytc_Ugx0skynw…)
Comment
i think people clearly ask too much of a program. how is this supposed to be a better human than doctors, fam and friends. can we be realistic and accept that folks with such type of mental issues will use everything ???
im absolutely baffled by the unwillingness of the users of accepting the need of selfregulation. you dont usually jump off a cliff. you stay away from the edge but with this type of mental sickness you seek to end your life. why? because the system sucks and life is harder and way more dark than it should be. stop blaming chat bots for the mental health crisis.
Source: youtube · AI Harm Incident · 2025-11-08T02:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugy82kA7-tDQhqEjhbJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwXoWFVaVKtFJtqUSZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwfai2AR34znBGEJ8V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwwL4HnyzJoU2W3h5Z4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxC6hCYSz74p1Q5p194AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxVVYBZq0W_h4gB9rJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJ8YiMkjnV7QeilG54AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxmbrWCQHGR0GcSB0x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwdKWnjVMyZEhuz0-14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzd36Qe3n7SeNqHUgd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
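Since the coding result for a comment is found by matching its ID against a record in the model's batch response, a minimal lookup sketch might look like the following. It assumes the raw response is a JSON array of records with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys, as in the response shown above; the function name `index_by_comment_id` and the two-record sample are illustrative, not part of the actual pipeline.

```python
import json

# Sample batch response in the format shown above: a JSON array where
# each element codes one comment along four dimensions. The two records
# here are copied from the response for illustration.
raw_response = '''
[
  {"id": "ytc_UgxC6hCYSz74p1Q5p194AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwfai2AR34znBGEJ8V4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]
'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse a batch coding response and key each record by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

coded = index_by_comment_id(raw_response)
print(coded["ytc_UgxC6hCYSz74p1Q5p194AaABAg"]["reasoning"])  # → virtue
```

Keying the records by `id` up front makes every subsequent inspection an O(1) dictionary lookup instead of a scan over the array.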