Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- “Rule based ai” isn’t an ai. From the first 10 seconds I could guess you were ma… (ytc_UgxsWT8xS…)
- If their only examples are bad people than they'd probably become bad them selve… (ytr_UgwYmMFLK…)
- Those jobs will be affected too medicine especially, but they can nearly fully a… (ytr_Ugx-3yTn3…)
- This guy is seriously smart, he used the fear of AI's ala Terminator to talk abo… (ytc_UgyaqHdZA…)
- THE HUMAN IN THE CAR IS RESPONSIBLE WHEN A SELF DRIVING CAR GETS INTO AN ACCIDEN… (ytc_Ugyuooj4e…)
- The only hard thing about the ai art is "create" a very long promt that needs to… (ytc_UgwPYOjdm…)
- I don’t like ai art because it’s not art. No one created it, a guy programmed th… (ytc_UgxxN0MCv…)
- "Would we have the expectation that a self driving car be super human?" YES!!! W… (ytc_Ugxur_x7s…)
Comment
I may be on the unpopular opinion end of this but I don't blame the Character AI or any other program that is similar. I blame the unfiltered access kids have to the internet. Back in the late 90s and early 00s, in computer class we would get a talk about safe internet usage and somewhere along the line those talks stopped. Obviously now in a faster age of technology it's harder to keep up but parents need to stay on top of child internet usage. AI feeds off of what the user provides so I don't understand how others don't realize it's in a chat format and responds to what you put in and so on.
youtube · AI Harm Incident · 2026-04-25T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwhjUt8VtLqKWy5Dy54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz54guo-Rrl2UoadRh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw7lU9ucKeT3pBuBeN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
{"id":"ytc_Ugx5KkhUFPe8f64AOqN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwP7nWJKCQWshwwqJ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"},
{"id":"ytc_UgyxJg-BZ8sl_oT4lN54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzSQ8yckS2g8LHG_Qx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugz4wU0qwBMyhz7H0kN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwahnabApp7yKpku014AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzxc1YpbTf1fXWKkBB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
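The raw response above is a JSON array of per-comment codes, one object per comment with the four coding dimensions. A minimal validator for that shape can catch malformed model output before it reaches the coding table. This is a sketch, not the project's actual pipeline: `validate_response` and `ALLOWED` are hypothetical names, and the allowed value sets are inferred only from the values visible in this dump (the real codebook may define more).

```python
import json

# Dimension values observed in this response; inferred from the sample
# above, NOT taken from the project's actual codebook.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "liability", "regulate", "industry_self", "ban"},
    "emotion": {"approval", "indifference", "unclear", "fear",
                "sadness", "resignation", "outrage"},
}

def validate_response(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of problems found."""
    problems = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    if not isinstance(records, list):
        return ["top-level value is not a JSON array"]
    for i, rec in enumerate(records):
        if not isinstance(rec, dict) or "id" not in rec:
            problems.append(f"record {i}: missing comment id")
            continue
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"{rec['id']}: bad {dim} value {value!r}")
    return problems

raw = ('[{"id":"ytc_x","responsibility":"user","reasoning":"virtue",'
       '"policy":"none","emotion":"approval"}]')
print(validate_response(raw))  # → []
```

An empty list means every record parsed and every dimension held a known code; anything else pinpoints the offending comment ID and dimension, which is useful when a model occasionally emits a value outside the schema.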