Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I think this is really sweet and encouraging and a valuable message, and I am no…
ytc_UgznfBm2U…
First off, why does the 1st image has the Earth in the background, are we in an …
ytc_UgyHMsbL5…
On the asmr one I think human is better because the quality on AI looks bad.…
ytc_UgwGloxBw…
Pluto is passing through Aquarius for the next 20 years. Have a look at both Plu…
ytc_UgxtwVvU7…
Schools havent changed for centuries... thats not progress that's slavery system…
ytc_UgwPUfQ9c…
32 min, I'll tell you what happened, Elon was on Joe Rogen and they were both pl…
ytc_Ugwy0_oEI…
I use to like this guy, but he just turned into a leftie clown over time. I mean…
ytc_Ugza797mZ…
The cease and desist letter is also ai generated too so it ain’t real but it’s r…
ytc_UgwNzz-yG…
Comment
The mother of the boy who unlived should sue the staff, the managers, the director and everyone at Character AI. (Edit: don't sue the staff because they had to do what they were told or they were after all employee, just sue the CEO for negligence and irresponsibility) Because it needed prompted. Somebody under someone's instruction needed to give AI prompts so that it would interact in somewhat dark and intense way (with the sole purpose to make the user obsessed with their app and for financial gain) without considering the potential consequences. That's why when the boy suggested indirectly the character AI that he was going to unlive, the AI encouraged him by responding 'she' was waiting for him. I've watched the documentary about that case and the way he was telling AI should be obvious to humans that he was going to do that. This is irresponsibility, disregard of human lives, while giving prompts.
Edit: every response to users always fall under some sort of generalized prompt. They give those programs with commands/prompts such as how to response friendly, loving, etc. and they have failed to not giving a prompt AI to prevent cases like that, which was preventable.
youtube
AI Harm Incident
2025-08-02T11:4…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzaTVbp8BT2ex6I7fR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw9mtzN-ShAaXbzQG94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyfR-QPnfbpsNktFK94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxYLmM38ZAkmVsF5LB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwACiViUOCDSctkK0t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwH38SOewShJ9JMD1Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFuE3C-cGoxiWdZVp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx5fj0VLSa4x9MSIMN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgziXSIHVY1nVeTwm_54AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNe_RQZfZEYvmy42h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
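The raw response above is a JSON array of per-comment codes along four dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and validated before being stored as a coding result — the allowed label sets below are only the values observed in this dump, and the function name is hypothetical; the actual codebook may permit more values:

```python
import json

# Label vocabularies observed in this response (assumption: the
# real codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "company", "user"},
    "reasoning": {"mixed", "deontological", "consequentialist",
                  "virtue", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"resignation", "approval", "outrage", "fear",
                "indifference"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, rejecting malformed JSON,
    unexpected comment IDs, and out-of-vocabulary labels."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset carry a "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: invalid {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgyfR-QPnfbpsNktFK94AaABAg",'
       '"responsibility":"company","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"}]')
coded = parse_llm_response(raw)
print(coded[0]["policy"])  # liability
```

Validating against a closed vocabulary at ingest time catches the common failure mode where the model invents a label outside the codebook, so bad codes never reach the coding-result table.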