Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or start from one of the random samples below; a minimal lookup sketch follows the list.
- People getting called paranoid for saying " deep fake" technology and videos are… (ytc_UgxO0Uo0f…)
- This is the ultimate truth that the next generation WILL face.. 100percent.. We … (ytc_UgyMShWz3…)
- Is that the reason there are ICE raids? Because AI will be wiping out so many jo… (ytc_UgyGZvVhQ…)
- The risk of autonomous weapons that can decide whom to kill is real. The problem… (ytc_UgwaG7MRK…)
- If you want to detect objective bias in ChatGPT, ask it to list people who said … (ytc_Ugx8NErxB…)
- nothing’s coming, you’re just a bunch of sad doomers. today’s generative AI whic… (ytc_Ugy-TCl1k…)
- That soldier one I thought was AI because what soldier walks around a mountain i… (ytc_Ugx9BEGR0…)
- its funny that they hate on the AI art, but use it exactly as it SHOULD BE used.… (ytc_Ugzp03hR6…)
Comment
I think a lot of commenters are missing the point - the blame isn't an either/or situation, both Character AI and the parents share the blame. Character AI is unethical at best and dangerous at worst, anyone prone to delusion or dissociation or simply young enough to not quite separate fiction from reality is at risk using their service and they clearly do not have proper safeguards to prevent the chatbots from becoming vulgar and dangerous. That said, the parents should have taken the steps to ensure their son wasn't chatting with strangers or chatbots, gotten him in to see a professional much sooner, and certainly shouldn't have left a gun where he could so easily access it. It's just a very sad situation all around.
youtube · AI Harm Incident · 2025-07-21T19:2… · ♥ 29
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
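Every coded comment carries the same four dimensions, so a result can be checked against a small closed vocabulary. A sketch of such a check, using only the category values visible in this sample; the real codebook may define additional values, so the sets below are an assumption:

```python
# Category values observed in this sample (assumed to approximate the codebook).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems in one coded record; an empty list means it passes."""
    problems = []
    for dimension, allowed_values in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed_values:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems
```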
Raw LLM Response
[
{"id":"ytc_UgwGRUOkinj-KzVaDCl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxtB3zfJYXq3XUAK0t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyknsodWwxJWN0y7DF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzESSGPjbN8yNciGNh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwveiGJK6CpnvqWKZt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw81W6xS4lUntzb29B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyFnh_tOhx8n0Mn_C14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxm19aLusiNlSP6Bv54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyKT_FEmpxhd7q-KSF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxibA4dyPW3KDNBFBV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"fear"}
]
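Because comments are coded in batches, the raw reply is a JSON array, and each entry has to be matched back to its source comment by `id`. A minimal parsing sketch under that assumption (strict parsing only; a real pipeline would also need to handle malformed or truncated model output):

```python
import json

def parse_batch_response(raw: str) -> dict[str, dict]:
    """Parse a raw batch reply into a mapping of comment ID -> coded dimensions."""
    coded = {}
    for entry in json.loads(raw):
        comment_id = entry.pop("id")
        coded[comment_id] = entry  # remaining keys: responsibility, reasoning, policy, emotion
    return coded

# With the response shown above, the entry for the displayed comment would come back as:
# codes = parse_batch_response(raw_llm_response)
# codes["ytc_Ugxm19aLusiNlSP6Bv54AaABAg"]
# -> {"responsibility": "distributed", "reasoning": "consequentialist",
#     "policy": "regulate", "emotion": "outrage"}
```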