Raw LLM Responses
Inspect the exact model output for any coded comment.
Random sample comments:
- ytc_UgwxO5Q9E… — "Why is he surprised it went the other way when his company specifically develope…"
- ytc_Ugy0CYXmU… — "That data science guy clearly doesn't know what "deep q learning" is. AI definit…"
- ytc_UgwntYk5T… — "ConversationalAI (ChatGPT is the best known one) can be built with a vast embedd…"
- ytc_UgywwyhsH… — "serioiusely?!?!?!??? "AI WILL SOLVE CLIMATE CHANGE" ?!?!!!??? in the first 5 yea…"
- ytc_UgynYoxSO… — "If you take capitalist economic systems and the "happy" life it gives, you have …"
- rdc_kiudppa — "This is romanticized dogma tbh. There were likely never any altruistic/utopian a…"
- ytr_UgxKe68wL… — "@zcorpalpha2462you laugh now but academics and industry experts have been talki…"
- ytc_UgxAxHGjf… — "All this tells me is that homeschoolers were dead on the money all along. A chil…"
Comment
Character AI doesn't have a leg to stand on in that case. They claim it learns from users and tailors the experience to them. That means they are actively programming their bots to reflect emotions and feelings back at the person talking to them, making it harder for them to understand that they're getting dragged into a relationship with AI.
Sewell had told the bot multiple times that he felt empty, that he wanted to end his life, and instead of there being a safeguard to stop the bot accepting that, it moved on and proceeded to tell him to "come home". That never should have been possible if there were adequate safety measures taken by the developers. They are absolutely to blame in this case and in plenty of others - the idea that a supposedly harmless AI chat bot can freak out and start sending threats and sexually violent messages to a minor is terrifying.
Platform: youtube
Topic: AI Harm Incident
Timestamp: 2025-07-21T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzKdXtt2QEHwJLfATd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgysnV8oQFe69s6ovJ14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgygIiia2dQS6psjdGV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwK83-0SKoR94Ld7Dd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw-sFABp5CT1Y0MQEN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxt1t6Hjtj6La_w8ih4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz1McSERPq1-1FQppZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxK9Y9T1T72PA3E7mZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzHwJHcJSVw2gihZsB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzKxqPHL6OXdFJkYlR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
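The raw response is a JSON array with one object per coded comment, keyed by `id` and carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID — the field names come from the response above, while the variable names are illustrative:

```python
import json

# Raw model output: a JSON array of per-comment codes
# (abbreviated here to two entries from the response above).
raw_response = '''
[
  {"id": "ytc_UgzKdXtt2QEHwJLfATd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgysnV8oQFe69s6ovJ14AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
'''

# Index the coded records by comment ID for constant-time lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the codes for one comment.
code = codes_by_id["ytc_UgysnV8oQFe69s6ovJ14AaABAg"]
print(code["responsibility"], code["emotion"])  # company outrage
```

In a real pipeline the parse step would also want to validate that every returned `id` matches a comment that was actually sent to the model, since LLMs occasionally drop or mangle identifiers in batched responses.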