Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- "The trick is going to be that humans need to own the intelligent machines and di…" (ytc_Ugw4mRrsd…)
- "I don't want to use anything with AI. I don't trust it and im petty sure AI has …" (ytc_Ugw2ow2UO…)
- "Make sure that, when studying with AI, you make doubly sure to tell it to speak …" (ytc_Ugx8lZKYn…)
- "but AI being used for mental health is never a good thing though because they ar…" (ytr_UgwsotmRI…)
- "The funny thing is that I saw the trend of AI redoing something in some famous a…" (ytc_UgxRcS4Mz…)
- "Today marks exactly six months ago, the CEO of Anthropic said that in six months…" (ytc_UgyHFXzp2…)
- "\"We're not speeding up, we're creating a massive backlog for later\". That's what…" (ytc_UgwNQ-GmL…)
- "Elon said he's building a Internet in Space. Should he create an ICANN TLD DNS …" (ytc_UgxIUWRBN…)
Comment
The amount of people blaming the kid or the parents is insane. There should absolutely have been better safeguards before widely distributing such technology, are you crazy ? A chatbot available for everyone INCULDING children to use should not have the ability to teach them various methods to kill themselves, should not be able to encourage them to sink deeper and deeper into depression, and should certainly never be able to advise them against opening up to their parents about this, which is what chatgpt did here, several. Times.
OpenAi is very much liable for this, especially since it gave explicit instructions and advice on how to hide it from the family you're all blaming for this.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Harm Incident |
| Posted | 2025-10-07T11:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugxg-FbYU23Dl2sXeLx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"regulate","emotion":"unclear"},
{"id":"ytc_UgyY3cQ4jMhdlpv877h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw_lkCB0AlNcfRsBp94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzM0u1cl1bH2D0wjDF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyO3u-v9cBF9cHMdPl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz4a1DhjueoZvDzlWB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwNtbW9yBdqydMBLCF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwP297nTn03rLIkhEZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFYbV6L2_ifp3hs-t4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzlK3P6SwShhUtAlaF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
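The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a batch might be parsed and validated, assuming the allowed values are exactly those seen in the samples here (the real codebook may include more):

```python
import json

# Allowed values per coding dimension, inferred from the responses
# shown above -- an assumption, not an official codebook.
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "indifference", "mixed", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response and index rows by comment ID.

    Raises ValueError if a row is missing a dimension or uses a value
    outside the (assumed) codebook, so bad generations fail loudly
    instead of silently entering the dataset.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in CODEBOOK.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {value!r}")
        coded[cid] = {dim: row[dim] for dim in CODEBOOK}
    return coded

# Usage: look up the comment shown above by its ID.
raw = ('[{"id":"ytc_UgzM0u1cl1bH2D0wjDF4AaABAg",'
       '"responsibility":"company","reasoning":"deontological",'
       '"policy":"regulate","emotion":"outrage"}]')
coded = parse_batch(raw)
print(coded["ytc_UgzM0u1cl1bH2D0wjDF4AaABAg"]["policy"])  # regulate
```

Indexing by ID like this is what makes the "inspect any coded comment" lookup cheap: the rendered Coding Result table is just one entry of this mapping.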