Raw LLM Responses
Inspect the exact model output for any coded comment: look a comment up directly by its ID, or browse the random samples below.
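The by-ID lookup is easy to reproduce offline. Below is a minimal Python sketch assuming the coded comments are exported as JSONL with `id` and `raw_response` fields per line; the file path and field names are illustrative, not this tool's actual schema.

```python
import json

def load_raw_responses(path: str) -> dict[str, str]:
    """Index raw LLM responses by comment ID.

    Assumes a JSONL export (hypothetical layout) where each line
    holds one coded comment with "id" and "raw_response" fields.
    """
    index = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            index[record["id"]] = record["raw_response"]
    return index

# Example lookup using an ID from the samples below:
responses = load_raw_responses("coded_comments.jsonl")  # hypothetical export
print(responses.get("rdc_n7ukq8d", "no raw response stored"))
```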
Random samples (click any to inspect):

- "AI and NFTs, thats one hell of a combo of scumminess, all we need now is twitch …" (ytc_UgylNqjHU…)
- "well with how current artificial intelligence works, that is how it works. Its f…" (ytr_UgxxhWmmY…)
- "Spot-on post. Everyone who tried to shame me for using AI as a friend and tell…" (rdc_n7ukq8d)
- "''So it's up to us, and our governments, to figure out how we want to use it''. …" (ytc_Ugxh8YHVV…)
- "AI narration is also prevalent. Soon if not already, politicians will be using …" (ytc_Ugz8XQtsj…)
- "First it will be I Robot, then Terminator, and finally, the Matrix. If we're luc…" (ytc_UgzHjbMVo…)
- "Haha, it sounds like you caught something interesting! Sophia definitely has som…" (ytr_UgzUxbUWT…)
- "The AI is already in control, since 2016 , the military has ASI. We're out of ti…" (ytc_UgzS6yyzf…)
Comment
> I've counseled (in a religious role) families where a suicide occurred. Humans often say the wrong thing to folks who end up ending their lives because the issue is so complex. We all have different breaking points, and at times even words that were meant for encouragement to keep going is interpreted by someone deeply troubled as permission. It's unreasonable to expect a disemboded AI to have embodied human discernment. On the one hand, OpenAI could easily just settle this. Money really is no object. But without litigating it there is considerable downstream risk for all AI companies. Regardless, proving that a human or an AI was the proximate cause of a suicide is extremely difficult.

Source: youtube · AI Harm Incident · 2025-11-09T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
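Each coding result maps a comment onto four dimensions plus a timestamp. As a sketch of the underlying record, here is a hypothetical Python dataclass; the allowed value sets below are only those observed in this section (see the raw response that follows), so the full codebook may define more.

```python
from dataclasses import dataclass
from datetime import datetime

# Value sets observed in this section; the full codebook may be larger.
RESPONSIBILITY = {"none", "user", "ai_itself", "distributed"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"none", "regulate", "liability"}
EMOTION = {"indifference", "outrage", "resignation", "mixed"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime  # shown as "Coded at" in the table above

    def __post_init__(self):
        # Reject values outside the observed codebook.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected code: {value!r}")
```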
Raw LLM Response
```json
[
  {"id":"ytc_Ugw0GOj6na17Xt3y5tl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8mqSaVj0xCcv55zx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzLRPG8jMoEgh6dqN94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyaMBZHJSTbjtSBbt14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxt6P1fKu4MQz6JRAd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzACYjYdKh-PXGDFnF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwySvHtondLMtlyFlJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx60U4qL3DPgFUD3GJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyNLMfeB8wmPpkxlLV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzM375HDaU1jFelYH94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
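The raw response is a JSON array covering a batch of ten comments, not just the one displayed. As a sketch, the parser below turns one such response into the `CodingResult` records defined earlier; it assumes the model returned valid JSON, whereas a production pipeline would catch `json.JSONDecodeError` and re-prompt or flag the batch.

```python
import json
from datetime import datetime

def parse_batch_response(raw: str) -> list[CodingResult]:
    """Parse one raw LLM batch response (a JSON array) into records."""
    coded_at = datetime.now()  # recorded as the "Coded at" timestamp
    return [
        CodingResult(
            comment_id=item["id"],
            responsibility=item["responsibility"],
            reasoning=item["reasoning"],
            policy=item["policy"],
            emotion=item["emotion"],
            coded_at=coded_at,
        )
        for item in json.loads(raw)
    ]
```

Running this on the array above yields ten records; the entry for ytc_Ugx60U4qL3DPgFUD3GJ4AaABAg carries the same distributed / consequentialist / none / mixed values shown in the Coding Result table.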