Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Does anyone know how to put to ai on a chat i want to see how randy cunningham b…" (ytc_Ugy9-EOtW…)
- "China is taking it too far Ai and stuff are made so it can make our life easie…" (ytc_UgwInqtkM…)
- "I’m not worried about AI looking at who’s training him, what I am worried is he …" (ytc_UgyD4ahKQ…)
- "1 year later. I'm claiming to have the first consciousness AI. It's now your t…" (ytc_Ugz5tnJQ0…)
- "This is why I keep telling my friends that Ai needs to be raised right…" (ytc_UgweOSgmY…)
- "I'm a software engineer who works with AI daily, and ~90% of my code is written …" (rdc_o9wluvn)
- "Everyone is thinking the wrong way. Every job that exists was created. By a busi…" (ytc_UgxSq4K4F…)
- "in the future, any production company would cease production due to inflation an…" (ytc_UgwjfbPZv…)
Comment

> It says shift not suicide, just confused do any of the messages to ChatGPT say anything about suicide, I don’t think it was encouraging it was saying good shift and at the end it said don’t commit suicide and gave numbers and audios could mean to a COMPUTER I’m logging off ChatGPT and saying goodbye and so it’s saying bye… it doesn’t understand it means suicide … I mean prayers and this is horrible but

youtube · AI Harm Incident · 2025-11-15T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgyY6JXL40Bcx48Xe2J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy7MwKjHcXprkBR6Y94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgzRAy-WbVYhFyltGu14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwHRdf51qysiT7f61l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgygiqilRtXlf9_MiCx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwIuMTAJE-pO9f5wK54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgxqvxZgZPkicFBG4_t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx3tJ7U1tY8_E3dCBV4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgyxWMI6ytTSRlNVzJt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyv7RCETSEvC0UM5hh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
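The lookup this page supports (finding one comment's coding inside a raw batch response by its comment ID) can be sketched as follows. The JSON shape and the two IDs are taken from the sample response above; the function name `index_codings` and the variable names are illustrative, not part of the actual tool.

```python
import json

# A raw LLM batch response: a JSON array of per-comment codings,
# each carrying the comment's platform ID (shape as in the sample above).
# Only two rows from the sample are reproduced here for brevity.
raw_response = """
[
  {"id": "ytc_UgyY6JXL40Bcx48Xe2J4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugyv7RCETSEvC0UM5hh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index each coding row by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_Ugyv7RCETSEvC0UM5hh4AaABAg"]
print(coding["emotion"])  # fear
```

Indexing by ID once, rather than scanning the array per lookup, is what makes "look up by comment ID" cheap when a response covers many comments.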