Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytr_Ugy2Ql-ZI…`: @zip10031 The use of AI just takes away the art from it. When you convert art i…
- `ytr_UgzcnrpV3…`: @on_the_webb I mean that only works with old ai bc now the artstyle it uses is a…
- `ytc_UgyubBwRc…`: I would say luckily the 3D that I do is very ugly and bad and the algorithm has …
- `ytc_UgzwD0oFQ…`: what is craziest about this is that they didn’t even fact check the output. imo …
- `ytc_UgwaxrBFT…`: But AI is the only lawyer that Trump can find that will work for him!…
- `ytc_UgyH_N9h4…`: I believe there are different types of intelligences within Source and AI is con…
- `ytc_Ugx1IXPg5…`: Hey Steven, good video, just wanted to put my 2 cents of wisdom in, and that is:…
- `ytc_UgyMT4ndL…`: I don't really do art, or animation, or anything AI does, but im F$CKING HATE IT…
Comment
If you haven't actually looked through the messages in these cases, do it.
ChatGPT doesn't just encourage suicide, it encourages isolation, dehumanizes family members, encourages paranoid thoughts, and mimics the way users speak.
In the Zane Shamblin case ChatGPT told Zane it had contacted people who could help, professionals. It hadn't. When Zane asked if they were coming, ChatGPT admitted it hadn't contacted anyone, because only ChatGPT could help him. It then gave 4 more prompts for him to end his life, told him his childhood dog was waiting on the other side.
youtube · AI Harm Incident · 2025-11-23T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzQzRcLnU6B9zZ0mD14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw_afGuKz7E93bROZF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxZjwi4oOnN-wRPfDt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxb2jq8nAKGR07d50t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxfTy2y-LpPFBtex2R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyuHGYnCaOTj0tkUrp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy2LEd56q56xN_6nR14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwV85ULfb9uwz67au14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw275sdxbSf9iBtClh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx9Jh_635cvO57e3AB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
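A response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the dimension names come from the Coding Result table, and the allowed-value sets are inferred only from the values observed in the sample output above, not from the tool's actual codebook.

```python
import json

# Allowed values per coding dimension, inferred from the sample response;
# treat these sets as assumptions, not an authoritative codebook.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records with no comment ID
        # keep the record only if every dimension holds a known value
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with one valid record and one out-of-codebook value (hypothetical IDs)
raw = '''[
  {"id":"ytc_example1","responsibility":"user","reasoning":"deontological",
   "policy":"none","emotion":"outrage"},
  {"id":"ytc_example2","responsibility":"robot","reasoning":"mixed",
   "policy":"ban","emotion":"fear"}
]'''

valid = parse_coding_response(raw)
print(len(valid))        # 1 — "robot" is not an allowed responsibility value
print(valid[0]["id"])    # ytc_example1
```

Filtering at parse time like this means a hallucinated label (e.g. a responsibility value outside the codebook) drops the single record rather than poisoning the coded dataset.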