Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
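For a rough sense of what this lookup does, here is a minimal Python sketch. It assumes the coded results live in a JSONL store with one record per comment; the path `coded_comments.jsonl` and the function name are hypothetical illustrations, not this tool's actual API.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for one comment ID, or None if it is absent.

    Assumes a hypothetical JSONL store, one coded comment per line, e.g.
    {"id": "ytc_...", "responsibility": "...", "reasoning": "...", ...}
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: one of the IDs that appears in the raw batch further down this page.
print(lookup_comment("ytc_Ugw5pWILNOLE8hXZk_B4AaABAg"))
```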
Random samples — click to inspect
- `ytc_UgyjO9zOy…`: I only know a bit of ML and AI so I could be wrong here, but the problem I see w…
- `ytc_UgxJf8D7m…`: 5 minutes of hearing how AI would murder CEO's to save themselves kinda has me c…
- `ytc_UgwohKPFV…`: You project way too much onto how people look at art. They just look at it and t…
- `ytc_UgzbhbRmb…`: Everybody talks about UBI but nobody, nobody has asked or tried to answer the mo…
- `ytc_Ugxp442c1…`: You definitely approached this with a huge bias and a certain point of view. It'…
- `rdc_oi1qzty`: I see, so you "crashed out" with ChatGPT and this caused you to communicate like…
- `ytc_Ugzg0i0Kz…`: Is it if it’s paid for by GPUs / Tax all the AI companies at a high percentage a…
- `ytc_UgzepPpyZ…`: ❤😎🫵 I can assure you the cops does not use ai. If they did, half of the police …
Comment

> I disagree with the sentiment that this wasn't an AI problem. AI is an affirmation machine, which means if someone engages it with a harmful perspective, AI will seek to reinforce it to please the customer. The issue isn't that they guy was asking the wrong questions; the issue was that AI supported his conclusions, even when they ultimately brought him harm. A reinforcement of harmful behavior is just as - if not more - harmful than the behavior itself. I'd recommend reading up on Adam Raine if you're still on the fence

youtube · AI Harm Incident · 2025-12-24T05:0… · ♥ 4
Coding Result
| Field | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
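The four coded dimensions map naturally onto small string-backed enums. The sketch below reconstructs the label sets purely from values visible on this page (the table above and the raw batch below); the project's real codebook may define additional labels, and the type names are illustrative, not taken from the tool.

```python
from dataclasses import dataclass
from enum import Enum

# Label sets reconstructed only from values visible on this page;
# the actual codebook may include more labels.
class Responsibility(str, Enum):
    USER = "user"
    AI_ITSELF = "ai_itself"
    COMPANY = "company"
    GOVERNMENT = "government"
    NONE = "none"

class Reasoning(str, Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    VIRTUE = "virtue"
    MIXED = "mixed"
    UNCLEAR = "unclear"

class Policy(str, Enum):
    REGULATE = "regulate"
    LIABILITY = "liability"
    INDUSTRY_SELF = "industry_self"
    NONE = "none"

class Emotion(str, Enum):
    OUTRAGE = "outrage"
    FEAR = "fear"
    APPROVAL = "approval"
    INDIFFERENCE = "indifference"

@dataclass
class CodedComment:
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```

String-backed enums keep the records JSON-serializable while rejecting any label outside the sets listed here.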
Raw LLM Response
[
{"id":"ytc_Ugw5pWILNOLE8hXZk_B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOq5L1cS1c_StVFIx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxae8LcYpCQsK56Lg14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwiBI7X9x1R-CCzfNR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxFqGbiTFg3nfEpoNt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwGadUjVZTHvDI3Hzp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwZJ5IkqZZk61wlp1Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFd3EqhV03sr2wLzR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw2e_qMYE8EhBlNDZx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyBpeTaTTi8XuNGV4x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
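Before a batch like the one above is written to the store, it can be parsed and sanity-checked. The following self-contained sketch assumes the pipeline enforces exactly the label sets observed on this page; the helper name and the specific validation rules are assumptions, not confirmed behavior of the tool.

```python
import json

# Allowed labels per dimension, taken only from values visible on this page;
# the real codebook may be larger.
ALLOWED = {
    "responsibility": {"user", "ai_itself", "company", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array) into records keyed by comment ID."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    batch: dict[str, dict] = {}
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            raise ValueError(f"malformed row: {row!r}")
        if row["id"] in batch:
            raise ValueError(f"duplicate comment id {row['id']}")
        for dim, labels in ALLOWED.items():
            value = row.get(dim)
            if value not in labels:
                raise ValueError(f"{row['id']}: unexpected {dim} label {value!r}")
        batch[row["id"]] = row
    return batch
```

Run on the array above, this yields ten records keyed by comment ID; an unknown label (say, a typo from the model) fails loudly instead of landing silently in the dataset.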