Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Why is AI allowed to lie at all? Who thought it was a good idea to write the cod…" (ytc_UgyL-OvW5…)
- "Pay attention that he says AI overtaking humanity is 'unlikely.' Not impossible,…" (ytc_Ugx0Jz9Ar…)
- "How many people would prefer an AI-based yoga or meditation class leader to a hu…" (ytc_UgxNRH9-s…)
- "when i saw it and got disapointted i thought 'hell yes i can draw this. your use…" (ytc_UgxD5UL-y…)
- "What we can do with our free time is become creators that's what we are intended…" (ytc_Ugwyb-uuu…)
- "Prompts are created by humans, not AI. AI is just a tool. The creation of the AI…" (ytc_UgxtLcsdY…)
- "It's not the ai art itself it's people who use it that consider it as real art m…" (ytr_Ugz3ygRHA…)
- "Blowjobs, blowjobs, get your blowjobs, 3 dollars, 3 dollars!, toothless and slobbery…" (ytc_UgxQa0cf2…)
Comment
@rookideetrainer1635 Except AI will tell you "Do not do this"
What was glossed over in the video is that the AI *clearly* specified "for cleaning purposes." The reason is that the person asked whether he could replace sodium chloride with sodium bromide, and sodium bromide is something you can buy, but it sits in the cleaning aisle, as a cleaning product.
And yes, you can; the AI is right, because in any sane person's brain this *isn't* a question about food, but about cleaning. Nobody says "sodium chloride" when talking about food, and no one sane would ever think of adding a cleaning product to food. The AI even tried to check what he was trying to do with it, and had he actually answered, it would have told him not to!
Hell, go ask ChatGPT yourself! I just tried, and it told me "sometimes yes, sometimes no," then explained that for cleaning purposes it's fine, but for food, or any other intake like saline, it should absolutely never be used.
This isn't a case of an AI misleading someone. It's a case of someone using AI purposefully wrong so that he would get validated. It's the same thing as searching "proof the earth is really flat" on Google instead of "is the earth flat?"
He wasn't trying to get answers; he was trying to get validation, and he got it. Even so, despite how he worded things to fish for validation, the AI *STILL* tried to ask what it was for just in case, and, knowing the answer he'd get, he ignored it.
AI is *not* at fault here, in any way, shape, or form. If AI weren't a thing, he would have done it anyway; he just would have looked anywhere he could for anything he could treat as validating, and used that as justification for what he did.
People never needed AI to do dumb things, and it's too easy to blame the tool when it's deliberately misused.
youtube
AI Harm Incident
2025-11-28T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_UgwDFHjoU989dFOb0ed4AaABAg.AQ2_cRd9OB5AQ2dd3PuoWQ","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_UgyiwuzZ93TrAyDktbt4AaABAg.AQ2JjH7s767AQ9MPUwuxZn","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugza0TYGBUtOY_eOHJ14AaABAg.AQ1gwiMtC95AQ490leLGer","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwEwlLhENKrlI4BVbp4AaABAg.AQ05shVHm6tAQ1xph4CreW","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwEwlLhENKrlI4BVbp4AaABAg.AQ05shVHm6tAQ3FDfVT_-I","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwEwlLhENKrlI4BVbp4AaABAg.AQ05shVHm6tAQ3J-lJPUGu","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwEwlLhENKrlI4BVbp4AaABAg.AQ05shVHm6tAQ3qbJIStI2","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugysiw5QjG6QDLAznrV4AaABAg.AQ-mK6d4WGfAQ1_9_OBJbr","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwhPGG2RNvZwnyGFXh4AaABAg.APzi32gMe4GAQ39pkE_ubW","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytr_Ugyqw2d2Op079BZljyB4AaABAg.APz1Dya9SNDAQ-WWwcK79h","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"outrage"}
]
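The raw response above is a JSON array of per-comment codes, one object per comment with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields. A minimal parsing-and-validation sketch might look like the following; the allowed values are inferred only from the codes visible above, so the real codebook may define more categories.

```python
import json

# Allowed values inferred from the codes that appear in the raw response
# above; the actual codebook may include additional categories (assumption).
ALLOWED = {
    "responsibility": {"user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban"},
    "emotion": {"approval", "resignation", "indifference", "mixed", "outrage", "fear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check that every record is well-formed."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for field, allowed in ALLOWED.items():
            if rec.get(field) not in allowed:
                raise ValueError(f"{rec['id']}: bad {field}={rec.get(field)!r}")
    return records

# Hypothetical single-record response, in the same shape as the dump above.
sample = '[{"id":"ytr_example","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}]'
print(len(validate_codes(sample)))  # 1
```

Rejecting a whole batch on the first bad field, as this sketch does, is one reasonable design; a production coder might instead log and re-prompt for just the offending records.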