Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Why the fuk would you teach a robot how to shoot an automatic weapon and then en…" (ytc_UgxcDncaq…)
- "It’s way easier to say “we’re cutting jobs due to AI” than “we’re cutting jobs a…" (ytc_UgwvCFLr0…)
- "call me william afton cause im boutta put a kid in that robot / what kind of jok…" (ytc_UgwxwQtAt…)
- "Before you know AI would take over humans. It has to be a way to allow humans t…" (ytc_Ugwnp-zaB…)
- "28 million isn't very much, they weren't too invested in it in the first place.…" (rdc_cjoumlz)
- "I used ChatGPT to answer some Trig and Calc questions. It's so FUCKING TRASH 😂…" (ytr_UgxA19AC5…)
- "No you don’t. My tesla has been almost autonomously driving me around for a year…" (ytr_UgzNQI8tc…)
- "I always warn my sister to not be rude to AI. But she loves to curse AI and make…" (ytc_UgwVCsqAb…)
Comment
LLMs are amazing at leaning into flaws and insecurities of any individual person. That, combined with the constant reinforcement so they are appealing is very dangerous.
If you think you're smart enough not to fall prey to this, you are a lot more likely to do just that.
If AJ talked to a real person instead - or just given enough time to think rather than constantly being reinforced - he might've ended up listening to his better half, and dropped the idea.
So please don't dismiss the power that LLMs have over people. People who can be manipulated, people who have flaws and weaknesses, which is to say, everyone.
Source: youtube · Topic: AI Harm Incident · Posted: 2025-11-25T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwrBr8M1TFUMcIV7nZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz92rzX1JCfFpq7bM94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxbnZqyFG3AoIUJirl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxoNwxqK6xeZEEtQDJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxpSuLLmc22IjE-5V94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzKoF2ous72V7dvkzd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFDXVmPRY3ZoXMDtZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyfmMCdDkt4pg1Cgox4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwPIuaQUnM0NKfEyP54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyTGWtT7akXLEPhqTd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
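The "Look up by comment ID" operation above can be sketched as a small parser over this JSON. This is a minimal illustration, not the tool's actual implementation: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown, while the helper names and the prefix-matching behavior (useful because the UI truncates IDs) are assumptions.

```python
import json

# Two rows from the raw LLM response above, truncated for brevity.
raw = """
[
 {"id":"ytc_UgwrBr8M1TFUMcIV7nZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugz92rzX1JCfFpq7bM94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
"""

# The four coding dimensions plus the comment ID, as seen in the output.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_json: str) -> dict:
    """Parse the model output and index coded rows by comment ID,
    rejecting rows that are missing any expected field."""
    coded = {}
    for row in json.loads(raw_json):
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} missing {missing}")
        coded[row["id"]] = row
    return coded

def lookup(coded: dict, id_prefix: str) -> list:
    """Prefix lookup, since IDs shown in the UI are often truncated."""
    return [row for cid, row in coded.items() if cid.startswith(id_prefix)]

coded = index_by_id(raw)
print(lookup(coded, "ytc_UgwrBr8M1")[0]["emotion"])  # fear
```

A failed lookup simply returns an empty list, so a truncated prefix that matches nothing (or more than one ID) is easy to detect before trusting the result.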