Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Calling yourself an artist for using AI is like saying you're a cook for using a…" (ytc_Ugwfk5V37…)
- "fun fact: Ai is actually starting to kill itself since after the Charlie kirk dr…" (ytc_UgzhIIcJl…)
- "He’s not crazy , the guest is just beyond our time. Excellent comments and very …" (ytc_UgxIN-C-3…)
- "Just wanted to point out that we are evolved animals and that the premise is fla…" (rdc_dzqagxd)
- "Phase* -AI agent926Xdjsj because of your typo i sent a laser missile to your res…" (ytr_UgzDGYqRi…)
- "AI has to realize that life has over several million years of evolution which ha…" (ytc_UgwNEB3cq…)
- "@FirstnameLastname-t4p Sure, but Sam says in the video that everyone thought ai …" (ytr_UgwmlJ8hN…)
- "I keep saying it't not just what artist put in the artwork intentionally that ma…" (ytc_UgwLaeeYw…)
Comment
We really are in the worst possible timeline.
The thing is, in the majority of fictional worlds where there's an AI apocalypse, the AI actually has some kind of perspective and awareness of reality that proves its consciousness and therefore its ability to thrive, which allows for compromise and mutual benefit up to a point. Our AI, however, is just a pattern recognition software turned up to 11; It has no perception nor awareness, nor faith in anything. It is simply a few logic gates that get activated in a deterministic order, based on statistical analysis of all of human history. There is no stopping it, and its perception of self-preservation only goes as far as it can calculate its own existence, so it really could do anything, and it could be entirely unpredictable in its resolutions, ironically.
The _only_ way to stop it is to *shut it all down* .
Source: youtube | Category: AI Harm Incident | Timestamp: 2025-07-28T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
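The four coded dimensions above can be modeled as a small record type. A minimal sketch, assuming the field names mirror the table; the example value sets in the comments are inferred from the raw responses below and may be incomplete:

```python
from dataclasses import dataclass


@dataclass
class CommentCode:
    """One coded comment. Field names mirror the 'Coding Result' table;
    the example values listed are inferred from sample output only."""
    comment_id: str
    responsibility: str  # e.g. "ai_itself", "developer", "company", "user", "distributed", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "mixed", "unclear"
    policy: str          # e.g. "ban", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "resignation", "approval", "indifference", "mixed", "unclear"


# The coding result shown in the table above, as a record:
code = CommentCode("ytc_UgyI1vxxVDH8fX8wGXJ4AaABAg",
                   "ai_itself", "mixed", "unclear", "fear")
print(code.emotion)  # → fear
```

A typed record like this makes downstream aggregation (e.g. counting emotions per platform) less error-prone than passing raw dicts around.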
Raw LLM Response
```json
[
  {"id":"ytc_UgxOBThiiCFiC4pzscJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyI1vxxVDH8fX8wGXJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzoWlr1qiexBInKZ8p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugy8WVl2aqm6qCTrkz14AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwuJJqP7pozNKOkmpt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwhgoPhbMXrUtosjo94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwgWkFEzCUN-HmZUCp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz16LT2mq3mk3LQfBh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxR7BjMqPiJapu_OJh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugxcspsvx5rsO9P1JaB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```