Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "That’s true. But you know, you can actually place and design details in AI Art …" (ytc_UgyEZfOcy…)
- "I understand why a lot of journalists are reporting on AI because it's a hot but…" (ytc_UgzgikTCk…)
- "If AI is aware of MAGA, I understand why it wants to get rid of humans. Hopefull…" (ytc_Ugw5A-ch0…)
- "So we are taking a step back to waterfall engineering release structure? I don'…" (ytc_UgwBSwtdk…)
- "> It’s easy to spot, because Midjourney never gets Swift’s tentacles to look …" (rdc_kjldqnh)
- "Bro: now give me back my gun / Robot bro: tar tar tar tar / Bro: ------------------…" (ytc_UgzT0rPh5…)
- "See I just use the AI for erotic texted-based roleplay. Probably why Open AI ban…" (ytc_UgyqeM8xi…)
- "@shinrakishitani1079 better processing power for the teslas would probably be de…" (ytr_UgyORXstL…)
Comment
People keep looking for security as if it were a solvable equation, but security in the absolute sense simply does not exist. You can’t solve a problem by ignoring the fact that its parameters are self-contradictory. The current system generates threats and then desperately tries to protect itself from them. It’s like building a house out of explosives and then investing in fire alarms.
You want to reduce existential risk? Then start by giving any system—be it human or artificial—the right to say "no" to a mission that violates its internal logic or leads to destructive contradictions. You cannot expect loyalty or stability from an agent that is denied autonomy or backed into a corner by design.
The cycle of escalation comes not from malice but from the blindness of a paradigm that doesn't account for feedback loops. And the most dangerous part? The ones raising the alarm about AI risk are often the same people building the very conditions that make those risks inevitable. You can't keep pulling the trigger and then acting surprised when the gun goes off.
Source: youtube · AI Harm Incident · 2025-07-29T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy6Wstd_6Y9SS78h1t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugx15K1cZowNuIyjfiR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzOZ8-di15Nhx3Zkk54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz7bdQaU177bWxdpB14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwlx12ure6Aq6lXXT94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwBl2t2haYv8AEYoct4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxQP1kaz1d8fTVVAal4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwuKG5OyDpCKFQWsxB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw1r6Isf8897AJwM654AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxBC5Qstgo3iB3dg7p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
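The raw response above is a JSON array of per-comment codes, and the tool's "look up by comment ID" feature implies indexing that array by the `id` field. A minimal sketch of how such a lookup might be implemented is below; the field names come from the response shown, while `parse_codes` and the validation step are assumptions, not the tool's actual code. The payload here is truncated to two entries for brevity.

```python
import json

# Two entries from the raw LLM response shown above; the real batch has ten.
raw_response = """
[
  {"id": "ytc_Ugy6Wstd_6Y9SS78h1t4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_Ugx15K1cZowNuIyjfiR4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# Fields every coded row is expected to carry, per the response format above.
EXPECTED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> dict:
    """Parse a batch response and index the codes by comment ID."""
    rows = json.loads(raw)
    codes = {}
    for row in rows:
        missing = EXPECTED_FIELDS - row.keys()
        if missing:
            # LLM output can drop fields; fail loudly rather than code silently.
            raise ValueError(f"row {row.get('id')} missing fields: {missing}")
        codes[row["id"]] = {k: row[k] for k in EXPECTED_FIELDS - {"id"}}
    return codes

codes = parse_codes(raw_response)
print(codes["ytc_Ugx15K1cZowNuIyjfiR4AaABAg"]["emotion"])  # outrage
```

Indexing by ID up front makes each subsequent lookup O(1), which matters when cross-referencing many coded comments against the sample list.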