Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Finally something useful with AI, I mean, what better than AI to teach us...it s…" (`ytc_Ugy9iTG4K…`)
- "I hope the internet doesn't die😢 ... But I DID see a lot of AI generated vidios,…" (`ytc_UgyHGIvrH…`)
- "Only ai I've ever used for reference are ai generated inages of Sonic characters…" (`ytc_Ugxc9F3L9…`)
- "An excellent summation of some of the external ramifications of technological AI…" (`ytc_Ugx4tNb76…`)
- "I never understood why workers union are not mentioned as a way to take power an…" (`ytc_UgwPJzn3L…`)
- "So something I will say, ALWAYS GLAZE if you draw art as to protect your art sty…" (`ytc_Ugyqpl7NK…`)
- "There are no regulations thst stop companies from letting go ppl and using AI. B…" (`ytc_Ugxwuz7tI…`)
- "Tesla don't use stolen data, so comparing that to text to image is useless and f…" (`ytc_Ugy6Nz76p…`)
Comment
As an uneducated plebeian, can someone please explain how much of well _things_ are controlled by AI now? to the point where it's gonna literally end the world? I thought their influence was generally like search engines or websites like GPT, they don't have access to like heavy artillery, life support systems, the electrical grid, or cars and stuff, right? What specifically is the looming existential threat of these AIs going rogue?
Also, I know it's not as simple as "turn them off," but what defense mechanisms do they have to prevent deletion of anything goes wrong? They can probably code themselves around failsafes or something, and of course there's the blackmail thing, but what other defenses do they have against complete deletion at this point?
tl;dr, AIs can threaten to kill, sure, but what are they gonna kill you with? And why can't we just unplug them, blackmail being ignored?
youtube · AI Moral Status · 2025-12-20T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz9x0qZ6nMfYEdIif94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxSeguahbzXR53ozA94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzojAZvExeQ45F5cRN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw9KaQDecVQ2xhtdM54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyiA5m9p5w2KIPqHKB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyGHY6hqgjeZmber814AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxD7C45cK2rQyClXk54AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw-v6xcfZad5Je7lHl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyiLPbQZq4cVAcw5Y14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxqJVoV7HhFIigWsgp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```