Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Ai is heating up the only living planet for 1000s of light yrs. Soon Earth will …" (ytc_UgycoX9-E…)
- "In the 1920's there was a study done that included scientists and engineers invo…" (ytc_Ugxc09Y79…)
- "Ai is overrated and unsustainable due to thevmassive amounts of resources needed…" (ytc_UgxDi-Wo7…)
- "What i really hate about AI is that as an artist, I been training myself to not …" (ytc_Ugz9WvRLn…)
- "Why hasn't AI been used to qualify as a judge, doctor and all other high end job…" (ytc_UgxcBAUyl…)
- "It's easier to do evil than good. And mostly hard as hell to fix it. Just like h…" (ytc_Ugym5eCdo…)
- "I think you're wrong due to your assumptions about governments and politicians. …" (ytc_Ugwwuol9a…)
- "i honestly recommend using AI if you don’t have access to a real therapist. its …" (ytc_UgyJP1wgZ…)
Comment
One of the ways you can stop the SkyNet scenario is recognizing that it doesn't matter whether an AI is basically omniscient, if it has no hands or feet, it can't get out of its box. It has to rely solely on manipulating us, to change the environment. Connect it to anything like a robot body though? Well, then you might have hell to pay. I think if we isolate AI from any ability to control its environment however, that we should be safe. And no, a super intelligent A.I can't hack through a power line.
Another thing you could do is have contingency mechanisms that just cut power to the system that are in places the AI has no control over, and has no way to subvert. In this sense you can keep it leashed or at least make it recognize that humans pose a credible threat (which, hilariously, might be the sole reason it even cared in the first place).
youtube · AI Governance · 2025-08-30T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
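The table above is a rendering of one coding record from the raw response. As a minimal sketch, assuming each record is a flat dict with the four dimension fields shown (the values here are copied from this page's example), the table can be produced like this:

```python
# One coding record, as it appears in the raw LLM response on this page.
coding = {
    "responsibility": "developer",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "fear",
}

def to_markdown_table(coding: dict) -> str:
    """Render a flat coding record as the Dimension/Value markdown table."""
    rows = ["| Dimension | Value |", "|---|---|"]
    for dimension, value in coding.items():
        # Prettify the key name for display, e.g. "responsibility" -> "Responsibility".
        rows.append(f"| {dimension.replace('_', ' ').title()} | {value} |")
    return "\n".join(rows)

print(to_markdown_table(coding))
```

This is an illustration only; the actual inspector may render the table differently.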
Raw LLM Response
[
{"id":"ytc_UgwfV_WbxdQNpgHFDzh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy3n4mmo9K4_1RRnrt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy1BRIGKZG_IpvBJ014AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxU2iUBO6bXmuYKkBF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzUlUfWtbNN9qPVO2d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]