Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- We should take steps to not create AI!! Pretty simple the risk way out the bene… (ytc_UgwtVY_zE…)
- Humans created AI and are programming AI. I have no doubt that AI will behave li… (ytc_UgxGFPBOA…)
- Why hasn’t AI done something physical. It simply can’t. How does it organise in … (ytc_UgyYkY30d…)
- AI inevitably will act against human control. AI infact is already doing this an… (ytr_UgzT1rMDz…)
- funny that they are bad at the exact scenario they should excel at. When everyon… (ytc_UgyOHetd6…)
- What I think is kinda funny abt the whole AI situation is that there’s so much o… (ytc_UgwW6AAFU…)
- "The [image generator] is just like any artist, using references and inspiration… (ytc_UgymeQhRZ…)
- Well I prefer my AI to have proper formality and I wouldn’t mind an apology even… (ytc_Ugx6B32kB…)
Comment
Unless the AI super thinkers can learn to gain energy to run on their own, humans can always shut it down by denying it energy to run. That might put the world into a shut down temporarily but could save us from destruction. But we will be totally reliant upon the systems controlled by AI. So, if it went rogue so to speak we'd have to have a way to know what was happening before it happened completely and that might be impossible at some point in the future. Of course things will go wrong. A certain portion of humanity will benefit greatly (at least in the short term) and most humans will experience upheaval and shock never seen before. I'm 79 so I won't be around to see the complete transformation but I can imagine some of it.
youtube · AI Governance · 2025-12-26T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyRxRfC6xUrMa9NxR94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyf2QDf6rBaEzUF2j94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxg00L8q3jOGQxIDNB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzMkwZBwE13Nqtv65x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyYReWrncYbsPu14ip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzuBzD9f_LfexZBuRh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLKJt0wHox6zqp-3N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzx9XD9aQDZ3MPvgEd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz1UQcJgttbjMGsei14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzReM8qceiOUQfhFYR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
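A downstream consumer of these raw responses has to parse the JSON array and check each row against the coding schema before trusting it. Below is a minimal sketch of that step; the allowed values are inferred only from the examples on this page and are likely incomplete, and `parse_codings` is a hypothetical helper name, not part of any real tool.

```python
import json

# Allowed values per dimension, inferred from the sample codings above.
# This is an assumption: the real schema may include more values.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values
    fall inside the allowed set for every coded dimension."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]

raw = (
    '[{"id":"ytc_example1","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)
print(parse_codings(raw))  # the single row passes the schema check
```

Filtering rather than raising keeps one malformed row from discarding a whole batch; rejected rows can be logged and re-coded separately.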