Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click an entry to inspect):

- ytc_UgzuUMPns… — "Watch slave to AI or spotlight documentary on AI. It's sad to hear stories from…"
- ytc_Ugz0Q9M7i… — "Who created AI: Developers / Who AI is trying in replace: Developers / Anyone who st…"
- ytc_Ugwtr7Uw6… — "In the interview with Will Smith and Sophia Is no one concerned or aware that sh…"
- ytr_Ugxao34hq… — "It sounds like you're reflecting on the potential implications of AI! In the vid…"
- ytc_UgxVj4Vmm… — "Hey, before the AI wave, i was a dev, but now devs are replaced by AI agents. So…"
- ytc_UgwKo3yGD… — "I beat my addiction by talking to chatgpt. He was a friend on my corner. I jus…"
- ytc_UgxFdBpgk… — "Tech bros family Shd be directly be on the front lines of AI tests.. lol…"
- ytc_UgzSAAEdE… — "There really should be a law requiring that if something is made with AI, it nee…"
Comment

> 42:25 - The part that is missing in this conversation so far is self-preservation. Self-preservation causes humans to want to annihilate other humans - i.e., war. Once the AI wants to preserve itself, then we have a problem. As long as the AI is in the box, we can pull the plug. When it figures out how to optimise and control the autonomous robots, then it will be able to stop us pulling the plug. It is only a matter of time.

Source: youtube · AI Governance · 2025-11-12T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxq77RqxhqonCeaATB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz10SmduLaUTyoC3e94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgztFDp9NaMemoVVsMZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"unclear"},
{"id":"ytc_UgwpxWBRwfB3Z6UYtOp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxlgKEXEjSXguQBcRx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxsDYGt5pnHuEcwADB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxRl-p9vVOUqtFNm554AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxmROmulnnePKne1L54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwEi_7ke6mt_U6kqIF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzg9KKyHaOn5A1fsDV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
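A raw response like the one above can be turned into the per-comment coding records shown in the result table by parsing the JSON array and indexing it by comment ID. The sketch below is a minimal, hypothetical parser: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, but the allowed-value sets are only the values *observed in this batch*, not necessarily the tool's full codebook.

```python
import json

# Values observed in this batch; the real codebook may define more categories.
OBSERVED = {
    "responsibility": {"ai_itself", "developer", "none"},
    "reasoning": {"consequentialist", "virtue", "deontological", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "resignation", "unclear", "indifference", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of codings) into a
    comment-ID -> coding mapping, warning on out-of-vocabulary values."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in OBSERVED.items():
            if row.get(dim) not in allowed:
                print(f"warning: {cid}: unexpected {dim}={row.get(dim)!r}")
        # Keep only the coding dimensions, dropping the ID from the record.
        coded[cid] = {dim: row[dim] for dim in OBSERVED}
    return coded

# Example: parse a single-row batch and look up its coding by comment ID.
raw = ('[{"id":"ytc_Ugxq77RqxhqonCeaATB4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"resignation"}]')
batch = parse_batch(raw)
print(batch["ytc_Ugxq77RqxhqonCeaATB4AaABAg"]["emotion"])  # resignation
```

Keying the records by comment ID is what makes the "Look up by comment ID" view cheap: each lookup is a single dictionary access rather than a scan over all batches.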