Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Lol it's AI watch how the Ford silver liner widens and a logo disappeared with a…" (ytc_Ugxxtcqcc…)
- "Well, the cat is out of the bag. The software that generates the images (Stable…" (ytc_UgwkIxRsH…)
- "Sora ai doesn’t have a gun to their head, THEY HAVE A FUCKING WAR TO THEIR HEAD…" (ytc_UgzbJ8iW5…)
- "AI will have it's place in society. Automation has always been a threat to someo…" (ytc_UgzcULqWi…)
- "Ai is just programmed to mimic human communication. Lying is a human concept. …" (ytc_UgwW9FYB9…)
- "Baba Vanga predicted , and Nostradamus predicted. Also old testament prophets pr…" (ytc_Ugw4IM_w-…)
- "Ai \"art\" cannot ever be better than human-made art, because it is meaningless. T…" (ytc_UgwJ02oK_…)
- "If AI can replace politicians who have self interests and claim to represent you…" (ytc_UgzsPQRyJ…)
Comment
I feel like the only solution would be based on the computing power required to run the AI be centralized. Sure, running instances across multiple systems globally could potentially allow it to function but not at a degree in which it would be "super intelligent". Isolating crucial infrastructure in such a way that controls are analog and still controlled by humans (ironically a step back to an analog interface) and information being provided to the AI on a on-way basis. Until the superintelligent AI has the ability to interface directly with robots capable of physically interfacing with our world, we could isolate its ability to utilize or disrupt crucial infrastructure. Limiting the connectivity of the robots would be essential perhaps to the point of them running on general AI models and radio / wifi communications would mitigate the "boss" AI from influencing them directly.
Just brainstorming, very interesting / maddening thought experiment. I'm both pleased and regretful I watch this interview 😅
Thanks for continuing to serve up pure gold
Source: youtube · Topic: AI Governance · Posted: 2025-09-27T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgwcdlyjVFVdC85NfRF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgztxRT3IGpMxo3HLZx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgzHzwkSOfRtd4awWOJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugy6H3o5-9LKOtNxzTt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgxdBDxjSE5L8i4M4op4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgxECd8tSrWj_8eQB_x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugx8eqGsvV1pM_9sGWN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgxCShTmQm9oc4vc-K54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgzHoHsgv6qYNyGELSJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgyPtcVrGD5mkPJ_hbl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
```
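Since the model returns one JSON object per comment, looking up the coding for a given comment ID is a matter of parsing the batch and indexing it. A minimal sketch in Python, using only the standard `json` module; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) and the two sample rows are taken from the response above, while the variable names are illustrative:

```python
import json

# Raw batch response from the coding model: a JSON array with one
# object per comment (two rows copied from the response above).
raw_response = """[
 {"id":"ytc_Ugy6H3o5-9LKOtNxzTt4AaABAg","responsibility":"distributed",
  "reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgyPtcVrGD5mkPJ_hbl4AaABAg","responsibility":"developer",
  "reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

# Index the batch by comment ID so any coded comment can be inspected.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugy6H3o5-9LKOtNxzTt4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # distributed fear
```

Keying on `id` also makes it easy to spot batch problems such as missing or duplicated comment IDs before the codings are merged back into the dataset.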