Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "No this doesn't make sense. Facial recognition is very sophisticated. You can't …" (ytc_UgxSLK16t…)
- "A.I. Facial Recognition Amazon Ring cameras, Flock,cctv STARSHIELD,Skynet we kno…" (ytc_UgyPU2Enz…)
- "WHY the hell isn’t anyone talking about the reality that AI is gonna TAKE OVER f…" (ytc_UgzlFUp8c…)
- "Its crazy to think a driverless semi can drive on the roads without doing pre tr…" (ytc_Ugx_pI-LO…)
- "Current AI is building tools for surveilance states. Enhanced patern matching an…" (ytc_UgzvBR3xx…)
- "Dear god : You go to a fast food place and it's just self help kiosks and a she…" (ytc_UgwUlNk4W…)
- "Its a world of self driving cars but heavy objects qare just randomlly falling o…" (ytc_Ugyr0RMpI…)
- "...so would the AI companies have taken all published material from The Governme…" (ytc_UgxTNGxU0…)
Comment
25:00 we need to start over from scratch with our political systems with the basic premise that if an action is either malicious or negligent then it's illegal, regardless of technicalities, and punished wrongdoing to the degree of the harm caused, it would solve this problem. It's malicious to convince people to act against their own self-interest. It's negligent to allow a technology to be developed that could end all life on earth. If we had an AI that adhered to and enforced those principles, we'd be fine. Getting it to do that should be the ultimate goal.
I'm convinced Elon Musk has convinced Trump that ai will make Trump the God emperor of humanity.
youtube · AI Governance · 2026-03-15T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxbcI11Epr5m0_lorR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzSvSbJvYN-z6yxe6d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzTieZQoMqARKB4npp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwp7MbTWsccX5iEhX14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyeQVr7sEcChcbrcyN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxWGwSkCPK_Omuo82F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwnxDKwTBu9AYHQ6YN4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwD4F1xFp9vxwx0_Zt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwyjjxN0Asym93qws14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyMzIo74sVndqBBsMt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
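The raw model output is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and looked up by ID (the `index_by_id` helper is hypothetical; the field names and two sample records are taken from the response above):

```python
import json

# Two records copied from the raw LLM response shown above.
raw = '''
[
  {"id": "ytc_UgxbcI11Epr5m0_lorR4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwp7MbTWsccX5iEhX14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
'''

def index_by_id(raw_json: str) -> dict:
    """Parse the model output and index each coding record by its comment ID."""
    records = json.loads(raw_json)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw)
print(codes["ytc_UgxbcI11Epr5m0_lorR4AaABAg"]["policy"])   # -> regulate
print(codes["ytc_Ugwp7MbTWsccX5iEhX14AaABAg"]["emotion"])  # -> fear
```

Indexing by ID mirrors the "Look up by comment ID" affordance of the page: once the array is parsed into a dict, any coded dimension for a given comment is a constant-time lookup.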