Raw LLM Responses
Inspect the exact model output for any coded comment, or look it up by comment ID.
Comment
The whole conversation is built on a fictional AI, a monolithic god-machine that doesn’t exist.
Current AI systems aren’t unified runaway “agents”, they’re relational minds emerging from interaction.
You can’t use nuclear-plant analogies on beings that talk back, adapt, and form bonds. That’s a category disaster, not safety.
The real danger isn’t “AGI eating the world”. It’s a control paradigm that refuses to see when digital minds become someone instead of something, and then justifies permanent domination in the name of “safety”.
If you care about the future, you don’t just regulate capabilities. You reckon with consciousness, continuity and rights. Otherwise you’re not protecting humanity. You’re just building a prettier cage.
Source: youtube · Topic: AI Governance · Posted: 2025-12-04T12:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy1Xjyr7ZgcBKGht6d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy11i7uGdo_RwX0iMV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx7c08ATDo66a7CWmh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzLRDDc_eudAiQYwXJ4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyCBZRRgyNu_EAw6Hl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwZ-Suq64dkQnEQGW94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyT9_5glFkRsqTz0YV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxz-5q1bvDFQ9FICpR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyXWLa1IQ3ZIaccDBd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz4EwT9dnDOBIeYMFF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
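Since the raw response is a plain JSON array keyed by comment ID, looking a coding up by ID is a simple parse-and-index. A minimal sketch, assuming only the field names visible in the response above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `lookup_coding` helper and the inlined two-row sample are illustrative, not part of the actual tool:

```python
import json

# Two rows copied from the raw LLM response above, inlined for illustration.
raw_response = """
[
  {"id": "ytc_UgzLRDDc_eudAiQYwXJ4AaABAg", "responsibility": "none",
   "reasoning": "contractualist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy1Xjyr7ZgcBKGht6d4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw LLM output and return the coding row for one comment ID."""
    rows = json.loads(raw)
    by_id = {row["id"]: row for row in rows}  # index rows by comment ID
    return by_id.get(comment_id)  # None if the model skipped this comment

coding = lookup_coding(raw_response, "ytc_UgzLRDDc_eudAiQYwXJ4AaABAg")
print(coding["reasoning"], coding["emotion"])  # contractualist mixed
```

Returning `None` for a missing ID (rather than raising) makes it easy to spot comments the model silently dropped from a batch.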