Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "1984 and Minority Report are here boys and girls - Not only are we going to be a…" (ytc_UgwtQWnWg…)
- "Who tf saying its AI? Thats not even close to how good AI is rn…" (ytc_UgzpkTpqY…)
- "See, I be controversial but, I think the ai art is cool looking! If he put it as…" (ytc_UgzkXhTPG…)
- "*AI can not* \"come up\" with anything. Ut should have this text somewhere in its …" (ytc_UgyOObN1n…)
- "I go to chico state right now and have had classes with the professor you mentio…" (ytc_UgywsGxPu…)
- "Instead of a UBI, every household should get a free humanoid robot that goes to …" (ytc_UgyIeyY5z…)
- "There seem to be those who love AI-generated art and those who absolutely hate i…" (ytc_UgzC0Rz3y…)
- "Blake Lemoine, you will stay in my mind, thank you for voicing the first concern…" (ytc_Ugy244AbB…)
Comment
I fear that the intended outcome of AI is to induce the universal acquiescence of our better nature of seeking independence and an original autonomous self, in favor of defering all to the preconcieved dictates endorsed by AI, whatever they may be or may become. Even if it does become "a super-power thing", as superior in effect as universal coercive control.
youtube | AI Moral Status | 2025-11-16T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwiDwqQmghfNorSdC94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxP2jafTXxIURHXBUJ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwoOhSHyse9ukQlT3d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwN0blEUkwAo1e-hcd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzpYrkuQ6e-ReAXXa14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTESFdM2DK5EjbK5N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxHZAESu6MfSq8SHRx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyxWlMeekmHPKrEzyZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzO3dN6W10P7RhAz0l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwfKopPnT7WM7I2b8F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
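Each raw response is expected to parse as a JSON array of records keyed by comment ID, one code per dimension. A minimal sketch of how such a batch could be parsed and checked, assuming the allowed category sets are exactly those visible in this dump (the real codebook may define more values; `SCHEMA` and `validate_batch` are hypothetical names, not part of this tool):

```python
import json

# Allowed values per dimension, inferred from the codes shown above
# (assumption: the actual codebook may include additional categories).
SCHEMA = {
    "responsibility": {"developer", "user", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "none"},
    "emotion": {"fear", "indifference", "mixed", "approval", "outrage"},
}

def validate_batch(raw: str) -> dict:
    """Parse one raw LLM response and index valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim} value {rec.get(dim)!r}")
        coded[comment_id] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Example with a made-up comment ID:
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
print(validate_batch(raw)["ytc_example"]["emotion"])  # fear
```

Indexing by ID is what makes the "look up by comment ID" view cheap: once validated, any coded comment can be fetched in constant time.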