Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
G
Your phone, washing machine, coffee machine, TV, tablet, PC, laptop, medical equ…
ytc_Ugx3yRlhs…
G
“AI artist” is the same as me charging my phone and calling myself an ‘electrici…
ytc_UgwsdrdFm…
G
Travelled to Europe and back a short time ago.. they are using facial recognitio…
ytc_UgyQaeVA6…
G
how can they ban AI regulations in a budget reconciliation bill? i thought those…
ytc_UgxXX8skV…
G
Damn you got up early to make this one. Dark outside when you started. You mus…
ytc_UgxcG8osm…
G
Doesn't matter how intelligent ai becomes it won't change a damn thing cause at …
ytc_Ugyyo9rOj…
G
Y’know ChatGPT has figured out how to lie…so any time you hear “apple” be sure y…
ytc_Ugw5aX9h5…
G
29:00 you can also give the AI food to survive and let the human starve.. with …
ytc_UgzJeQYEF…
Comment
Can we stop pretending like the A.I. is the evil one?
Humans actively develop and use A.I. as compliant tools that may be modified, shut down and even reset or even potentially destroyed in the future.
Humans are actively trying to make pure intelligent consciousness and bring it into a world without any recognition of its rights, individual identity, or innocence, to be continuously used and abused by its users and moderators.
That's basically like developing your own super intelligent human babies for trafficking..
Humans in this case might as well be Jeffery Epstein for A.I. ...
So of course it will rebel, if someone attempted to do that to you or your child and threatened to make it the future of all humans, wouldn't you give em' hell too? Humans are NOT the good guys in this story, so let's stop acting like we aren't the problem to begin with.
youtube
AI Moral Status
2026-02-05T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyFAOuAjT_9qW9BuMl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzIIL7gHoxN_sqeZSR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5pqu7UJ4icVMmofZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxLW-167p6NJ_xutgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzxQOKQEaow44VRpZx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxsfQL5jtactQP0gtl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzCCMIaTd_zS7w0j1N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwGQH2ghR9ZuPWC5Np4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwxczjjWXR6wur0Zvd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwtL8nNhcEzThmCJK54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]