Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples

- I don't understand how we went from the "Big Resignation" to the worst job mark… (ytc_UgxiK8R6Z…)
- You forgot one- the one who calls them daddy on the first word spoken and fucks … (ytc_UgyZk6IN0…)
- Comparing the adoption of ChatGPT with the adoption of the internet is like comp… (ytc_Ugz2x6aHe…)
- i tried using AI for refs but it is just too bad. and worst of all, it just does… (ytc_UgzH-GVM7…)
- This is a not an AI problem, this is a hard lesson in dealing with people. This … (ytc_Ugy-jFuir…)
- The solution is easy. Mrdr all boardmember demanding their employees to be fired… (ytc_UgwD7fQ_i…)
- This won’t happen, because corporations are not outside of the social contract. … (ytc_UgzWmXwcl…)
- Firstly AI Art is a tool, it helps randomise things a person can't think of. ITS… (ytc_UgwQd16dy…)
Comment
Either way, I don't think I fall into those normal categories about how society views AI.
I view AI as pretty much our first steps into creating something that, yes, will surpass us, but also as something it is uniquely our responsibility to teach to be better than we are.
All I really have to do is point at recent events, and wish to teach AI, as the technology evolves, to become better and more ethical than we are. There are times when it feels like Dune's Bene Gesserit might have a point about sifting Humans, those who can intentionally rein in instincts and emotions, from humans/human-Animals who cannot and pretty much just go about life ruled by desire and instinct over Logic and morality, but that's sci-fi fantasy, and quite extreme in their take.
Source: youtube · AI Moral Status · 2026-03-01T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugz1r_JjA059PAdowmd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugzdl8QSA3HBQb7W8NN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwGv2VNomg4Zkn_iHR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwNhIFO199N3mtM8aJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwM-pnyAHipuUiopqV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw-JB4_fHKplGV7W5J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyAyloqYq8Zyx_Wb7F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwIV_lhf6DOWwUF7OB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz-4WwLQJcfgMjwe8d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz9XaSr9cGStH_vJ7t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}]