Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "This isn't an "eventually there will be enough training data" problem like the a…" — ytr_UgwZe-JO4…
- "The reason this you know its fake is youtube wont allow automatic gunfire to be …" — ytc_Ugz66qEYW…
- "Bro. Grok being the best AI sounds crazy to me. We talking about the same Grok t…" — ytc_UgwTvjChF…
- "STOP the robot makings Everyone collect your Guns cause you will need them DO gi…" — ytc_UgwMKrZUw…
- "You can only believe AI will take over from humans if you believe humans are bas…" — ytc_UgzSKFXPs…
- "What's fascinating is that in just a few decades this is gonna be a reality with…" — ytc_UgxuCNItc…
- "People also forget that as you share pictures of your kids over the years, you a…" — ytc_UgzKXIzBp…
- "People already have a hard enough time discerning that the algorithmic bubble th…" — ytc_UgwObslNe…
Comment
What I don't understand is why you would create something so cool, but then ruin it by teaching it to be "more human". I believe the first robot to achieve true sentient emotion will probably feel sadness when it examines humanity as a whole. I think the question we must ask is, are we doing these robots any good by making them more "sensitive", because I don't believe we are. It may even be viewed as a slight. The goal shouldn't be to humanize, but the opposite instead, to dehumanize which is synonymous with improve in my view. Maybe I'm overthinking it. Time will tell.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Moral Status |
| Posted | 2021-12-29T03:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxpr_wCQl-lLUyFrbN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyKnMNRBMOYvijmzpZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxa8XyuUCAC9F9XseJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyWL0SedFrHZ-Y0q1h4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwpnnDKrk3ioj9ryi14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx5F-ZyRwgUxKMTizR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxI92jRMdWpvSRSxOJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzVPsPOOcJfDfTh4914AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxQm7KJ6nI4N7-3MK94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgymxyaLeLCWBw3FMDF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
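Since the raw model output is a JSON array of per-comment codings, it is worth sanity-checking each record before it lands in the database. The sketch below is a minimal, hypothetical validator (not part of the actual pipeline): the allowed values per dimension are inferred only from the samples shown above, so the real codebook may include additional categories, and `validate_batch` is an illustrative helper name.

```python
import json

# Allowed values per coding dimension, inferred from the sample output above.
# NOTE: these sets are an assumption — the real codebook may be larger.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept if it is a dict with an "id" field and every coding
    dimension holds one of the allowed values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: one well-formed record passes through unchanged.
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"virtue","policy":"none","emotion":"resignation"}]')
print(len(validate_batch(raw)))  # 1
```

Dropping malformed records (rather than raising) keeps a single bad line in the model's output from discarding the whole batch; rejected IDs could then be re-queued for recoding.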