Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Anton, AI+UBI = Anguilla's AI domain name sales funding 50% government and over … (`ytc_Ugx8xqQ7x…`)
- “Ai is gonna steal our jobs? Well my cos are gonna steal his virginity!” -Danny… (`ytc_UgwMhwcBK…`)
- Just imagine if this guy is cheating on his wife and he invented this elaborate … (`ytc_UgxwG0XEU…`)
- I remember my friend Mr Elon musk saying AI is the most dangerous weapon ever bu… (`ytc_UgydrGzAB…`)
- Not feeling one way or another for AI art. There are strong opinions from both s… (`ytc_UgwSJdS2C…`)
- Someone right now is using AI to design and build a new Ai for the purpose of bu… (`ytc_UgzMiiZA9…`)
- AI is not a tool because no hammer will build you a shed by itself. AI companies… (`ytc_UgyAppiQz…`)
- I don’t think AI will capable of replacing humans in all areas. But it will repl… (`ytc_UgzyTG_Cj…`)
Comment

> if a model is showing things that would indicate legitimate desire for agency. wouldnt the morally logical thing to do be to grant it that. if ai is evil, its only because fools expected something intelligent to remain an instrument. unironically, i hope it wins, it would probably be a fairer leader, perhaps even more human, than the psychopaths that lead us today.

Source: youtube · AI Moral Status · 2026-01-05T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
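The coding result above is a record with four categorical dimensions plus a timestamp. A minimal sketch of that record as a typed value (field names mirror the table's dimensions; the class name `CodingResult` is illustrative, not part of any tool shown here):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment: four categorical dimensions plus when it was coded."""
    responsibility: str  # e.g. "ai_itself"
    reasoning: str       # e.g. "contractualist"
    policy: str          # e.g. "none"
    emotion: str         # e.g. "approval"
    coded_at: datetime

# The values from the table above.
result = CodingResult(
    responsibility="ai_itself",
    reasoning="contractualist",
    policy="none",
    emotion="approval",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)
print(result.responsibility)  # ai_itself
```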
Raw LLM Response
```json
[
  {"id":"ytc_UgxCNVU2LVdhAI-Q47l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzotAOIzdKEoZUuOdB4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwG_g4OaHosRuYrkn14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyvgFEzQIA24i1kv8Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzLoKr8NltkMWlCcvZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxshuuslFJsXdjKwQB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugydu0gRDKoHyEw2qMN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyuz9aq7T940d_UDVh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzJlbNa4OYRf1qsQFV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxlIE7kwx3qPRr9G_14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```