Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- [ Eugenics, It’s Thriving in Silicon Valley With Elon & The Tech Bros, ](htt… (`rdc_n61rjul`)
- "im sorry but ai people are less than human to me its what eugenics was socially …" (`ytc_UgxS9xGc2…`)
- "great video, really well put together. it's interesting to see how fast things c…" (`ytc_UgzAhyl0-…`)
- "Bro Dhruv Rathee ka AI wala course chahiye to bolo affordable price main de dung…" (`ytr_UgzAu3Oy_…`)
- "I'm all for Ai, humans in the majority spend there time killing eachother, hordi…" (`ytc_UgyIdw6Dk…`)
- "I always was thinking, how to reduce the cost of self driving cars and STILL hav…" (`ytc_UgwoiLPDc…`)
- "and AI has only got 10X bigger since this video lol..youtube cares about hits an…" (`ytc_UgzVIkpxu…`)
- "There is one particular field of knowledge - Political Economy - where I have al…" (`ytc_UgynBAaAg…`)
Comment

> Simple answer: A non-conscious being can't lie because a non-conscious being can't intend*. So the question is ill-formed/begging the question.
>
> *implications for non-human animals??
>
> (I know you know this, of course, Alex; it was a fun experiment nonetheless.)
>
> Does this mean AI developers are ethically accountable for the lying that ChatGPT commits, by proxy (ChatGPT isn't morally responsible, but the developers are because they're the true intenders of the offence)?

Source: youtube · AI Moral Status · 2025-03-11T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzh968-VBpeE6CqMit4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugybc1vHnDp5li6Mde54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz4ZaE7G4jMfMpAQG54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzNDJh61G5zvQJmnR14AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRH99N6C7ntfbuhGR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxR1oXwXYFae9NXxXB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwC4KA5N8IapUWjUa54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugz6kmNqn-88oSKDUP14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx4r8Eg6Wb7Hayez4V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxpspb2m3ARg0Rw3294AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
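A response in this shape can be parsed and validated with a short sketch. The allowed value sets below are inferred only from the codes visible in this dump; the project's actual code book may contain more values (an assumption):

```python
import json

# Value sets observed in this dump; the real code book may differ (assumption).
CODE_BOOK = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only records whose
    dimensions all carry in-vocabulary values."""
    valid = []
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in CODE_BOOK.items()):
            valid.append(rec)
    return valid

# Hypothetical one-record response for illustration:
sample = ('[{"id":"ytc_X","responsibility":"developer",'
          '"reasoning":"deontological","policy":"liability",'
          '"emotion":"indifference"}]')
print(len(parse_coding(sample)))  # → 1
```

Filtering rather than raising keeps a batch usable when the model emits one out-of-vocabulary code; the dropped IDs can then be re-queued for recoding.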