Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I am a little more than 3/4 of the way through Karen's book. I used to work in t…" (ytc_Ugz5I-mby…)
- "That's what the cameras are for. The workers were hitting bonuses that were mea…" (ytr_UgwObxMrf…)
- "She is really trying to sell the idea that we are mindless creatures with no sel…" (ytc_Ugy8lyNaJ…)
- "Took half a edible chocolate chip cookie and was too high and was thinking about…" (ytc_Ugwlmr8Yv…)
- "Also i don’t get what AI allows you to access because you are accessing other pe…" (ytc_UgxE9X1yy…)
- "@lepidoptera9337 Yes, but the question is really regarding how to ascertain ‘con…" (ytr_Ugya6EgdW…)
- "The elites don't care, they will let us die in poverty. Luigi Mangione had the r…" (ytc_UgwnuQJur…)
- "Did you follow the path your parents set for you, without ever choosing an optio…" (ytr_Ugzidomk-…)
Comment
I think the discussion about “AI rights” is premature. Current AI systems don’t suffer harm and don’t experience shutdown as loss, fear, or pain — so there’s no moral basis for granting them rights in the way we grant protections to humans or animals.
A more urgent question, in my view, is regulatory: when will we put guardrails in place to prevent AI systems from being designed to treat shutdown as a harm?
An AI that is indifferent to shutdown is controllable. An AI that is coded to resist shutdown introduces existential incentives that could become dangerous to humans. That’s the risk we should be focusing on now.
youtube
2026-02-07T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwoXsJ8CyjpeEBxVzx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw2BeeWtYDTXDgD6jl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzJy525o4uk1w82cuN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgznHUSheQH6F3n7Ax14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxFFabcKC_5Z6HbKD94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz8hfacgTX1MD5xG-J4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwsEEpJ8IufH9nmqW94AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzlzf_pNsr_xM91t7d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzsxJyXfmmOllUhnDB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw2D9w2kKvEuv8D39p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
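A raw response like the one above can be parsed and checked against the coding schema before the values are stored. The sketch below is a minimal example: the allowed values per dimension are inferred only from the sample output shown here, so the actual codebook may include values this snippet does not know about.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the real codebook may define additional values not seen here.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "approval", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response and validate each record.

    Returns a mapping from comment ID to its coded dimensions;
    raises ValueError on an out-of-schema value.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Running this over the response above would, for instance, map the last record's ID to `{"responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}`, matching the Coding Result table.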