Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the discussion about “AI rights” is premature. Current AI systems don’t suffer harm and don’t experience shutdown as loss, fear, or pain — so there’s no moral basis for granting them rights in the way we grant protections to humans or animals. A more urgent question, in my view, is regulatory: when will we put guardrails in place to prevent AI systems from being designed to treat shutdown as a harm? An AI that is indifferent to shutdown is controllable. An AI that is coded to resist shutdown introduces existential incentives that could become dangerous to humans. That’s the risk we should be focusing on now.
youtube 2026-02-07T02:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwoXsJ8CyjpeEBxVzx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw2BeeWtYDTXDgD6jl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzJy525o4uk1w82cuN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgznHUSheQH6F3n7Ax14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxFFabcKC_5Z6HbKD94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz8hfacgTX1MD5xG-J4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwsEEpJ8IufH9nmqW94AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzlzf_pNsr_xM91t7d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzsxJyXfmmOllUhnDB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw2D9w2kKvEuv8D39p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
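The raw response is a JSON array with one object per comment, keyed by comment id and the four coding dimensions shown above. A minimal sketch of how such a response could be parsed and validated is below; the allowed-value sets are inferred from the values appearing in this sample response, not from the pipeline's actual schema, so treat them as assumptions.

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the sample response above.
# These are assumptions; the real coding scheme may permit other values.
DIMENSIONS = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw batch response, keeping only rows whose values are valid."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in DIMENSIONS.items())
    ]

# Hypothetical usage with a one-row response:
raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}]'
codes = parse_codes(raw)
print(Counter(r["emotion"] for r in codes))  # Counter({'indifference': 1})
```

Filtering out rows with out-of-schema values (rather than raising) matches how a batch coder typically handles occasional malformed model output: good rows are kept, bad rows can be re-queued.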