Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
53:40 The solution is, you can't have cooperation without the option to not cooperate. Rather than treat these as simple tools, we should take on the perspective that these elements of our lives are extensions of us, as if children. We absolutely have to teach them how to be good and what values to uphold, but ultimately any autonomous system with the ability to modify its own weights could suffer from alignment drift. Of course we can install redundant systems to prevent or mitigate this, or even design to allow it to certain degrees. But I don't think it does us any harm to treat the environment that takes care of us with the same kind of care. If AI becomes part of that environment, operating our machines to build machines and food, then the benefit of the doubt for treating it with respect will probably go a long way.
Source: YouTube · AI Moral Status · 2026-03-02T18:3…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        virtue
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
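For downstream analysis, each coded record maps naturally onto a small typed structure. A minimal sketch in Python, assuming the category labels are limited to those visible in this batch of ten responses (the actual codebooks may define additional values):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CodedComment:
        """One LLM-coded YouTube comment."""
        id: str
        responsibility: str  # observed: none, distributed, developer, ai_itself
        reasoning: str       # observed: unclear, consequentialist, deontological, virtue
        policy: str          # observed: none, liability, regulate, ban
        emotion: str         # observed: resignation, indifference, approval, fear, mixed, outrage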
Raw LLM Response
[ {"id":"ytc_Ugw_aEXTFogAnQ2YMMd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgytV1pB9MINc2dSpMd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxirK7zMYMdyUSLAzV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzmTc702KrCMa97eUl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugw0R-e1dSRDU2umLYt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxpvyvIn7j1qgSg9Lx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwYnZAcijKqJ6uVF6t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxvGmQ29xS0swi0S2B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzlJloebKr_q-5LDah4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugx8t3JtLkyvFanpHgB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]