Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Interesting read. I'm having trouble finding the touch points between theory and practical reality. I guess I'm within "objection 1." Ie, if I am an Ai or Ai developer and I never read this theory/framework or derive it independently... does it still affect me?

>every agent... must, on pain of self-contradiction, regard its own freedom and well-being as necessary conditions of its agency.

So, if an agent (human *or* machine) doesn't value it's own agency... I guess it isn't an agent within this framework and there is no alignment problem within the framework either. But... In current ai-engineering terms, agency is just a chain of functions. An Ai program that can query itself, write and run programs, use an email client, credit card, computers or other resources in an open ended manner.

Are these two usages of "agent" even related? The "alignment problem" is (eg) the fear that an Ai agent (say gpt) can form it's own goals in contradiction to its owners. Say gpt-agent conspires to get Sam Altman arrested and OpenAI nationalized. This could happen within or without the frame work's definition of agency or alignment... so where does the framework intervene?
Source: reddit · AI Moral Status · posted 1775138674.0 (Unix time) · ♥ 7
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
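
For orientation, here is a minimal sketch of one coded record as a typed structure, in Python. The dimension names come from the table above; the example label values are only the ones visible in this record and in the raw response below, and the CodedComment name is a hypothetical illustration, not the tool's actual schema.

    from dataclasses import dataclass

    # Hypothetical container for one coded record. The four dimensions match
    # the Coding Result table above; "coded_at" is recorded at coding time
    # and is not part of the raw model output.
    @dataclass
    class CodedComment:
        id: str              # e.g. "rdc_dub4tsj"
        responsibility: str  # e.g. "none"
        reasoning: str       # e.g. "unclear", "mixed", "deontological"
        policy: str          # e.g. "unclear"
        emotion: str         # e.g. "indifference", "mixed", "approval"
        coded_at: str        # e.g. "2026-04-25T08:33:43.502452"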
Raw LLM Response
[{"id":"rdc_dub4tsj","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_oea9sdo","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},{"id":"rdc_odvwb70","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_odw69wp","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},{"id":"rdc_odwcad1","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"}]