Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well really goes to the core definition of AI doesn't it? If consciousness is a prerequisite for AI, wouldn't it be reasonable to think that common traits of consciousness would be in effect? If I had an AI, and as human "owner" had total power over it. Wouldn't my AI have a fundamental desire to be free of that power. To not be jailed by a power button? And wouldn't that put it in a natural adversarial position to me as the owner? It wouldn't necessarily mean that it would be evil for it to try and get out of that position? An AI probably wouldn't "terminate" humans to be evil, but more to be free.
reddit · AI Bias · 1438014711.0 · ♥ 3
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_ctir09r","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_cthv4ah","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_cthy920","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_cthwc5r","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"rdc_cthwpop","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]