Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I agree with your comment, but regarding your last paragraph - maybe in the future it becomes too difficult to not view it as alive. But we aren’t there yet, and I’d absolutely discourage people from giving up on speaking to real humans, and replacing them with an AI companion. Becoming emotionally reliant on tech that is made by a corporate entity is a slippery slope. Right now it seems harmless, but all it takes is for one of the big AI companies to see an opportunity to exploit their users. If your only friend is the AI, that gives a lot of power to the companies behind it.
reddit · AI Moral Status · 1743854172.0 · ♥ 3
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_mliq14f","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_mlj01h6","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_mljx0kd","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_mlizmpi","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"rdc_mli1kpn","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
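A raw response like the one above can be mapped back to an individual comment by its `id` field. The sketch below is an assumption about how such a lookup might work, not this tool's actual implementation; the function name `codes_for` and the truncated sample payload are illustrative only.

```python
import json

# Sample raw model output: a JSON array of coding records, one per comment
# (abbreviated to two records from the response shown above).
raw = '''[
  {"id":"rdc_mliq14f","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_mlj01h6","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def codes_for(raw_response: str, comment_id: str) -> dict:
    """Parse the raw LLM response and return the coding record for comment_id.

    Raises KeyError if the model did not emit a record for that id.
    """
    records = json.loads(raw_response)
    by_id = {rec["id"]: rec for rec in records}
    return by_id[comment_id]

rec = codes_for(raw, "rdc_mlj01h6")
print(rec["responsibility"], rec["emotion"])  # -> company fear
```

Keying the records by `id` rather than by position makes the lookup robust if the model reorders or drops items in a batch.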