Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Those are good points, but I still think you are seeing an AI that's acting on its own wants. A machine doesn't want anything; it responds to humans' wants and needs. My take is that the technology won't be the problem, humans will. If a human asks a computer to save the earth but doesn't create a command saying that killing humans is not an option, that's a human mistake, after all. It's like nuclear power: it is capable of creating clean energy and saving humanity, or of mass destruction. Accidents might happen if we are not careful enough, but at the end of the day, it's still a human problem.
reddit · AI Moral Status · 1655294512.0 (Unix timestamp) · ♥ 7
Coding Result
Responsibility: unclear
Reasoning: unclear
Policy: unclear
Emotion: unclear
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[{"id": "rdc_icg0n7o", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
 {"id": "rdc_icfwvfn", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
 {"id": "rdc_icg0goj", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
 {"id": "rdc_icg04dc", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
 {"id": "rdc_icg19wh", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"}]
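A raw response like the one above is a JSON array of per-comment records, each carrying an `id` plus the four coded dimensions. A minimal sketch of how such a batch response might be parsed and indexed by comment id for display (the variable names and the two-record sample here are illustrative, not part of the actual pipeline):

```python
import json

# Illustrative excerpt of a raw LLM coding response: a JSON array of
# records, each with a comment id and four coded dimensions.
raw_response = '''[
  {"id": "rdc_icg0n7o", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_icfwvfn", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the records by comment id so one comment's codes can be looked up.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up the coded dimensions for a single comment.
codes = records["rdc_icg0n7o"]
print(codes["responsibility"], codes["emotion"])  # none indifference
```

If the model emits malformed JSON (e.g. a stray closing parenthesis instead of `]`), `json.loads` raises `json.JSONDecodeError`, which is one reason to inspect the exact raw output for each coded comment.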