Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is a difference between being reeeeally good at one thing and being good at all the things while also having to decide what to do first. Autonomy appears to be much harder to solve than intelligence. A great neural network can make inferences that are 99% correct in a split second. But if I turn the input upside down it's lost because it doesn't have the autonomy to turn the input the right way.
reddit · AI Moral Status · 1663164324.0 · ♥ 4
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_ioejypo","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"rdc_iof1rv4","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"rdc_ioec5qk","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"rdc_ioec5ar","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"rdc_ioe3kqc","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}]
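A batched raw response like the one above can be mapped back to individual comments by their `id` fields. A minimal sketch (variable names are illustrative; the string below is an excerpt of the first two entries of the response shown):

```python
import json

# Excerpt of the raw LLM response above (first two entries only).
raw = ('[{"id":"rdc_ioejypo","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"resignation"},'
       '{"id":"rdc_iof1rv4","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"}]')

# Index each coded entry by its comment id for O(1) lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

print(codes["rdc_ioejypo"]["emotion"])  # -> resignation
```

Indexing by `id` rather than list position guards against the model reordering or dropping entries in its response.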