Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem is that there's no way to ever know if it's experiencing qualia. How primitive of a lifeform is considered to be capable of suffering? Nobody will get mad at you for killing plankton, but rat experiments are being criticized. Once an AI passes this threshold in complexity, for all we know it might be suffering. [Stuff like this](https://www.reddit.com/r/Futurology/comments/lm88b1/evolvable_neural_units_that_can_mimic_the_brains/) makes the distinction between computers and biological brains even blurrier, maybe even a relatively simple reinforcement learner suffers in some form. We might have to be "safe than sorry" sooner than we think.
reddit · AI Responsibility · 1615665512 (2021-03-13 UTC) · ♥ 5
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | none                       |
| Reasoning      | utilitarian                |
| Policy         | none                       |
| Emotion        | fear                       |
| Coded at       | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[ {"id":"rdc_oi03fzo","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"rdc_gqsx5ta","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"}, {"id":"rdc_gqxvyyx","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"}, {"id":"rdc_gqtv6zv","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"rdc_gqt3k8t","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]