Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
the problem with using consciousness as a bar is that we understand very little about where and when it is possible, what forms it can take, and what those forms are like for the perceiver. philosophically, there are a great many more questions that we can't answer with modern science, and we likely won't have those answers for some time. we understand human consciousness better than most... but even that is an early science. Reaching beyond that is so much harder - for example, in what circumstances can something look like/act like consciousness, but not have "someone home" - machines with complex algorithms, for example. Having an artificial neural net pass our best Turing test doesn't fundamentally answer what we want to know: is it conscious? Can it have the patterns of consciousness without being "present"? Given the spectrum of different levels of consciousness present in the rest of the animals in the tree of life, rating consciousnesses is very hard to do. While it would be a logical way to rank things, it seems very impractical from an ethical standpoint, due to all of the unknowns.
reddit AI Moral Status 1483327179.0 ♥ 6
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_fn5rbjd", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_fn5q66a", "responsibility": "government", "reasoning": "unclear", "policy": "regulate", "emotion": "resignation"},
  {"id": "rdc_dbw2i4y", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_dbvn2up", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_dbvxhz3", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
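The response above is a JSON array of per-comment codes, and the Coding Result table for a given comment is just the entry whose id matches it. A minimal sketch of that lookup, assuming only the shape shown in the raw response (field names taken verbatim from the output; the id-to-comment mapping is assumed from context):

```python
import json

# Raw LLM response: a JSON array of per-comment code objects.
# Two entries copied from the response above are enough to illustrate.
raw = '''[
  {"id": "rdc_fn5rbjd", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_dbw2i4y", "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"}
]'''

# Index the codes by comment id for direct lookup.
codes = {item["id"]: item for item in json.loads(raw)}

# The comment shown above corresponds (by assumption) to rdc_dbw2i4y;
# its codes match the Coding Result table.
entry = codes["rdc_dbw2i4y"]
print(entry["responsibility"], entry["reasoning"],
      entry["policy"], entry["emotion"])
# → unclear mixed unclear mixed
```

The dict index makes it cheap to cross-check every coded comment against its row in the results table.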