Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's not that nobody *knows* what they are, it's that nobody *agrees* what they are. Language works because words have agreed-upon meanings. When there isn't an agreed-upon meaning language fails in its role to communicate ideas between people. Person 1 can say "Chat GPT isn't even AI," Person 2 can say "ChatGPT is obviously AI, but it obviously isn't AGI," Person 3 can say "ChatGPT is AGI, but it isn't ASI," and Person 4 can say "ChatGPT is ASI." Hell, I've seen all four of those takes written in this sub. People are saying that because they mean different things by those terms. What meets the definition of AGI for Person 2 doesn't meet the definition of Person 3. So is it AGI? Depends who you ask and what they mean by AGI.
Source: reddit · Topic: AI Moral Status · Posted: 1752778911.0 (Unix time) · ♥ 25
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n3ov51u", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n3r8m4z", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n3ol68p", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n3ow8xk", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n3q6b3q", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
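The raw response above is a JSON array with one object per coded comment. A minimal Python sketch of how such a batch response could be indexed by comment id for lookup (the helper name `parse_batch` is hypothetical; the field names match the JSON shown above):

```python
import json

# Raw batch response, copied from the model output above.
RAW = """[
  {"id": "rdc_n3ov51u", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n3r8m4z", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n3ol68p", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n3ow8xk", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n3q6b3q", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]"""

def parse_batch(raw: str) -> dict:
    """Index a batch coding response by comment id.

    Hypothetical helper: parses the JSON array and returns a mapping
    from each object's "id" to the full coding record.
    """
    return {row["id"]: row for row in json.loads(raw)}

codings = parse_batch(RAW)
print(codings["rdc_n3ol68p"]["reasoning"])  # unclear
```

This per-id lookup is how the Coding Result table above can be recovered from the batch: the displayed values (reasoning "unclear", emotion "indifference") correspond to one object in the array.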