Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As it looks currently, LLMs try to simulate human behaviour, which means that, as they get better at it, they would probably also simulate finding the input disturbing and caring about it. The real question is at what point that simulation is good enough to actually be considered a reality. In other words, ChatGPT itself might not care, but the personalities that it simulates would definitely care. The only thing ChatGPT itself "cares about" is getting thumbs up and avoiding thumbs down (and, if they exist, other sources of reward/punishment).
reddit · AI Moral Status · 1676634493.0 · ♥ 16
Coding Result
Dimension        Value
---------        -----
Responsibility   none
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_j8w52lh", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_j8vnn6l", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "fear"},
  {"id": "rdc_j8vti2w", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8w7ryk", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_j8w8p0g", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
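A minimal sketch of how a raw response like the one above could be parsed and sanity-checked before use. The allowed value sets below are inferred only from the labels visible on this page, not from the full codebook, and `parse_batch` is a hypothetical helper, not part of the actual coding pipeline.

```python
import json

# Allowed values per dimension, inferred from labels seen on this page.
# The real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "unclear"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed"},
    "policy": {"unclear", "none", "industry_self"},
    "emotion": {"indifference", "fear", "mixed", "resignation"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded records),
    rejecting any value outside the sets listed above."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# The record matching this comment's id (rdc_j8w7ryk):
raw = '[{"id":"rdc_j8w7ryk","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]'
batch = parse_batch(raw)
```

A check like this catches the common failure mode where the model invents a label outside the coding scheme, so the error surfaces at ingest rather than during analysis.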