Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Wait what? I didn't think GPT could do stuff like that? It doesn't actually have anything in mind as it cannot think, right? And does that mean that GPT's answers are random up until the correct guess? Edit: I played around a bit. I think I was mostly right, but it seems like it likes to wait till the end with the yes. https://chat.openai.com/share/2c51c099-e0a3-4381-87fb-cc6b120ceac2
Source: reddit · Topic: AI Moral Status · Timestamp: 1705891040.0 · ♥ 37
Coding Result
Dimension      | Value
Responsibility | ai_itself
Reasoning      | unclear
Policy         | none
Emotion        | mixed
Coded at       | 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_kj398rd", "responsibility": "none",      "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_kj1mk55", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_kj0xqci", "responsibility": "none",      "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_kj1tynv", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_kizhb71", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
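A minimal sketch of how a batch response like the one above could be parsed back into per-comment codes. The field names and ids are taken directly from the raw response; only the two-entry excerpt and the dictionary-lookup pattern are illustrative assumptions, not the tool's actual implementation.

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array with one
# object per coded comment, identified by the comment's "id".
raw = '''[
  {"id":"rdc_kj398rd","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_kj1mk55","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]'''

# Index the codes by comment id so a single comment's coding
# can be looked up directly.
codes = {entry["id"]: entry for entry in json.loads(raw)}

print(codes["rdc_kj1mk55"]["emotion"])         # mixed
print(codes["rdc_kj398rd"]["responsibility"])  # none
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is the natural place to reject or retry a coding batch.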