Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
>I bet that's going to become a fairly common delusion because understanding LLM's is not intuitive without study. Considering that OpenAI's former chief scientist Ilya Sutskever once [said](https://x.com/ilyasut/status/1491554478243258368) "it may be that today's large neural networks are slightly conscious" it's not clear that it's just an issue of understanding and study - it also has to do with philosophy of consciousness, a lot of which is debated even among experts today. [Justaism](https://www.youtube.com/watch?v=LGXdt2-ShDQ) doesn't help, either. (This is not to say that OPs girlfriend shouldn't get help, that's a different discussion.)
reddit · AI Moral Status · 1734389945.0 · ♥ 10
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_m2djql6","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_m2dt5ns","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_m2byrtc","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_m2emp3d","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_m2onnoy","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
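
To inspect the coding assigned to a specific comment, the raw response can be parsed and indexed by record id. The helper below is a minimal sketch, not part of the original pipeline; `parse_codings` is a hypothetical name, and the sample payload is truncated to one record for brevity.

```python
import json

# Hypothetical helper: index the raw LLM response (a JSON array of
# coding records) by comment id for quick lookup.
def parse_codings(raw_response: str) -> dict:
    """Map each record's "id" to its full coding record."""
    return {rec["id"]: rec for rec in json.loads(raw_response)}

# Truncated sample of the raw response shown above (one record only).
raw = '[{"id":"rdc_m2djql6","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]'

codings = parse_codings(raw)
print(codings["rdc_m2djql6"]["emotion"])  # → indifference
```

A lookup like this makes it easy to cross-check the coded table against the exact model output for any given comment id.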