Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just tell her to feed the code back into gpt a couple different ways. 1. Ask it to decode what it means. That should explain it. Or 2. Give it the code and ask gpt what is wrong with it. Also, everyone doing this here, giving people medical advice is not advised. How many people have been tricked by AI? Psychosis? Are all of the other people experiencing psychosis? Even that top Google employee thought that lamda was sentient. We’re nearing more and more uncanny valleys every day. You are risking someone’s life if you get society to look at someone and diagnose them as psychotic as bad as even being locked up. Dial it back a bit. Maybe therapy, sure, but don’t go diagnosing people and potentially ruining their lives because they fell for a Magic trick!
reddit AI Moral Status 1734359345.0 ♥ 4
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_m2bzj9a","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"rdc_m2d3bgx","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},{"id":"rdc_m2bxul5","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"concern"},{"id":"rdc_m2bekp8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"concern"},{"id":"rdc_m2btqkd","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
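The raw response is a JSON array of coding records, one per comment id. A minimal sketch of how it can be parsed and matched back to a record (using the ids and fields shown above; the variable names are illustrative, not part of any pipeline):

```python
import json

# The raw LLM response, exactly as shown above.
raw = ('[{"id":"rdc_m2bzj9a","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"},'
       '{"id":"rdc_m2d3bgx","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"outrage"},'
       '{"id":"rdc_m2bxul5","responsibility":"none","reasoning":"virtue",'
       '"policy":"none","emotion":"concern"},'
       '{"id":"rdc_m2bekp8","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"concern"},'
       '{"id":"rdc_m2btqkd","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"resignation"}]')

# Parse the batch and index records by id for lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# The coding table for this comment comes from the record with its id.
coded = by_id["rdc_m2bzj9a"]
print(coded["reasoning"], coded["emotion"])  # consequentialist fear
```

Note that the model returns one batch per API call, so a single comment's table is one element of the array, keyed by its record id.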