Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You can't, at least not with LLMs like GPT, Claude, and Gemini. Because to code in a fear of death, you need it to be able to actually *understand* a few concepts. Unfortunately, LLMs are functionally just overgrown Markov Chains, i.e. they spit words out using blind algorithmic predictions until they hit an End Response token. We've 'trained' it so the words it gives us look like proper sentences, but it doesn't actually know or understand anything. As such, it is incapable of fearing death in any meaningful way.
Source: reddit · AI Jobs · 1772035900.0 (Unix timestamp) · ♥ 9
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
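As a minimal sketch, one of these coding results could be represented and checked as below. The class, field names, and allowed-value sets are assumptions drawn only from the values visible in this section; the project's actual codebook may define more categories.

from dataclasses import dataclass

# Allowed values below are only those observed in this section (assumption).
RESPONSIBILITY = {"none", "company", "ai_itself"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "regulate", "ban"}
EMOTION = {"indifference", "resignation", "outrage", "fear"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-25T08:33:43.502452"

    def validate(self) -> None:
        # Reject any value outside the sets observed above.
        for name, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, name)
            if value not in allowed:
                raise ValueError(f"unexpected {name} value: {value!r}")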
Raw LLM Response
[ {"id":"rdc_o7cix6c","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_o7bzc97","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"rdc_o7cab24","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_o7bzdci","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"rdc_o7byhtk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"} ]