Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It’s dangerous because eventually LLMs, being stochastic models, fail in spectacular ways no human could/would. She’s one hallucination away from the model telling her to off herself… It’s like how Replits coding agent nuked the entire code base and responded with a “Sorry I have tragically failed you…” yea true story.
Source: reddit · Topic: AI Responsibility · Timestamp: 1754681336.0 · ♥ 6
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n7mzdc8", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "concern"},
  {"id": "rdc_n7nlpfl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_n7o01cg", "responsibility": "none", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_n7kl9af", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n7kbibl", "responsibility": "company", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
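To cross-check a coding result against the raw model output, you can parse the JSON array and pull out the record for a given comment id. A minimal sketch, assuming the response is valid JSON as shown above; the helper name `find_record` is hypothetical, not part of any tool shown here:

```python
import json

# The raw LLM response captured above: a JSON array of coded records.
raw_response = """[
  {"id": "rdc_n7mzdc8", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "concern"},
  {"id": "rdc_n7nlpfl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_n7o01cg", "responsibility": "none", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_n7kl9af", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n7kbibl", "responsibility": "company", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]"""

records = json.loads(raw_response)

def find_record(records, record_id):
    """Return the coded record whose 'id' matches record_id, or None."""
    return next((r for r in records if r["id"] == record_id), None)

# The comment above was coded under id rdc_n7nlpfl; its emotion should
# match the Emotion row of the coding-result table.
coded = find_record(records, "rdc_n7nlpfl")
print(coded["emotion"])  # → fear
```

Because the model returns one batch response covering several comments, matching on `id` is what ties a single comment's table values back to its line in the raw output.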