Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hot take: I think the biggest near-term risk from AI isn't from the AI itself, but from humans who *vastly* overestimate its capabilities and assign it tasks that it's not designed to do or capable of doing. Like when Tesla stupidly named their driver assist feature "Autopilot" and people thought it gave them permission to sleep at the wheel. As of right now, [**AI is not actually "intelligent."**](https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html?utm_source=pocket-newtab) It's good at recognizing patterns and generating believable text, but that's all it was designed to do. It doesn't understand the *meaning* of what it generates. GPT is definitely a useful time-saving tool for creating text and computer code, but I wouldn't trust it to diagnose diseases, provide factual information, or make life-or-death decisions on its own. Generating fictional plots is a good use case for it, but even then it would probably need human writers to polish its output.
reddit AI Jobs 1683127439.0 ♥ 143
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_jipbcgm", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jip9oom", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_jiph7kd", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jipwatc", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_jipplij", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
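Because the model returns one JSON array covering several comments, the codes for a single comment have to be looked up by `id`. A minimal sketch of that lookup, assuming the raw response is valid JSON as shown above (only two of the five records are reproduced here for brevity):

```python
import json

# Raw model response, truncated to two records for this example.
raw = '''[
  {"id":"rdc_jipbcgm","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_jiph7kd","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

# Index the coded records by comment id so each comment's codes
# can be retrieved directly.
codes = {rec["id"]: rec for rec in json.loads(raw)}

record = codes["rdc_jiph7kd"]
print(record["responsibility"], record["emotion"])  # user fear
```

In a real pipeline the raw string would come from the model API, and a malformed response would make `json.loads` raise `json.JSONDecodeError`, so wrapping the parse in a try/except is advisable.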