Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This error happens when people buy into the overhyped aspects of current-generation LLM AI. At present, publicly available AI is akin to a highly complex, statistical, probabilistic algorithm that produces generative text. This is why it can hallucinate, and why it improves its accuracy and reliability through Reinforcement Learning from Human Feedback. It doesn't know when it is wrong because it is merely transforming and generating text it has been trained on. Novel, never-before-connected concepts are challenging for AI because it relies on datasets to be trained on -- at least for LLM types. If something has never been published and was not available for training before the cutoff date that locks the training dataset, the LLM will not have that information. AI does well in fixed, large but finite, calculable outcome spaces such as chess: it can rate the strength of a chess move and predict percentage outcomes and advantages based on the pieces on the board. AI is also valuable in weaponized forms, from software to aviation to missile and other combat systems, based on its high-quality pattern recognition and its ability to react much faster than humans.
reddit AI Responsibility 1734413480.0 ♥ 3
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       consequentialist
Policy          unclear
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_m2g89dw", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_m2esap8", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_m2dmurk", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_m2fa1ui", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_m2gbqf5", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"}
]
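A batch response in this shape can be turned into a per-comment lookup with a short sketch. This is a minimal illustration, assuming the JSON array format shown above; the `index_codes` helper name is hypothetical, not part of the coding pipeline.

```python
import json

# Raw batch response as returned by the model (abbreviated to two records).
raw = '''[
  {"id": "rdc_m2g89dw", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_m2gbqf5", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"}
]'''

def index_codes(raw_json: str) -> dict:
    """Parse the model's batch output and index each coding record by comment id."""
    return {record["id"]: record for record in json.loads(raw_json)}

codes = index_codes(raw)
# The record for rdc_m2gbqf5 matches the Coding Result table above.
print(codes["rdc_m2gbqf5"]["emotion"])  # resignation
```

Indexing by `id` makes it straightforward to join each coding record back to its source comment when inspecting a disagreement.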