Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Humans have *wants and needs* that drive behavior. The machine learning that's been rebranded as "AI" doesn't experience, think, or feel anything. Self-awareness is meaningless here, the point is that these AIs are just complex search engines with some add-ons, they're not driving themselves. If some evil billionaire wanted to design one with the explicit goal of seizing power or whatever, there's still a person behind it, it's just a weapon at that point. When AI writes a sentence, it's not translating a thought into words, it's making a statistical guess about what word should come next based on all the other times it's seen a sequence of words. You can show a human a novel object for a few seconds and then remove it from view, and they could draw it flipped over or rotated because they built a mental model of it immediately. You'd have to show an AI tens of thousands of images of that object in different lighting, different environments, occluded, etc for it to be able to do anything remotely close. LLMs are not particularly intelligent, they just follow a ton of human-written rules to link information in a massive database. We may have GAI someday, but it certainly won't be built on LLMs. Anyone telling you otherwise is trying to sell you something.
reddit AI Moral Status 1746999848.0 ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_mrtdo2e","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_mrrnvgt","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_mrrl4kl","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_mrroass","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_mrrn8h6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
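The raw response is a JSON array with one record per coded comment, each carrying an `id` plus the four coding dimensions. A minimal sketch of turning such a response into the per-dimension table shown above (the field names are taken from the response; the helper name and the lookup approach are illustrative, not necessarily how the tool itself works):

```python
import json

# Example raw coder output: a JSON array of coding records
# (field names match the raw response above).
raw = (
    '[{"id":"rdc_mrtdo2e","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"none",'
    '"emotion":"indifference"}]'
)

def parse_codings(raw_response: str) -> dict:
    """Index coded records by comment id for per-comment lookup."""
    return {rec["id"]: rec for rec in json.loads(raw_response)}

codings = parse_codings(raw)
record = codings["rdc_mrtdo2e"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {record[dimension]}")
```

Indexing by `id` makes it easy to pull the record that matches the comment currently displayed, which is how the dimension/value table for this comment could be assembled.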