Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Asking what the difference is between an LLM and the way humans learn is a great question! With learning algorithms growing in popularity, scientists and philosophers alike are asking the same questions you are, and it's an interesting field of theorizing that will hopefully help us understand the inner workings of our own brain, which we still don't fully comprehend ourselves.

The important thing to remember is that we, as humans, have an extensively documented history of anthropomorphizing things in our surroundings. We often attribute human emotions to inanimate objects, and I think the way people perceive ChatGPT is no different. It gives the illusion of emotion and understanding, but it is simply imitating those concepts from the texts it was trained on. It's easy to claim that this is no different from the human brain, but that is a claim only someone fully versed in both the inner workings of the LLM AND the human brain could make, and to my knowledge no one has yet uncovered all the inner workings of our little thinkbox. So, while it's tempting to claim we've created a clone of human consciousness, it's much safer to assume we haven't, and have simply created a very convincing facsimile.

It has no abstract imagination the way humans do. It doesn't come up with its own solutions to problems, at least not the way a child would when faced with the trolley problem. ChatGPT is entirely reliant on its training library, and its answers are inextricably linked to that database. If you want to see the limitations, you need only ask it questions that break the algorithm's systems, like "what is the longest 5 letter word?" The answer will make you quickly realize that ChatGPT is not (yet!) capable of conscious thought; it is purely relying on the laws of probability to give you the answer most likely to be the one you seek, even if that answer makes no logical sense.
reddit AI Jobs 1685873682.0 ♥ 5
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_jmtpc3l","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_jmtr4rn","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_jmvhp54","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_jmuhvc3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"rdc_jmuyrpq","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"})
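A likely reason every dimension in the coding result above reads "unclear" is that the raw response is not valid JSON: the array opens with `[` but closes with `)`, so a strict parser rejects the whole batch. Below is a minimal sketch of a tolerant parse step, assuming the coder's pipeline receives the response as a plain string; the repair heuristic and the function name `parse_llm_array` are illustrative, not part of the actual tool.

```python
import json

def parse_llm_array(text: str) -> list:
    """Parse a JSON array from an LLM response, repairing a common defect:
    an array opened with '[' but mistakenly closed with ')'."""
    text = text.strip()
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        if text.startswith("[") and text.endswith(")"):
            # Swap the stray ')' for the ']' the array needs, then retry.
            return json.loads(text[:-1] + "]")
        raise  # anything else is a genuine parse failure

# Truncated example in the same shape as the raw response above
raw = ('[{"id":"rdc_jmtpc3l","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"}, '
       '{"id":"rdc_jmtr4rn","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"})')

records = parse_llm_array(raw)
print(len(records), records[0]["emotion"])
```

With a repair step like this, the per-dimension values from the raw response could be surfaced in the coding table instead of falling back to "unclear".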