Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
A day will come when resources cannot meet everyone’s use. I wonder if AI handic…
ytc_UgxwS4g84…
if you forge money you're arrested and thrown in prison as a criminal , but if …
ytc_Ugxg0UOP-…
Given the certainty that there will be another Carrington level event in our fut…
ytc_UgwP7IDyX…
Hi from the UK; back in the 70s, we were told that with the introduction of the …
ytc_UgzH2YyPl…
You’ve figured out the key lesson in an MBA degree. Buzzwords and hype. It’s all…
rdc_o8c99fs
@オススメV2 I can ask OpenAI a million things, it doesn't mean that it carries any r…
ytr_Ugxb-eYDI…
Ai needs to be trained on existing data.
You cannot create original ai artwork …
ytc_UgxJ6MKQJ…
Oh, get over yourselves. I watched a video of Seth Rogen on the Graham Norton sh…
ytc_UgyE9cRNt…
Comment
Asking what the difference is between an LLM and the way humans learn things is a great question! More and more, with learning algorithms growing in popularity, scientists and philosophers alike are asking themselves the same questions you are, and it's an interesting field to theorize on that will hopefully help us understand the inner workings of our own brain, which we still don't fully comprehend ourselves.
The important thing to remember is that we, as humans, have an extensively documented history of anthropomorphizing things in our surroundings. We often give human emotions to inanimate objects, and I think the way people perceive ChatGPT is no different. It gives the illusion of emotion and understanding, but it is simply imitating such concepts from texts it has been trained on.
It's easy to claim that this is no different from the human brain, but this is a claim I think only someone fully versed in both the inner workings of the LLM AND the human brain can make, and to my knowledge no one has yet uncovered all the inner workings of our little thinkbox. So, while it's tempting to claim we've created a clone of human consciousness, it's much safer to assume we haven't, and have simply created a very convincing facsimile. It has no abstract imagination like humans do. It doesn't come up with its own solutions to problems, at least not in the way that a child would when faced with the trolley problem. ChatGPT is entirely reliant on its training library, and its answers are inextricably linked to that database.
If you want to see the limitations, you need only ask it questions that break the algorithm's systems, like "what is the longest 5 letter word?" The answer will make you quickly realize that ChatGPT is not (yet!) capable of conscious thought, and that it is purely relying on the laws of probability to give you the answer most likely to be the one you seek, even if that answer makes no logical sense.
reddit
AI Jobs
2023-06-04 (Unix timestamp 1685873682)
♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_jmtpc3l","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_jmtr4rn","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_jmvhp54","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_jmuhvc3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_jmuyrpq","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}]
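The raw response above is a JSON array mapping comment IDs to coded dimensions. A minimal sketch of how such a response could be parsed and looked up by comment ID follows; the JSON is embedded verbatim (with the trailing parenthesis read as a closing bracket so it parses), and the `dimensions_for` helper name and the "unclear" fallback for IDs absent from the response are assumptions for illustration, not the tool's actual implementation.

```python
import json

# Raw LLM response as captured above, with the array properly closed by "]".
raw = """[
{"id":"rdc_jmtpc3l","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_jmtr4rn","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_jmvhp54","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_jmuhvc3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_jmuyrpq","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]"""

# Index the coded rows by comment ID for lookup.
codes = {row["id"]: row for row in json.loads(raw)}

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def dimensions_for(comment_id):
    """Return the coded dimensions for a comment ID (hypothetical helper).

    IDs that do not appear in the LLM response fall back to "unclear",
    which would explain the all-"unclear" table shown above.
    """
    row = codes.get(comment_id)
    if row is None:
        return {dim: "unclear" for dim in DIMENSIONS}
    return {dim: row[dim] for dim in DIMENSIONS}

print(dimensions_for("rdc_jmuhvc3")["emotion"])  # approval
print(dimensions_for("rdc_o8c99fs")["policy"])   # unclear (ID not in response)
```

This keeps the lookup tolerant of missing IDs rather than raising, which matches how the coding-result table degrades to "unclear" instead of failing.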