Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- My ex is an AI researcher and he's currently leading a team to find out "how ai … (ytc_Ugxw4FBoX…)
- Chatgpt also gets destroyed on the topic of evolution, jimbob has an hr long vid… (ytc_UgyZcpeKP…)
- To say art "can't" anything is actually the opposite of what art exists for. Any… (ytc_UgwqHKqg1…)
- AI just make some things easy. But it is not everything for Human Solution. Why … (ytc_UgzYp15_r…)
- I was a translator and editor for 15 years, and slowly but surely I started losi… (ytc_Ugy6zbm_1…)
- Alright let me put this together for you. The ancient gods we know them as watch… (ytc_UgzpZdtnp…)
- "Ah, you know, he's just asking questions. He's just thinking deeply about how i… (ytc_Ugw08mh2o…)
- Yeah bro I've used AI in coding and it's absolutely brain damaged it will take a… (ytc_UgyipsuXX…)
Comment
We developed ML by mimicking what neurons and their connections do. When we virtualized enough neurons and fed an insane amount of data into this net, suddenly these models were able to solve pretty complex problems, find creative solutions and reason about certain topics. This is called emergence; it's what our bodies and brains effectively did as well: from a lot of simple things in a system, for reasons not entirely clear to us, complex behaviours emerge, and the system can do more than its parts can individually.
ML is built by mimicking what we learned from nature. We are actually not entirely sure why it works so well, but it does. I would argue these systems are absolutely heading towards sentience. Recently people have been experimenting with the "agent pattern", where multiple models each get a different "job" for a task and validate each other's work according to their given job. That's not very different from how each part of the brain has a specific purpose in daily life, and together they make you.
I understand, however, why you're hesitant to call this "self-awareness", because it's not doing exactly what living things do. These models don't learn by themselves, or think. Instead, they are a snapshot of intelligence: the moment these models were trained is the moment they were learning and thinking, and we're just talking with the result.
From a business perspective it's not interesting for an LLM to keep learning or to think by itself in the background, because we lose control over the conclusions it may draw, and people with ill intent may teach it the wrong things. It's not impossible, however, and given that, I feel it's at least fair to start calling these models intelligent.
Source: reddit
Topic: AI Governance
Posted: 1708171131.0 (Unix timestamp)
♥ 22
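The posted-at value above (1708171131.0) is a raw Unix timestamp; a minimal sketch of converting it to a readable UTC date:

```python
from datetime import datetime, timezone

# Unix timestamp from the comment metadata above
posted = 1708171131.0

# Interpret the epoch seconds as a timezone-aware UTC datetime
dt = datetime.fromtimestamp(posted, tz=timezone.utc)
print(dt.isoformat())  # 2024-02-17T11:58:51+00:00
```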
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_kqsv5u7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_kqtc9dz","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_kqtr91j","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_kqvmuk9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_kqvpryv","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
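The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch (assuming the response parses exactly as shown) of indexing the batch by comment ID so a single comment's codes can be looked up:

```python
import json

# Raw LLM response as displayed above: a JSON array of coding objects
raw_response = """[
{"id":"rdc_kqsv5u7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_kqtc9dz","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_kqtr91j","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_kqvmuk9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_kqvpryv","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]"""

# Index the codes by comment ID for direct lookup
codes = {row["id"]: row for row in json.loads(raw_response)}

# The entry matching the Coding Result table above
print(codes["rdc_kqtr91j"]["emotion"])  # approval
```

Note that the emotion retrieved for `rdc_kqtr91j` matches the "approval" value shown in the Coding Result table.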