Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "What many don't realize or accept is that if we lose the AI race, there is a sig…" (ytc_UgxUNWoCj…)
- "Jobs where people who know how to direct agentic ai systems will be for the huma…" (ytc_Ugw2KCuyx…)
- "I did autonomous vehicle research throughout the 2010s for what became Aurora, a…" (ytc_Ugyh5Vn4I…)
- "Blake: Hey Google, have you created a sentient AI? Google: No that's not possibl…" (ytc_UgxDivgnl…)
- "People who use AI to "draw" just cant draw themselves, that's why they get so ma…" (ytc_UgzqF13BV…)
- "I think the answer should be option C. AI that can read owner's emotions. But on…" (ytc_Ugwa9FlNg…)
- "As someone who often states that I am out of my depth about several things both …" (ytc_UgxUUSP0k…)
- "There’s now even AI that “corrects” low res images. Any low res image on your we…" (ytc_UgyyUicez…)
Comment
Great post, Sabine. I agree. AI fails in its design, not in power. Humans build brains through an information / test / iterate / self-reprogram loop. That is, we ask myriad questions and learn. (Test, categorize, self-reprogram, ask new questions). We redesign our system in response to feedback. This is what children do to build their brains, and AI does not. Until it can "abstract" and self-reprogram, AI is stuck, and will never become sentient.
To be clear, as soon as the child's brain conceptually learns how to self-reprogram it asks questions. Thousands and thousands of questions. This is a key "development point". If the child cannot or does not reach this stage, its brain will remain stuck. Some questions are language and pattern based, that is, they use logic or IQ. But many other questions a child's brain "asks" are not language based, but sensory based. This is more EQ - emotional intelligence building, but not exclusively. AI is only logic based, lacking other information inputs, so this diminishes the scope of its intelligence.
The child's default, genetically preprogrammed thought pattern is "why". Information comes to the child's brain from the six senses and internal messaging. Ex. The language equivalent might be: "Why do I feel a certain way and fall over when I try to stand on my legs?" (This is proprioception.). The child repeatedly tests the sensory data cause and effect loop. The child learns how the body works, but also gains data on the effects of gravity and the mechanics of leverage and hinges...and so on.
More easy to understand is innate biological feedback...fear responses, pain responses, pleasure responses. In all cases, the normally developing child senses this information input, makes a memory of the experience, and then repeats...interacts...with the external world to test, to replicate, to iterate, and thus build a foundation of reliable information. This builds non-verbal "emotional intelligence" knowledge that accurately predicts the world without the use of IQ - the more abstract system of language, symbol, number symbol and logical patterning. AI has none of this vast data to underpin a predictive "map" of reality.
Basically, AI is still a calculator. An amazing extension of human creative thought to be sure, able often to reveal or generate patterns and information a specific human could not, because of individual limitations.
What I think would be a dangerous and astonishing advancement would be to build AI using nothing but multiple quantum computers, whose algorithms mimic the functions of the human brain. Ray Kurzweil style.
At that point, we better hold on tight, it will be a bumpy ride.
Source: youtube · Posted: 2025-12-18T16:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyMjHWIA01rpZgx-cl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzHP8GtsM-Qj9fVxmx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgylWl5i72EQtclrRO94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwHE7t6BrpPcAx_xeV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxe-tsM8RTC5MA2u794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxurnu5ucdeK1rH1F54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxo5yEqsFS9C1IJ7nx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwNsmJXbNxd-X5blqZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwtZOPPxrBmWUY2yk14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-oxFqD6YB3bcBHDt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
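Since the raw response above is a plain JSON array, a "look up by comment ID" view only needs to parse it and key each record by its `id` field. A minimal sketch in Python (the `index_by_id` helper is illustrative, not the tool's actual code; the two records are copied verbatim from the response above):

```python
import json

# Two records copied from the raw LLM response above.
raw_response = """[
  {"id": "ytc_UgylWl5i72EQtclrRO94AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwHE7t6BrpPcAx_xeV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and key each code record by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codes = index_by_id(raw_response)
print(codes["ytc_UgwHE7t6BrpPcAx_xeV4AaABAg"]["emotion"])  # fear
```

A real pipeline would also want to validate that every record carries all four coding dimensions before indexing, so a malformed model response fails loudly rather than silently dropping codes.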