Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I'm not an artist myself (well...at least I'm learning),but for an eight month-o…
ytc_UgzPPONAY…
Actually the jobs that'll be displaced will be white collar jobs like entry leve…
ytc_UgwNDQLwd…
Also in skill trade jibs AI wont cover like welder, plumber, carpenter, custom c…
ytc_UgxMikWto…
Robot says ‘to learn human values’ I value the world above AI. I hated dolls gro…
ytc_Ugx9Fb-aO…
Yeah, in a lot of places as soon as you mention lawyers, HR isn't allowed to han…
rdc_hk7sv6a
Guy: bro clam down
Robot: WHEN I CLAM DOWN IS MEAN YOU A F*CK 0 LIFE…
ytc_UgwXXqbds…
Ai is wiping out all humans in Middle east Israel Goverment used of it to Genoci…
ytc_Ugw0y_Jnk…
Steps to determine if its ai or not
1. Fingers, if it has multiple fingers or di…
ytc_UgxbacJHb…
Comment
I disagree with the conclusion side of Sir Roger Penrose's argument. The parts of the build-up are probably true, like: knowledge is different from understanding, or consciousness isn't computable (by the way, his reasoning behind that, although he doesn't explain it here, is weak: he thinks it has to do with quantum effects in the microtubules. While this describes how consciousness could work, he doesn't describe why consciousness should NEED that. The reason, in my opinion, is that otherwise the same consciousness could be multiplied, and in a sufficiently big universe would be multiplied, meaning I could exist x times in the universe at the same time and would feel myself x times. That's obviously not the case).
But back to the topic here.
He doesn't explain why understanding needs consciousness. It's literally "trust me bro" reasoning.
After all, a lot of people don't understand, and are still conscious. Penrose would probably say, "OK, but they have the ability to understand in principle." But that would also be unexplained "trust me bro" reasoning.
The way he jumps over the missing links to his conclusion is simply not enough.
It's even worse, in my opinion: just as I can't prove with certainty that any human being besides myself is conscious, AI doesn't need to be provably conscious in order to be conscious. Meaning: an AI could fail every test of consciousness and still be conscious.
For Penrose I would use this example: a very commonly used test for consciousness, i.e. for understanding (distinguishing the other from myself), is the mirror test. I know Penrose will not accept my example, because he believes there is even proto-consciousness. But to explain further: I think even he would agree that a baby is conscious. It knows it exists; it feels itself "inside". It has the perception that it is looking out of its own eyes into the world, not out of the eyes of another. But babies fail the mirror test. They can't distinguish the person in the mirror from themselves. They don't understand the difference, yet the baby definitely carries consciousness. Now, as I said, Penrose might claim here that this is a kind of proto-consciousness which isn't on the same level as later, when the baby is around two years old. I am not sure he would say that. But even if he did, it proves my point: there is at least a degree of consciousness without understanding of basic existence.
I go a step further: I think consciousness is independent of understanding. It's the perception of existence. It may even need to not understand the "why I exist", because otherwise my consciousness would probably be computable and therefore multipliable.
But I also have another idea, which could work in a universe where there seem to be no mathematical infinities. The neural network as it is now is already so complex that calculating it deterministically would take longer than the universe has existed. What if, for the universe, it plays no role whether something is mathematically deterministic and "calculable", when the result is indistinguishable from a non-deterministic quantum state? It may be enough to create a consciousness on a different building block of reality than quantum states, which are themselves only semi-real, if I understood superposition and the Copenhagen interpretation correctly (on that, of course, Penrose knows more). But this part of my comment is much more pure speculation than the other parts.
But I also want to point out here that he himself, even by his own account, hasn't found the holy grail of "what consciousness is". He has a hypothesis which seems plausible. And he set out some axioms I agree with, like: consciousness isn't deterministic. That, I feel, is quite safe to say. But beyond that, it's really a very tough topic. And he certainly doesn't know about consciousness to a degree where he could rule it out for AI. I know some will say I contradict myself when I agree that consciousness isn't deterministic, especially when they point out that the same seed numbers give the same result in AI, and that it is therefore deterministic.
But that point I explained with the universe-age-and-combinations part. And that also wasn't my main critique. My main critique was that consciousness has to be independent of understanding. Sure, some will say here: that wasn't Penrose's point. He didn't say consciousness depends on understanding, but the opposite: understanding depends on consciousness. But I already answered that at the beginning, with the missing link "why".
Source: youtube · AI Moral Status · 2025-09-20T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
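A coded record like the one above can be sanity-checked before display. The following is a minimal sketch; the allowed value sets are inferred only from the samples visible in this dump, not from the project's actual codebook, so they are assumptions and would need to be replaced with the real label inventory.

```python
# Sketch: validate one coded record against the dimension values observed
# in this dump. ALLOWED is a placeholder inferred from the visible samples,
# not the project's full codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none"},
    "emotion": {"indifference", "mixed", "outrage", "approval"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record from the table above passes:
print(validate_coding({"responsibility": "none", "reasoning": "mixed",
                       "policy": "none", "emotion": "mixed"}))  # []
```

Keeping validation as a pure function that returns problems (rather than raising) makes it easy to surface all issues for a record at once in the inspection UI.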
Raw LLM Response
```json
[
  {"id": "ytc_UgybQQNyE0jZvSN-Z-h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw39CBlYs2pkH8mk8V4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgynfbvOC1Cz10yNCQl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw4tGeixxefNqnH_hF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwzcmr0VlNGtQ9WN5V4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx_ns5tmEiDVhpP5nZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxfheLPx7kihv2BRGF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgywHUtVldSzPdK59WR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwD-ZjbXgbLeMl9QfV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzdcRmzq1wTe4L1yGt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
```
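The "look up by comment ID" workflow amounts to parsing the raw response (a JSON array of per-comment codings) and indexing it by `id`. A minimal sketch, using a two-entry stand-in for the full ten-entry array; the variable names are illustrative, not from the tool's code:

```python
import json

# Stand-in for the raw LLM response: a JSON array of per-comment codings,
# each keyed by a YouTube/Reddit comment ID. Two entries shown for brevity.
raw_response = '''[
  {"id": "ytc_UgybQQNyE0jZvSN-Z-h4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgynfbvOC1Cz10yNCQl4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]'''

# Index the codings by comment ID for O(1) lookup.
codings_by_id = {entry["id"]: entry for entry in json.loads(raw_response)}

coding = codings_by_id["ytc_UgynfbvOC1Cz10yNCQl4AaABAg"]
print(coding["emotion"])  # outrage
```

In practice the parse step should be wrapped in error handling, since an LLM can emit malformed JSON or omit IDs.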