Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "If AI is scoring productivity from posture, it’s basically grading optics. Makes…" (ytc_Ugx8oUfn0…)
- "Watch I Robot only a few of them were armed yet they still almost took over the …" (ytc_UgzRdds8g…)
- "Left AI. A strong guess. I had to really look and zoom but the icons are a bit o…" (rdc_oi10mc7)
- "We are definitely programming them. Training a neural network can be opaque whic…" (ytr_UgxwvVkz3…)
- "AI is just cope; they're sending the jobs overseas and giving senior employees …" (ytc_UgwS4m0ES…)
- "First why is Volvo confused by large animals running into the roads, they have d…" (ytc_UgwV3JHhw…)
- "Not all schooling is equal. This would be like saying Harvard is equal to some s…" (ytr_UgzzKj37C…)
- "NFTbros not a specific group. They're just a sub-set of the wider grifter popul…" (ytr_UgwNwRk_Z…)
Comment
Penrose is certainly a brilliant intellect, but Turing already had this figured out. Even more poignant, I must say, is that the question of whether or not computers are conscious doesn't have the importance that most here give it. The main thing to understand is that the behaviour of AI will have the ability to become indistinguishable from that of conscious beings. You'll never "know" if it's conscious. It will just be a matter of belief, like qualia. People will go in circles arguing about it, but there will never be definitive "proof" one way or another. All we will have on the matter is assertions like the one Penrose is making here. Don't mistake it for proof.
Source: youtube · AI Moral Status · 2025-05-22T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxaJbaZFJ-Iom2CUM14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzRFIJ_4T8Jj4898KJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyYAltCmknEOXgbt094AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy7sebhEuUKswxmmN14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyXWkJ5pGcH9yudETd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzaQz4MtajLZYLgCA94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgypnKTKdwQvdGhVmhp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwg2V4zU0tbP73O5u14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyPbzA2__hQz0tv2vB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyNg6SzsinKGY7EgsB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}
]
```
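The lookup-by-comment-ID workflow above can be sketched in a few lines: parse the raw LLM response (a JSON array of coded rows with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields, as shown) and index it by comment ID. This is a minimal illustrative sketch, not the tool's actual implementation; the function name and the single-row sample payload are assumptions.

```python
import json

def index_by_comment_id(raw_response: str) -> dict:
    """Hypothetical helper: parse a raw LLM coding response and build a
    comment-ID -> coded-row lookup table. Field names match the raw
    response format shown above."""
    rows = json.loads(raw_response)
    return {row["id"]: row for row in rows}

# One-row sample payload in the same shape as the raw response above.
raw = '''[
  {"id": "ytc_UgyYAltCmknEOXgbt094AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''

codes = index_by_comment_id(raw)
print(codes["ytc_UgyYAltCmknEOXgbt094AaABAg"]["policy"])  # -> ban
```

A table keyed by ID like this is what makes both views above cheap to serve: the exact-lookup view is a single dictionary access, and the random-sample view is a draw from the table's keys.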