Raw LLM Responses
Inspect the exact model output for any coded comment.
You can look a comment up directly by its ID, or pick one of the random samples below to inspect it.
- it's a fucking tool! The amount of "artists" who go out and find random pictures… (ytc_Ugysqagrs…)
- Fun fact there was a group of AI trained to do work with vending machines and af… (ytc_Ugz7Qos74…)
- Holy shi this is the best sentence "If you do not know what you are doing with t… (ytc_UgyoI-AiV…)
- I think they’re rolling it out on videos now, but I’m not sure… I saw some AI-li… (ytc_Ugz313VOo…)
- All it needs is an airsoft turret attachment for the back then it can put that … (ytc_UgzsBLCXM…)
- These kids will still be unable to cope with reality... The issue is school make… (ytc_UgzF5ZD8Z…)
- Now that I think about it, autonomous police robots could REALLY be bad during a… (ytr_Ugx9sYJgB…)
- Donesn't work to be honest. Some times it does but 90% of time it shows AI detec… (ytc_Ugzk8xt7_…)
Comment
Here's what vexes me and I'm sure I misunderstand him. Penrose seems to say that what we call AI can't be conscious because what AI is doing is obviously computable, else it wouldn't work at all; it's running on computers, therefore it must be computable. Somewhere in there, I get the impression that his point that "They (i.e., AI) don't understand what they're doing and therefore can't be conscious and they're not even intelligent," comes from the idea that consciousness or intelligence can't come from a computation--it's a non-computable problem of the sort described by Goedel's theorems. I don't know how he knows this. I don't think we know where consciousness comes from or whether there's only one way to manifest it or many ways, or whether ultimately our own minds arise from algorithms our brains are running, or...what?
youtube · AI Moral Status · 2025-05-16T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
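Each coded comment gets one value on each of the four dimensions above. As a minimal sketch, assuming the label sets are only those visible on this page (the real codebook may define more values), the schema could be represented like this:

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical sketch of the coding schema. The value sets below are only the
# labels that appear on this page and are probably not exhaustive.
Responsibility = Literal["none", "developer", "user", "ai_itself"]
Reasoning = Literal["unclear", "virtue", "deontological", "consequentialist"]
Policy = Literal["none", "ban"]
Emotion = Literal["indifference", "resignation", "fear", "outrage", "approval", "mixed"]

@dataclass
class CodingResult:
    """One coded comment, keyed by its platform comment ID."""
    id: str                        # e.g. "ytc_Ugzthwna2IS3FD6X-J54AaABAg"
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```

A typed record like this also makes it easy to reject malformed rows when the raw response is parsed, as in the lookup sketch further down.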
Raw LLM Response
[
{"id":"ytc_Ugzthwna2IS3FD6X-J54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw5GTAGzRVPZ0H5aN54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxrptvKqDVlLwN7NWR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzY5ABc24FjIFoTM3x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzBad-NGvWgds83o7Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxFE72xe26qnFK_jx14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzdn5ItP0YgfcN8hXl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwVXS69ZTyif65Q2Xt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwm0WtLGRfFC7Gfv2h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxqlW7uU8ExXeYtV5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
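The raw response is a JSON array with one record per comment in the batch; the coding result shown above presumably comes from the record whose `id` matches the inspected comment. A minimal lookup sketch, assuming the response text is already available as a plain string (the function name and usage below are hypothetical):

```python
import json
from typing import Optional

def find_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Return the coded record for one comment ID from a batch response.

    Assumes the model returned a bare JSON array like the one shown above;
    real responses may first need code fences or stray text stripped.
    """
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

# Hypothetical usage, with raw_response holding the array printed above:
# find_coding(raw_response, "ytc_UgwVXS69ZTyif65Q2Xt4AaABAg")
# -> {"id": "ytc_UgwVXS69ZTyif65Q2Xt4AaABAg", "responsibility": "none",
#     "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
```

In practice, a stricter pipeline would also validate each record against the coding schema before storing it, since batch responses occasionally omit comments or invent labels.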