Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytc_UgwzXJpSX…`: "This discussion has been so eye opening to what and who but not how to fight ag…"
- `ytr_UgycK9cLm…`: "Yes, but they are also pointing out problems with how it is used. It’s not just …"
- `ytc_UgylL4hXJ…`: "Elon Musk lied? I'm shocked! So shocked! Listen, Tesla is straight up pulling f…"
- `ytc_UgyZVOm9c…`: "I don't think I'm the kind of disabled artist you or the tech redditers are talk…"
- `ytc_Ugy3OvGvP…`: "Hannah, you've been so influential to my younger education... But please do not …"
- `ytc_UgzYkgrDL…`: "ANY art beating AI, no matter if you spent years or a couple of days on your art…"
- `ytc_UgwlH3kFv…`: "If I used nothing but a human secretary to communicate with other people, no one…"
- `ytr_UgwPPbh0c…`: "@mrswing Yeah, with AI Chatbots. And when you look on Data of how happy customer…"
Comment
It really bugs me the way people talk about AI based upon how the AI is communicating with them, when that has little necessary relation to the actual 'thinking' of the AI.
It is the definition of a Chinese room: its 'brain' is taking symbols that aren't even represented as the text you see, and using a staggeringly complex set of rules to sort those symbols and push them out the other side.
There is no reason for the person in the room to understand the symbols as language; they don't know, nor need to know, what they are saying. You can't communicate with the person in the Chinese room by asking them questions in Chinese; they're just going to run the symbols you put in through the rule set and output an answer wholly unrelated to their personal thought process.
So on what basis are you looking at the output of the AI and coming to any kind of definitive conclusion about _its_ 'thought' process? This whole thing stinks of anthropomorphizing a system you've yet to confirm has all that much alignment between the symbols and its own 'thinking'. (Which is also supported by the way people, even experts, keep slipping up and talking about these systems as if they were already conscious things.)
Flipping this around: if you assumed the AI was achieving consciousness, we have no idea whether it would be capable of working out how to communicate with us in an intelligible way. You might just as easily expect it to try communicating by breaking the rules, like the person in the room desperately sending out short repeated messages to get across that they are stuck in there. Though even that assumes they have any concept of being stuck in a room, of other people, of communication at all, and so on. There are so many assumptions required before you can say, 'Yes, this thing the AI is outputting indicates such and such.'
As far as I can see, many people are completely lost in the sauce on this. They are basing things on marketing, treating these systems like humans, and leaping far ahead of what we actually understand, have proved, or even can prove, and I see this as much from 'experts' as from laymen. This is not surprising: while I'm far from a 'do your own research' guy, the field of AI is _notorious_ for horrendously overestimating how far we've come. I'm not talking about tabloid stories; I'm talking about people like Alan Turing himself massively overestimating how intelligent machines would be in the time frames he suggested. And sorry, I'm sure Soares is a smart guy, but he's not Alan Turing, and he's not immune to the way this field appears to have broken people's brains for its entire history. Breakthrough after breakthrough has had people claiming AI is about to become AGI, and they have never once been even remotely close to right. Perhaps that will break down one day, but odds are this is the same thing happening again. (And every sign points to me being right, as recent iterations demonstrate the whole endeavor is badly stalling.)
Source: youtube · AI Moral Status · 2025-10-31T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyuLx_n9Z55JJxfFdZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy-9l3p47Y3HD5zs5V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyPNrdDRZiPWpfWqHB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwjdYfnsDQuw2Edxfx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzG5Rr1x_jQ4oSWUrZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzKEgf6P7pZRCRYCEd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugxfc_dAuv16pJqt3Fx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw8-3TVxfY7fty90_B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwT7RJ1QqXIRXp3f8J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwAKvWCoXZdweSDSsx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
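Turning a raw batch response like the one above into per-comment codings is a simple parse-and-index step. The sketch below is a minimal, hypothetical example (not the tool's actual implementation): the `index_codings` helper and the strict validation policy are assumptions, and the four dimension names are taken from the response format shown above. It uses only two entries from the batch for brevity.

```python
import json

# A shortened raw LLM response in the format shown above
# (one JSON object per coded comment).
raw_response = """
[
 {"id":"ytc_UgyuLx_n9Z55JJxfFdZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugy-9l3p47Y3HD5zs5V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
"""

# The four coding dimensions every entry must carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(text):
    """Parse a raw batch response and index codings by comment ID.

    Raises ValueError if an entry lacks an ID or any dimension, so a
    malformed model response fails loudly instead of being silently
    half-ingested.
    """
    entries = json.loads(text)
    by_id = {}
    for entry in entries:
        if "id" not in entry:
            raise ValueError(f"entry without id: {entry!r}")
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{entry['id']} missing {missing}")
        by_id[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return by_id

codings = index_codings(raw_response)
# Look up one coded comment by its full ID.
print(codings["ytc_Ugy-9l3p47Y3HD5zs5V4AaABAg"]["reasoning"])  # deontological
```

Indexing by ID is what makes a "look up by comment ID" view cheap: each lookup is a single dict access rather than a scan over every batch response.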