Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a record by its comment ID (a minimal sketch of the lookup logic follows the sample list)
Random samples
- "That a second guy with long hair is an AI acting like a human omg. They are here…" (ytc_UgyOfPy0e…)
- "Because BP is a diversified company that does a lot more than just selling crude…" (rdc_czlm583)
- "Ask the AI to draw 7 red lines. One green and one in invisible ink all perpendic…" (ytc_Ugwxo4zpk…)
- "Whether you like it or not chaina is number one in technology and artificial int…" (ytc_UgwK-3BY3…)
- ">It's serious this time, we simply cannot allow this to be taken from us or u…" (rdc_kif87ys)
- "Mabey this robot is just right. It takes in a massively greater amount of inform…" (ytc_UgyYCDO_K…)
- "If AI pay tax, does AI has right to vote US president! So, maybe only mass AI's …" (ytc_Ugx8WCoC4…)
- "Replace them with AI it won't make much difference anyway... It is these writers…" (ytc_UgzFZysj9…)
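The lookup mechanics aren't shown on this page; here is a minimal sketch, assuming the coded records are stored as a JSON array shaped like the raw response further down (the filename `coded_comments.json` is an assumption, and `k=8` simply mirrors the eight samples listed above):

```python
import json
import random

def load_codes(path="coded_comments.json"):
    # Assumed layout: a JSON array of records, one per comment, each with
    # an "id" plus the four coding dimensions shown in the table below.
    with open(path) as f:
        return {rec["id"]: rec for rec in json.load(f)}

codes = load_codes()

# Look up by comment ID. The previews above truncate the IDs; a lookup
# needs the full ID, e.g. the first one in the raw response below.
record = codes.get("ytc_Ugxpz7mBcwu2pU7krIB4AaABAg")

# Draw a fresh batch of random samples to inspect.
samples = random.sample(list(codes.values()), k=min(8, len(codes)))
```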
Comment
> 13:00 Re: AI reasoning. The problem I have with this is that with humans, **reason is an afterthought**. There's a series of studies on people who had split brain which is fascinating in and of itself. This could be it's own topic of discussion, but to be brief, these people's right hand literally didn't know what the left was doing if the right eye couldn't see it. It's weird. But it gets even weirder. Because the researchers would show a pictures to just the left eye, tell them to pick up the thing they'd see, the left hand would grab it and hand it to the right hand, and the right hand, which was the only side that was hooked up to the power of speech (honestly, this is a rabbit hole I'm trying not to go down), they'd ask them why they picked that item and they gave a reason. But it was a stupid reason like "Oh, I had one of these when I was a kid and I was just thinking about it" or something dumb like that. It's dumb because the real reason was that they were told to pick it up, and they did. They should have said "Well, you told me to" but they didn't know that they were told to because it was told to the other half of their brain.
>
> The point is, how much of the "reason" for what *we* do isn't the reason we give? We do things every day just because we've got a habit with no decision, and then afterwards we treat it like there was a choice before hand. **Reason is an afterthought**.
>
> So if we're asking AI to articulate it's reason as it's doing thing, not only is that not the way we do it, but it might not be the way it's doing it. The reason is that the weighted nodes in it's decision tree lit up, but it doesn't know why. So it's gonna do what we do an just make up a reason. It's just as useless as we are at explaining what's going on in our own head.
>
> We already know that AI will always generate an answer, even if that answer is complete nonsense. If we can't trust them any other time, why do we trust their reason. Especially when we can't trust our own.
youtube · AI Moral Status · 2025-11-01T04:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
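All four dimensions are categorical. Below is a sketch of the codebook as Python enums, built only from the values visible on this page; the actual codebook may define additional levels, so treat this as a partial reconstruction:

```python
from enum import Enum

class Responsibility(Enum):
    NONE = "none"
    COMPANY = "company"
    AI_ITSELF = "ai_itself"
    USER = "user"

class Reasoning(Enum):
    UNCLEAR = "unclear"
    CONSEQUENTIALIST = "consequentialist"

class Policy(Enum):
    NONE = "none"
    REGULATE = "regulate"
    LIABILITY = "liability"

class Emotion(Enum):
    OUTRAGE = "outrage"
    INDIFFERENCE = "indifference"
    MIXED = "mixed"
    APPROVAL = "approval"
    RESIGNATION = "resignation"
    FEAR = "fear"
```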
Raw LLM Response
```json
[
  {"id":"ytc_Ugxpz7mBcwu2pU7krIB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwskrXu9Gvv0qJUaKt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwvyL5MNpNoJn58MNR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx9zcGzFMuHbitH_KR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugw7TnR-xOvM4ryZ4514AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxpYHXahpQ6MMLDHxB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxIOD3BIRkx6ODHoSd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxi7AFVI3Mslat1Z954AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyuwUqi288wQwjicY94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx520150TNoWIH6Wqh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
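Since the pipeline depends on the model emitting strict JSON like the batch above, each response is worth validating before it lands in the dataset. Here is a minimal check mirroring the value sets sketched earlier; the function name and the idea of flagging IDs for a retry are assumptions, not the project's actual pipeline:

```python
import json

# Allowed values per dimension, as seen on this page.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "user"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "indifference", "mixed", "approval", "resignation", "fear"},
}

def validate_batch(raw_text):
    """Parse one raw model response; return IDs of records that break the schema."""
    records = json.loads(raw_text)  # raises ValueError if the model emitted non-JSON
    bad = []
    for rec in records:
        ok = set(rec) == {"id", *ALLOWED} and all(
            rec[dim] in vals for dim, vals in ALLOWED.items()
        )
        if not ok:
            bad.append(rec.get("id"))
    return bad  # candidates for a retry or manual review
```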