Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "It is not an automatic predictor of text. It is superior than that, somewhat sma…" (ytc_UgwhkKosG…)
- "@Lemonaid-Volts, I'm not a pro-ai (though I don't root for artists here either)…" (ytr_UgyYf6Uqj…)
- "ChatGPT has been twisting responses 'for the right reasons' since it was first r…" (rdc_l5wr1bp)
- "A tuning or having a robot follow is going to train the robot. And allow aurora…" (ytc_UgxG_IDWi…)
- "What is the real dark side besides what is mentioned in this video? AI ChatGPT …" (ytc_UgwS20qDs…)
- "The only thing AI stuff should be used for is when it isn’t for profit. Like a l…" (ytc_Ugz2XkX29…)
- "This is the first good, rational talk on AI I have come across on youtube.…" (ytc_UgwtOEwPT…)
- "He must be AI cause all the things he mentioned would also benefit him. Interest…" (ytc_Ugz6NWA06…)
Comment
My problem with the "philosophers will figure out whether this qualifies as 'reasoning'" line is that it still assumes way too much about what the models are actually doing, that "oh, yeah, maybe it *technically* might not be reasoning but hey, it's definitely in that direction". No, it's also possible that LLM chain of thought-style reasoning is ultimately a dead end, that it can sort of do things that look like reasoning but too much is fundamentally missing to lead to even human-level intelligence. Clever Hans wasn't a sign that you could one day teach horses to do math, he was responding to subconscious physical cues unrelated to any numbers, and it's entirely possible you can mimic basic reasoning with syntactical analysis but not anything more advanced, or being far too inefficient to practically do so, like building with bricks and no mortar.
And for the obvious counterpoint, yes, technology improves and things get better, but not always and not in every category; fusion and superconductors with reasonable requirements have been just twenty years away for decades, but it always turned out the challenges were much harder than expected. That's why I take so much issue with the above, language affects how we think about things and evaluate evidence, and handwaving "reasoning" as ultimately a philosophical point avoids confronting whether the thing is what we think it might be, or whether it's a very clever facsimile that can't succeed with larger tasks. Talking like LLMs actually understand anything, even with all the caveats in the world, predisposes us to evaluate it in those terms.
(I have a whole bunch of rants on these topics and the misrepresentations of what AI is actually doing these days, the aside about hidden states or "knowing it's being tested" being two others, but I have limited time and energy to put together a YouTube comment :P )
Source: youtube · AI Moral Status · 2025-10-30T20:0… · ♥ 42
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz1lxfTWilYllBJG5F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz4kHbcpJBOP46Ifl14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7WniCkN-N8KLJgbp4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxLv3EAXxRBQZrzcH54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwuNrHVIO76mi4l9al4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwhJHE0Xw6pRv7TYz94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw33QRQLgC9LkVEuDB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyaGTEsAQ1XU_TmzZR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy2z9Qt1hW3GTC3v4V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwA7PYa6nANsdVzNGF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
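The raw response above is a JSON array with one coding object per comment, each carrying the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch can be parsed and indexed for per-comment lookup — variable names are illustrative, not taken from the actual tool, and only two entries from the batch are reproduced for brevity:

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment
# (two entries copied from the batch above).
raw = '''[
  {"id":"ytc_UgwhJHE0Xw6pRv7TYz94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyaGTEsAQ1XU_TmzZR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

codings = json.loads(raw)

# Index by comment ID so a coded comment can be looked up directly,
# as in the "Look up by comment ID" view.
by_id = {row["id"]: row for row in codings}

row = by_id["ytc_UgwhJHE0Xw6pRv7TYz94AaABAg"]
print(row["reasoning"], row["emotion"])  # deontological mixed
```

The first entry is the coding displayed in the table above (reasoning: deontological, emotion: mixed); the ID prefix (`ytc_`, `ytr_`, `rdc_`) appears to distinguish the comment source.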