Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
LLMs are designed to be superficially convincing, the whole point is to convince you that they are operating at a human intelligence level through their words.
“Nobody understands how the AI works underneath” are the new “I think I can safely say nobody understands Quantum Mechanics”. There’s literally a whole subset of the AI industry & academia debugging these AI models and figuring out what kind of connections and weights are being used and how the data structures come out. Unlike the human brain you can have the AI software print a log as it goes in every decision it makes - which tokens it’s tokenized, the path through the network, etc. and anyone doing this knows full well how they work. But that’s not good marketing, “AI is sentient” is standard mysticism and anti-intellectualism.
youtube · AI Moral Status · 2025-07-10T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgyVshU967lWNT1W5vp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxX5AuxhGTXFds5c1h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyQSS19T4b8k9nVlOt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzb452AO7ltEr6mjj94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxhQN9DeZt5ozPL7rJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzkys4fiGGHHPrTLdt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwOqiQeELXmTEyLOGF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz8SsvWAGagM2d4Ee14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzMf_8U9xkDX0tmBAF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxJ5TOggBMe_m17RHN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]
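A minimal sketch of how a raw batch response like the one above can be parsed and looked up by comment ID. This assumes the model output is a valid JSON array (the stray `)` terminator corrected to `]`); the function names and the all-"unclear" fallback are illustrative, not the tool's actual implementation. Only two records from the response are reproduced here.

```python
import json

# Abbreviated raw model output, as a valid JSON array (illustrative subset).
raw_response = '''[
{"id":"ytc_UgyVshU967lWNT1W5vp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyQSS19T4b8k9nVlOt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse a batch coding response into a dict keyed by comment ID."""
    records = json.loads(text)
    return {r["id"]: {d: r.get(d, "unclear") for d in DIMENSIONS} for r in records}

def lookup(codings, comment_id):
    """Return the coded dimensions for one comment; absent IDs code as 'unclear'."""
    return codings.get(comment_id, {d: "unclear" for d in DIMENSIONS})

codings = parse_codings(raw_response)
print(lookup(codings, "ytc_UgyQSS19T4b8k9nVlOt4AaABAg"))  # coded record
print(lookup(codings, "ytc_missing"))  # every dimension falls back to "unclear"
```

Defaulting missing IDs to "unclear" mirrors how an unmatched comment would surface as an all-"unclear" row in the coding-result table.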