Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
His core claim seems to be that neural networks are intelligent — and states that predicting the next word requires genuine understanding. But if a scaled-up neural net is all you need, why didn’t we get intelligent behavior from earlier architectures? We had decades of scaling up perceptrons, RNNs, and LSTMs. None of them produced anything resembling reasoning. It took the transformer and its attention mechanism — a specific, non-obvious architectural innovation — to get here. That’s not “just add more neurons.” That’s a fundamentally different design.
He also seems to hand wave away real problems with hallucinations. Yes, everyone makes mistakes and makes things up. But not to the level that you see an LLM do. Where it fundamentally gets confused about the most basic things.
No disrespect to his massive contributions to this field. He certainly is a genius. I’m just left with lots of questions after hearing him speak on this.
youtube · AI Moral Status · 2026-03-02T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxmIXlgp0BI-W43TUd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx1PraamSXkb939xbZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxzCPlcnq3EUYfLFS94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugydu1FzfYm_oJDvYNJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy9zvDKvJ5dBqZVtS54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwj3hMKGn3B0CXziaF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwmUi_jATYTq7RPkuh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxX6AwjIcq0gJepHMt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyxRBTSkyVrSGxm95F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxXYZQUVuWD0q6ZDRp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
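The raw response above is a JSON array of per-comment codes across four dimensions. A minimal sketch of parsing and validating such a response might look like the following. The allowed value sets are inferred only from the sample rows shown here; the actual codebook may define additional values, and the function name is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the sample response above.
# Assumption: the real codebook may permit values not seen in this sample.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "fear", "mixed", "approval", "outrage"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting rows with a missing id, a missing dimension, or a
    value outside the inferred codebook."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row without id: {row!r}")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["emotion"])  # indifference
```

Validating against a fixed codebook at parse time catches the common failure mode where the model invents an off-codebook label mid-batch, rather than letting it silently enter the coded dataset.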