Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
His core claim seems to be that neural networks are intelligent — and states that predicting the next word requires genuine understanding. But if a scaled-up neural net is all you need, why didn’t we get intelligent behavior from earlier architectures? We had decades of scaling up perceptrons, RNNs, and LSTMs. None of them produced anything resembling reasoning. It took the transformer and its attention mechanism — a specific, non-obvious architectural innovation — to get here. That’s not “just add more neurons.” That’s a fundamentally different design. He also seems to hand wave away real problems with hallucinations. Yes, everyone makes mistakes and makes things up. But not to the level that you see an LLM do. Where it fundamentally gets confused about the most basic things. No disrespect to his massive contributions to this field. He certainly is a genius. I’m just left with lots of questions after hearing him speak on this.
youtube AI Moral Status 2026-03-02T19:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxmIXlgp0BI-W43TUd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx1PraamSXkb939xbZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxzCPlcnq3EUYfLFS94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugydu1FzfYm_oJDvYNJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugy9zvDKvJ5dBqZVtS54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwj3hMKGn3B0CXziaF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwmUi_jATYTq7RPkuh4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxX6AwjIcq0gJepHMt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyxRBTSkyVrSGxm95F4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxXYZQUVuWD0q6ZDRp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]