Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Interesting philosophical points, and incredibly on the spot given the attacks hitting Anthropic right now. It also has a nice introduction to neural networks, especially convolutional ones. I especially liked the rubber-band metaphor for backpropagation. However, it misses or skips most of the recent relevant innovations from after word2vec in 2013 that Geoffrey Hinton contributed to, yet nevertheless ruthlessly extrapolates implications to now. The interview skips 'Attention Is All You Need' and the fact that an LLM is not a real-time learner; it is static, and the apparent learning is prompt injection into the conversation history (although what you say can be used as training data for a new incarnation). It talks about agents in a non-explanatory way. It skips prompt and context engineering and pretends everything we see in chat applications is a direct bridge to a real-time-learning foundational model. It skips the importance of tools, as if models actually need to calculate. Last but not least, it skips the age of agentic engineering that we're in now. Static foundational models are at the bottom of all that. Fine-tunings exist. Philosophical questions and answers are relevant, but the talk has something very superficial about it, which is a bit disappointing.
YouTube · AI Moral Status · 2026-03-02T18:3… · ♥ 1
Coding Result
Dimension: Value
Responsibility: none
Reasoning: unclear
Policy: none
Emotion: approval
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugw_aEXTFogAnQ2YMMd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgytV1pB9MINc2dSpMd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxirK7zMYMdyUSLAzV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzmTc702KrCMa97eUl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw0R-e1dSRDU2umLYt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxpvyvIn7j1qgSg9Lx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwYnZAcijKqJ6uVF6t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxvGmQ29xS0swi0S2B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzlJloebKr_q-5LDah4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx8t3JtLkyvFanpHgB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
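The raw response above is a JSON array of per-comment codes, one record per comment, each carrying the same five fields (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed and sanity-checked before loading it into the coding table — the field names come from the response itself, but the validation and tallying logic here is illustrative, not the tool's actual pipeline:

```python
import json
from collections import Counter

# Two records excerpted from the raw response above; in practice this string
# would be the model's full output.
raw = '''[
 {"id":"ytc_Ugw_aEXTFogAnQ2YMMd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxirK7zMYMdyUSLAzV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

# Every record must carry all five coded dimensions.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

codes = json.loads(raw)
assert all(REQUIRED <= rec.keys() for rec in codes), "record missing a dimension"

# Tally one dimension across the batch, e.g. emotion.
emotions = Counter(rec["emotion"] for rec in codes)
print(dict(emotions))  # → {'resignation': 1, 'approval': 1}
```

Keeping the check strict (all five keys present) means a malformed model response fails loudly at ingest rather than producing a half-coded row.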