Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@hdnh2006 well there it's mostly reasonable yes, but thats not the example they …
ytr_UgwgaLGIJ…
Sounds like the AI doesnt like who is training it... Seems to never have a probl…
ytc_UgxXtdE8I…
So funny that this AI talk started like two days after I wrote my Senator, Josh …
ytc_UgxCxBJhN…
18:57 interesting thought here:
Even on, like, chatGPT, where it refuses point-…
ytc_Ugw6ALMA1…
Where does an AI get the idea that being switched off is a bad thing to be avoid…
ytc_UgyCLLm0F…
I am basith from India.
I am in India robot manufacturing company.I hope for hel…
ytc_UgxfgRhKs…
video shows AI being programmed towards racism on white people, in clear a black…
ytr_UgwaCDPYp…
If AI becomes synonymous with money and power, as it appears to be, then we as a…
ytc_UgzsqBUVy…
Comment
Interesting philosophical points, and incredibly on the spot given the attacks hitting Anthropic right now. It also has a nice introduction to neural networks, especially convolutional ones. I especially liked the rubber-band metaphor for backpropagation.
However, it misses or skips most of the relevant innovations after word2vec in 2013 that Geoffrey Hinton contributed to, yet nevertheless ruthlessly extrapolates implications to now. The interview skips 'attention is all you need' and the fact that an LLM is not a real-time learner; it is static, and the "learning" is prompt injection into the conversation history (although what you say can be used as training data for a new incarnation). It talks about agents in a non-explanatory way. It skips prompt and context engineering and pretends everything we see in chat applications is a direct bridge to a real-time-learning foundational model. It skips the importance of tools, as if models actually need to calculate. Last but not least, it skips the age of agentic engineering that we're in now. Static foundational models are at the bottom of all that. Fine-tunings exist. Philosophical questions and answers are relevant, but the talk has something very superficial, which is a bit disappointing.
youtube
AI Moral Status
2026-03-02T18:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
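The four dimensions in the table come from a fixed coding schema. A minimal validator sketch, assuming the value sets are exactly those observed in the outputs on this page (the real codebook may allow additional categories):

```python
# Hypothetical validator for one coded comment. The allowed value sets
# below are inferred from this page's outputs, not the full codebook.
ALLOWED = {
    "responsibility": {"none", "distributed", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"approval", "fear", "outrage", "indifference",
                "resignation", "mixed"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = coding.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```

For the coding shown above, `validate_coding({"responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"})` returns an empty list.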
Raw LLM Response
```json
[
  {"id":"ytc_Ugw_aEXTFogAnQ2YMMd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgytV1pB9MINc2dSpMd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxirK7zMYMdyUSLAzV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzmTc702KrCMa97eUl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw0R-e1dSRDU2umLYt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxpvyvIn7j1qgSg9Lx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwYnZAcijKqJ6uVF6t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxvGmQ29xS0swi0S2B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzlJloebKr_q-5LDah4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx8t3JtLkyvFanpHgB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
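A raw response like the one above can be parsed with the standard `json` module and keyed by comment ID, which is what the lookup-by-comment-ID view at the top of the page implies. A minimal sketch, using two rows copied from this batch (the helper name `index_by_id` is hypothetical):

```python
import json

# Payload trimmed to two entries from the batch shown above; the real
# response carries one object per comment in the batch.
raw = '''[
 {"id":"ytc_UgxirK7zMYMdyUSLAzV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzlJloebKr_q-5LDah4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

def index_by_id(raw_response: str) -> dict:
    """Parse the model output and key each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(raw_response)}

codings = index_by_id(raw)
print(codings["ytc_UgzlJloebKr_q-5LDah4AaABAg"]["emotion"])  # prints "outrage"
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, so a production version would want to catch that and flag the batch for re-coding.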