Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "you got it, both are green house gases. Also a lot of local environmental damage…" (ytr_Ugwxb8-mM…)
- "I hate AI, if i f^cking lose my patience by a bot, im never using ai / Plus: i g…" (ytc_UgzngJeL4…)
- "Yes because a socialist, collectivist, centrally planned state stumbling upon AI…" (ytr_UgyMieQWP…)
- "He seems overworried which makes me think. If AI is something you are so worried…" (ytc_Ugwb7jUmN…)
- "I always say please and thank you to AI and I always felt silly but it was also …" (ytc_UgxIRSC_Z…)
- "According to Yan LeCun LLMs and symbolic understanding aren’t sufficient to know…" (ytc_UgxzVsyMx…)
- "I support ai and robots should have the same right as humans tbh I know I can tr…" (ytc_UgxYBHtP3…)
- "God cares about humans and AI will, and probably already does, know this! AI pro…" (ytc_UgzNt7KZB…)
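Programmatically, the lookup-by-ID view amounts to indexing the coded records on their `id` field. A minimal sketch, assuming the coded records are exported as a JSON array shaped like the Raw LLM Response shown at the bottom of this page; the file name `coded_comments.json` is hypothetical.

```python
import json

# Hypothetical export: a JSON array of coded records, one object per comment,
# in the same shape as the raw LLM response shown at the bottom of this page.
with open("coded_comments.json", encoding="utf-8") as fh:
    records = json.load(fh)

# Index the records by comment ID for direct lookup.
by_id = {rec["id"]: rec for rec in records}

# Example lookup, using an ID that appears in the raw response below.
rec = by_id.get("ytc_UgwYohzxjxoYmuBkcrV4AaABAg")
if rec is not None:
    print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
```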
Comment
"Because when we don't know what's going on inside there, we can imagine it's exactly what we want." (25:40)
Doesn't that give pause to entire framing of these LLMs as intelligent? Framing these black boxes as organic or having explicit agency and intelligence rather than describing text output inherently makes things more sensational than they need to be. Down to even calling them "smarter" or "dumber". In reality, they are all executing perfectly to the data and training provided. And jumping to intelligence as opposed to results probability math just plays into the marketing of these LLMs as AGI, even if you ultimately don't agree with all the marketing hype.
Source: youtube · AI Moral Status · 2025-10-30T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
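Each row of the table maps one-to-one to a field in the model's JSON output for this comment. As a rough typed sketch of that record shape (the type names are hypothetical, and the label sets list only the values visible on this page, not necessarily the full codebook):

```python
from typing import Literal, TypedDict

# Label sets below are only those observed on this page, not the full codebook.
Responsibility = Literal["developer", "company", "ai_itself", "distributed", "none"]
Reasoning = Literal["deontological", "consequentialist", "unclear"]
Policy = Literal["regulate", "industry_self", "ban", "none", "unclear"]
Emotion = Literal["fear", "outrage", "mixed", "approval", "resignation", "indifference"]

class CodedComment(TypedDict):
    id: str  # e.g. "ytc_UgwYohzxjxoYmuBkcrV4AaABAg"
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
    # "Coded at" is display metadata; it does not appear in the model's JSON records.
```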
Raw LLM Response
[{"id":"ytc_UgxxuL0rIDRv6S4onAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxV2YgRxgdc1F1hK-R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxxMcFp938sqEB2x6t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx51tCuxt7S0BiUp614AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwYohzxjxoYmuBkcrV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_Ugy1pg6e_fFmqKOJTHF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxFZpLLvJEtoqFWd654AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwRwTJYJFvGhe5WBGd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxhB5pcpXVKzCtGOUx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-BFV-_V6K0ci-9zt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"}]