Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "If the whole point of AI is to make tasks faster and easier so humans can do les…" (ytc_Ugx8dVYSJ…)
- "Great question. We see the same inherent training problems in human beings, part…" (ytr_UgyZxCCc7…)
- "Ai at it’s finest can’t tell the difference between a person and a box keep push…" (ytc_UgzNUSbzf…)
- "Just FYI it's this: >WEF estimated that 7.1 million jobs could be lost throu…" (rdc_cz2wy0b)
- "Its good if AI gets smarter then human cos it will discover all kinds of drug, i…" (ytc_UgxK58Xdi…)
- "For those who don’t know the guy said to be wit no people and on the second pic…" (ytc_UgybO4_gM…)
- "This is like half of the reality of AI mixed with like herobrine Minecraft myths…" (ytc_UgyvgFEzQ…)
- "Like I Heard another YouTuber saying if every human made job is replaced by AI t…" (ytc_Ugytg-LtL…)
Comment
Most of this video is anthropomorphizing and mystifying well understood mechanisms e.g. 16:50 LLMs it doesn't have a "reflex", they just generate the most likely next token based on their training data. Also hallucinations are well understood in that you can't have a deterministic outcome in a machine that is by nature statistical. Someone whose bread is buttered by promoting the dangers of SI isn't a good guest for understanding the guts of how AI works. Great for the imagination and speculation, but a more skeptical thinker like Yann Lecun or Gary Marcus would be a better guest to establish first principles.
youtube · AI Moral Status · 2025-10-31T22:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugyqhd1ojsGCVvZgPlt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzsTdoYt33NfZvZ-WV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz6ApNsK2WqjPxpYqd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwL0JQHjbK4UODn7jF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy8dhtkfIwBPGy6wbp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxPCcy5NCD0BmewJep4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxpQ7c_Q_2ku-3XfwV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwd86KTL7vHqjcQwql4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwanIugzMo42bsGlvd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwDbskoB4bN2da38SR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
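The raw response is a JSON array of coding rows, one per comment, keyed by the comment's `id`. A minimal sketch of the "look up by comment ID" step might parse that array and select a single row; the function name `lookup_coding` and the truncated two-entry sample below are illustrative assumptions, not the tool's actual implementation.

```python
import json

# Two entries copied from the raw LLM response above, for illustration.
raw_response = """
[
  {"id":"ytc_Ugyqhd1ojsGCVvZgPlt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwanIugzMo42bsGlvd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"}
]
"""

def lookup_coding(response_text, comment_id):
    """Parse a raw model response and return the coding row for one comment ID (or None)."""
    rows = json.loads(response_text)
    return next((row for row in rows if row["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgwanIugzMo42bsGlvd4AaABAg")
print(coding["responsibility"], coding["emotion"])  # developer indifference
```

The four dimension keys (`responsibility`, `reasoning`, `policy`, `emotion`) match the Coding Result table above, so a hit can be rendered directly into that table layout.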