Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
It's not like living from art is already difficult without having AI making art …
ytc_Ugy_fadUQ…
Why would the rich pay artists if there is a tool for them to do it faster and c…
ytr_UgzPfXMUQ…
Part of the reasons for automation is cost. Businesses see employees as a huge c…
ytc_Ugwo8X-bg…
It’s not possible that he does not know how extremely and insidiously Google is …
ytc_Ugw0YaZRx…
If the billionaires really want utopia, they will sell very cheap humanoid robot…
ytc_UgwB7mkNT…
Thank you for making this video Sam, Any time I see someone defending AI pictur…
ytc_UgzuTFIEm…
Damn ai is mean to mothers imagine ai’s if they had mothers like nahhhhhhh …
ytc_Ugz9zqNpT…
Yeah I'm not sure teaching a robot how to fire a Tommy gun the smartest thing…
ytc_UgyICbP8O…
Comment
AI is potentially more dangerous than cloning. Cloning is legally regulated so AI should be legally regulated as well and restricted only to areas where it truly benefits human beings. All other areas where AI denies a human being should be heavily restricted and heavily guarded by ferocious force. Even by nuclear deterrence if necessary... Once AI and it's overlords know that their servers and all other physical infrastructure can be fried, they will behave themselves. They know very well there is no use of cloud memory or cloud software without body or hardware...
youtube
AI Moral Status
2020-09-28T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugyh5kFrFAX76VP8r0R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx4dcPvA4YcYf8d4eF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxV8PxFEwWjGcMYcOF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxCVDFqc6-aOr8MXZJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwOC0T-5w2wpallZ654AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxXRgyluZtDHml6XQt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyHqnYF13ublJBgJAJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx_njvJxA4SpHwtHE14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxJ4QSF9sQKKH7fIs14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwrtg83GB2lOEQhqxx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
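The raw response above is a JSON array of per-comment records, each coding four dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response might be parsed and validated is below; the allowed value sets are inferred only from the values visible in this sample and may well be incomplete, and the function name is hypothetical, not part of the actual pipeline.

```python
import json

# Allowed values per dimension, inferred from this sample alone —
# the real coding scheme may include values not seen here.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every dimension value."""
    records = json.loads(raw)
    for rec in records:
        # IDs in the sample start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

raw = '[{"id":"ytc_example","responsibility":"company",' \
      '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
records = parse_coding_response(raw)
print(records[0]["policy"])  # → regulate
```

A validator like this catches the common failure mode of LLM-based coding — the model inventing an off-schema label — before bad records reach the results table.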