Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Generative AI is 100 percent 'bad". It passes on wrong information. And it is NO…" — ytc_Ugz_b8DYm…
- "Non-disclosure agreements need to end. Black boxes must be open source and avail…" — ytc_UgyzXzWsq…
- "One thing to not forget: AI writes incredibly good testing code. I pretty much l…" — rdc_jpsidqx
- "Bro, it's just fast machines and vector/linear algebra. There's really no inspir…" — ytr_Ugxvl0TsU…
- "another fun way to fuck with AI because Lavendertowne is a prompt, if she just d…" — ytc_UgyjubtCT…
- "Good point. Idk if artificial analysis posted the cost yet, but just some extra …" — rdc_ohx6hg7
- "This is a great talk, thank you for your passion and for always promoting Explai…" — ytc_UgwDVfZSZ…
- "I think the root of the 'AI taking our jobs ' problem is related capitalism. Ca…" — ytc_UgwXPFPY9…
Comment

> Question: The laws talked about in the video are if something happened to the AI, but what if the AI broke a law? What punishments would be created for the robots? Would punishment-reward systems be programmed into the robots? For example, if a robot did something 'good' then it would be similar to dopamine being released in the human brain.

- Source: youtube
- Video: AI Moral Status
- Posted: 2017-10-03T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwhfUbtpxRpFCg2RbF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugxw29hCRkRXSp_1xQl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzxs_MaS9tOuE-ofU94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzZuMj4n3MDIkIG1ql4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxCNly2eYnFv9N7GZB4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyiQyxfa4atkYseCmx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugx7Lk0ES4Dp34m9F2h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyVymjGfAAf9ZSK9w14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz41nduqULPOKslKst4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyL7jLrsf5hxsujVlZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
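The batch response above is a JSON array of per-comment codings keyed by comment ID, which is what makes the "look up by comment ID" view possible. A minimal sketch of that lookup, assuming the response parses as standard JSON (the variable names and the two sample rows here are illustrative, not the tool's actual code):

```python
import json

# Illustrative raw model output in the batch format shown above
# (two rows copied from the real response).
raw_response = """
[
  {"id": "ytc_UgwhfUbtpxRpFCg2RbF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugxw29hCRkRXSp_1xQl4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

# Index the batch by comment ID so any single coding can be retrieved.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgwhfUbtpxRpFCg2RbF4AaABAg"]
print(coding["responsibility"])  # ai_itself
print(coding["policy"])          # liability
```

Each dimension in the Coding Result table (Responsibility, Reasoning, Policy, Emotion) corresponds to one key in the matching JSON row.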