Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
Random samples
- “ChatGPT can be incredibly helpful for programmers if you use it appropriately an…” (`ytc_UgwGZZE95…`)
- “True. Marketing has succeeded in corrupting the true definition of AI. All curre…” (`ytc_UgxHy9uIo…`)
- “But you do need AI to turn yourself into a Ghibli character If you can't draw…” (`ytc_Ugygn0DP7…`)
- “Not true. ChatGPT misheard what Alex said, like when ChatGPT defined a line inst…” (`ytr_Ugz-Tvg4P…`)
- “Can you sue a robot and does doing so robot lead to its demise aka shutting dow…” (`ytc_Ugyt4336a…`)
- “A.I. is dangerous. I really believe people should take it seriously. There's so …” (`ytc_UgwhxqAwi…`)
- “8:04 just pointing out studio gibli didn’t make this style, I saw it in movies,…” (`ytc_UgwtHJWfQ…`)
- “Hhh imagine two ai lawyers defending each side with the most logical way, gettin…” (`ytr_UgyDW3I_9…`)
Comment
> The closest thing I’ve heard as a directive for AI moral alignment is:
> “Act in such a way that an average human, who was smarter than you are and knew everything you knew, but with their existing values as agreed by majority human consensus today, would see no risk, whether direct or indirect, by the action”
> Yes it’s wordy. It’s damnably hard to be concise while closing the most obvious loopholes, and it’s still probably not nearly enough. Incidentally a variation of that clause would be part of any deal with a devil lmao (and that is the level of care which should be taken when directing a superhuman AGI- treat them as nefarious and duplicitous)
youtube · AI Moral Status · 2025-11-17T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
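For reference, here is a minimal sketch of how one coded record could be checked against this coding scheme. The label sets below are an assumption inferred only from the values visible on this page (the actual codebook may define additional categories), and `validate_record` is a hypothetical helper, not part of the tool:

```python
# Hypothetical validator for one coded record. The allowed label sets are
# assumptions inferred from values visible on this page, not the full codebook.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"approval", "fear", "mixed", "indifference"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with a coded record (empty means it passes)."""
    problems = []
    if not str(record.get("id", "")).startswith(("ytc_", "ytr_")):
        problems.append(f"unexpected id format: {record.get('id')!r}")
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}={value!r} not in {sorted(allowed)}")
    return problems
```

Rejecting malformed records before they reach the per-comment view keeps a single bad batch from silently corrupting the coded dataset.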
Raw LLM Response
[
{"id":"ytc_UgzygqCafbRLsp9Xr194AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugywgk6du9hbvl99LO94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyl3QgrWOTtl6hKe3R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzPh9ySYWWVptvVjrF4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyyxM9y89cm6W4WC954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy4E7InsIdi_3w7hNB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwyn9yX1AMEJtOc7114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy9JSmCZTyTbp2N4NZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyfrNfhl5S1I770on14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwmRXUeGPtQkWYsN-p4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
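To illustrate how a raw batch response like the one above can back the per-comment Coding Result view, here is a minimal sketch. Everything in it is hypothetical scaffolding; the one-record `raw_response` stand-in reuses the last record from the batch above, whose values match the Coding Result table:

```python
import json

# Stand-in for the stored raw LLM response (one record from the batch above).
raw_response = '''[
  {"id": "ytc_UgwmRXUeGPtQkWYsN-p4AaABAg", "responsibility": "developer",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"}
]'''

# Parse the batch and index each coded record by its comment ID, so that
# "look up by comment ID" becomes a constant-time dictionary lookup.
records = json.loads(raw_response)
by_id = {record["id"]: record for record in records}

# Rendering the Coding Result table for one comment is then a plain lookup.
coding = by_id["ytc_UgwmRXUeGPtQkWYsN-p4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coding[dimension]}")
```

Storing the response verbatim, as this page does, means the per-comment tables can always be re-derived and audited against exactly what the model returned.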