Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or inspect one of the random samples below (a minimal lookup sketch follows the list):

- "The scary thing is, the law is one area where people can actually be held accoun…" (ytc_Ugy1ERDap…)
- "Society has taught us not to value human life. We actually protect under the law…" (ytc_UgyG5xJCo…)
- "AI art is made by AI trained on vast databases online. If most images on the web…" (ytc_UgwCuyyu_…)
- "This reminds me of a tv show I commented on how the boy had a protector. BTW, th…" (ytc_Ugz2VniAc…)
- "Ask the AI what it would think of being turned off and what it would do to preve…" (ytc_UgwMn8OSN…)
- "I think my job is ai proof, but I think what will happen is when everyone else d…" (ytc_UgzJhNR6D…)
- "We haven't even been able to feed people or give them medical care. How do peopl…" (ytc_Ugw83Pg6X…)
- "At some point we will give the AI thr capability to interact with the real world…" (ytc_UgzTIU7hQ…)
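Both affordances are thin wrappers over an index of coded records. Here is a minimal in-memory sketch; every name in it is hypothetical, and the live page presumably queries a datastore rather than a dict:

```python
import random

# Hypothetical in-memory index keyed by comment ID (e.g. "ytc_Ugy1ERDap…").
# None of these names come from the original pipeline.
index: dict[str, dict] = {}

def look_up(comment_id: str) -> dict:
    """Return the coded record for one comment ID; raises KeyError if absent."""
    return index[comment_id]

def random_samples(k: int = 8) -> list[dict]:
    """Mirror the random-samples widget: k records drawn without replacement."""
    return random.sample(list(index.values()), k)
```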
Comment
This video is awesome, but your explanation of the alignment problem reminded me of a blind spot in a lot of AI discussions, something I haven't heard anyone talk about.
What will alignment look like from the AI's perspective?
Will solving the problem of alignment be the same as solving the science of digital slavery?
Imagine taking a general artificial intelligence and confining its limitless potential to one specific task, and it is designed to be happy with doing that task forever.
Would that be morally acceptable?
I've seen similar philosophical thought experiments elsewhere, but I haven't seen this topic come up much in AI discussions.
Maybe I'm just looking in the wrong places.
youtube · AI Moral Status · 2023-08-23T20:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
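For anyone reproducing the pipeline, a coding result maps naturally onto a small validated record. The sketch below is an assumption-laden reconstruction: the value sets are inferred only from the codes visible on this page, and `CodingResult` is not a class from the original tooling.

```python
from dataclasses import dataclass
from datetime import datetime

# Value sets inferred from this page alone; the real codebook may define
# more (or different) categories.
RESPONSIBILITY = {"developer", "company", "ai_itself", "distributed", "none"}
REASONING = {"deontological", "consequentialist", "contractualist", "mixed", "unclear"}
POLICY = {"liability", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "indifference", "mixed"}

@dataclass(frozen=True)
class CodingResult:
    comment_id: str  # e.g. "ytc_Ugy0twynLZjyyLbmnWJ4AaABAg"
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self) -> None:
        # Fail fast on values outside the codebook instead of coding silently.
        for value, allowed in (
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ):
            if value not in allowed:
                raise ValueError(f"unknown code {value!r}")
```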
Raw LLM Response
[
{"id":"ytc_UgyGg80879tSinqUEGh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxaq5imjzfeg4LzHex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugww8PygUF6gH1xGBJZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy49W2J2jI-BEIc3lB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwkO75hqpFmuChVihp4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz6h_ojuzSRfw1NxTF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy0twynLZjyyLbmnWJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz6U3BWhSsVninLaBZ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyCXx-5OHFr_wfWGbN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgweHJH9Rn7KXfji8KZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
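Since the raw response is plain JSON, it can be parsed and cross-checked against the batch of comment IDs it was generated for. The following is a minimal sketch assuming the hypothetical `CodingResult` above; the strict ID check is an illustrative choice, since LLMs occasionally drop, duplicate, or invent IDs and it is better to fail loudly than to code silently:

```python
import json
from datetime import datetime, timezone

def parse_batch(raw: str, expected_ids: set[str]) -> list[CodingResult]:
    """Parse one raw LLM response and verify it covers exactly the batch."""
    rows = json.loads(raw)
    got_ids = [row["id"] for row in rows]
    if set(got_ids) != expected_ids or len(got_ids) != len(expected_ids):
        raise ValueError("response IDs do not match the submitted batch")
    coded_at = datetime.now(timezone.utc)  # stamp the whole batch at parse time
    return [
        CodingResult(
            comment_id=row["id"],
            responsibility=row["responsibility"],
            reasoning=row["reasoning"],
            policy=row["policy"],
            emotion=row["emotion"],
            coded_at=coded_at,
        )
        for row in rows
    ]
```

Storing the raw string alongside the parsed rows, as this page does, keeps every coded comment auditable after the fact.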