Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Let's pretend that attorneys really want to go to court and don't want to settle…" (ytr_Ugws2h9tu…)
- "...for years generative models just existed but hadn't advanced at all. I think …" (ytc_UgzUJImHD…)
- "An AI utopia cannot exist along side capitalism, they cannot coexist. Socialism …" (ytc_UgxJELtoM…)
- "i feel like all these artists using an AI generated piece as a reference says mo…" (ytc_UgysiamcD…)
- "Yeah… like, sometimes AI people make me wanna rip out my own teeth, but sometime…" (ytr_Ugwzj8rzY…)
- "@episodechan humans don't use algorithms, only machines do. I don't understand h…" (ytr_UgwoqiUDl…)
- "On the subject of universal income. Universal income is a bad idea. Here's why. …" (ytc_Ugw34Te2d…)
- "I'm guessing because there is a difference between the person who creates the mo…" (rdc_ljqfekc)
Comment
On the problem of alignment, a complication that the video didn't address is that perfect, universal alignment is impossible. We all know that if you get ten people in a room to make a decision, you will have at least eleven different opinions on the ideal outcome. And that already incorporates the fact that people tend to live and associate with people who are like them, limiting the scope and severity of conflicts. How could we develop a general AI and expect it to be able to equally please and protect everyone on Earth? How would it be able to act with the knowledge that helping one human could be viewed as hurting ten (or thousands of) others, no matter what decision it makes? To even be able to approach an answer, the AI would need to be able to accurately gauge how many people would be positively and negatively affected by an action and to what degree (thus requiring perfect prediction ability), and then somehow determine which action will produce the least bad outcome for the most people. Even this may not be good enough, because many times short term benefits result in long term detriments, or decisions that only slightly negatively affect others when multiplied millions of times can destroy the world (think pollution). Would we be able to live with the result if the AI actively kills one person to save everyone else? What if it kills ten? Or one million?
Source: youtube · Video: AI Moral Status · Posted: 2023-08-23T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyGg80879tSinqUEGh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxaq5imjzfeg4LzHex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugww8PygUF6gH1xGBJZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy49W2J2jI-BEIc3lB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwkO75hqpFmuChVihp4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz6h_ojuzSRfw1NxTF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy0twynLZjyyLbmnWJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz6U3BWhSsVninLaBZ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyCXx-5OHFr_wfWGbN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgweHJH9Rn7KXfji8KZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
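A batch response in this shape is straightforward to consume downstream. The sketch below parses such a payload and indexes the four coded dimensions by comment ID; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) follow the JSON above, while the function name and the two-entry excerpt are illustrative assumptions:

```python
import json

# Two-entry excerpt in the same shape as the batch response above
# (the real response carries ten entries).
raw = """
[
  {"id": "ytc_UgyGg80879tSinqUEGh4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwkO75hqpFmuChVihp4AaABAg", "responsibility": "none",
   "reasoning": "contractualist", "policy": "unclear", "emotion": "indifference"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(payload: str) -> dict:
    """Parse a batch coding response and key it by comment ID.

    Raises ValueError when an entry lacks its ID or any of the four
    dimensions, so malformed model output fails loudly rather than
    slipping into the dataset.
    """
    codes = {}
    for entry in json.loads(payload):
        comment_id = entry.get("id")
        if not comment_id:
            raise ValueError(f"entry missing 'id': {entry}")
        missing = [dim for dim in DIMENSIONS if dim not in entry]
        if missing:
            raise ValueError(f"{comment_id} missing dimensions: {missing}")
        codes[comment_id] = {dim: entry[dim] for dim in DIMENSIONS}
    return codes

codes = index_codes(raw)
print(codes["ytc_UgwkO75hqpFmuChVihp4AaABAg"]["reasoning"])  # contractualist
```

Keying by ID rather than list position makes the coding result robust to the model reordering or dropping entries, which the validation above surfaces immediately.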