Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I wish shad actually continued on his artistic learning journey, honestly had so…" (ytc_UgwsS5muq…)
- "What real difference would it make to have external indicator to show that car i…" (ytr_UgzQPKe63…)
- "I'm not leaving king AI too much. This looks too real. Techs are getting better …" (ytc_Ugx8pf8wd…)
- "The irony in that there’s an ai summery of your video right below it on the home…" (ytc_UgzkZKY1b…)
- "There are ways to protect your art if you upload it. There was something called …" (ytc_UgxnJNyuv…)
- "This video provides a comprehensive guide to AI governance, highlighting the imp…" (ytc_UgxbU9z1L…)
- "Its great but no one will stop now that the us govt is putting so much into ai. …" (ytc_Ugy8qS20f…)
- "There is a cosmic plan and like it or not AI is part of this plan... you can swi…" (ytc_UgzqELDFg…)
Comment
one thing that disturbs me about the alignment problem that y'all only kind of touched on with the discussion about the sucralose stuff (and its basically pure philosophy so i'm sure it was outside the scope of your discussion) is that in order to even begin aligning an ai with our desires, we would need to first understand them ourselves, which i don't think we're really very good at. i think sucralose, doritos, and oreos are great examples of this. our motivations in any given moment are complex and we often don't even understand them ourselves. the fact is we have lots of different goals in mind when developing foods, only some of which relate to nourishment, and importantly some of which we are often not honest to ourselves about. this is a relatively simple example but there are much more complex, and even contradictory, ones out there.
that's not even to get into the ways in which we are often knowingly self-destructive. like how do you teach a thing that thinks in hard numbers and mathematical algorithms where to draw the line between the value of human life and the need to endanger ourselves for momentary pleasure, especially when every single one of us draws that line somewhere different? we still can't even systematically express our own beliefs there so how could we ever translate that to an agi?
noting that we have a long history of failing to get even people to align with each other, it seems to me unlikely we would ever be able to align an agi in any way that didn't just mirror our own existing prejudices. even before that though, it seems to me there's a decent chance our motivations are too internally contradictory, contingent on perspective or experience, ill-defined, or even just ever-changing to the point that it's not even actually possible for us to align an agi at all.
youtube · AI Moral Status · 2025-12-22T00:4… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxThS4ajTzdmbPhgd54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzwLfNfzsKT_cI5DrN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwaItbAkUtzbepUd554AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwq5g8rcvOi4hrbOXJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzCNCt-ksMFts7oPRR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugwjc46jO8ndMCXFn9d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwHz2BD-bcTNtTpKr14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyv21qBbsdzbbhpW6d4AaABAg","responsibility":"company","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwNQQSE9i7l7JrZeKp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzAw-O5aJKft83lsAV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
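The raw response is a JSON array of per-comment codings, each keyed by a comment `id`, which is what makes the look-up-by-comment-ID view possible. A minimal sketch of parsing and indexing such a payload, assuming only the four dimensions shown in the table above (the `index_codings` helper and the two-entry excerpt are illustrative, not the app's actual code):

```python
import json

# Excerpt of a raw LLM response in the format shown above,
# truncated to two entries for brevity.
raw = """
[
  {"id": "ytc_UgxThS4ajTzdmbPhgd54AaABAg",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzwLfNfzsKT_cI5DrN4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

# The four coding dimensions from the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codings(payload: str) -> dict:
    """Parse model output and index codings by comment ID.

    Raises ValueError on a missing ID or dimension, so malformed
    model output fails loudly instead of being stored silently.
    """
    codings = {}
    for entry in json.loads(payload):
        comment_id = entry.get("id")
        if not comment_id:
            raise ValueError(f"entry missing 'id': {entry!r}")
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{comment_id} missing dimensions: {missing}")
        codings[comment_id] = {d: entry[d] for d in DIMENSIONS}
    return codings


codings = index_codings(raw)
print(codings["ytc_UgxThS4ajTzdmbPhgd54AaABAg"]["policy"])  # unclear
```

The value sets seen here (e.g. `consequentialist`/`deontological`/`virtue` for reasoning) are only those observed in this sample; the full codebook may define more, so the sketch validates structure rather than enumerating allowed values.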