Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The idea of a future where AI makes life better for everyone is exactly why im s…
ytc_Ugy_psmlR…
People need to stop using Ai...Christians should know it's demonic...smh Like th…
ytc_Ugz5bn95s…
I work at Amazon and the AI being used is to replace HR which is causing many is…
ytc_UgxgC1ZPR…
the only thing ai should be replacing is extremely dangerous jobs that nobody wo…
ytc_UgxIKxl90…
Just a thought. If people are let down by the long terms effects of A.I then wha…
ytc_Ugz-RzSyI…
The devil entered my mouth is BS... ive been extremely frustrated and even angry…
ytc_UgxaL7zKJ…
The fact that the top button on this video is to ask AI about this is just so hi…
ytc_UgyL3uG2m…
This may make sense to a moron, but anybody who knows anything would realize tha…
ytc_UgyAuYAHL…
Comment
Hank, as a counterpoint to this, pleeeeaaaase read “More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity” by Adam Becker.
It’s a deeply researched and very grounded view on both the AI safety take (potential future risks) AS WELL AS the AI ethics take (real risks manifesting right now). And, he questions the implied corollary of these extreme future risks of “everyone dying” and this utilitarian morality pushed to its limits (as is espoused by many AI safety folks) which is: if there is an infinitesimally small risk of ALL THE TRILLIONS OF FUTURE HUMANS FOREVER no longer existing, then that tiny risk is quantitatively hugely more significant and outweighs even 100% certain suffering for any group of people existing now - so the existential risk theorists then feel empowered to say that their AI risk is more important than and should get more attention than any other cause on the planet. Including, real war, real famine, real inequality, real suffering. That is their ultimate position. Like, please read this book. His point is not - we should ignore the risks. His point is - we should be balanced in how we view the risks and ask who’s raising them and what their motivations might be.
youtube
AI Moral Status
2025-10-30T21:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzHCH_7D3Io1A9ZfUt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgydK4YU0WvkkXDhLZR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyLW75ItQyohqOU8-x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyi3pryPPZ16W5-jrN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyAcSPetC-PdFpwvhx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyCbY8TYZcio_FCw7B4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxqV2VekvkpMAdPBXd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwR5aqfElxaSpKXGOl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyOXNQrSMo9rDaxXcJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz35HnxfBiL56aUr4J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
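The raw response above is a single JSON array covering a batch of comments, with one record per comment ID and four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and validated is shown below; the field names come from the JSON above, but the `parse_coding_batch` helper and the validation rules are illustrative assumptions, not the pipeline's actual code.

```python
import json

# Illustrative sample in the same shape as the raw LLM response above
# (two records copied from it); the real batches contain more entries.
RAW_RESPONSE = """
[
  {"id": "ytc_UgzHCH_7D3Io1A9ZfUt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyCbY8TYZcio_FCw7B4AaABAg", "responsibility": "none",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"}
]
"""

# The four coding dimensions plus the comment ID, as seen in the response.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_coding_batch(raw: str) -> dict:
    """Parse one batched LLM response into {comment_id: dimension values}.

    Raises ValueError if a record is missing a dimension or carries an
    unexpected key, so malformed model output fails loudly instead of
    silently producing partial codes.
    """
    coded = {}
    for rec in json.loads(raw):
        if set(rec) != EXPECTED_KEYS:
            raise ValueError(f"unexpected keys in record: {sorted(rec)}")
        coded[rec["id"]] = {k: rec[k] for k in EXPECTED_KEYS - {"id"}}
    return coded


coded = parse_coding_batch(RAW_RESPONSE)
print(coded["ytc_UgyCbY8TYZcio_FCw7B4AaABAg"]["policy"])  # regulate
```

Keying the result by comment ID makes it easy to join the codes back onto the original comments, which is what the "Look up by comment ID" view above relies on.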