Raw LLM Responses

Inspect the exact model output returned for each coded comment.

Comment
I'm kinda sad you decided to give so much attention to a speculative danger that is born out of the same mindset of the people who push for accelerating AI development, when instead you could've talked with someone like Karen Hao, Timnit Gebru, Milagros Miceli, Abeba Birhane, Emily Bender, Paris Marx, anyone from the Weizenbaum Institute, or any other of the many professionals who have been warning about the real and current problems of AI and specially GenAI, and who unlike Eliezer are not egotistical delusional men of the likes of Tegmark, Bostrom, Musk, Altman, Hassabis and such, despite him and Soares trying to portray themselves as the true "rationals" of the lot. I hope you try to complement this with something on that more serious, more sensical, more grounded way of approaching this topic. I might be misjudging because I cannot bare to finish this video, the amount of anthropomorphization and nonsense I'm hearing is, coming from this channel, depressing. I got used to hearing it on Twitter and other forums and maybe it's not fair for me to ask of you to have studied and thought about this as much but... yeah... I just can't take it. Really, really hope you get to also discuss this with some of the people I mentioned and see this from a different perspective, because we're indeed in a very crucial time in history and using this lens to analyze it, the one that puts "existential risk" at the front when that has no place outside of just playful speculative pseudo-philosophy, is not helping at all.
Source: youtube · Video: AI Moral Status · Posted: 2025-10-31T05:2… · ♥ 19
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
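
The coded dimensions above draw from a fixed vocabulary. As a minimal sketch, assuming the codebook contains exactly the values observed in the raw response below (the actual codebook may define more categories than appear in this one batch), the record and its validation can be modeled in Python:

    from dataclasses import dataclass

    # Allowed codes per dimension, inferred only from the values observed
    # in the raw LLM response below -- an assumption, not the official codebook.
    RESPONSIBILITY = {"none", "distributed", "developer", "company", "ai_itself"}
    REASONING = {"unclear", "consequentialist", "deontological", "virtue", "mixed"}
    POLICY = {"none", "unclear", "ban", "regulate"}
    EMOTION = {"resignation", "indifference", "mixed", "approval",
               "outrage", "disapproval", "fear"}

    @dataclass
    class CodingResult:
        id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def validate(self) -> None:
            # Reject any code that falls outside the observed vocabulary,
            # e.g. a hallucinated label in a malformed model response.
            if self.responsibility not in RESPONSIBILITY:
                raise ValueError(f"unknown responsibility: {self.responsibility}")
            if self.reasoning not in REASONING:
                raise ValueError(f"unknown reasoning: {self.reasoning}")
            if self.policy not in POLICY:
                raise ValueError(f"unknown policy: {self.policy}")
            if self.emotion not in EMOTION:
                raise ValueError(f"unknown emotion: {self.emotion}")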
Raw LLM Response
[ {"id":"ytc_UgzADEruI7xIw6O1WRd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugxn0k2lRfGk4nA-ecF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxMp89_XuhPbAoHmCt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyzXi8NaJ9uLLUo1-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugwf2zF_xWkRggRi-X94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugwf61r1ZPvN4FNn15R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugw3J16bImy6ta8Z7Ut4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugx8BIi97tnFC3odK-54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxCew28aVKgspVVYbx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"disapproval"}, {"id":"ytc_UgysnHNcy2glii3L9Cx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]