Raw LLM Responses

Inspect the exact model output behind any coded comment: the comment text, its parsed coding result, and the raw LLM response for the batch it was coded in.

Comment
His opinion on AGI is ridiculous. You think nobody would want an AI that can do anything a human can do? Really? I get that, on a personal level, I don't want an AI to do the work I enjoy doing. But that is irrelevant, because nobody will pay me to do what they can make an AGI system do for pennies on the dollar. And that will not be just normal jobs, tech jobs, or other things that are affected already by normal AI. AGI means I could get my own personal equivalent to Neil to directly chat with and produce educational content tailored specifically to my interests. But, more importantly, AGI means that all AI research can be done by AI. If it is *truly* AGI then this will mean we get thousands or millions of clones of a top AI researcher working 24/7 365 to improve AI. This will then either exponentially improve AI capabilities, or it will at least improve them to the highest achievable superhuman level of ability that is possible. The ramifications of this are not limited just to a world where AI does all work. If we screw up aligning the AI we could get Skynet, or even a good-willed AI that still decides to painlessly eradicate us just so it can access more resources. In the meantime we even have to worry about non-general AIs that have super-human abilities. These will be close to being literal genies that may grant your wishes in horribly unexpected ways. Neil is actively harmful in spreading this kind of uninformed opinion about the concerns over AGI (and AI safety in general).
youtube · AI Moral Status · 2025-07-23T22:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgyOCcuYRBa_UP4iEpZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},{"id":"ytc_UgyUttYxgTpWbmHuZQV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},{"id":"ytc_Ugz1sHZ2rdkyoKze81J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_UgwLWZ0ZLgVABsHXYA14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytc_Ugyot18IMiP92ltSZnR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},{"id":"ytc_UgwUR9uCosZESlNQfaF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"ytc_UgwpRrdZswPcFyL4UJF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},{"id":"ytc_UgwU6RR4WFoMV6ZhAjh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_Ugx8TRj9S9k8xSSqMep4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},{"id":"ytc_Ugxc47eHMcOWgv-I62d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]