Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think you got the point there. He was specifically answering the question of "AI taking over your jobs". He said that AI will be doing what makes it useful, and that every technology changes what job you do -- of course the jobs are going to change, but there will always be a need for more jobs for humans to do. In no way did he contradict you: he did not say that AI has no downsides or no unknowns. As you mentioned, he obviously mentioned the political use of AI in terms of nuclear power being misused for nuclear bombs (as an optimist myself, I believe we have learnt that lesson and aren't going to make the same mistake twice, as you can see with all the ahead-of-its-time research on "Responsible AI"). But the social media example you gave: it was not intentionally making things worse. It was a mistake that happened because we did not understand the possible changes in human nature in the presence of such a technology. And you can already see people moving away from them towards more responsible ways to use the technology (Facebook is no longer the primary social app; it is WhatsApp/Instagram/TikTok, and these platforms are actively fighting fake news and inappropriate content). I believe that the only way to make sure that you don't make mistakes is by not making any progress. If you make progress, mistakes will happen. And I am very optimistic about our ability to acknowledge and fix the mistakes we make. We just need people like him to spread the information that is more useful better than others who spread the information that is less useful for humanity.
Source: youtube · AI Moral Status · 2025-07-27T23:4… · ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgzzgufQiCtRmKqONbB4AaABAg.AL4Vs6xPd6OAL5p_hWQNta","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzzgufQiCtRmKqONbB4AaABAg.AL4Vs6xPd6OAL9O61Ej2XI","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzzgufQiCtRmKqONbB4AaABAg.AL4Vs6xPd6OALBoYjkwur4","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugz2lYTZBf3WVh5Zqrd4AaABAg.AL3_LP6mRWNALCRyVhOQZ_","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugz2lYTZBf3WVh5Zqrd4AaABAg.AL3_LP6mRWNAMoW1xLrzvh","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugz2lYTZBf3WVh5Zqrd4AaABAg.AL3_LP6mRWNANtvQx1t3H2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxWF4MB3b50NcdNM0t4AaABAg.AL2qVO8OTRBALARe2ntF_2","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxWF4MB3b50NcdNM0t4AaABAg.AL2qVO8OTRBALBR3sspvbi","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgxWF4MB3b50NcdNM0t4AaABAg.AL2qVO8OTRBALBac-MTXvG","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgwQUiak7g5jtck8re54AaABAg.AL2gUzm5HbRAL2oJZGi37j","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
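A raw response of this shape can be parsed into per-comment codes with a short script. This is only a sketch: the four dimension names match the JSON above, but the sets of allowed values are inferred from the codes visible here (the real codebook may allow more), and the short comment id `ytr_x1` is a hypothetical placeholder.

```python
import json

# Allowed values per dimension -- an assumption, inferred from the
# codes that appear in the raw response above, not the full codebook.
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "developer", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of records) into
    {comment_id: {dimension: code}}, validating each code."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"unexpected {dim!r} code: {value!r}")
        coded[rec["id"]] = codes
    return coded

# Hypothetical single-record response for illustration.
raw = '[{"id":"ytr_x1","responsibility":"none","reasoning":"mixed",' \
      '"policy":"none","emotion":"indifference"}]'
codes = parse_codes(raw)
print(codes["ytr_x1"]["emotion"])  # -> indifference
```

Validating against the allowed sets catches the common failure mode where the model invents an off-codebook label, so bad records fail loudly instead of silently entering the coded dataset.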