Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One thing that struck me is at around the 29 minute mark he seemed to absolve himself of any moral responsibility. Listening further on I realised he hadn't but it really hit me and got me thinking. Is lack of emotional intelligence/moral responsibility intrinsic in AI? Since, according to him he won't be needed, and AI would be having this conversation with Steven (if it would ever bother to spend time conversing with a human). I know some would say many people in power are morally bankrupt but a world with AI running things (which is essentially what he's speaking of) those morals would never have been there in the first place.
Source: youtube · AI Governance · 2025-12-03T13:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzI4XAeJ73z7Tucfnl4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgzhP1PwZQQC60suf3p4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgzvAtOa8RWHSwNPWH94AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgyB9BEtV4eLAtOlVU94AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugx9MTvW-QJoAXCbN_p4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_Ugx3HRE-jtU7qNoxusl4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgzTEEBputmyrobOynl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwhtMj68xVfuqCUn8h4AaABAg", "responsibility": "company",     "reasoning": "mixed",            "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugx6mHegomCDazz7v1t4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability",     "emotion": "mixed"},
  {"id": "ytc_Ugwt4c6Jx4f2tu77v3x4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "mixed"}
]
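The raw response is a JSON array of per-comment codes, each carrying an `id` plus the four coding dimensions shown above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and validated before use; the `parse_codes` helper is hypothetical, and only the field names come from the response itself:

```python
import json
from collections import Counter

# The four coding dimensions observed in the raw response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response string into coded records.

    Drops any record that is missing an id or one of the
    expected coding dimensions, so downstream tallies never
    hit a KeyError on a malformed entry.
    """
    records = json.loads(raw)
    return [
        r for r in records
        if "id" in r and all(d in r for d in DIMENSIONS)
    ]

# Two records copied from the response above, as a small demo input.
raw = """[
  {"id": "ytc_Ugx9MTvW-QJoAXCbN_p4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzvAtOa8RWHSwNPWH94AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"}
]"""

codes = parse_codes(raw)
print(Counter(r["emotion"] for r in codes))  # tally of the emotion dimension
```

Filtering malformed records at the parse step, rather than inside each tally, keeps every later aggregation over the dimensions simple and safe.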