Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have been working very closely with AI every day for the last 3 years. I just would like to say that you should be very careful taking advice from any 'expert' when they are rendering opinions about things they ultimately do not know about. AI is a frontier science and AGI and ASI are within the perceivable future, however no one has definite knowledge of what 'will' happen. Stuart Russel is a thoughtful, strongly ethical man, however in the interview he has misreported information or referred to events that support his opinion... or misrepresented facts (like the 30% chance of AI ending humanity) because they support his internal beliefs. No fault to him for this... as we are all looking for evidence to support our own beliefs. I just want to say that he really really does not know the future any more than anyone else. Also if you log over 500+ hours with AI systems... you will come to understand how very far away it is from real human intelligence and how it really is far far far less dangerous than things like cars, or processed sugar (both of which killed far more humans in the last three years than any AI system)
youtube AI Governance 2025-12-30T05:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzZ8GaPVkMJQ-rimPl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzSJz5maN0Gsxli-S94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz1Qs0L7a6DC7NwS854AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwcUZyNkTj2OKu-WEV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxpIedcG7bzCC9xWaN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyDqq3EV1U4DxtHe1l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgxDD19HymOoY65PoVh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy3ge_flAvjoEQtdc94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwiq1ku6e-Tp-ze1Rh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgySASjZ1SJBRHOLqiN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
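The raw response is a JSON array keyed by comment id, so inspecting the coding for any one comment is a parse-and-lookup. A minimal sketch, assuming the response string parses as valid JSON (two entries below are copied verbatim from the response above; in practice `raw` would hold the model's full output):

```python
import json

# Two entries copied verbatim from the raw LLM response; `raw` would
# normally hold the model's complete output string.
raw = (
    '[{"id":"ytc_UgzZ8GaPVkMJQ-rimPl4AaABAg","responsibility":"government",'
    '"reasoning":"deontological","policy":"regulate","emotion":"outrage"},'
    '{"id":"ytc_UgzSJz5maN0Gsxli-S94AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
)

codings = json.loads(raw)

# Index by comment id so any coded comment can be looked up directly.
by_id = {c["id"]: c for c in codings}

code = by_id["ytc_UgzSJz5maN0Gsxli-S94AaABAg"]
print(code["reasoning"], code["emotion"])  # → consequentialist indifference
```

The looked-up entry here is the one that matches the Coding Result table above (reasoning: consequentialist, emotion: indifference), which is how the per-comment view and the raw response can be cross-checked.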