Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that, for some reason, wherever you guys say "trillion" you mean "billion". English mostly uses the short scale, where trillion is 10^12 and billion is 10^9. In the long scale a trillion is 10^18, so that's even less likely. Also, I think it would be good if Washington DC watched podcasts like this, but I doubt they do. I think the problem is time. There is so much knowledge, so many points of view, so much research to consume and understand, especially around difficult topics like AI, that most people, politicians included, simply don't have the time in the day to do it. We're dealing with a problem where human time and brain bandwidth are the bottleneck. Would lawmakers make better, more informed decisions if they spent several dozen or hundred hours doing their own research about the topic of AI that they're regulating? Most likely. Can they do it? Almost certainly not. Maybe their advisors can. Do the advisors have the time and skill to pass that knowledge up to their principals adequately? Hopefully yes. Can they pass on in 10 minutes what they learned in 100 hours? Doubtful.
Source: youtube · AI Moral Status · 2025-11-09T12:3…
Coding Result
Responsibility: government
Reasoning: mixed
Policy: regulate
Emotion: mixed
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzCfgXOWqj_QckvzY14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy1PcgxyRpO6yFePBd4AaABAg", "responsibility": "government", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgysOsgfV69frC13hlN4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzwr_KSzvipseA0Au94AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyl9pZdZa4uSa23sUZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwB_5LjgvmB9LLCc3d4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy03wl9LdwnUgQDn0l4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzURJ6yX_tzv56jRcV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyySM7JDt6YFvJjZd54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz8jelfArGzHzPt87F4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
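A raw batch response like the one above can be parsed and sanity-checked before the per-comment values are stored. The sketch below is a minimal, hypothetical consumer: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself, but the allowed-label sets are only inferred from the values visible in this output and may not match the project's actual codebook.

```python
import json

# Truncated sample of the raw LLM response shown above (two entries).
raw_response = """[
  {"id": "ytc_UgzCfgXOWqj_QckvzY14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy1PcgxyRpO6yFePBd4AaABAg", "responsibility": "government",
   "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"}
]"""

# Allowed values per dimension -- inferred from labels seen in this page,
# NOT an authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "government", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def parse_codings(raw):
    """Parse the model's JSON array, validate each dimension against the
    allowed label set, and index the codings by comment id."""
    codings = {}
    for entry in json.loads(raw):
        cid = entry["id"]
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {entry.get(dim)!r}")
        codings[cid] = {dim: entry[dim] for dim in ALLOWED}
    return codings

codings = parse_codings(raw_response)
print(codings["ytc_Ugy1PcgxyRpO6yFePBd4AaABAg"]["policy"])  # -> regulate
```

Validating against a closed label set catches the common failure mode where the model invents an off-codebook value, which would otherwise silently corrupt downstream tallies.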