Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For the sake of argument, let's say Hinton is right. Does anybody really think that humans are gonna do anything other than let capitalism and greed drive the ship? We haven't done anything about climate change or war and did a really bad job with Covid. And climate change, war, and Covid don't provide major benefits for society. With AI, there is a lot of upside as well as possible catastrophic downside risk. Because there is more upside than downside for AI, does this make it more or less likely that the world will come together to prevent the dystopian scenarios? That's a rhetorical question. Hinton thinks a catastrophic scenario has 10 to 20% likelihood. Doesn't that mean that there is a 80 to 90% chance of a positive outcome? I'm not giving humanity without AI an 80% chance of a positive outcome. I'd say humans have about a 50% chance of arriving at a catastrophic scenario and only a 10% chance of reaching something that is more utopian. So, I'm willing to take my chances with AI.
youtube AI Governance 2025-06-17T02:2…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxfElQUUJVqYyr3GcB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzIk7885sGjlCvUH214AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQoqHtO5fYyIrFQo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyXpf1_N3uyU9LkjPZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzGnqcg2l3mg-NU7H14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyc-APfwhZ7m0d0kbF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz-om2P64X4YBYLYmV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwW0wlgRG8I6PHtYRp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxMqA8e27ImT3G6Pmh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQKkmL0KD52WMl6it4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
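The raw LLM response above is a JSON array of coded records, each keyed by a comment `id`. A minimal sketch of how such a response could be parsed and a single comment's coded dimensions looked up (the function name `index_by_id` and the trimmed one-record payload are illustrative, not part of the actual pipeline):

```python
import json

# Illustrative one-record excerpt of a raw LLM response like the one above.
raw_response = """[
  {"id": "ytc_UgzGnqcg2l3mg-NU7H14AaABAg",
   "responsibility": "government",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse a JSON array of coded comments and map each id to its dimensions."""
    records = json.loads(raw)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

coded = index_by_id(raw_response)
print(coded["ytc_UgzGnqcg2l3mg-NU7H14AaABAg"]["policy"])  # regulate
```

Indexing by `id` makes it straightforward to join the model's coding back onto the original comments, as in the Coding Result table above.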