Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
@TheDiaryOfACEO I am at 1hr and 9mins. So perhaps this is discussed at the end. But there are two things I didn't hear discussed. One is power, as in electricity. I was under the impression the AI companies were scrambling to build their own electricity plants as in solar farms, and nuclear reactors. Let's say someone brilliant and psychopathic had the complete plans to build a super intelligence today, could it actually run today with the electricity grid we have today and the power generation infrastructure we have today? Even if it was distributed, wouldn't it still need more power than the world can generate all over the world today? The other question I had was the 3 Mile Island question. The disaster on 3 Mile Island set back nuclear power in the US for some years. So before we get to super intelligence what are the chances that there is some "little" AI disaster that kills say 500 million people rather than 8.5 billion. Would a "small" catastrophe be the thing to wake people up en mass to the AI threat. Does your guest believe there are those who believe in the AI threat so strongly that they are trying to create a multi million person AI caused death event just to wake humans up to the bigger threat of superintelligence?
youtube AI Governance 2026-01-14T04:5…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz-ZBG3axn58fkIReZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxFgonlX0YmUfvEuzp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzRQuX_y4MdLJNYPQx4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzbcmFIjP_c4EMU8-h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyhVMMUm0TS15rAe8B4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzXTlWW9mZAnyhkX3R4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx8YUS5ar3zBBD2HeJ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzuq8vMtXIwwu1OJuN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzlEndQjuixp13yd4R4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgwXRooTpInvoqGRVvJ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]
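A raw batch response like the one above can be parsed and validated before the per-comment dimensions are stored. This is a minimal sketch, not the tool's actual pipeline: the label sets in `SCHEMA` are inferred from the values that appear in this one response, so the full schema is an assumption, and `parse_coding_batch` is a hypothetical helper name.

```python
import json

# Allowed labels per coding dimension, inferred from the raw response
# above; the complete label sets are an assumption.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def parse_coding_batch(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment id,
    dropping any record whose labels fall outside the schema."""
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in labels for dim, labels in SCHEMA.items()):
            coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Two-record example: the first id and its labels come from the response
# above; the second record is a fabricated invalid one for illustration.
raw = '''[
  {"id": "ytc_UgyhVMMUm0TS15rAe8B4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"},
  {"id": "ytc_bad", "responsibility": "martians",
   "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]'''

coded = parse_coding_batch(raw)
print(len(coded))                                          # 1
print(coded["ytc_UgyhVMMUm0TS15rAe8B4AaABAg"]["emotion"])  # fear
```

Validating against a closed label set before storage is what lets the dashboard render a clean dimension/value table even when the model occasionally emits an off-schema label.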