Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This interview is beyond fascinating and incredibly scary (currently halfway through). I’ve so many thoughts and questions, but just one I’d like to ask at this point… in a future where humankind isn’t extinct and AGI + humanoid robots have replaced all our jobs/work, how will we earn the money to continue to contribute to the economy? I ask this because this is not the first interview where AI CEOs are described as being in a race to be the first to create AGI regardless of the quite possible end of life as we know it, because of ego, a god-like complex, to be in the race rather than on the sidelines, and because it will bring incredibly lucrative wealth. But if we aren’t working, and are no longer living in a capitalist economy, where does the money come from? What money? What is the point of these AI CEOs racing to finish if money isn’t even going to be a thing? In a scenario where we haven’t all died, and perhaps are learning to live again, learning a new way of living, what kind of world economy and culture might we have? So many questions, and as frightening as it all sounds I’m drawn to learn more. As a mother of 2 boys, aged 16 and 13, what future do they face and how do I best guide them….
YouTube · AI Governance · 2025-12-04T10:1… · ♥ 7
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwMB6adlgGwCYF5SDp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyfM5DpuSJTF9lyWVJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw2weMyEt2pmClI1Jp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyESV92ACkdlkdGgFd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzHC9Cr8SZMl3TfAQh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxi0ogPokaBNKFNwXF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgxnhN12hVwJiTsk6Ep4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx2lHUQ5TN3FDrErvF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxJXr23znpn76o7LO14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgymMC_e3OWURjfYDex4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
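A minimal sketch of how a raw response like the one above could be parsed and checked against the four-dimension coding scheme. The `ALLOWED` vocabularies and the `parse_codings` helper are assumptions inferred only from the labels visible on this page; the real pipeline may accept additional values.

```python
import json

# Allowed values per dimension, inferred from the labels seen on this page
# (assumption: the real coding scheme may include more values).
ALLOWED = {
    "responsibility": {"none", "company", "distributed", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "approval", "resignation", "outrage"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (JSON array) and reject out-of-vocabulary codes."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

# First entry of the raw response shown above, used as a smoke test.
raw = ('[{"id":"ytc_UgwMB6adlgGwCYF5SDp4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
rows = parse_codings(raw)
```

Validating at parse time catches the common failure mode where the model drifts off the label vocabulary (e.g. emitting "anger" instead of "outrage") before the coding is written to the database.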