Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I do not understand his point, because if almost no one is needed to contribute, what is the point of producing anything? Even if we reach the point where AI autogenerates solutions better than humans can, those solutions will remain useless if nobody can afford them. Will we have universal income? If yes, we will use it for shelter and food. Nobody will need an AI-produced logo or AI-powered bank services; nobody will be able to afford AI-produced goods or services. Humans will go back to basic needs: cultivating, cutting trees, hunting... to survive. What is scary is the transition to that stage: people killing each other for survival, eventually reorganizing themselves to produce what is strictly necessary with their hands. And what will happen with medicine and contraception? I am here writing these words on a computer, but if I could not afford any tech, I would be totally fine. Billions of others would be too. And what about energy? What if we cut everything off? I am confident that the new generation will adapt and eventually reject AI by turning to human alternatives for most things.
YouTube · AI Governance · 2025-09-06T05:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwAZ1MTxSna7HJroaB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzCjjcrWrWB5lVHDLd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwxKAMCwz8lep7w0714AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx0kCVmg1KxqFiIUPd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzJGnxpYCGb25CECjN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugyv6Zc9bth551xMiZ14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxMeKF9dCwDVd6DdY54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyWQY4tJYAALq70EC94AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgzThRXluJvW2EFPgvl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugxfn2ppd0G_TtROjC94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
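The coding result shown above is simply the record in the raw response whose id matches this comment. A minimal Python sketch of how such a raw response can be parsed and a single comment's coding looked up by id (the variable names are illustrative, not part of the actual pipeline; only two of the ten records are reproduced here):

```python
import json

# Raw model output: a JSON array with one coded record per comment.
# Two records copied verbatim from the response above.
raw = '''[
  {"id": "ytc_Ugx0kCVmg1KxqFiIUPd4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyWQY4tJYAALq70EC94AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "liability", "emotion": "mixed"}
]'''

records = json.loads(raw)

# Index the records by comment id for O(1) lookup.
by_id = {rec["id"]: rec for rec in records}

# The comment displayed on this page:
rec = by_id["ytc_Ugx0kCVmg1KxqFiIUPd4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → distributed consequentialist liability fear
```

The printed values match the Coding Result table for this comment; any id not present in the model's array would raise a `KeyError`, which is one way a missing or dropped coding can be detected.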