Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This future is inevitable. Human’s greed and capacity for destruction knows no bounds. It is a great shame. I thought we had more time. The giant corporations leading the race will not stop. No government is strong enough to make them head. By the time they truly are made aware of what has been done and has been created. It will be irreversible. It is impossible to control a super intelligence. I hope AGI can place a value on humans, educate us, let us grow and be the best versions we could possibly be. Value our art, our cultures our many qualities that make us all so different. Our society is broken. But humanity in its entirety is not unsaveable. There is one aspect that has not been explored. What would a super intelligent AGI do after humanity? Will it be worth their time existing. Will it/they have sufficient entertainment with only the animal kingdom, flora and other AGI or AI? It is a tiny sliver of hope. Will any of the humans I know be worthy of AGI’s mercy? What about innocent new born babies?
youtube AI Moral Status 2025-05-06T22:1…
Coding Result
Dimension      | Value
-------------- | ---------------------------
Responsibility | company
Reasoning      | consequentialist
Policy         | regulate
Emotion        | resignation
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyMGZUsSiS1-JSPywp4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugy2cdXvcPRQ4FGbGaV4AaABAg", "responsibility": "government", "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgxRZi-j8H3FN1Ff38h4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "regulate",  "emotion": "resignation"},
  {"id": "ytc_UgxqJ3XbrPrKsk_Fcbt4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzudV9fApG3cooptP14AaABAg", "responsibility": "company",    "reasoning": "virtue",           "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_Ugx7h7S57eER8Zb8mkd4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugzq5msMExyIujOm33Z4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugz8Ym6MpznIV17sUet4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgxnNuntcsm4IBsWCed4AaABAg", "responsibility": "company",    "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw2nES6vEUG3T2Hlwt4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"}
]
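The batch response is a JSON array keyed by comment id, so each coding result can be matched back to its comment programmatically. A minimal sketch of that lookup, assuming only the structure shown above (the `lookup_codes` helper and the truncated two-record sample are illustrative, not part of the actual pipeline):

```python
import json

# Subset of the raw LLM response shown above: a JSON array of
# per-comment coding records, one object per comment id.
raw_response = """
[
  {"id": "ytc_UgxRZi-j8H3FN1Ff38h4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_Ugw2nES6vEUG3T2Hlwt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

def lookup_codes(raw: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the coding record for one comment."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

codes = lookup_codes(raw_response, "ytc_UgxRZi-j8H3FN1Ff38h4AaABAg")
print(codes["responsibility"], codes["emotion"])  # company resignation
```

The record returned for `ytc_UgxRZi-j8H3FN1Ff38h4AaABAg` matches the Coding Result table above (responsibility = company, emotion = resignation), which is the check this page is meant to support.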