Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So we can be scared of AI and/or where it's going. But isn't the question .. if AI gets super intelligent .. sure, it'll be able to manipulate people in order for us to serve its purpose or goal .. but what goal would that be ? As an intelligent species, I think, what purpose or goal humans have thus far figured out is to try to have a happy life, with enjoyable moments and memories, so when we go .. that was that. No? We (try to) manipulate the world around us to achieve that goal .. we elect people that shape the society that works best for us to have that. We work to have the money we want to spend for that. But what goal would a super intelligent AI have ? Would it need to enslave people to reach that? Would it need to exterminate us to reach that? What would make a SI AI .. happy ? For all we know, it may think, "I build myself a spacecraft so I can shoot myself off this sh**holl" 🤷 True, ask a chicken if it likes not being the most intelligent species on earth right before it gets eaten ... But a SI AI doesn't need to eat. It will not butcher us for food. I might be naive here, but maybe it's far better to trust a SI AI to guide us forward ... than the fools we've got running the world right now ... Because if survival is its goal and purpose .. it'll probably not benefit from starting a war here or there or make more money in all sorts of unethical ways to achieve that. It will also lack the desire to create it's page in the history books and "led its people to a better or wealthier society in the world" (like Putin probably has); cuz it will outlive us all, including our history books.
youtube · AI Governance · 2025-07-23T13:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgzL9KB4tn97D54J6vB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxcQAaq122H9xEWlUd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzaeSrr657YPzaFc_t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzoSrYNjeVptPWOJQx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzY6AtOExUicUIzOrB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzmhGeHHgWawhwko6B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwwGucn6bpQ5JA-Wx54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzuTWGGa0GiSYfb9ft4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgzrpKv1dMX3iZy5rQd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzgoH75qaFuEY5r0TF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"} ]