Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Why is no one heeding warnings? Money!" You... sure about that? Almost no Ai has actually made a profit. Might it instead be: "Well, we are the best people to do it. It will happen eventually, but the one that gets it first Wins." Now, is potential profit at some point in the future a reason why people with money are throwing money at it? Generally, yes. But that too, is predicated on being The One who gets AGI first. You fall behind, and you're no longer an attractive target for investment. You can't pay safety engineers (because you still aren't making a profit - it's not a self-sustaining business model). And so on so forth, until it's one of the many forgotten AI startups. Is AGI possible? Yes. Is AGI likely to be attained? Also yes. Is it going to infinitely self-improve and the become a super god of reality, and make anything that comes after obsolete? Nothing really indicates that such an intelligence naturally trends towards infinite spiral of intelligence. I mean, how many years did your parents spend trying to make you smart, and... well, just look at you. Sitting on youtube, rather than enveloping yourself in DATA. So the rush, in hopes of dominance, based on the promise of AGI is largely nonsensical. The rush for dominance, because being more useful means more people would be more willing to pay more for it, and thus maybe it will produce more value than it consumes (aka, profit) is reasonable... except that studies have shown that AI is the least "sticky" market out there. Once someone finds a better AI for their particular task, they will switch to it... And thus, unlike typical market dominance strategies where you win once, and sit on your throne of Apples where people wet themselves to give you money, you must win, and win, and win, and never lose, or else you are no longer relevant.
youtube · AI Governance · 2026-03-17T05:1… · ♥ 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        virtue
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
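Each coded comment is scored on four categorical dimensions (responsibility, reasoning, policy, emotion). A minimal validation sketch in Python, assuming the label sets are exactly those observed in the raw response below; the actual codebook may define additional values:

# Allowed labels per dimension, inferred from the raw LLM response below.
# Assumption: the real codebook may permit values not seen in this batch.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "ban", "liability"},
    "emotion": {"indifference", "amusement", "fear", "mixed", "outrage", "approval"},
}

def validate(record):
    """Return a list of problems in one coded record; empty means valid."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems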
Raw LLM Response
[ {"id":"ytc_UgwkvkiebpgxZJuQI1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyTfYOiLo6p0jcdFpF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"}, {"id":"ytc_Ugwjw56ITyAGm5q51HN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwB9dW-4IoVzVpkodp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxxbVen43TVSve6qyJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzmQLUDoy8eCk3oB2F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugwy2wUtmJuZtjZy4lt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwYnPfNbtMhOqFFgFt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzLVsmecSqi1qhGd4d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwWyW3vQg2x9xIptB54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"} ]