Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The question is obvious, of course you have to press that button. Why, the same logic prevails over the hypothetical question that looms over the entire discussion. Is there any scenario of upside that a responsible human should engage in that comes with a 1 in 4 chance of total annihilation? No? Then let's stop doing this. No? Then we push the button? Listen, it's not complicated - say 99.9% of us all agreed that this is potentially dangerous and we need true safety controls. There will then be one greedy monster that comes along and says ... nah, let's just do it and see where things land. Think that's far-fetched? Have you heard of Sam Altman? Was he the first to create AI? Nope, he was just the first to say, fuck it, I am putting it on the internet, screw safety, I'm gonna get mine. There will always be Sam Altmans in the world, and any concept of safety we come up with that does not account for the psychopathically greedy outlier will inevitably fail. It's not complicated, it just takes the strength to walk away from the potential benefits ... but then, that's the very definition of greed, isn't it. We struggle to walk away from benefits, and that struggle grows harder in proportion to the benefits. AI has potentially unlimited upside; therefore, the greed response associated with it will be equally potent and irresistible to the greedy. Well then, I hear you say, now that we know greed is the problem, perhaps we should only ever trust those motivated by other things, like the welfare of humanity. My counter to that is ... have you met capitalism? One goal, no soul? We live in a world in which the best way to distribute wealth and resources has been free markets, right? And in a free market system, enlightened self-interest is the driving engine. Well, what do we mean by enlightened? Our efforts to regulate self-interest are the content of that adjective, in this context. So where is the regulation here?
Too much weight on one side of the scale blinds us to the other side, which is our doom. And since we've replaced thinking with vibes, we don't want to hear this. In fact, we will enlist our considerable intelligence to build arguments against slowing down, but these are increasingly facile and transparent. The recklessness is obvious to everyone and the justifications are daily thinning; that's how greed distorts our perceptions when greed is the only quality in our collective driver's seat.
youtube AI Governance 2025-12-09T21:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxpAnm1wArZIs5ESzd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzhjn2cvQcAts7sAjd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7yCYxgzhxx5xsbsR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw4tPbusWUXaz1k3LV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwFQ8XtFEGdqydkd7h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwUx0Sxvw81TP1bEGB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyVWiATY2b8j4abnBV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw_0qeQVqoCsIuqnY94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzqZILb_Ec7AWFR_bJ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwuM7-XuTaN8jdZsfB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
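As a minimal sketch, the raw response above can be parsed with the standard library and indexed by comment id to recover the per-dimension coding shown in the result table. The ids and labels below are taken directly from the log; the variable and function names are illustrative, not part of any actual pipeline.

```python
import json

# Raw model output: a JSON array of per-comment codings.
# Truncated here to three of the ten entries from the log above.
raw_response = """[
  {"id":"ytc_UgxpAnm1wArZIs5ESzd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw7yCYxgzhxx5xsbsR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwFQ8XtFEGdqydkd7h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]"""

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for one comment; this id carries the
# distributed / consequentialist / regulate / outrage coding
# shown in the Coding Result table.
coding = codings["ytc_Ugw7yCYxgzhxx5xsbsR4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → distributed consequentialist regulate outrage
```

Keying on the id rather than array position makes the lookup robust if the model returns the entries in a different order than the comments were submitted.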