Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Would they? Uhh, yeah. Look what every major corporation from Standard Oil & United Fruit Company, down to Coke-a-Cola & Chevron & Boeing & so on have been willing to do… Why would we think OpenAI (associated with narcissistic crook dirtbags like Sam Altman) would be on some kind of ethical pedestal?..

Even if you assume people like Altman are sincere in everything they say, & aren’t just cynically rationalizing whatever self-serving profit-maximizing behaviors they wanted to do anyway, with their whole “long-termist” bullshit pseudo-philosophy, even taking it at face value on its own terms… It’s an unhinged, frankly, extremist, ideology which could rationalize any number of horrific atrocities. Like, literally, according to this (Ayn Rand level) “philosophy” (again, assuming he truly believes in it), if one of these tech billionaires believed that by killing 7 billion out of 8 billion of the world population today, that that would, in the long run, enable their fantasies of some hypothetical future where there will (completely hypothetically, in their heads) be trillions of humans thriving & spreading across the universe, then they would think that was a totally uncontroversially, black & whitely morally good thing to do.

It’s “ends justify the means” taken to the unprecedented extreme of “any means which, in my [delusionally overestimated] giant galaxy brain, I think could hypothetically lead to trillions of people living in the future are completely & utterly justified regardless of how much harm they cause to billions imminently.” It’s similar to the rationale of the most extreme Pol Pot ass variants of Stalinism, except without even rhetorically claiming to have any ambition towards abolishing the class hierarchy in order to achieve a utopian society— just expanding capitalism out into space & measuring your success in how many people there are, lol. So yes, it would be incredibly easy for him to rationalize killing one person in his mind.
Especially if that person was a threat to his corporate profits— I MEAN HIS LONG-TERMIST “EFFECTIVE ALTRUISTIC” GOALS WHICH DEFINITELY ARE INTENDED TO HELP OTHERS & NOT JUST ENRICH HIMSELF PLEASE DON’T KILL ME.
youtube 2025-01-05T23:1… ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       virtue
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzT223z-fazwDj5Voh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugws7w8VBvEhc1oaDIR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw-TAfBKTJAlxnVlrl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxk-cfzl_lLhOYebiN4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyf5WfyQ1ngfGzfvgN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyVmkrHH7DqiVK4YgB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxvMCzruAt-Vi1XnoR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgxYTPyGCoLe2C1iy754AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz5L_2yfYHfFYjXGA14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJyUqjr5RyYryytu94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
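The raw LLM response is one JSON array covering the whole coding batch, so recovering a single comment's codes means parsing the array and indexing by `id`. A minimal sketch of that step, assuming the four dimensions shown above; the allowed values in `CODEBOOK` are inferred from this batch alone and the full codebook may contain more categories:

```python
import json

# Allowed values per coding dimension, inferred from the batch shown above.
# Assumption: the real codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "ban", "regulate", "none"},
    "emotion": {"outrage", "fear", "indifference", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of coded comments)
    into a dict keyed by comment id, validating each dimension's value."""
    coded = {}
    for entry in json.loads(raw):
        codes = {dim: entry[dim] for dim in CODEBOOK}
        for dim, value in codes.items():
            if value not in CODEBOOK[dim]:
                raise ValueError(f"{entry['id']}: unexpected {dim} code {value!r}")
        coded[entry["id"]] = codes
    return coded

# One entry from the batch above, used as a self-contained example input.
raw = ('[{"id":"ytc_Ugyf5WfyQ1ngfGzfvgN4AaABAg","responsibility":"company",'
       '"reasoning":"virtue","policy":"liability","emotion":"outrage"}]')
print(parse_batch(raw)["ytc_Ugyf5WfyQ1ngfGzfvgN4AaABAg"]["reasoning"])  # virtue
```

Validating against the codebook catches the common failure mode where the model invents an off-codebook label, rather than letting it silently enter the coded data.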