Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "ai doesnt even have to be that smart to screw us imagine a program that just spr…" (ytc_UgwYdFpPV…)
- "It completely depends on how you are using it. You could easily be without all t…" (ytr_UgzCf6lul…)
- "In what world does \"putting profits over safety\" sound like a win? There's a hu…" (rdc_lr4bckl)
- "This is the exact problem that Roblox devs are going through Roblox is planning …" (ytc_Ugw8Gjdhj…)
- "I grew up in former factory town USA that suffered when industrialization became…" (ytc_UgwcZL_PQ…)
- "@musashimiyamoto9035Without AI Google layoff 10k jobs, mostly Google hit the li…" (ytr_Ugzt1hnIP…)
- "Dave not only doesn't understand shape or size of earth. He completely doesn't g…" (ytc_UgyTwl9vL…)
- "When I finally get around to posting my art I'm 100% doing this! Not only to pro…" (ytc_Ugytm4VGH…)
Comment
Would they? Uhh, yeah. Look what every major corporation from Standard Oil & United Fruit Company, down to Coke-a-Cola & Chevron & Boeing & so on have been willing to do… Why would we think OpenAI (associated with narcissistic crook dirtbags like Sam Altman) would be on some kind of ethical pedestal?.. Even if you assume people like Altman are sincere in everything they say, & aren’t just cynically rationalizing whatever self-serving profit-maximizing behaviors they wanted to do anyway, with their whole “long-termist” bullshit pseudo-philosophy, even taking it at face value on its own terms… It’s an unhinged, frankly, extremist, ideology which could rationalize any number of horrific atrocities.
Like, literally, according to this (Ayn Rand level) “philosophy” (again, assuming he truly believes in it), if one of these tech billionaires believed that by killing 7 billion out of 8 billion of the world population today, that that would, in the long run, enable their fantasies of some hypothetical future where there will (completely hypothetically, in their heads) be trillions of humans thriving & spreading across the universe, then they would think that was a totally uncontroversially, black & whitely morally good thing to do. It’s “ends justify the means” taken to the unprecedented extreme of “any means which, in my [delusionally overestimated] giant galaxy brain, I think could hypothetically lead to trillions of people living in the future are completely & utterly justified regardless of how much harm they cause to billions imminently.” It’s similar to the rationale of the most extreme Pol Pot ass variants of Stalinism, except without even rhetorically claiming to have any ambition towards abolishing the class hierarchy in order to achieve a utopian society— just expanding capitalism out into space & measuring your success in how many people there are, lol.
So yes, it would be incredibly easy for him to rationalize killing one person in his mind. Especially if that person was a threat to his corporate profits— I MEAN HIS LONG-TERMIST “EFFECTIVE ALTRUISTIC” GOALS WHICH DEFINITELY ARE INTENDED TO HELP OTHERS & NOT JUST ENRICH HIMSELF PLEASE DON’T KILL ME.
Source: youtube · Posted: 2025-01-05T23:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzT223z-fazwDj5Voh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugws7w8VBvEhc1oaDIR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw-TAfBKTJAlxnVlrl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxk-cfzl_lLhOYebiN4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyf5WfyQ1ngfGzfvgN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyVmkrHH7DqiVK4YgB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxvMCzruAt-Vi1XnoR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgxYTPyGCoLe2C1iy754AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz5L_2yfYHfFYjXGA14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJyUqjr5RyYryytu94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
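The look-up-by-comment-ID flow can be sketched in Python: parse the raw LLM response as JSON and select the row whose `id` matches. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response format shown above; the function name `lookup_coding` and the two-row sample payload are illustrative, not part of the actual tool.

```python
import json

# Illustrative sample shaped like the raw LLM response above
# (real responses contain one object per coded comment).
RAW_RESPONSE = """
[
  {"id": "ytc_Ugyf5WfyQ1ngfGzfvgN4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxk-cfzl_lLhOYebiN4AaABAg", "responsibility": "user",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
"""

def lookup_coding(raw_response: str, comment_id: str):
    """Parse a raw LLM response and return the coding row for one comment ID.

    Returns the matching dict, or None if the ID is absent from the response.
    """
    rows = json.loads(raw_response)
    return next((row for row in rows if row["id"] == comment_id), None)

coding = lookup_coding(RAW_RESPONSE, "ytc_Ugyf5WfyQ1ngfGzfvgN4AaABAg")
print(coding["emotion"])  # → outrage
```

A missing ID simply returns `None`, which lets the caller distinguish "not yet coded" from a coded row with empty dimensions.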