Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Someone asked me to recommend a specific DaveShap video that touches on this. There isn't a single video that exhaustively covers only this, but some discussion of it can be found in a video titled "OpenAI Announces STARGATE and OPERATOR! Welcome to the United States of Acceleration!" starting at 13:10; transcribed:

Addressing AI Safety Concerns

There's little evidence to support the fears that AI is intrinsically dangerous or power-seeking. And so, what I'll point out is the lack of evidence. Number one, there has not been any scientific consensus or papers coming out demonstrating that AI is intrinsically always power-seeking, that it is uncontrollable, or that it is intrinsically evil. None. Zero.

Now, there have been a few papers that have come out which say, in this one circumstance, we were able to exhaustively test it to failure, and we were able to create a situation where the AI tried to reprogram itself because it was following our other instructions to "do this at all costs." It's really, really disingenuous how hard you have to squint at some of these AI safety labs to say, "Oh look, it tried to escape." It's like: it didn't really try to escape. You gave it a superseding instruction to do something at all costs, and then it tried to escape. But that was not intrinsic to the AI model. That was humans creating a situation to try and catch it red-handed. It's really dumb.

Now, we have a lack of evidence for this "evilness," and some people might say, "Well, absence of evidence is not evidence of absence." But in this case, it kind of is, because we have a preponderance of evidence to the contrary. We have literally hundreds of papers coming out saying things like, "Oh, here's a new training scheme. Here's a new safety scheme. Here's a new training paradigm of how to get AI to do what you want it to do. Here's another one to fix jailbreaks. Here's another one to fix hallucinations."

So, we have a mountain of evidence that AI will do what we want it to do, and a dearth of evidence that it is evil. So, yeah, the burden of proof is on the doomers, and the doomers have not delivered. Therefore, the doomers are just kind of getting put into the corner. That's my rant.
youtube AI Governance 2025-09-24T08:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_UgzRuxBKwd32HjCf-xJ4AaABAg.ANWkwmnaVfMANYhDg0udXM", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgybmG8fpQ6I4nLDnyV4AaABAg.ANWTuDR6GIYANcFIPIEQNx", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugw0-Z-qKhC2yAqRkzh4AaABAg.ANVn66iWYXVANXsXAxrchu", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgyWudMJce2xfWVDdB14AaABAg.ANTk3m7ltqqANYh4VdN0iE", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyvszysOPULvlnHPR54AaABAg.ANSB3slkUW3ANSPwQKp01z", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgyvszysOPULvlnHPR54AaABAg.ANSB3slkUW3ANT-5FcDSxq", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyvszysOPULvlnHPR54AaABAg.ANSB3slkUW3ANTXyXBnTd3", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_UgyvszysOPULvlnHPR54AaABAg.ANSB3slkUW3ANWBcvix6_U", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_Ugz2nWBFqk2uj9sWM7l4AaABAg.ANS2zHlrNhfANS8elLfLIt", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugwbk90fqZJ6mGXgcmh4AaABAg.ANQv22eQpg3ANdTJxpvaEV", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
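A raw response like the one above is just a JSON array of per-comment code assignments, one object per comment id. A minimal sketch of how such a response could be parsed and checked against the coding dimensions is below; the allowed value sets are inferred only from the codes visible in this dump, not from any official codebook, and the helper names are hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the codes seen in this
# dump (assumption: the real codebook may contain additional values).
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "liability", "ban"},
    "emotion": {"fear", "indifference", "resignation", "approval", "outrage", "mixed"},
}

# A shortened stand-in for a raw LLM response (the real ids are much longer).
raw = '[{"id": "ytr_example", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}]'

def validate(records):
    """Return the ids of records whose codes all fall within the allowed sets."""
    ok = []
    for rec in records:
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            ok.append(rec["id"])
    return ok

print(validate(json.loads(raw)))  # ['ytr_example']
```

Records with a missing dimension or an out-of-vocabulary code simply fail the `all(...)` check, which is a cheap way to flag malformed model output before it enters the coded dataset.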