Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hey everyone — I really urge you not to take everything this “safety expert” says at face value. Some things in this video are deeply contradictory (and honestly concerning), and I think it’s worth calling them out:

- “Everything will be fine if we get ASI right” — That’s an unbelievably strong claim, especially when he also says there are no scarce resources in this world except Bitcoin. We all either experience firsthand, or can see others suffering from, the very real limits of food, water, energy, and other materials.
- “Bitcoin as the only scarce resource” — wtf? — That’s a rhetorical trick, not a serious argument. It oversimplifies complex global issues into a single speculative asset. It’s also wildly arrogant to claim such a thing.
- Personal flattery & weird metaphors — To the host: “You already seem to be winning — you're rich, handsome … famous people hang out with you.” That’s gaslighting. It shifts attention away from real critical issues and onto empty flattery. It feels manipulative.
- Dismissing “ordinary people” as NPCs — When the host asks about “ordinary people,” he replies: “Those are just NPCs, nobody wants to be an NPC.” Calling normal people “NPCs” (non-player characters) is dehumanizing. It sets up a false hierarchy: the supposedly “enlightened” vs the “oblivious masses.” I’m sick of that us-vs-them framing. It is dangerous and wrong, especially when used by someone claiming high moral ground.
- “I’m not religious, but I believe in the simulation hypothesis” — That’s not rational clarity; it’s just replacing one metaphysical belief system with another. It doesn’t free you from bias. It’s still totally speculative — basically just gambling with ideas.

Bottom line: Yes, technology can be misused — it already is. And there are legitimate safety concerns with AI. But the deeper problem is how some individuals cloak speculative, even cult-like beliefs under the guise of “expertise.” That’s more dangerous than any specific tech. Wake up. Stay skeptical. Demand clarity and consistency from those claiming to be our safety guides
youtube AI Governance 2025-10-02T19:3…
Coding Result
Dimension        Value
---------------  ---------------------------
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugy9yDckNigTAhQ8ZyF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzMJRV4BHdwHHpovgh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxdCqS6lJwVvlJJk5R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzUK-34_pa798aYlYJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgwBn1peIePKgkUuPgV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugx4zaLw2Sa-kSPGfrR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyK_Xx76SWP-7Nn2CZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwui3xxrQcTbjBavP54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwppQue7KjNszMqfzx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxh_bnOO0JB1MCxAWJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]