Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
These are frustrating takes. The story of the boy's suicide was tragic, yes. But, it's been magnified and discussed as if it is the single representative artifact defining OpenAI's models. OpenAI's got some shitty PR team cuz they are definitely the "bad guys" of the top AI labs. Dario Amodei *discusses* AI safety a lot, but Claude is the more dangerous model by a long shot rn. It can--without much effort--get to writing an entire malicious codebase in a prompt or two. Chinese hackers are already using that "feature" of the model in organized cybercrime. You know the scale of repercussions along those lines? ...At scale, easily accessible, and high-quality malicious code is a greater magnitude problem by many orders of magnitude. Getting the GPT reasoning models to do that would be far more difficult. And their models are generally harder to trick into bad behaviors than Claude or Gemini, even in topics unrelated to code. I think Claude is a fuckin' awesome model with the best personality and incredible capabilities. Yet, it's still a weird, almost propogandistic narrative going around that somehow Anthropic's more concerned with safety and OpenAI is the careless greed factory. No, Anthropic's just got better PR. Damn, even my man Geoffrey Hinton sees it in a way I view as patently wrong.
youtube AI Governance 2025-12-30T11:2…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwS7ZaYErYtb2o844B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzed0s3l6lWq0KyhI94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwuYTzyHYJzGyljuPd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxmTOCRb1tlaDWFxNR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzjEfw4jMnyn28cevN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwnkJwzlRZz_Wiz1SZ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxRbbrybpv42njlQqd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwlJtsxGhikR0n9xDN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
  {"id":"ytc_UgwdFbKxnavTBJazV6R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyTpFe0HyBispJeo654AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
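Since the raw LLM response is a JSON array of records keyed by comment id, the coding result shown above can be recovered with a short parse. This is a minimal sketch; `find_coding` is a hypothetical helper, not part of the actual pipeline, and it assumes the model returned valid JSON.

```python
import json

def find_coding(raw_response: str, comment_id: str):
    """Return the coded record for one comment id from the raw LLM
    response (a JSON array of dicts keyed by "id"), or None."""
    records = json.loads(raw_response)
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None

# Example with one record copied from the raw response above.
raw = ('[{"id":"ytc_UgwuYTzyHYJzGyljuPd4AaABAg",'
       '"responsibility":"company","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"mixed"}]')
coding = find_coding(raw, "ytc_UgwuYTzyHYJzGyljuPd4AaABAg")
```

In practice a parser like this would also want to handle the model wrapping its JSON in code fences or prose, so a try/except around `json.loads` is advisable.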