Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Dude, no. Just no. Hank, please. Take a few minutes and look up who you're talking to. Nate's group MIRI has...issues and conflicts when it comes to exactly what they're doing. Let's leave aside the massive totally anonymous crypto donation, or how they don't share what their stock portfolio is. that's all...totally not suspect. The single largest funder of MIRI is Open Philanthropy. The founder of that was on OpenAI's board as he pushed 15 Million dollars into them. At the same time they were pushing twice that into OpenAI. That cofounder has now become part of Anthropic (Claude). He might have gotten that gig because his WIFE is the cofounder of Anthropic. He met her when she was VP of safety at OpenAI. Or maybe it is because he was roommates with the ACTUAL CEO of Anthropic for all those years too. MIRI is controlled opposition for the AI industry. MIRI is in a coworking space in Berkeley - Constellation. In it are all the spinoffs of MIRI. Oh, also Open Philanthropy. Leaders on Deepmind at google were research advisors at MIRI too. META the same. It's all there. Think of it like this. Let's say it is all a lie and none of them actually have any justification for a single things they're saying - from Sam Altman to Musk to anyone selling LLM's as the wave of the future (which they aren't). The last thing you want is opposition that is saying 'actually this won't do anything they're saying, it's all bullshit'. No. What you want is someone saying 'OH MY GOD THEY ARE ACTUALLY UNDERESTIMATING THE POWER!'. It's the oldest con job there is. And it works on smart people too. If they get you to think of all the possibilities of IF, not how, not what the fuck they actually are making. Not that it's all bullshit. No - it's possibly the WORST thing ever, we totally promise. It's either a utopia of machines or man! THey certainly aren't liars!
youtube AI Moral Status 2025-10-31T04:4… ♥ 11
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw3NIxdxGErPKlR4_J4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxfabomi6WpLluDaoh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyw6X-4WhvYZJGB9cd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzcGhShbTjpI9iCxVJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugyj3Zm1SkHxhm2Dxjp4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw6esCnwN3wojGUgMd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw4cHE980ABQW3habl4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzHPF2C-NLGa2-1A-V4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxi-ISDYLVIFj8xpxd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyof_8V4WozEsKCDaN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
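The raw response above is a JSON array in which each record carries the four coding dimensions keyed by comment id. A minimal sketch of how such a response could be parsed and a single comment's coding looked up (the function name `coding_for` and the key-validation step are illustrative assumptions, not part of the tool; the two records in `raw` are excerpted from the response above):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codings.
raw = '''[
  {"id": "ytc_UgzHPF2C-NLGa2-1A-V4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugyof_8V4WozEsKCDaN4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]'''

# The four coding dimensions plus the comment id, per the table above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Parse the raw response, check each record is complete, and
    return the coding record for one comment id."""
    records = json.loads(raw_json)
    by_id = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
        by_id[rec["id"]] = rec
    return by_id[comment_id]

coding = coding_for(raw, "ytc_UgzHPF2C-NLGa2-1A-V4AaABAg")
print(coding["responsibility"], coding["emotion"])  # company outrage
```

Looking up `ytc_UgzHPF2C-NLGa2-1A-V4AaABAg` this way reproduces the coding shown in the table above (company / deontological / liability / outrage), which is how the displayed result ties back to the raw model output.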