Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
a decade ago, when he was still seen as the real life Tony Stark, elon musk said the greatest threat to humanity was AI. then somebody showed him how much money he could make and all that concern went right out the window. maybe we have nothing to fear, maybe someday we could look back on how we made a big deal out of nothing, like Y2K, i would love that. but to cast aside all caution as "foolishness" and just laugh things off with all the adorable "chicken little" analogies is a far greater danger. i've read all these comments and it breaks down how this does that, and it doesn't do this... blah, blah, blah. you don't know, no one does, good or ill. right now, the most powerful AI's that have ever existed (that we are aware of) are in the hands of people like elon musk, Xi Jinping, donald trump and his leash holder, vladimir putin. is that the firewall that's going to protect us? are those the hands that will save humanity should our worst fears come to pass? if you have complete faith that those people and others like them will act to the benefit of the us all, then you are 100% correct, AI is not your biggest problem. for the rest of us, the problem is not the (add subject here), it's not even the people who create it. the problem is the people who will use it to serve themselves and will have a nice little escape plan should shit go sideways. leaving the other 99.9% to work it out amongst yourselves like the man said “your scientists were so preoccupied with whether or not they could, that they didn’t stop to think if they should.” no one listened, and they never made another good Jurassic park since
youtube AI Governance 2025-08-28T02:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       virtue
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgzfN6h6mQ6kqX-hqfN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy2rXN-clc1vmHT_-N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwNVwzgAWgRqbj1S4h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx5Ai4xn2qCXGTnegN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzmMWoxFysCFx70TtZ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
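Because the raw model output is a plain JSON array, it can be parsed and sanity-checked before the values are written into the coding table. The sketch below is a minimal, hypothetical validator: the per-dimension vocabularies in ALLOWED are inferred only from the values visible in this response, not from an authoritative codebook.

```python
import json

# Two of the raw records shown above, kept verbatim for illustration.
raw = '''[
  {"id":"ytc_UgzfN6h6mQ6kqX-hqfN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwNVwzgAWgRqbj1S4h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]'''

# Allowed values per dimension -- inferred from this response alone,
# not an official schema; extend as the real codebook dictates.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "distributed", "government"},
    "reasoning": {"consequentialist", "virtue", "contractualist"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"fear", "approval", "outrage", "indifference"},
}

def validate(records):
    """Return (id, dimension) pairs whose value falls outside ALLOWED."""
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append((rec.get("id"), dim))
    return problems

records = json.loads(raw)
print(validate(records))  # an empty list means every coded value is in vocabulary
```

A check like this catches a model that drifts off-vocabulary (e.g. emitting "anger" instead of "outrage") before the bad value reaches the coding table.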