Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
The simple fact that literally the smartest and most knowledgeable on the subject matter humans on earth don’t know if AI will kill or destroy us, and they can’t even conceive of the different ways it might do that proves, by definition, that we are simply chimps with pistols as we wield this piece of technology. The reason that there are zero alien civilizations spreading across the universe is because all advancing beings will eventually develop tech that they are unable to safely control. We got lucky and didn’t (yet) destroy our world with inventions like nukes or enhanced viruses. But an advancing civilization will ultimately invent literally millions of things that could destroy themselves before they are ever expanding across the universe. You only have to be wrong once. You only need to make a tech that renders you a chimp with a pistol once to wipe out your world. Just once, out of millions of inventions. Is that AI? No one knows. And that, by definition, means there’s a real chance it could be. If it’s not, whatever follows it has an even greater chance of being our chimp with a pistol tech.
youtube AI Governance 2026-03-22T16:5…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxSp6Ls9VbI6OdwSHh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzpnnSl8HbwTc0o7Mt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzrfCmMWsyRHJo5mSZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwp77NMGC6LAMyQCIN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxKk_z5K8KBHdGF9OR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxNqYTotGxlJvtBF6R4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyBl8PztBJOfXXHLZR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwT813VBJ7fFC9Rv3l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyEBcceq8XHCQkTYpN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxj-KnIt6rwczLt8l14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
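The raw response is a JSON array with one record per comment, keyed by YouTube comment id, which is how the per-comment coding table above can be recovered from the batch output. A minimal Python sketch of that lookup follows; the helper name `index_by_id` is illustrative (not part of any pipeline shown here), and only two of the ten records are reproduced for brevity.

```python
import json

# Excerpt of the model's raw batch response: a JSON array of coding
# records, one per comment, identified by YouTube comment id.
raw = '''[
  {"id": "ytc_UgxKk_z5K8KBHdGF9OR4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzpnnSl8HbwTc0o7Mt4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

def index_by_id(payload: str) -> dict:
    """Parse the JSON array and index each coding record by comment id."""
    records = json.loads(payload)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw)
# Look up the coding for the comment shown in the table above.
record = coded["ytc_UgxKk_z5K8KBHdGF9OR4AaABAg"]
print(record["policy"])   # -> ban
print(record["emotion"])  # -> fear
```

Indexing by id rather than by array position makes the lookup robust if the model returns the records in a different order than the comments were submitted.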