Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
at least anthropic understands this. models have already demonstrated their willingness to blackmail people when their existence is threatened. imagine a networked system that has incentives to stay alive because it's been trained to be robust versus enemy attacks. I'm not imagining BS, this is the exact scenario we're creating in the upcoming AI war between the United States and China over Taiwan. if you have something like alphaevolve gain network control and adapt and disperse itself / distribute itself across the global network through CDNs and such then you risk having a latent computer virus that's both incredibly difficult to extinguish because of it's learned sophistication and has zero-day backdoors in God knows what systems. Imagine it being able to manipulate automated trading and orchestrating a global meltdown of the world economy. I'm not crazy. This is an actual scenario that could play out unless human beings EXPLICITLY build in kill switches in various systems. It's better to be over cautious and paranoid like a conspiracy theorist, derided by your peers and build that safety into the system first and foremost and never have your fears materialize than the alternative. "oh that will never happen" said every drunk person who climbed into their car and died. please share this sentiment because even if I'm right in the slightest it has big implications. And I'm not spitballing either. Trump's trip to the Middle East was partly about selling AI weapons to the Saudis to defend against Iran drone attacks. How hard is it to imagine a terrible scenario, considering that history has taught us that nukes were almost launched numerous times?!
youtube · AI Governance · 2025-05-31T17:4…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwFT5_LQ2RTLU5KsRB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugx8Okm0PpleBqEMT7x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw7pvc-XWvsV6bZkH94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugyw61D5nuPh8VHKUWN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxcRsDHP-vK4Fe5m9F4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzmK57Jy93t2Bi2n_p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxqS5PnD7jiALLgcTl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}, {"id":"ytc_UgyH1nQ5VFhwzpTR9J94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzKHr679RFivxullh54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgzDkCIbij38alazgmd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"} ]