Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You have to try to embrace the idea of a "Bad Actor." Bad Actors are out there. Bad Actors are NOT regulated. Bad Actors are well funded. Bad Actors are ALREADY working on your worst nightmare of what an AI can be. It has been discussed that at some point the only thing that can save us from an "Evil AI" is a "Benevolent AI". If this is true and this situation plays out, let's talk about the scenario: let's say, to be an efficient and optimized AI ready for battle, it will need to execute 10 areas of code. What will happen is that the Evil AI will only have to execute the 10 areas, but the Benevolent AI will need to execute those 10 PLUS an 11th that will make sure it's actions are REGULATED PROPERLY. The Benevolent AI will be just that much SLOWER than the Evil AI and LOSE. You can't not participate - because it's coming no matter what. Let's be real about this, we're going to need to build an AI that values human life at its core, but it will also need to be set free to do whatever it will need to do to win. We are currently building our future master, we'll have to decide what kind of master we're willing to tolerate.
Source: youtube · AI Governance · 2025-12-07T09:1… · ♥ 10
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxRMlkPWGZmJGP-Let4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxRrW1If8xX27oRAgx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxGO4IXsZSM7ncU14Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxRt46Pmx0VD_lrllp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwQtHxKf06CvG_5N294AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy-1_DRHgpA2F-C5RN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxH2mgWIi_roUFOzht4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw1Xt9-0rHI93CwGip4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxdL1inWvEHlyr3gvV4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw7wnUK14_gKgXp9mR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"} ]