Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Invest in diplomacy, understanding other people and finding ways to get along, coexist, tolerate, share food, improve human relations, be less territorial. We humans should be distrustful of systems which have repeatedly demonstrated failed ethical reasoning. The dilemma is that AI response times in weapons scenarios are faster than human times, and even if the USA prioritized human decision making other nations might not. Better solutions are to disincentivize human conflict. What are people fighting over anyway? Food, resources for food, women, religion, water, land, money and mining. Holy land? Religious-based reasoning is unreasonable. Competition with China should be peaceful and straightforward, and they should stop being duplicitous. It's really better to invest in diplomatic and social solutions than to think a war of ballistic missiles or bombs is winnable. The weapons and redundancy were designed for MAD: Mutually Assured Destruction. It is not winnable. Have a different strategy and priorities for humanity and Earth. Play a more peaceful game.
YouTube AI Governance 2026-03-23T17:1…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       contractualist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyyRaGlIHpi4uNgWWh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwAUri5ge2gEHPWRRJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyK7sTIDr_RlGCeMWx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyeyTyI8KE-Q-QOo8t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyZI4HkAEnkL0pLLtR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxQZWOVRIyYnSeMK6x4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzpgCcp6EEjK5Cc_4t4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxzeWASYMaPTkN3IG14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw6IcHEbBBhdKVPJg94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwK6aIREv4mVPwt56p4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}
]
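A raw response like the one above needs validation before the codes are stored, since the model may emit malformed records or out-of-vocabulary labels. Below is a minimal sketch of such a check in Python. The `ALLOWED` code book is an assumption inferred only from the values visible in this response; the actual coding scheme may include categories not shown here, and the `validate_codings` helper and the sample `raw` string are hypothetical.

```python
import json

# Assumed code-book values, inferred from the labels seen in this response.
# The real coding scheme may contain additional categories.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "unclear", "ban", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip entries missing the comment id
        # keep the record only if every dimension carries an allowed label
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical one-record sample in the same shape as the raw response above.
raw = ('[{"id":"ytc_x","responsibility":"government","reasoning":"contractualist",'
       '"policy":"regulate","emotion":"fear"}]')
print(len(validate_codings(raw)))  # 1 valid record
```

Records that fail the check could instead be routed to a retry queue for re-coding, rather than dropped, depending on how the pipeline handles model errors.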