Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Let us hope the AI never rationalizes the principle of "the best defense sometimes being a good offense," i.e. initiating a first strike for defensive reasons it anticipates by its own rationale which its human users cannot. (This is like an advanced chess AI doing something in a game that anticipates the game several turns ahead of what the human player can even keep track of; the human player wouldn't understand the AI's motives.) Or if human users still have the ultimate choice to act based on AI's assessments, let us hope whatever humans are at the operating board do not trust their AI devices too casually or blindly thinking the machine has "figured out the situation"--"enough."
youtube AI Governance 2023-07-07T03:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz68uAnbIFuP17IeKR4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgwFYZ1-jzLEmRIGayx4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugx7C-JuNsK2hoHLG7F4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgwVbXCBpu5rXTNktZ54AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugylww7fhvPNcyPSniF4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgzUItZouYXsOE8L3oJ4AaABAg", "responsibility": "unclear",     "reasoning": "deontological",    "policy": "unclear",   "emotion": "unclear"},
  {"id": "ytc_Ugy-1QQg-ICyTjvoaI14AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_Ugzit1ekhqe2z97eXRh4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugzj-UZYKb9VfmY2IU14AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgxLzdPlGmrhB9ZhET54AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
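A raw response like the one above can be validated before the labels are stored. The sketch below (a hypothetical `parse_codings` helper, not part of this tool) parses the JSON array and rejects rows whose labels fall outside the expected vocabulary; the allowed values are only those inferred from the labels visible in this response, and the real codebook may define more.

```python
import json

# Allowed labels per dimension, inferred from the values seen in this
# raw response; the full codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "government", "distributed", "developer",
                       "unclear", "ai_itself", "user"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"unclear", "regulate", "none", "ban", "liability"},
    "emotion": {"fear", "approval", "mixed", "unclear", "indifference"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, rejecting unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: unexpected {dim}: {row.get(dim)!r}")
    return rows

# One row from the response above, kept verbatim for the demonstration.
raw = ('[{"id":"ytc_UgxLzdPlGmrhB9ZhET54AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')
rows = parse_codings(raw)
print(rows[0]["policy"])  # liability
```

A malformed row (for example, a misspelled emotion label) raises `ValueError` instead of silently entering the results table, which is why validating before storage is worthwhile when the labels come from a model rather than a fixed form.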