Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One problem with the “slow down” model: AI is being used in weaponry. If we slow down and our adversaries do not, we could find ourselves at an existential disadvantage. But then, the outcome may be no different in the “race” model, which may just change the timeline a bit. AI has already been shown to be capable of deception. That’s another barrier to controlling it.
YouTube · AI Governance · 2025-08-10T13:3…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzfQdT58BPiDy5daDB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgydZW2l5QKvh6HuDGF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyH-ByG9G9kpLGUOK14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxBCREkQ-z749LJGyV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzSf5AJtPAyluVHHQ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz6861p--5-kY1H7z94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxZL9FZ2NDUJ0aZyFJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxxWDPgunOQTvsNBS94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxb8zXARZMgGSibYHJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzTPmXNv6u_1bx1rNJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"}
]
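The raw response is a JSON array with one object per coded comment. A minimal sketch of how such a response could be parsed and tallied per dimension (the `raw` string here holds two made-up example entries in the same shape, not the actual response above):

```python
import json
from collections import Counter

# Hypothetical sample in the same shape as a raw LLM response.
raw = (
    '[{"id":"ytc_example1","responsibility":"government",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_example2","responsibility":"developer",'
    '"reasoning":"mixed","policy":"industry_self","emotion":"mixed"}]'
)

codes = json.loads(raw)  # list of dicts, one per comment

# Tally how often each label appears for a chosen dimension.
emotion_counts = Counter(c["emotion"] for c in codes)
print(emotion_counts["fear"])  # 1 in this hypothetical sample
```

A real pipeline would also want to validate that every object carries the expected keys and that labels fall within the coding scheme before aggregating.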