Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Isn’t the logical extension of your argument that we should not only stop AI development in the US, but that we should also go to war with China and Russia to stop them from AI development?! You’re handwaving the arms race idea a little too easily. If AGI is as much of an existential threat as you believe it is, stopping our own AI development is no where near enough to combat this potential extinction event. China will not stop. Russia will not stop. They both believe in whatever power a won AI arms race would give. So then, you agree we should not only stop AI development in the US, but also start World War 3 to stop China and Russia from reaching AGI, right?
youtube AI Governance 2025-08-26T17:1…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugz3qTS819wIgZshvBl4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyWwj-vUslVBQdFn354AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz4anTgjdsGbSssSZJ4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwaHX4mvwUBpBLGR8J4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzIpceCfSqOdxPm3mx4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
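The raw response is a JSON array with one object per comment, keyed by "id". A minimal sketch of how the coded dimensions for a single comment could be extracted from such a response (the function name is illustrative, not part of any tool shown here; the field names are taken from the response above):

```python
import json

def lookup_code(raw_response, comment_id):
    """Parse a raw LLM coding response (a JSON array of objects, each
    carrying an "id" plus coded dimensions) and return the entry for
    the given comment id, or None if the model did not code it."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

# Example using the last entry from the response above.
raw = ('[{"id": "ytc_UgzIpceCfSqOdxPm3mx4AaABAg",'
       ' "responsibility": "government", "reasoning": "consequentialist",'
       ' "policy": "ban", "emotion": "fear"}]')
code = lookup_code(raw, "ytc_UgzIpceCfSqOdxPm3mx4AaABAg")
print(code["policy"])  # "ban"
```

Matching on the stored comment id rather than array position guards against the model reordering or dropping comments in its output.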