Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the decision to develop AI despite its risks is actually rational through the lens of game theory, whether you are talking about individual companies or countries. The matrix looks like this: if you don't develop AI but your competitor does, they may destroy humanity, but they may not, and now they have an advantage. If you also develop AI you still risk destroying humanity, but if it doesn't come to that, you keep pace. It is a Nash equilibrium (compare the prisoner's dilemma). Cooperating and agreeing to stop developing AI might be better for humanity overall, but with no incentive to cooperate, both parties choose the option that makes them both worse off.
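The payoff structure the comment describes can be sketched as a two-player game. The numeric payoffs below are illustrative assumptions (not from the source); they only preserve the ordering the comment implies: developing dominates regardless of the opponent's choice, even though mutual restraint pays more jointly.

```python
# Hypothetical payoffs for the AI-race dilemma described in the comment.
# Keys: (row player's choice, column player's choice); values: row player's payoff.
# The specific numbers are assumptions chosen to match the comment's ordering.
payoffs = {
    ("develop", "develop"): 1,  # both race: shared existential risk
    ("develop", "abstain"): 3,  # sole developer gains the advantage
    ("abstain", "develop"): 0,  # left behind while the other races
    ("abstain", "abstain"): 2,  # mutual restraint: best joint outcome
}

def best_response(opponent_choice):
    """Return the choice that maximizes the row player's payoff."""
    return max(("develop", "abstain"),
               key=lambda c: payoffs[(c, opponent_choice)])

# "develop" is the best response to either opponent choice, so
# (develop, develop) is the Nash equilibrium, exactly the prisoner's
# dilemma structure the comment points to.
print(best_response("develop"), best_response("abstain"))
```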
YouTube · AI Governance · 2025-10-02T18:2…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxaWkXloG_20dh-U6N4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw0PlQ4ulaNSie6PTV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyaJD7oZKmDGsB1Kj54AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzoRObDLAT6XuFUWiV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugzm__NW_a-VWg5mfQN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
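The model codes a batch of comments in one response; the coding table for this comment comes from the array entry whose `id` matches it. A minimal sketch of that lookup (ids and values copied from the raw response above; the variable names are illustrative):

```python
import json

# Raw batch response, abridged to two of the entries shown above.
raw = """[
  {"id": "ytc_UgzoRObDLAT6XuFUWiV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugzm__NW_a-VWg5mfQN4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]"""

# The id of the comment displayed on this page.
target = "ytc_Ugzm__NW_a-VWg5mfQN4AaABAg"

# Select the matching entry and print it dimension by dimension,
# reproducing the Dimension/Value table shown in the coding result.
coding = next(c for c in json.loads(raw) if c["id"] == target)
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {coding[dim]}")
```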