Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Doom will never happen because, in the process of developing AI, we optimize for human values. At any slight AI misstep, we get annoyed and re-prompt or retrain the model. If any human+AI system starts to violate people’s rights, other humans+AI will seek to punish it. It will be like evolution has always been. Just faster.
youtube AI Governance 2025-09-04T12:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw0L9Tc92xQ2-hP9aF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwQB441y53eJXu5bKR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyawh26VC9yzHY1hnN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyuD9cHet0GPma_PSd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw--Ff8vFK_OYUHBBF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz3tymoIBU_uUMxuYZ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyS8Az_AQS1D5I9lDV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwjGkak5lrcWXk8YX94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxZ-sZE8mezkkZHtVR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzvIrXfvoRFJMJLHVh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
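The raw response is a JSON array with one object per coded comment, keyed by comment `id`. A minimal sketch of how the coding for a given comment could be looked up from that array (the `raw_response` string below is abbreviated to two entries from the full array above; variable names are illustrative, not from the pipeline):

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codes,
# with the same field names as shown above.
raw_response = """[
  {"id": "ytc_Ugz3tymoIBU_uUMxuYZ4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwjGkak5lrcWXk8YX94AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

codes = json.loads(raw_response)

# Index the codes by comment id for direct lookup.
by_id = {row["id"]: row for row in codes}

# Retrieve the coding for the comment displayed above.
code = by_id["ytc_Ugz3tymoIBU_uUMxuYZ4AaABAg"]
print(code["responsibility"], code["emotion"])  # distributed approval
```

Indexing by `id` keeps the comment text and its codes joinable even when the model returns the array in a different order than the comments were submitted.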