Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Individual humans are almost 100% failures at selecting the correct criteria for 'human improvement'. Humans as a group have managed the nearly divine task of temporarily beating natural selection, but only through cooperative effort. But what a win it is. Can we build a machine which "correctly" selects the right criteria, when we ourselves are bad at selecting that criteria? This is the *alignment* problem in a nutshell. If individual humans build the next AI, it will likely optimize for those behaviors which those specific humans think are useful. But we don't really know (as individuals) what those are. Should AI avoid killing humans? If you say yes, then the AI is unprepared for self-defense or wartime conditions. In some ways, current AI "thinking" is a funhouse mirror of our own collective thought processes.
youtube AI Governance 2025-11-23T19:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxI2ReNVcMCU_GyZOl4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgwAFwPAthNINAqHcOV4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugz8K2KpYHeeDwi7xud4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgyXF8A8_gVuWh8jAWF4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgwHKYSGl8BdC5UmybB4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_Ugwjaqsi0VCDGcWqgkl4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgzxzLXN6LBLvvWsx8t4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgwJLpap7OLOiLS1ExJ4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_Ugw5jMY0T1v7oHjlcG14AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz_3pHl5edmptXGec54AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",  "emotion": "indifference"}
]
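A raw response in this shape can be mapped back to a per-comment coding result with a few lines of parsing. The sketch below is a minimal illustration, not the tool's actual implementation: the `code_for` helper is hypothetical, and the single-record sample string is trimmed from the array above (the comment id and field names are taken from the response).

```python
import json

# Trimmed sample of the raw LLM response above: one object per coded comment.
raw = ('[{"id":"ytc_Ugz8K2KpYHeeDwi7xud4AaABAg","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"unclear","emotion":"mixed"}]')

def code_for(comment_id, raw_response):
    """Return the coded dimensions for one comment id, or None if absent.

    Hypothetical helper: parses the JSON array and drops the "id" key,
    leaving exactly the Dimension/Value pairs shown in the coding result.
    """
    for record in json.loads(raw_response):
        if record["id"] == comment_id:
            return {k: v for k, v in record.items() if k != "id"}
    return None

codes = code_for("ytc_Ugz8K2KpYHeeDwi7xud4AaABAg", raw)
# codes == {"responsibility": "distributed", "reasoning": "mixed",
#           "policy": "unclear", "emotion": "mixed"}
```

In practice the model output would also need validation (unknown ids, missing keys, malformed JSON) before the codes are stored.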