Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Tell this to Max: if AI needs to kill humans to achieve ITS goals, them It Is not as smart
youtube 2026-04-12T16:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxFVp-HO2KMDF7GDEd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwYcIjtMmIl_413bWZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyP2LGmB3TAVmKdA-B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQXdSC5vTP9g3Im-h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyDuB1tlzrYwpfqfUR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyqbkB8XAIc2JxpsRF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyYt7wE6v3gZOgr5gR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxE3VqHchhGeWCs8bZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgztbkmtUP3N-SjXrAl4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxqE6zlUXyl5QAJx7x4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
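A raw response like the one above can be checked mechanically before its rows are stored as coding results. The sketch below is a minimal validator, assuming the allowed values per dimension are exactly those that appear in this response (the real coding schema may include more values; the `SCHEMA` dict and `validate_response` name are illustrative, not part of any pipeline shown here):

```python
import json

# Allowed values per coding dimension, inferred from the response shown
# above (assumption: the full schema may define additional values).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "mixed", "outrage", "resignation", "approval"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded comment against SCHEMA."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim!r} value {row.get(dim)!r}")
    return rows

# Single-row example in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
rows = validate_response(raw)
print(len(rows))  # 1
```

A validator like this catches the common failure modes of LLM coding runs (malformed JSON, hallucinated category labels) before they silently enter the results table.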