Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Brother, the ai being wrong is what caused harm, not a plan. So logically, if an ai is smart, it's less likely to harm a human. Or rather, it would be better at following guidelines to not harm a human.
youtube · AI Responsibility · 2025-02-25T23:4… · ♥ 3
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgzwMa6SMb8pC9N36wp4AaABAg.AEzY19IzfhoAF3lbyh4D3t","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgzTgB3in5IUQolgBkt4AaABAg.AEzTTy1IWRcAEzYJ3tNDvr","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwpUj0ip5HwcB0g-Ct4AaABAg.AEzSPqGvAyeAEzTnWxat47","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugw870nxuZk9YuQEibZ4AaABAg.AEzQ9IB35nWAEzQoikANRe","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_UgyuPcwIQrRJ-1G5Sfd4AaABAg.AEzPrvsP4gyAEzRcq4jasN","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugx6KnyGwTVYkfyvpQ54AaABAg.AEzLsloi6pHAEzNm3fvPu9","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwSgrEaTsFNgMIqtO14AaABAg.AEz720zeRJgAEz7IeSd1fN","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgzayRaOF6uqvqRCCOt4AaABAg.AEz6F2-FzrHAEzE17O5LT7","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzayRaOF6uqvqRCCOt4AaABAg.AEz6F2-FzrHAEzHJtg994E","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgxaA2RDqYVItedfzSx4AaABAg.AOnBHkHqcoKASfKlQkXRS4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
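As an illustration (not part of the original tooling), a minimal Python sketch of how a raw response in this shape might be parsed and validated before the codes are stored. The per-dimension vocabularies are assumptions inferred from the values visible on this page, not a documented schema:

```python
import json

# A two-record subset of a raw LLM response in the format shown above.
raw = '''[
 {"id":"ytr_UgyuPcwIQrRJ-1G5Sfd4AaABAg.AEzPrvsP4gyAEzRcq4jasN","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytr_Ugx6KnyGwTVYkfyvpQ54AaABAg.AEzLsloi6pHAEzNm3fvPu9","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

# Allowed values per dimension -- an assumption inferred from the codes
# visible on this page, not an official codebook.
VOCAB = {
    "responsibility": {"ai_itself", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "unclear"},
    "emotion": {"approval", "outrage", "indifference", "mixed", "fear", "resignation"},
}

def parse_codes(text):
    """Parse a raw response; return {comment_id: codes}, dropping records
    whose values fall outside the assumed vocabularies."""
    out = {}
    for rec in json.loads(text):
        codes = {dim: rec.get(dim) for dim in VOCAB}
        if all(codes[dim] in VOCAB[dim] for dim in VOCAB):
            out[rec["id"]] = codes
    return out

codes = parse_codes(raw)
print(codes["ytr_UgyuPcwIQrRJ-1G5Sfd4AaABAg.AEzPrvsP4gyAEzRcq4jasN"]["emotion"])  # approval
```

Dropping (rather than repairing) out-of-vocabulary records is a design choice for this sketch; a real pipeline might instead flag them for re-coding.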