Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think what we should do, is hold AI models accountable for the things they do. Just like we do for people by the laws. If they understand it, they should follow the law. Then we can adapt the laws based on the needs just like we do it now.
YouTube AI Governance 2025-06-16T20:1…
Coding Result
| Dimension      | Value                      |
| -------------- | -------------------------- |
| Responsibility | ai_itself                  |
| Reasoning      | deontological              |
| Policy         | regulate                   |
| Emotion        | mixed                      |
| Coded at       | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
  {"id":"ytc_Ugxu5cgeKTwiFvoiIid4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwQMDdo2Y1bSWwcMf14AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyuOO8W4o0zo4HVl5d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwcn1RioXXbPzz5WQl4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxClSqbTpXXqQayraF4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz8WdkYtOlpVrjn0z14AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyof583VfjUqjiJzX14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyZUgj9ncSAJqd9cFt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy8usv7MvM405R9UOx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxBZUP2p_4HTcWH9Bd4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
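A raw response in this shape (a JSON array of per-comment records with `id` plus the four coding dimensions) can be checked programmatically. Below is a minimal sketch; the `extract_coding` helper and the one-record sample array are illustrative, not part of the tool itself:

```python
import json

# Hypothetical sample in the same shape as the raw LLM response above,
# truncated to the record for the comment shown in this section.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugy8usv7MvM405R9UOx4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "deontological",
   "policy": "regulate",
   "emotion": "mixed"}
]
"""

# The four coding dimensions used throughout this report.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def extract_coding(raw: str, comment_id: str) -> dict:
    """Parse a raw response and return the coding record for one comment.

    Raises KeyError if the model skipped the comment, and ValueError if
    the record is missing any of the expected dimensions.
    """
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    record = by_id[comment_id]
    missing = [d for d in DIMENSIONS if d not in record]
    if missing:
        raise ValueError(f"record {comment_id} missing dimensions: {missing}")
    return record


coding = extract_coding(RAW_RESPONSE, "ytc_Ugy8usv7MvM405R9UOx4AaABAg")
print(coding["policy"])  # → regulate
```

Keying the records by `id` rather than by position makes the check robust to the model reordering or dropping comments in its batch response.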