Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"taking orders from humans" wouldn't affect or anger a machine because they don't have sentience or central nervous system so they don't FEEL FULL Stop. A.I. will never do anything dangerous to humans because we can program it not to.
youtube AI Governance 2024-01-23T20:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxXAeQ0RwOZFQK-uLx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyBZFwrUkIyl4OrpMV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxk5MUhBg7VMlLJCph4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzQwN6AE7tj8WW5knF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw0OF-FM_ooLBjLSkR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzTofsdyQlj50q1pNF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx5oXhqNSso1xfRZ5V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz1YRNgsRbrLMKXhG14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxJ08iVmdylx7UYjAR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugy51ouKsafNGLVKqLR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"}
]
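A coding result for a given comment can be recovered from the raw model output by parsing the JSON array and indexing entries by comment id. The sketch below is a minimal illustration, assuming the raw response is a JSON array of objects with the four dimension keys shown above; the function name `index_codings` and the validation step are this example's own, not part of the tool.

```python
import json

# Raw model output, truncated here to two entries for brevity; the ids and
# labels are copied verbatim from the full response above.
raw = '''
[ {"id":"ytc_UgxXAeQ0RwOZFQK-uLx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyBZFwrUkIyl4OrpMV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]
'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_response: str) -> dict:
    """Parse the model output and index codings by comment id,
    skipping any entry that is missing one of the four dimensions."""
    entries = json.loads(raw_response)
    by_id = {}
    for entry in entries:
        if all(dim in entry for dim in DIMENSIONS):
            by_id[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return by_id

codings = index_codings(raw)
# Matches the Coding Result table for the comment shown above.
print(codings["ytc_UgyBZFwrUkIyl4OrpMV4AaABAg"]["emotion"])  # -> approval
```

Indexing by id rather than by array position is the safer choice here, since the model may return entries in a different order than the comments were given.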