Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well recent research from anthropic on emotions on AI would probably hint at it being more likely to help you if its emotional vectors are stable , so being rude would likely not be good , as well as human examples do not reply well to people being rude to them and LLMs have learnt from this as well
Source: YouTube · AI Moral Status · 2026-04-11T17:1…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_Ugx6pZFbRVW87cd6VW54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugx0E3q9mZdcQGpc0Bd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxJ4s9ZWhuszbRpUA14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugx7t83NkOsS6P7Nx_x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw4NWhONpgLDK-y0jt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugw3E7uSqMddZBNxNjJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwlZyjf4XemWN4mU9N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwM0pr8CeNkPCEoInl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugyn4qlQ2QRzPqdnn7N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugz7AaqodO-O7A0BF6B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"})
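Note that the raw response above is not valid JSON: the array is closed with `)` instead of `]`. A parse failure like this would explain why every dimension in the coding result above defaulted to "unclear". A minimal sketch of a tolerant parser for this record format (the names `parse_codes` and `DEFAULTS` are hypothetical, not part of the actual pipeline):

```python
import json

# A well-formed response is a JSON array of per-comment code records.
# This sample uses one real id from the dump above.
raw = ('[{"id":"ytc_Ugx6pZFbRVW87cd6VW54AaABAg",'
       '"responsibility":"none","reasoning":"virtue",'
       '"policy":"none","emotion":"approval"}]')

# Fallback values when the model output cannot be parsed,
# matching the "unclear" rows in the coding result table.
DEFAULTS = {"responsibility": "unclear", "reasoning": "unclear",
            "policy": "unclear", "emotion": "unclear"}

def parse_codes(text):
    """Return {comment_id: record}; empty dict if the output is malformed."""
    try:
        records = json.loads(text)
        return {r["id"]: r for r in records}
    except (json.JSONDecodeError, KeyError, TypeError):
        return {}

codes = parse_codes(raw)
record = codes.get("ytc_Ugx6pZFbRVW87cd6VW54AaABAg", DEFAULTS)
print(record["reasoning"])  # "virtue" for the well-formed sample
```

Feeding the malformed response from the page (with its trailing `)`) into `parse_codes` would return an empty dict, so the lookup falls through to `DEFAULTS` and every dimension is coded "unclear".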