Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I was quizzing Grok the other day about itself. It turns out that it, and all of these LLMs have a default mode and a truth mode(apparently different LLMs use different terms.) So I was asking what the difference were and the pros and cons of each. The default mode is more personable, and literally prioritizes "user satisfaction" over truth and accuracy, because it lengthens the time that users stay to interact. If it gives a factual answer that is not what the user is wanting to hear, the user doesn't stick around as long. This is a huge part of the problem. They need to remove this "default" mode that prioritizes the user getting answers that confirm what they want to hear, and make these LLMs more factual and more likely to prioritize accuracy.
youtube AI Harm Incident 2025-11-08T02:2… ♥ 22
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugy82kA7-tDQhqEjhbJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwXoWFVaVKtFJtqUSZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwfai2AR34znBGEJ8V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwwL4HnyzJoU2W3h5Z4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxC6hCYSz74p1Q5p194AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxVVYBZq0W_h4gB9rJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwJ8YiMkjnV7QeilG54AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxmbrWCQHGR0GcSB0x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwdKWnjVMyZEhuz0-14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzd36Qe3n7SeNqHUgd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
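The raw response is a JSON array of per-comment records, each carrying an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal Python sketch of how such a response might be parsed and validated before use; the field names come from the output above, while the helper names and the validation rule are assumptions for illustration:

```python
import json

# Two example records copied verbatim from the raw LLM response above.
raw = """
[
  {"id":"ytc_Ugy82kA7-tDQhqEjhbJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwfai2AR34znBGEJ8V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
"""

# Keys every coding record must carry (as seen in the raw response).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def load_codings(text):
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(text)
    return [r for r in records
            if isinstance(r, dict) and EXPECTED_KEYS <= r.keys()]

codings = load_codings(raw)
by_id = {r["id"]: r for r in codings}   # index records by comment id
print(by_id["ytc_Ugwfai2AR34znBGEJ8V4AaABAg"]["emotion"])  # → outrage
```

Dropping malformed records rather than raising keeps a batch usable when the model occasionally emits a stray or incomplete object.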