Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
16:09 not exactly. That's also part of it, but the important point is that saying "I don't know" isn't a "good answer" that's useful to us as users. The LLM does not know if it knows things. That is the problem. So it has no way to say internally, ah I'm not confident here so I should be less overt in phrasing, or vice versa, ah there's tons of evidence here so I'll be very clear. It just roleplays as though it always has the answer, because a machine that has all the answers, IS WHAT WE'RE ASKING IT TO BE. If you try to train an LLM to be less certain, it's just going to go "uhh idk" for EVERY ANSWER, because that's the only answer that would be true. And early versions of GPT, particularly 3.5, basically did that. "As an AI language model..." Otherwise, it will be "uncertain" when the common presentation of an issue is one of uncertainty, or when the previous context inspires it to think that phrasing is likely. And it will do that even when the answer is super obvious and it definitely has that answer in the training set. It has no way to tell itself when to do it one way or the other. It only knows that we want it to sound like it's saying things that are true.
YouTube · AI Moral Status · 2026-01-08T16:1… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
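
Each coded comment follows the same fixed schema: four categorical dimensions plus a timestamp. A minimal sketch of that record, assuming a Python pipeline; the class name is hypothetical, and the per-field comments list only the category values observed in the raw response below, so the real codebook may be broader:

```python
from dataclasses import dataclass

@dataclass
class CommentCoding:
    """Hypothetical record mirroring the coding dimensions shown above."""
    comment_id: str      # e.g. "ytc_Ugy-KZ4-7G2BKQOny894AaABAg"
    responsibility: str  # observed: developer, government, ai_itself, distributed, none, unclear
    reasoning: str       # observed: deontological, consequentialist, mixed, unclear
    policy: str          # observed: liability, regulate, ban, none, unclear
    emotion: str         # observed: fear, outrage, resignation, indifference, mixed
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"
```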
Raw LLM Response
[ {"id":"ytc_UgxXvP06xB_rvHXU8nl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxB2lUMC10V2WCKMdh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzG6m5nNk-ZQp4yPdd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzdLgUpm0zqRww_36x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxXKB0Q9EOyb0TYAQ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugz9jWegCqJ5MLH9GXF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxjHKweqa7s6ZC0JHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugy-KZ4-7G2BKQOny894AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytc_Ugy88yz9_C5B-z5vALJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx-G5YAEcxVcUZLiZt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]