Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The point about why LLMs hallucinate and don't say "I don't know" at 16:50 is a bit off; it's right, but in a misleading way. The bulk of LLM training is rooted in predicting what word will appear next. If more people wrote "I don't know" online, the LLM would learn to say "I don't know" more often. The misleading part is that it wouldn't say the phrase because it actually doesn't know; it would say the phrase because it is mimicking what it has seen. The real problem is that LLMs learn mainly from next-token prediction, not by interacting with the real world. When a human says "I don't know," it is because they have learned from interactions with other humans that saying "I don't know" when they actually don't know is usually the better thing to do. As LLMs don't learn in this way, we shouldn't expect them to be able to say when they don't know something.
youtube · AI Moral Status · 2025-10-30T22:0… · ♥ 34
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwiNphKFW9X1-QaJ-14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzRjAa1xY9Z5cAgqhx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzHJqxEZwW92ojEIM54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwKEyRf9Efg1gtDGVN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzsz86Dgtuqvi6ELtx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwqp2A-ZgRV4MaerRt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwJPHWUcnvJotZFqnR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw2h4n1cyMj8mxDYGN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyAJ2kAfyBWrCvGR6F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgymcRj0Dpo-ThynfKx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
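A raw response like the one above can be validated and tallied before it is trusted as coding output. The sketch below is illustrative only: the allowed label sets are inferred from the values that appear in this response (the real codebook may differ), and the two-record sample stands in for the full array.

```python
import json
from collections import Counter

# Two illustrative records with the same shape as the raw response above.
raw = '''[
  {"id": "ytc_a", "responsibility": "developer", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_b", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''

# Assumed label sets, inferred from values seen in the raw response;
# the actual codebook may contain more categories.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate(records):
    """Keep only records whose every dimension carries an expected label."""
    return [
        rec for rec in records
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

codes = validate(json.loads(raw))
tally = Counter(rec["responsibility"] for rec in codes)
print(tally)  # one count per responsibility label
```

Dropping (rather than silently keeping) records with unexpected labels makes malformed LLM output visible early, before it reaches the dimension/value summary shown above.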