Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hallucinations absolutely cannot be solved by training a model to admit to a lack of knowledge. afaik this has already been tried multiple times. Not so shocking that OpenAI is putting out bad research to make their product look better than it is.
YouTube 2025-11-07T17:5…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugx4cf1xLYghx3tGQUJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzdrtcwA_wq53Qk_4B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugy7F8Gqy6vhTk2sFcR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugy9F8uex0v4kjVsSXl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxV-iJ6pupMrazGKWl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgymQPZqPMUfVN2wVuB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwiuCw9M5Ba3eOaSq14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzxjIkuRrqv9ItuwqJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwUabfjKXBk7B9TWox4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugzhue-j22q39I4QSFt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"})
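Note that the raw response above terminates with ")" rather than "]", so it is not valid JSON; this would explain why every dimension in the coding result fell back to "unclear". A minimal sketch of how such a parse failure could map to "unclear" values (the function name, fallback logic, and example ids here are assumptions for illustration, not the pipeline's actual code):

```python
import json

# Dimensions coded per comment; "unclear" is assumed to be the
# fallback value shown in the results table when parsing fails.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_codes(raw: str, comment_id: str) -> dict:
    """Extract one comment's coded dimensions from a raw LLM response.

    Returns all-"unclear" if the response is not valid JSON (e.g. a
    trailing ')' instead of ']') or the comment id is absent.
    """
    fallback = {d: "unclear" for d in DIMENSIONS}
    try:
        items = json.loads(raw)
    except json.JSONDecodeError:
        return fallback  # malformed output -> every dimension unclear
    for item in items:
        if item.get("id") == comment_id:
            return {d: item.get(d, "unclear") for d in DIMENSIONS}
    return fallback  # id never coded -> every dimension unclear
```

A well-formed response yields the coded values; the same payload with the stray ")" yields the all-"unclear" row seen in the table above.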