Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They are doing it so they save bandwidth. It's just like advanced video encoding. But it's more obvious, at the moment. The better your device's ability to AI enhance, the better it looks. Eg, a phone has lower enhancement than a PC, with a good GPU. Three things are being done. Lower frame rates with "diff hints", that fill-in missing frames. (That causes tween-frame blur and uncontrolled contrast bias.) Then there is also pixel-halving, which shares colors between pixels in key-frames. (That can blur key-frames) The third is basic DVD style "hold and slide" compression. That is where something like your eye or ear, doesn't change much, between frames. The original "cropping" of your eye, will just slide around the screen, until there is enough change to force a redraw of the eye. (In highly compressed video, you can see this more easily. Especially when you corrupt a following key frame. The items being dragged around will be clear and detailed, but the new data will be static and garbled.) Give it a month or two, and it'll get better.
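The "pixel-halving" the comment describes resembles chroma sharing: one color value is reused across neighboring pixels in a key frame, which is why those frames can look blurred. A minimal sketch of the idea, assuming a frame row represented as a flat list of (r, g, b) tuples (all names here are illustrative, not any actual codec's API):

```python
def pixel_halve(row):
    """Share one color between each pair of adjacent pixels by
    averaging them, roughly as the comment's 'pixel-halving' suggests.
    Assumes an even-length list of (r, g, b) tuples."""
    out = []
    for i in range(0, len(row), 2):
        a, b = row[i], row[i + 1]
        # Average each channel; both pixels then display the shared color.
        shared = tuple((x + y) // 2 for x, y in zip(a, b))
        out.extend([shared, shared])
    return out

row = [(255, 0, 0), (0, 0, 255), (10, 10, 10), (20, 20, 20)]
print(pixel_halve(row))  # → [(127, 0, 127), (127, 0, 127), (15, 15, 15), (15, 15, 15)]
```

Halving the number of distinct color samples per row is what saves bandwidth; the visible cost is exactly the key-frame blur the comment mentions.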
youtube 2025-09-11T15:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgwANCYwvGRuwfGFEAd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzYjgJJa_T_IGy0T-R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxckssWYzjXgeOA8ad4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxbdYiP8HBi2QpM1D54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwckSC0dkjekpv63j14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
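A quick way to sanity-check a raw response like the one above is to parse it and confirm every record carries the four coding dimensions plus an id. A minimal sketch in Python; the field names come from the response itself, while the helper name and the truncated sample payload are illustrative:

```python
import json

# Abbreviated sample in the same shape as the raw LLM response above.
raw = '''[
  {"id":"ytc_UgwANCYwvGRuwfGFEAd4AaABAg","responsibility":"none",
   "reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a raw LLM coding response and verify each record has the
    expected dimensions; returns a dict keyed by comment id."""
    records = json.loads(text)
    out = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        out[rec["id"]] = rec
    return out

codings = parse_codings(raw)
print(codings["ytc_UgwANCYwvGRuwfGFEAd4AaABAg"]["emotion"])  # → indifference
```

Validating before ingesting catches the common failure mode where the model drops a dimension or returns malformed JSON for one comment in the batch.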