Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the problem is less that viewers might distrust the content of this or that video - or even this or that channel - and much more that YT is training viewers not to be able to tell the difference between real and deep fake. If real videos are processed to look like deep fakes, how is a viewer supposed to tell the difference? And if viewers en masse are shown fake-ified content as some constant percentage of the content they view, they will lose the ability to see the difference between fake and fake-ified. There are any number of powerful interests that serves, and none of them are benevolent.
youtube 2025-08-31T05:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugw9DmVIo7TH8W-MZWR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwxXSaeuLeg3nBcKll4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyKe63c3QyojEahc2l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyV-0Nfe5CrBkyd9PB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzSE6Iz2llfbZ_qd8J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
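The raw response is a JSON array with one object per comment in the batch, so each coding result is recovered by matching on the comment `id`. A minimal sketch of that lookup, using the records shown above (the `code_for` helper name is illustrative; only the standard-library `json` module is assumed):

```python
import json

# The raw LLM response shown above: one JSON object per coded comment.
raw = '''[
  {"id":"ytc_Ugw9DmVIo7TH8W-MZWR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwxXSaeuLeg3nBcKll4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyKe63c3QyojEahc2l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyV-0Nfe5CrBkyd9PB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzSE6Iz2llfbZ_qd8J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def code_for(comment_id: str, raw_response: str) -> dict:
    """Parse a batch response and return the coding record for one comment id."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

coded = code_for("ytc_UgzSE6Iz2llfbZ_qd8J4AaABAg", raw)
# This record carries the values reported in the Coding Result table:
# responsibility=company, reasoning=consequentialist, policy=regulate, emotion=fear
```

Keying by `id` rather than by array position makes the lookup robust if the model returns the records in a different order than the comments were sent.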