Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It seems to me like they want to make AI indistinguishable to real footage as fast as possible. But they can't achieve that so fast through technology. They know a lot of people reject AI content, so they do that to make us confused, because people will start questioning if videos are AI generated, but then we will get answers from YouTube creators saying that the videos are real. Slowly people will stop questioning and then the barrier between AI generated and real content will be "gone". I'm 100% sure this is what is going on and this is not only absurd, but it is also extremely dangerous. There should be a law that obligates content creators to let us know if a content is real or AI generated. And this law should be applied worldwide, we have the right to be told what is real and what is not and to choose what we consume. Thanks for bringing that up, Rhett.
youtube 2025-08-30T16:4… ♥ 4
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugx3b6z-us-PxyAdMip4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzpYXE0VIzeH_7UtiZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw5-5LknWwHZ8suuXp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzZ_MH5TrQjQNCJh1N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxTAu8jNInqVVKDEFZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
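The raw response is one JSON array coding a batch of comments at once, so recovering the values for a single comment means parsing the array and indexing by `id`. A minimal sketch of that lookup, using the batch shown above; note the record whose values match the coded dimensions displayed for this comment (policy "unclear", emotion "fear") is `ytc_UgzpYXE0VIzeH_7UtiZ4AaABAg` — the mapping of this page's comment to that id is an inference from the matching values, not stated in the output itself.

```python
import json

# Raw model output copied verbatim from the "Raw LLM Response" above:
# a JSON array, one object per coded comment.
raw = '''[
  {"id":"ytc_Ugx3b6z-us-PxyAdMip4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzpYXE0VIzeH_7UtiZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw5-5LknWwHZ8suuXp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzZ_MH5TrQjQNCJh1N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxTAu8jNInqVVKDEFZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

# Index the batch by comment id for O(1) lookup of any coded record.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# The record matching the coded dimensions shown above for this comment.
record = codes["ytc_UgzpYXE0VIzeH_7UtiZ4AaABAg"]
print(record["responsibility"], record["reasoning"], record["policy"], record["emotion"])
# → company consequentialist unclear fear
```

Indexing by `id` rather than by position also makes the lookup robust if the model returns the batch in a different order than the comments were submitted.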