Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
After listening to 9 minutes: Your problem is to not differentiate betweeen closing your hands" and "making the clap noise". Of course for the noise that emerges, you need also the component of speed. So you somehow set wrong premises by overlooking crutial aspects. And the argumental chain that follows is not only your fault. It's the fault of all philosophers we actually know, because they're relying too hard on language. GPT is never wrong here-it's you and your false premises, and also how you set them. And (sorry therefore) being rude to the AI and truth itself. To me it looks like the AI is "going down with you" cause it's too nice. I also noticed that you play with this alignement of Chat GPT for being empathic, helpful and nice, which is interferring with the facts and truth it tries to communicate. It love your channel and the intention behind it, but the real devil (or daose thing) is you.
youtube 2025-05-26T16:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyQMtDEWw7JPDfy2Mx4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwPbWdk_Y8URRJVo6V4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz44as2fSeLoTWSCTN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugw73stmvm8JluPftAF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxjkRUNs9UlorWOgUx4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwzBq4uruRfx5lfoFh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxP8qwpclajcL-EObt4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzbDRzZRK_amwvu7fd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgyPiSKS8fIKLyuALqp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyTeEwFz1IyuV718qJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
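A raw response like the one above can be indexed back into per-comment codes to look up the coding for any single comment. A minimal sketch, assuming the response is valid JSON with the field names shown here (variable names are illustrative):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, one object per
# comment id. Only the first record from the batch above is shown here.
raw = '''
[
  {"id": "ytc_UgyQMtDEWw7JPDfy2Mx4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
'''

# Index the records by comment id so a comment's coding can be inspected
# directly, as in the "Coding Result" table above.
codes = {row["id"]: row for row in json.loads(raw)}

record = codes["ytc_UgyQMtDEWw7JPDfy2Mx4AaABAg"]
print(record["responsibility"])  # distributed
print(record["emotion"])         # mixed
```

In practice the `raw` string would come from the model response verbatim; if the model wraps the array in extra text, the JSON span would need to be extracted before `json.loads` succeeds.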