Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
7:39 when I see this host’s face talking, I know it’s a quality content from somewhat of an investment. You’re a professional and you bring reputability to the content by putting your face on it. Plenty of people can make a good interview script and adapt AI to conduct the interview dynamically. I think that when “what makes X good” involves value judgements from people, X will be better with human involvement. Example: what makes a calculator good? Getting the right answer, being usable, and reliable. No one is choosing a calculator based on an individual person associated with it. What makes a YouTube video good? Well, in some ways, it’s good if consumers click the video. Consumers click the video for a lot of reasons, and some of those reasons are on the basis of the individual humans associated with the video. Thus, humans are a value add to YouTube videos. So, even if a YouTube video features an AI interviewer who does an objectively better job conducting the interview, people are not good at evaluating that kind of value. So they might assign a lower value to that product than they should. So I guess, when human flaws are involved in the value judgement, human involvement in the product to appeal to those flawed human judgements will improve the product’s performance insofar as the flawed human judgements are considered a relevant metric of performance.
youtube Cross-Cultural 2025-09-30T17:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxCh3frawc0Z630j6Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwTi5wIXN7KKF3us414AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy6svq91zPlIRKI6N94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgwVgu1pdNmWsjtbjgV4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyuO2Ym4Z1Cf2tfH3p4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwwD0zLVE_EI4zkw8B4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzq-x_ugwEIaD8vFhR4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwGsSl7sMotWUjcxkF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyIs5SQ5Pe2A40MEAZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx7WAHDLiCApDb_pkN4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
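The raw response is a JSON array mapping each comment id to its four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how one comment's coding could be looked up from such a response; the `coding_for` helper is illustrative, and the excerpt below is truncated to the single entry that corresponds to the Coding Result shown above:

```python
import json

# One-entry excerpt of the raw LLM response above; the full response
# contains ten such objects, one per coded comment.
raw_response = (
    '[{"id":"ytc_UgwwD0zLVE_EI4zkw8B4AaABAg",'
    '"responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"approval"}]'
)

def coding_for(raw_json: str, comment_id: str):
    """Parse a raw coding response and return the entry for one comment id,
    or None if the model did not emit a coding for that comment."""
    by_id = {entry["id"]: entry for entry in json.loads(raw_json)}
    return by_id.get(comment_id)

coding = coding_for(raw_response, "ytc_UgwwD0zLVE_EI4zkw8B4AaABAg")
print(coding["emotion"])  # the four fields match the Coding Result table
```

Keying the parsed array by `id` makes repeated lookups cheap and makes missing codings explicit (`None`) rather than raising mid-pipeline.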