Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
4:24 - I work for an e-learning firm, and my VP decided to kick out all writers and researchers for ChatGPT. What resulted was more inaccurate information that I, as an editor, have to flag down and an SME still has to check. It's irritating. If this system was entirely handed over to AI, we would be publishing wildly inaccurate content. We have a food delivery service here that has entirely removed humans from the equation. The AI only tells you that your food is coming in xx minutes. There's no way to get in touch with a human. If I didn't get a delivery guy assigned for half an hour and contacted actual humans, they'd cancel the order and refund my money. Now I just have a bot telling me the order's on its way. How do I contact the company if a delivery guy drops down unconscious outside my house? I can take him to a doctor, but I can't sit with him all the time. How will I contact his family if the only support I get is "Your order will arrive in two minutes" and "I understand but I can't help you with that"?
youtube AI Responsibility 2025-10-10T03:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzpXL_DHu-27znxXjR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyhSeqx6rT3qMGGXN94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyh1w_1_zyVl-Q1d2J4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxqx3TKt19eTR8H8_h4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyxzsEn_0DcVuM8T4d4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy_SjaptTiiBPUuwQ94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzAS2DgJnmGZYIYmgJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwajUsn1XG8EYvgVPl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwkbOlq5XYueMH3iZh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwKZmMP1qdC5cu6pCF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
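The raw response is a JSON array with one object per comment in the batch, keyed by comment id. A minimal Python sketch of how the coding-result table above could be recovered from the raw output (only two of the ten records are reproduced here; the field names "responsibility", "reasoning", "policy", and "emotion" come directly from the logged response, and the matching is an assumed lookup by id, not the tool's actual implementation):

```python
import json

# Two records reproduced verbatim from the raw LLM response above.
raw_response = '''[
  {"id": "ytc_UgzpXL_DHu-27znxXjR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwKZmMP1qdC5cu6pCF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]'''

# Parse the batch and index the per-comment codes by comment id.
records = json.loads(raw_response)
by_id = {r["id"]: r for r in records}

# The coding result shown in the table (company / consequentialist /
# regulate / outrage) corresponds to this record from the batch.
code = by_id["ytc_UgwKZmMP1qdC5cu6pCF4AaABAg"]
assert code["responsibility"] == "company"
assert code["reasoning"] == "consequentialist"
assert code["policy"] == "regulate"
assert code["emotion"] == "outrage"
```

Indexing by id rather than by array position guards against the model returning the records in a different order than the comments were submitted.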