Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Today's businesses and investors are aggressively replacing human teams with agentic AI systems. They believe the systems are smart enough to pick things up quickly, and that wider AI adoption will earn them more money. They never learn the lesson that nothing is perfect: trust AI 100% and, all of a sudden, it will surprise you, and no one can tell in advance whether the surprise will be good or bad. The real problem is who pays the price when AI makes mistakes once businesses mostly depend on it. The AI agent service providers? The LLM owners or developers? Or the businesses themselves? Here's the thing to think about: if the injured parties want AI service providers to be liable, they would basically have to prove that the mistake was caused by a verifiable defect in the AI software or systems, rather than by a failure in the business's own processes, data, or configuration. But without a solid team to back you up, I'm not even sure how you'd prove that. Just write an email, or submit a ticket to complain? 😂 The truth is, even as AI gets stronger and more capable, you need solid human teams to monitor and supervise its decisions and actions, and to intervene when needed. Otherwise, running a business with no plan B and no failover mechanism is just gambling.
Source: youtube · AI Jobs · 2025-09-24T17:1…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwRcCiDeIXdq0S6xpl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxfDTBsWsGv4P0v5CB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxByXu3Tq1SNVdnRal4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwgv8tCRy-96M3pq1B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzq6G5Og3h21DrSl9p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx96PZoPDkW01t4NgF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzOOmU9v2GCFFp7deZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyvhXlHIEt_fjRFjUV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzTAdx9iXjYXwInS394AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx0RpMmxvb-3DM6cOh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
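A minimal sketch of how a batch response like the one above can be parsed back into per-comment codes, assuming the model returns a valid JSON array with the field names shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). Only the Python standard library is used; the response here is truncated to two of the ten entries for brevity.

```python
import json

# Truncated copy of the raw model output shown above (two of ten entries).
raw_response = """[
  {"id": "ytc_UgwRcCiDeIXdq0S6xpl4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx96PZoPDkW01t4NgF4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Parse the array and index the rows by comment id for lookup.
rows = json.loads(raw_response)
coded = {row["id"]: row for row in rows}

# Look up the entry for the comment displayed on this page.
entry = coded["ytc_Ugx96PZoPDkW01t4NgF4AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {entry[dim]}")
```

Indexing by `id` makes the lookup robust to the model returning entries in a different order than the comments were submitted.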