Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What many people fail to acknowledge when discussing LLMs is that companies are not using LLMs as their entire AI stack. In practice, traditional machine-learning models trained on proprietary data generate results that are then passed to an LLM for interaction. Acting as if LLMs are the only form of AI is incorrect. Human oversight is required at multiple layers: validating data, auditing models, and interpreting outputs. LLMs hallucinate and optimize for producing a plausible or rewarded response, not necessarily a correct one. Training reduces error rates but does not eliminate failure.

Because of this complexity, these systems cannot be built, governed, or maintained by a single individual or a small external team. Effective AI development requires sustained, specialized labor and deep organizational integration. This creates a structural barrier that concentrates AI capability within a small set of companies that can afford the cost, talent, and ongoing oversight.

Over time, this concentration is economically destructive. Productivity gains, wealth, and decision-making power centralize, while smaller and mid-sized businesses are priced out. As labor displacement accelerates without proportional job creation, entire sectors become dependent on tools they cannot inspect, modify, or replace. Competition declines, innovation slows, and the economy shifts toward rent-seeking and systemic fragility rather than broad-based value creation.
Source: youtube · AI Jobs · 2025-12-24T18:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       mixed
Policy          industry_self
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugzcg2taNPWlFdbKexZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxunQiPLpxFl415ZcJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx_Y1KR2Km62TdGuD14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwqoRT-FG8tEOtBSVh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxv62Z2dm1lYsmryr54AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugz3-Ax2Cmt7Z3e7NM14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyjMTXCm-OQwN9hTtp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgypOqPRlpSRxh0zoAB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgykuPTaf9MxP3NlWZR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxvHUCFBtwuKYGgXlV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
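As a rough sketch of how a raw response like the one above can be checked before use, the snippet below parses the JSON, verifies that every entry carries the five expected fields (field names taken from the response itself; the validation logic is an assumption, not part of the coding pipeline), and indexes entries by comment id:

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_Ugzcg2taNPWlFdbKexZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxv62Z2dm1lYsmryr54AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"indifference"}
]'''

# Field names observed in the raw response; any missing key flags a malformed entry.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

codes = json.loads(raw)
for entry in codes:
    missing = REQUIRED_FIELDS - entry.keys()
    if missing:
        raise ValueError(f"entry {entry.get('id', '?')} is missing fields: {missing}")

# Index by comment id so a coded comment can be looked up directly.
by_id = {c["id"]: c for c in codes}
print(by_id["ytc_Ugxv62Z2dm1lYsmryr54AaABAg"]["policy"])  # prints: industry_self
```

The lookup for `ytc_Ugxv62Z2dm1lYsmryr54AaABAg` matches the coding result shown in the table above (policy `industry_self`), which is one quick way to confirm the displayed codes agree with the raw model output.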