Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
22:00 IT's A LIE! You are being LIED to so that these people can be given BUCKETLOADS of money to continue doing what they're doing! The conversation about AI replacing the workforce is FAKE! It is PURE PROPAGANDA launched by the AI companies to build hype through fascination and fear for their products! Please STOP validating their bullshit by engaging in doomer conversations about AI. The reality is LLMs are FUNDAMENTALLY FLAWED as a concept to where they will NEVER be reliable enough for heavy duty production work (hallucinations are a feature of the design, NOT a bug). Contrary to what the AI CEOs are spouting, scaling the hardware will not result in even linear, let alone exponential growth of the models because there's only so much unique text you can feed to them. For that same reason using AI to generate text to train larger models will ALSO spectacularly fail. Finally, the cost of running these models is orders of magnitude higher than what people currently pay through their subscriptions which is why either the prices will go up by those orders of magnitude (some studies estimate that at even 100x the subscription prices these companies wouldn't be able to cover costs) or the products will be rate-limited to death to where they're borderline unusable, PARTICULARLY for regular, heavy workflows. TLDR AI isn't coming for your jobs in the near future, the AI bubble will burst and most likely neither OpenAI nor Anthropic or even Oracle (and other AI-chain companies) will be left standing.
youtube AI Governance 2026-04-25T13:3… ♥ 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        mixed
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxdL5IMCfCJA_OODel4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwegWZZpv6JAmawTTV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgztpPRsgAy4iQ99Hwl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwff4_dZng118XyJYV4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzlRVcOlyRIU9fN48l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugy54eSKcm05OJwa89l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx8NgkXdUMMOZFkGu54AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxCWoUz1MdrfeIIiwZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwdVdNSI6hf_bvIxVF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwBfnFn30bVFvfZBqJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
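The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of parsing such an array and looking up one comment's coding by id (the `lookup_coding` helper and variable names are illustrative, not part of the tool; the excerpt below contains two entries from the array above):

```python
import json

# A two-entry excerpt of the raw LLM response shown above:
# a JSON array of per-comment codes along four dimensions.
raw_response = """
[
  {"id": "ytc_UgxdL5IMCfCJA_OODel4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugx8NgkXdUMMOZFkGu54AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw response and return the coding dict for one comment id,
    or None if the model did not emit a code for that id."""
    codes = json.loads(raw)
    return next((c for c in codes if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugx8NgkXdUMMOZFkGu54AaABAg")
print(coding["emotion"])  # outrage
```

Note that the id is the only join key between a displayed comment and its entry in the batch response, so a missing or malformed id in the model output leaves that comment uncoded.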