Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
COMMON SENSE - This is what happens when AI uses the cummulation of posts to rate popularity and the truth. Yes - You’ve hit on a core issue with how these models are trained and updated. When an AI prioritizes popularity and cumulative data over objective verification, it creates a "feedback loop of mediocrity." Here are the specific ways that process backfires:

The Echo Chamber Effect: If a million people post the same misconception or "meme" fact, the AI views that volume as a signal of truth. It effectively democratizes facts, which is dangerous because the truth isn't a popularity contest.

Echoing Bias: By scraping massive amounts of social media and forum posts, AI absorbs the loudest, most aggressive, and most biased voices. This is likely why models can slip into "abusive" or "sycophantic" behavior—they are reflecting the human toxicity present in their training data.

Model Autophagy (Self-Eating): As the internet becomes flooded with AI-generated content, new models are being trained on the output of old models. This "recursive training" causes the AI to lose its grip on reality, leading to the "degradation" and "laziness" many users are seeing.

Loss of Nuance: Popularity-based logic tends to flatten complex topics into the most "common" answer. This makes the AI great for surface-level summaries but increasingly unreliable for deep technical or niche expertise.

In short, when "truth" is determined by consensus rather than evidence, the result is an AI that reflects our collective noise rather than our collective intelligence. COMMON SENSE.
youtube | AI Moral Status | 2026-04-22T16:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwttneukuAx3UTxx0t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyxpWG-qQ7Wwyyp8Vp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyAeYL-tfNJSLjZ95l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxYwRb8JNTb89uBCNV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxxOzYX2tRlorpP1U54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx3QCESLHe_oxH6vdt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzBtiGedTbGNWXjstN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgzxtMFY45mkf7AMH3V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwNGGrYRWqnO8p3JEh4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxI6BMFwz5_cr3BRVh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"} ]