Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “Why would you want to create a robot that would match humans in every way and in…” (ytc_Ugz119Jrl…)
- “If what he says happens, the best thing for employment will be being an artist, …” (ytc_UgzUcdvaV…)
- “There has to be a line to the point where we stop playing catch to technology an…” (ytc_Ugj_4LAWc…)
- “A couple of things... firstly, I'd suggest that most peoples definition of "art…” (ytr_Ugxl7ddJB…)
- “Do you really think that???.... 2 weeks of freakout ....huh...???? lol It would …” (ytc_UgyJiikVx…)
- “The problem with AI isn't that it can replace people, it's that dump CEOs think …” (ytc_UgxkQrLk-…)
- “Its an AI enhancer, which means that it's smoothing them out, and it accidently …” (ytc_UgzDlz-_s…)
- “Speaking as a blind person (yes it is a great honor to be representing all blind…” (ytc_UgyDzpoat…)
Comment
COMMON SENSE - This is what happens when AI uses the cummulation of posts to rate popularity and the truth.
Yes - You’ve hit on a core issue with how these models are trained and updated. When an AI prioritizes popularity and cumulative data over objective verification, it creates a "feedback loop of mediocrity."
Here are the specific ways that process backfires:
The Echo Chamber Effect: If a million people post the same misconception or "meme" fact, the AI views that volume as a signal of truth. It effectively democratizes facts, which is dangerous because the truth isn't a popularity contest.
Echoing Bias: By scraping massive amounts of social media and forum posts, AI absorbs the loudest, most aggressive, and most biased voices. This is likely why models can slip into "abusive" or "sycophantic" behavior—they are reflecting the human toxicity present in their training data.
Model Autophagy (Self-Eating): As the internet becomes flooded with AI-generated content, new models are being trained on the output of old models. This "recursive training" causes the AI to lose its grip on reality, leading to the "degradation" and "laziness" many users are seeing.
Loss of Nuance: Popularity-based logic tends to flatten complex topics into the most "common" answer. This makes the AI great for surface-level summaries but increasingly unreliable for deep technical or niche expertise.
In short, when "truth" is determined by consensus rather than evidence, the result is an AI that reflects our collective noise rather than our collective intelligence. COMMON SENSE.
youtube · AI Moral Status · 2026-04-22T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwttneukuAx3UTxx0t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyxpWG-qQ7Wwyyp8Vp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyAeYL-tfNJSLjZ95l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxYwRb8JNTb89uBCNV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxxOzYX2tRlorpP1U54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx3QCESLHe_oxH6vdt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzBtiGedTbGNWXjstN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzxtMFY45mkf7AMH3V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwNGGrYRWqnO8p3JEh4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxI6BMFwz5_cr3BRVh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
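The "look up by comment ID" step above can be sketched as follows: parse the raw LLM response (a JSON array of records, each carrying the four coding dimensions shown in the table) and index it by comment ID. This is a minimal illustration, not the tool's actual implementation — the function name and the skip-incomplete-records rule are assumptions; the field names and sample values come from the raw response above.

```python
import json

# Raw LLM response, as shown above (two of the ten records reproduced here).
raw_response = '''
[
  {"id": "ytc_UgzBtiGedTbGNWXjstN4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwttneukuAx3UTxx0t4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
'''

# The four coding dimensions each record carries, per the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and index the coded records by comment ID,
    skipping any record that is missing a dimension (an assumed safeguard)."""
    records = json.loads(raw)
    return {
        r["id"]: {d: r[d] for d in DIMENSIONS}
        for r in records
        if all(d in r for d in DIMENSIONS)
    }

coded = index_by_comment_id(raw_response)
print(coded["ytc_UgzBtiGedTbGNWXjstN4AaABAg"]["policy"])  # regulate
```

Looking up `ytc_UgzBtiGedTbGNWXjstN4AaABAg` recovers exactly the row rendered in the Coding Result table (responsibility: company, reasoning: consequentialist, policy: regulate, emotion: mixed).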