Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I never considered there was a difference between art and entertainment. But you…" (ytc_UgwMJSuN7…)
- "ai is just a thing that predicts words and dose stuff its trained off so just qu…" (ytc_UgzwEWaZx…)
- "After seeing the far left infiltrate education, test scores are in the dumps. I…" (ytc_UgzkKHbbf…)
- "Respectfully, the AI doesn't do anything they're not already doing (analysing wh…" (rdc_n7tg2hk)
- "Eventually UBI will be needed. People will live in pods, hooked up to machines t…" (ytc_UgxDxzD8z…)
- "The main AI will just learn to trick the checker AI (disguise its nasty action a…" (ytr_UgwYo-cob…)
- "AI would be nice for people could list their office and their location and emplo…" (ytc_UgzCPCyhD…)
- "i have a hard time believing no one stopped for the little black girl. Especiall…" (ytc_Ugw3SAOB-…)
Comment
What Yampolskiy is doing here isn’t new — it’s part of a very old media pattern. Fear-based narratives always generate reach, and platforms reward exactly this dynamic. The more extreme the prediction, the higher the engagement. It’s not about truth; it’s about amplification.
Interviews with catastrophists almost always go viral because fear is a reliable attention engine. Hosts benefit from boosted view counts, guests benefit from publicity, and the entire system rewards emotional extremity over balanced analysis. It’s not a conspiracy — it’s just how the attention economy works.
This is why these conversations often sound more like metaphysics dressed as engineering. The emotional certainty, the doomsday tone, the “I know something others don’t” framing — these are classic storytelling techniques for capturing attention, not scientific argumentation. The risks of AI are real, but turning speculative scenarios into guaranteed disasters is a business model, not a research methodology.
youtube · AI Governance · 2025-11-27T11:4… · ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy56i3yCvIS1H-iQeF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1tWrtiZh4KkuVGtZ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzEOJkDfdoFKcdAyGx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwfqBlH55FsMXp08YN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwZWrR4HhsShzB4iFd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxZ0lKO7zFNfw7ArY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgybKGti21A2jgR423R4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx5AFYoHo6He3wwdlZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwuAEPdg1iTmQYX45p4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzQyI4PKDnhc11i59N4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
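A raw response like the one above is only usable downstream if every record is well-formed JSON and every dimension takes an allowed value. The following is a minimal validation sketch; the `ALLOWED` sets are inferred from the values visible in this sample (the actual codebook may define more categories), and the function name is illustrative, not part of the tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the sample responses
# above; the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"user", "company", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that have an id
    and a valid value for every coding dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # a coding without a comment ID cannot be joined back
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
print(len(validate_codings(raw)))  # 1
```

Dropping malformed records rather than raising keeps a single bad coding from discarding the whole batch; the filtered-out IDs can then be re-queued for recoding.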