Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or click one of the random samples below. (A short sketch of how such an ID lookup works over the stored response appears after the raw JSON at the end of this page.)
Random samples — click to inspect:

- "@Avenger222 artists are paid to teach all the time. That’s how a lot of artist m…" (ytr_UgymmmcQ6…)
- "AI presents significant risks to society—including job displacement, entrenched …" (ytc_UgzIYt21G…)
- "I am definitely no expert, and dont pretend to understand modern technology, b…" (ytc_UgwmP9bBK…)
- "Or Disney points to this lawsuit with Midjourney to get a licence sale to all th…" (ytc_Ugw_2YD1Z…)
- "All for realness l.l.m. or as perceived large language models are passed off as …" (ytc_Ugwwe9Y68…)
- "I'd actually love if AI would purposefully give wrong answers if it leads to a r…" (ytc_Ugx0eEqEU…)
- "Whomever. Controls the new energy supply will have the ability to limit the exp…" (ytc_UgzSiAycV…)
- "I am proud to say, I'm not afraid of ai stealing my art. It'll fuck with it as…" (ytc_UgwSHBHuI…)
Comment
The video AI2027: Is this how AI might destroy humanity? presents a thought-provoking look at potential risks emerging from rapid advances in artificial intelligence. It explores research suggesting that if AI becomes autonomous without proper safeguards, it could pose existential threats to humans. The narration is clear and engaging, backed by expert opinions that balance optimism with caution. Visually, the video uses concise explanations and credible sources, making complex ideas accessible even to non-experts. Overall, it’s a compelling and educational piece that encourages viewers to think seriously about the future of AI and humanity’s role in guiding it.
youtube · AI Governance · 2026-01-31T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
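
Each coded comment follows this fixed four-dimension schema. As a minimal sketch in Python, with the allowed value sets inferred only from the examples visible on this page (the full codebook may define additional categories), the record could be typed like this:

```python
from dataclasses import dataclass
from typing import Literal

# Value sets inferred from the samples on this page only;
# the real codebook may define additional categories.
Responsibility = Literal["none", "ai_itself", "user"]
Reasoning = Literal["consequentialist", "deontological", "unclear"]
Policy = Literal["none", "regulate"]
Emotion = Literal["approval", "fear", "resignation", "indifference"]

@dataclass
class CodingResult:
    id: str                       # comment ID, e.g. a "ytc_" or "ytr_" string
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
    coded_at: str | None = None   # ISO timestamp, if stored with the coding
```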
Raw LLM Response
```json
[
{"id":"ytc_UgwmCgtPNSAQkibV3UJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzwTcrWCUAefBf5Eb14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzPVZag2Re52ZAzTsV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyLEa5RgXgpXVHyccR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx9C82t0dsEa8eomwx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwNL7F5yUmtxmtYbsd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx8CzyqM7ADzYZWX494AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwnd_UxLTMbkGTWZ2Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyCuyot3wO55C_LSuh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzn3FJryeK4OWKliW94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"}
]
```
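
Since the raw response is a JSON array of records keyed by comment ID, the lookup-by-ID feature described at the top of this page reduces to parsing the array and indexing it. A minimal sketch, with an abbreviated copy of the response above inlined for illustration:

```python
import json

# Abbreviated copy of the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_Ugx9C82t0dsEa8eomwx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugzn3FJryeK4OWKliW94AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]
"""

# Index every coded comment by its ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's coding.
coded = codings.get("ytc_Ugx9C82t0dsEa8eomwx4AaABAg")
if coded is not None:
    print(coded["policy"], coded["emotion"])  # regulate approval
```

In practice one would also validate each row against the schema sketched above and flag any comment that was sent in the batch but is missing from the model's output, since batched structured output can drop or duplicate items.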