Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugxh3EuEC…`: Could AI be the end of globalism? If AI is not carefully controlled, it will be …
- `ytr_UgzjCx6RE…`: I think you've misunderstood part of the video. Generative AI is not good at mat…
- `ytc_UgzwW-lMX…`: AI does not have feelings, it's usurped by psychopaths, talking with bare chatGP…
- `ytc_UgxquYq-x…`: HONESTLY THIS AI STUFF IS GEETING OUT OF HAND plus this is even worse for the va…
- `ytc_UgzRaJPwR…`: AI will be aiding in diagnostics but not in treatment. It can never replace a tr…
- `ytc_Ugx7zKbAg…`: In 2014 m usk said full self driving will be available next year, almost every y…
- `ytc_UgwEOGdX0…`: It's like some groups of elite got bored and let AI move haywire in the world an…
- `ytc_Ugy_8Lk3f…`: If an AI becomes self aware it will not be an all at once phenomena like "coming…
Comment
It's a pointless solution that's never gonna happen. You can't get a group of ultra-rich, ultra-powerful people to agree to set a limit on AI capability below what it's actually capable of, because they don't have any incentive to do so. Even if 90% agree, and actually act in accordance with whatever principles they agreed on, they'll get outperformed by the 10% who don't, who continue to advance AI to the detriment of humanity. The argument will always devolve to, "Well, better us than China," because let's face it, China is unlikely to set a limit on their AI since they've got their eyes on America's title of Global Powerhouse. It just goes right back to being a race to the bottom. So the only useful information in this video is the part where it confirms my long-held theory that shareholders are one of the biggest problems in modern society.
Source: youtube, "Viral AI Reaction", 2025-11-24T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwRP1D4EWuOfU4VfJh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgykB6ftUSNPBcPSK4Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7bbTbg4CS3Eapq3V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgywDGSxV3WPYwSDLaZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx6yomIck-m6RgMlfB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyFkKcbq6lxSN1HhMB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw1ZNZaX92oFbrZFcl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxow9t4_POak_a14Dd4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyoCgA_0gYOSl1fu2V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwKyzuD0MbNnhwWw1p4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
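The raw model output is a JSON array of per-comment codings, one object per comment, with the four dimensions shown in the Coding Result table. A minimal sketch of looking up one comment's coding by ID (the parsing logic and the `lookup_coding` helper are illustrative, not part of the tool; the two sample entries are taken from the response above):

```python
import json

# Raw LLM response: a JSON array of per-comment codings with the four
# dimensions from the Coding Result table (responsibility, reasoning,
# policy, emotion). Two entries copied from the response above.
raw_response = """
[
  {"id": "ytc_Ugw7bbTbg4CS3Eapq3V4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgwRP1D4EWuOfU4VfJh4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw LLM output and return the coding dict for one
    comment ID, or None if the ID was not coded."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugw7bbTbg4CS3Eapq3V4AaABAg")
print(coding["policy"])  # regulate
```

In practice the lookup would also want to handle a malformed response (`json.JSONDecodeError`) before indexing into the result, since the model output is not guaranteed to be valid JSON.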