Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- “How is this even considered a “debate”? The central issue at hand (AI posing an…” — ytc_UgwHGK7BE…
- “But AI answers on Google are WRONG, or misread & answer the wrong question, or …” — ytc_UgxGgpW7f…
- “I endorse this usage of AI. You just better hope it doesn’t become sentient beca…” — ytc_UgyBbo692…
- “I think the interesting thing to note here is, that without power - AI dies. At …” — ytc_UgzLkJrOM…
- “And you know what the best thing is ? The Ai artist can't even claim intellectua…” — ytc_Ugytge2Ay…
- “also i think that both sides are overlooking something really important,the fa…” — ytc_UgzplYoXH…
- “More and more people are starting to realise AI is not actually Artificial Intel…” — ytc_Ugw79fdiJ…
- “All I'm hearing have ai fail and we win cuz f them if they win only. Let us all …” — ytc_UgzHAAOCa…
Comment (youtube · AI Governance · 2025-11-30T02:0…)

> The screaming elephant in every room is (and always was) the "coordination trap", or "Moloch", that must be solved BEFORE we summon the AI god. The risk of unaligned humans misusing powerful AI is too big, considering the whole human history. Let's remind ourselves that the father of "MechaHitler" is the richest fascist in the world, currently building an army of robots. The rest is just basic maths.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxzw5aj88K2jZv9jyx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzJP6SM9JMD-Zj8tA54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxnEis_M_Ia7oftq6d4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwqJYFXCdHOesDgDpd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyfv7qvQZe3kmIPrZx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxXmkoFzaKAshmKOMh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyXlYKTneYhHb2qp_94AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwaTVCfVsfBRpiwszZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwuVu36YHFOTonadH54AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxDD1zbokw2qoM0SKF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
```
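Before a batch like the one above is accepted into the coded dataset, it is worth checking that every record parses and that each dimension carries one of the expected labels. A minimal sketch, assuming a hypothetical codebook inferred only from the values visible in this sample (the tool's actual category sets are not shown here):

```python
import json

# Allowed values per dimension. This is an ASSUMPTION reconstructed from the
# sample output above, not the project's actual codebook.
SCHEMA = {
    "responsibility": {"ai_itself", "distributed", "user", "government", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed or off-schema records."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset all carry the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook")
    return records

raw = ('[{"id":"ytc_UgzJP6SM9JMD-Zj8tA54AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = validate_records(raw)
print(coded[0]["emotion"])  # fear
```

A check like this catches the common LLM failure modes for structured coding tasks: truncated JSON, invented category labels, and missing fields, so that only clean records reach the results table.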