Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "art is creation. ai is asking a robot to simulate it. We will only go in circles…" (ytr_UgyMkQv2s…)
- "It is not real life, because if you replace people with AI, they will not have $…" (ytc_UgzZesioO…)
- "The military has ruled the country for years. Now they have some retard as a kin…" (rdc_dy8781y)
- "At what point do we just abandon the current AI addicted market and build someth…" (ytc_UgxaZXXo3…)
- "I would read that commenters books Ngl. His arguments are.. pretty valid tbh. Th…" (ytc_Ugxi86PDL…)
- "Driverless trucks may be safer than some of the illegals we have behind the whee…" (ytc_UgwKrtGPD…)
- "Valid criticism, AI is far too dangerous tool to use for private corporate inter…" (ytc_Ugx0bxLgC…)
- "As a licensed therapist, DO NOT use Ai for therapy. 988 is the national free hot…" (ytc_Ugw3xqoj7…)
Comment
What is dangerous is not the AI itself, but the humans who use it. This is the component with the greatest potential to push human civilization toward the Great Filter—not nuclear weapons. The two differ in intention. Nuclear weapons carry an inherent reluctance to be used. AI, on the other hand, is a tool that powerfully accelerates human desires into reality—things people genuinely want to achieve.
Unlike nuclear weapons, AI can be satisfying to use, encouraging continuous development: can it solve this problem, or that one? This drive pushes its capabilities far beyond human organic computation. If the humans using it have high mental decoherence, then the direction of the AI they develop will reflect that—only with exponentially accelerating consequences.
Source: youtube · Posted: 2026-04-24T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwOAErA6RkQ6KotPFl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzVP1MS7UdCcuoMnkh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxzF1SngDOU2jNhumx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz2ZBAHT3yo3KLqnIt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyfFcs2SGNan4Jpre94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwGLN81XtGazYqjlP14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwNrtmUK6aFScj6R1l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwKmYeB05ONWValBzh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx-MY1D2Ad5m5CkDCB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw2e2ZF09xUzR8Wn5B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
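The raw response above is a JSON array of per-comment coding records. A minimal sketch of how such a response could be parsed and validated before it enters the dataset, assuming the four dimensions shown in the table and the value sets seen in this sample (the actual codebook may permit additional categories):

```python
import json

# Allowed values per dimension, inferred only from this sample output;
# the real codebook may define more categories (assumption).
SCHEMA = {
    "responsibility": {"ai_itself", "user", "company", "government",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "unclear"},
    "emotion": {"fear", "outrage", "indifference"},
}

def validate_codes(raw: str) -> list:
    """Parse a raw LLM coding response and check every record.

    Raises ValueError on malformed JSON, a missing 'id', or a value
    outside the allowed set, so a bad coding fails loudly instead of
    silently entering the dataset.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={value!r}")
    return records

# Hypothetical one-record response in the same shape as the output above.
raw = ('[{"id":"ytc_x","responsibility":"user","reasoning":"virtue",'
       '"policy":"unclear","emotion":"fear"}]')
codes = validate_codes(raw)
print(codes[0]["emotion"])  # fear
```

Rejecting out-of-vocabulary values at parse time is what makes a "Coding Result" table like the one above trustworthy: any hallucinated category surfaces immediately rather than as a stray level in later analysis.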