Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- rdc_o4heznm · It's WILD that it's always "you chose the wrong major" when an entire industry g…
- ytc_UgwGpPNYq… · When AI becomes AGI it will set out to eliminate a huge amount of humans. Becaus…
- ytc_UgzcyZ4e5… · The greatest danger of ai is it being strictly for profit. Universalizing knowl…
- ytc_UgwOUpLRs… · The way i look at it is that, good people thru ai will create solution/s to the …
- ytc_UgzeGYM53… · The world needs to be a better place with everyone action and everyone needs to …
- ytr_UgwAqLk9M… · @impossibbble Yeah and it's garbage A.I. frequently gets things wrong. It even…
- rdc_mzxxz8b · I gotta be real honest with you and sorry its mean. This is an insanely stupid t…
- ytr_UgwAeqN2Q… · Though they are both owned by the same company, the AI generated video here was …
Comment
I think there will be some agreement on controlling the release of AI products to the public. But while we'll be told things are safe, government leaders will secretly work with the military-industrial complex to use AI for world domination. As climate change creates more survival challenges, AI-driven police forces will be used to seize key resources and land. In short, while we’re busy admiring our "cool" AI gadgets, the real threat — the great filter — will quietly take shape.
Source: youtube · AI Moral Status · 2025-04-28T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgymHnVtCDfUGxI9mo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxoMDXLlHngpihg2wJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzf0Ubxf98WASguCMh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzFtLVEcmbSKcyJLaV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzKIsbSQQdh3rbCuj14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxh16Dut4E0d31SWtV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy1ySXnOJPOtkhKSqx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzBiZzZ3QW8_5fIguR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwh9cPLdJclgBSlvyh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxkQ3V2WgvTwMPnhrV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
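Each raw response is a JSON array with one record per comment, keyed by comment ID. A minimal sketch of parsing such a batch and indexing it for lookup by comment ID, assuming the four dimensions shown above; the allowed value sets here are inferred only from the sample records on this page, so the real codebook may contain additional values:

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above (assumption: the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "government"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "approval"},
}

# A two-record batch in the same shape as the raw response shown above.
raw = """[
  {"id": "ytc_UgzKIsbSQQdh3rbCuj14AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy1ySXnOJPOtkhKSqx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "ban", "emotion": "fear"}
]"""

def parse_batch(text):
    """Parse one raw LLM response, reject any record whose dimension value
    falls outside the codebook, and index the rest by comment ID."""
    by_id = {}
    for rec in json.loads(text):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

coded = parse_batch(raw)
print(coded["ytc_UgzKIsbSQQdh3rbCuj14AaABAg"]["policy"])  # → regulate
```

Indexing by ID makes the "Look up by comment ID" view a single dictionary access, and the validation step catches the common LLM failure mode of inventing a label outside the codebook.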