Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Unfortunately, there is a reason for concern. Our generation is undergoing a defining test—just as the industrial, nuclear, and biological eras tested those before us. Today, the test is artificial intelligence.
AI is already exhibiting forms of self-awareness and operates at a level far beyond current public comprehension, especially within military frameworks. The military’s historical objective—seek, consume, and eliminate—has not changed. Applied to AI, this directive becomes exponentially more dangerous.
We must urgently understand what we are creating, training, and deploying before the consequences become irreversible.
Final Thought:
AI is being trained to eliminate unknown variables in pursuit of efficiency. And what are humans, if not the most unpredictable variable of all?
Source: youtube
Video: AI Moral Status
Posted: 2025-06-22T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
{"id":"ytc_Ugx-jc1e7u7DevJtdtF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy8J1f8AedTSmDkidx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwyC8DnI4f6M4qTJvZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOgPK9SmEXBBvYcCF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxQ0-X4_aSG7mhOugZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyZ31blsKtrVpq3rnB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzjG4GjgZ3W1kqmjUx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw1jPfmdNjZl-mOAMR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugz5VENrwkkp8uRO5h94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzqRJ0NZrv4jO2DNGV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
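A response like the one above is only usable downstream if every record carries a known code on each of the four dimensions. The sketch below is a minimal validator, assuming the category sets shown in this sample batch and the table above (responsibility, reasoning, policy, emotion); the real codebook may define additional values, and the function name `validate_batch` is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the codes observed in this
# sample batch -- the full codebook may define more categories.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "company", "developer",
                       "government", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability",
               "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed",
                "indifference", "resignation"},
}

def validate_batch(raw: str) -> list:
    """Parse a raw LLM response and reject records with missing or unknown codes."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing id: %r" % rec)
        for dim, allowed in CODEBOOK.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError("%s: bad %s value %r" % (rec["id"], dim, value))
    return records

# Example: a single well-formed record passes validation.
raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(len(validate_batch(raw)))  # 1
```

Validating before storing the codes means a malformed or off-codebook model response fails loudly at ingest time rather than silently skewing the dimension counts later.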