## Raw LLM Responses

Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples:

- `ytc_UgwtGr4J_…`: "If you make a product of higher quality - AI won't affect your life at all. But …"
- `ytc_Ugy9sM_k1…`: "I just knew it would be a matter of time. You ARE the responsable driver, theres…"
- `ytc_UgztaX7gJ…`: "They say "ai will remplace artist" but bro, without artist, without data base, n…"
- `ytc_Ugytlamc0…`: "From a legal standpoint, the question is: Do we treat the AI model like a person…"
- `ytc_Ugz8OidYH…`: "I see parallels between AI and firearms. The 2nd amendment enshrines citizens wi…"
- `ytc_Ugw1SXgzi…`: "I think this study was written by people living in the past. Good or bad, the ol…"
- `ytc_UgxYvcWBK…`: "Looks great, except girls should not be in school. Period. Only pseudo-conservat…"
- `ytc_UgzasvS68…`: "Test one of these driverless trucks over the rocky mountains during the winter. …"
## Comment
Consider this: if most companies utilize AI, it could potentially eliminate around 60% of the workforce. That 60% will not be able to find a job, so what happens to the companies profits when there are 60% fewer customers now available! Not to mention the social problems this could create. Make the math work for me, I don't see it. If you receive a dividend, as he says, then you become a subclass, as you will receive a minuscule part, since it has to be shared among 60% of the people who are currently unemployed; then again, you arrive at the same conclusion with the same problems. I believe this could transform the world into a "socialist" global society, where the ruling part is the company that manages the language models, and the rest receive a basic income; however, it will require something of you to obtain that basic income. Make this make sense? I could be 100% wrong, but it certainly appears that way, and I work in the tech ecosystem.
youtube · AI Moral Status · 2025-07-25T19:3… · ♥ 2
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
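The four coding dimensions above can be modeled as a small record type. This is a minimal sketch, not the tool's actual schema; the allowed value sets below are only those observed in this batch, and the full code book may define more.

```python
from dataclasses import dataclass

# Value sets observed in this batch (assumption: the real code book
# may contain additional categories not shown here).
RESPONSIBILITY = {"company", "developer", "government", "ai_itself", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "contractualist", "virtue", "unclear"}
POLICY = {"regulate", "ban", "liability", "none", "unclear"}
EMOTION = {"fear", "outrage", "skepticism", "indifference", "resignation"}


@dataclass
class CodedComment:
    """One coded comment, keyed by its comment ID."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """Return True if every dimension uses an observed value."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```

A row like the table above (`company` / `consequentialist` / `regulate` / `fear`) passes `validate()`; a value outside the observed sets does not.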
## Raw LLM Response

```json
[
{"id":"ytc_UgxbzFzUcLviNTFmK3V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2vLP3y4OOoOaNPah4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"skepticism"},
{"id":"ytc_UgxhoR5UHK6THIMTaSF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzZhKhJ4iVVt2hAfQN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxgQLOVOtt98fyt7lR4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwWxPJKhJVI0MzAic94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzDbssCuyZV4i4D89N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzD6hIZA8zXmdwa9gN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxWmftk2HugRn7na3t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxGm-uZtJ_u4Pe3TGp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
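The by-ID lookup that this view supports can be sketched as parsing the raw batch response and indexing it by comment ID. This is a minimal sketch that assumes the model returned well-formed JSON; the `raw` string below uses two rows excerpted from the batch above.

```python
import json

# Two rows excerpted from the raw LLM response shown above.
raw = """[
{"id":"ytc_UgwWxPJKhJVI0MzAic94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzDbssCuyZV4i4D89N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

# Build an index from comment ID to its coded dimensions.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the comment displayed in the Coding Result table.
row = codes["ytc_UgwWxPJKhJVI0MzAic94AaABAg"]
```

In a real pipeline the parse step would also need to handle malformed output (truncated JSON, extra prose around the array), which `json.loads` raises on rather than repairing.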