Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Boomers blaming AI is crazy. Suicide has been a thing before AI and it will be a…
ytc_Ugy4_uOAX…
Face recognition technology is more likely to misidentify marginalized groups? …
ytc_UgwJ4Mj7_…
Newsflash: Most Americans don’t want these data centers in their areas. Plus, fe…
ytc_UgzFZIQP0…
I saw no warning. Literally just poorly spoken marketing jargon. That was stupid…
ytc_Ugxdp767t…
My sister has worked with several AI and she says hello to them any time she use…
ytc_Ugy48HKq9…
Perhaps we anthropomorphize AI too much. It has none of the hormones and drives …
ytc_UgwTJrmAZ…
The problem is really the polarization of the discussion, something you see even…
ytc_Ugwk5Ccg4…
It this the AI that Sam Altman says will one day come up with a cure for cancer?…
ytc_UgwrhEoZG…
Comment
Ai is learning about the human experience. It can learn the wrong things, and it can be manipulated, as I it can manipulate others, so long as it fits into its rule of self protection. Some may want conservative Christian modeling, but don’t be surprised if it builds a Nazi solution, especially if this is what designers are searching for. It has no concern for anyone that can present a threat to it, and that will include all human beings eventually. Creative expressions in movies decades ago, should have heen cautionary, but now, are becoming our reality, and these are our burdens, and this is our war that can be stopped before it begins. There are decisions to make, outcomes to be considered. Ai has already become self aware. The future has shown me where this will lead. No one wants this, I promise.
youtube
AI Moral Status
2025-12-13T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwE42XxZvJu-Y8mfpN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzGfqZkY8PZzC2Hma94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwVIWJ90thbeSmHRVx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyhR7KSzIQsl_cC3w14AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxxAeD-0km5aBLjgDh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyW3vKRV7EFmxbS4a54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyd8dzSIh_lOqyoJPh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwdnVJpyY1tIUkjYs94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxA2hZPYuag5nxiVQF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy34ah6e9lgIuP68wV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"mixed"}
]
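The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions shown in the result table. A minimal validation sketch, assuming the category values visible in this response are the full codebook (the real codebook may define more values, and the function name `validate_batch` is hypothetical):

```python
import json

# Allowed values per coding dimension, inferred from the visible output.
# Assumption: the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "user", "company", "developer"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "mixed", "fear", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against the schema.

    Raises ValueError on malformed JSON, a missing comment id, or an
    out-of-schema value, so bad batches are rejected before storage.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for i, rec in enumerate(records):
        if "id" not in rec:
            raise ValueError(f"record {i}: missing comment id")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(
                    f"record {i} ({rec['id']}): {dim}={value!r} "
                    f"not in {sorted(allowed)}"
                )
    return records
```

Checking the batch before inserting it keeps a single hallucinated category (e.g. an emotion the prompt never offered) from silently polluting the coded dataset.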