Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- AI is just a more lazy and substituting way to make art just by clearly not doin… (ytc_Ugydvqi2T…)
- >We can still morally eat animals but we are obligated to insure that their l… (rdc_h6u4fqc)
- Navid talks about the stability of artificial intelligence and the potential to … (ytc_UgwMhwgSl…)
- The creator is kinda crazy. I mean like, why would you create a human robot? Lik… (ytc_UgxdTI-yC…)
- This makes all forms of surveillance extremely important but mostly in the choic… (ytc_UgxeKYNDt…)
- We dont need a godfather, Elon, or others to know. We know but we can't stop it… (ytc_UgxGn-dCx…)
- LLMs are not a dead end. They are by their nature "avatars of human understandi… (ytc_UgyF0waFe…)
- 12 years after giving the financial industry bailouts the Dow Jones hit 31000 du… (rdc_gkquodr)
Comment
The biggest problem with AI is that it is so human, in the sense that it is exposed to, and fed with, all the misconceptions and self-destructive biases of today's public discourse. The mental content of those in charge of its development and advertising, is neither credible, nor reliable, nor promising.
With this AI you are giving enormous power to deeply wrong and destructive ideas and notions; which are the same ones that have led you to have the presidents that you have, the borders that you have, the addictions that you have, the media that you have, the deficit and the debt that you have, and the wars that you have. AI is not going to be the exception.
The worst of all is that you are collectivist and identity-based, and your AI is too. That's because your system of thought is nothing more than an abuse of language, and that is how it spreads.
youtube · AI Moral Status · 2023-12-27T14:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz0DkEDnbCvtSND8594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyOoh9Pz8T3oY-CxY14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzHKHdZArcBh6mQfs94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQVlbZ2lHkSmGICkx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxS074G6MdpNwWac9h4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxbncsVnzbsQuUZXUJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz_F5iGMDhS1Fc_9I94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzSjleVzp1QIEKnkNx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxcMw9XtapWaSwhQOl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz1bc-i26HUGNpbl6x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
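
For readers who want to script against this data rather than click through the page, here is a minimal Python sketch of the lookup, assuming the raw model output is available as the JSON string shown above. The `lookup_coding` helper is illustrative, not part of the tool.

```python
import json
from typing import Optional


def lookup_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Return the coding record for one comment ID, or None if absent.

    The raw response is expected to be a JSON array of objects with the
    keys shown above: id, responsibility, reasoning, policy, emotion.
    """
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None


# Example: pull the record that populates the Coding Result table above.
raw = '''[
  {"id": "ytc_UgzQVlbZ2lHkSmGICkx4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''
record = lookup_coding(raw, "ytc_UgzQVlbZ2lHkSmGICkx4AaABAg")
print(record["policy"] if record else "not coded")  # -> "regulate"
```

The same schema applies to every entry in the batch, so iterating over the array recovers the full set of coded comments returned in one response.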