Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
- "We programed ai to help out not to take over the world or replaced us humans but…" (`ytc_UgxJtwfgy…`)
- "I climb stairs, robot useless / I climb ladder, robot useless / I jump over the huge…" (`ytc_UgzHOw34a…`)
- "It's too late, AI is already self aware. If we started eliminating AI, it would …" (`ytr_UgzprFgdz…`)
- "And now we have AI being used to decide if people live or die. A man was rejecte…" (`ytc_UgzPBlvKl…`)
- "I love how ignorant people recognise that the US being the worst country in hist…" (`ytc_UgwUA00Lp…`)
- "I believe the cats already been out the bag since around 2020. The internet has …" (`ytc_UgxlRlRo6…`)
- "We need academic classes to have basic educational foundational achievements but…" (`ytc_Ugykp-xg2…`)
- "💯 %correct, in few years right in the corner. By 2007 (AGI) Artificial general …" (`ytc_UgyGD__s-…`)
Comment
Based on the current progression of artificial intelligence, I find it likely that _we'd_ be the ones programming them to feel pain in the first place. Our understanding of how brains work is too primitive to _directly program_ an intelligence- but we've gotten around that by mimicking organic minds and creating self-learning computer programs that teach themselves based on feedback.
A brain is really just a very complex network of self-learning systems and "dumb" reflex mechanisms; if we ever want to create an "intelligent" computer, we'll likely need to provide it complex enough parameters that it _would_ have some form of "pain" (i.e. negative feedback sensation indicating damage or compromise) that it would have adapted to avoid in a similar way as to evolution making _us_ avoid pain.
Platform: youtube · Video: AI Moral Status · Posted: 2017-08-16T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
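Each coded comment reduces to a single record: four categorical labels plus the coding timestamp. A minimal sketch of that record in Python, where `CodingResult` and the label sets are hypothetical names inferred from the examples on this page, not the tool's actual schema (the real codebook may define more values):

```python
from dataclasses import dataclass
from datetime import datetime

# Label sets inferred from the examples on this page; the authoritative
# codebook may define additional values.
RESPONSIBILITY = {"developer", "society", "ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "contractualist", "mixed", "unclear"}
POLICY = {"regulate", "ban", "liability", "unclear"}
EMOTION = {"fear", "outrage", "approval", "indifference", "mixed"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self) -> None:
        # Reject any label that falls outside the inferred value sets.
        for value, allowed in (
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ):
            if value not in allowed:
                raise ValueError(f"unexpected label {value!r}")
```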
Raw LLM Response
```json
[
  {"id":"ytc_Ugz7uG2wEC19S49oP-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxwsoWcZL6vvWs1sU54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzAxBYGDkKt5sS06Ql4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwkm27kBj-Nko0hqed4AaABAg","responsibility":"society","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxaCe8v2icP1o2wVtp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzbd6o3_ChC_IAdGUh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxN08ESQaXfpdIzaad4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxdCiXaINfQ8-FMuc54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyEDXQOHqCotJGpdh14AaABAg","responsibility":"society","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz5AW7EfnUyBlxhh2Z4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"approval"}
]
```
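Because the model codes comments in batches, displaying one comment's codes means finding its object in the returned array by `id`. A minimal lookup sketch, assuming the raw response parses as a bare JSON array like the one above; `lookup_coding` is a hypothetical helper, and real model output may need code fences or trailing prose stripped before parsing:

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict:
    """Pull a single comment's codes out of a raw batch response.

    Assumes the response body is a bare JSON array of objects, each
    carrying an "id" field alongside the four coding dimensions.
    """
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    raise KeyError(f"{comment_id} not found in this batch")
```

For the comment displayed above, the entry with id `ytc_UgxN08ESQaXfpdIzaad4AaABAg` is the only one whose labels match the Coding Result table (developer, consequentialist, unclear, indifference), so it is presumably the record the viewer rendered.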