Raw LLM Responses
Inspect the exact model output for any coded comment.
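Inspecting the output for a specific coded comment amounts to parsing each raw response (a JSON array of coded records) and indexing the records by comment ID. A minimal sketch, assuming the raw responses are stored as JSON strings; the storage format, function name, and `ytc_example1` ID here are hypothetical:

```python
import json

# Hypothetical raw model output: a JSON array of coded comment records,
# matching the shape shown in the "Raw LLM Response" section below.
RAW_RESPONSE = '''
[
  {"id": "ytc_example1", "responsibility": "company",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"}
]
'''

def index_by_comment_id(raw_responses):
    """Parse each raw LLM response and build a comment-ID -> record lookup."""
    index = {}
    for raw in raw_responses:
        for record in json.loads(raw):
            index[record["id"]] = record
    return index

lookup = index_by_comment_id([RAW_RESPONSE])
print(lookup["ytc_example1"]["policy"])  # regulate
```

Later records with a duplicate ID overwrite earlier ones here; a real pipeline might instead flag duplicates for review.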
Comment
6:13 - 10:56
If I'm following this correctly, then the argument presented here is basically an argument against using a brain-mapped dataset to create A.I., as that would lead to the creation of "clones", robots so human as to be moral patients.
The problem is that if we discover an ethical, non-invasive way to map the brains of infants, aggregate those scans, and use the aggregate to randomly produce computerized brains (A.I.), then you *will* have A.I. that are moral patients. If this is possible then it is inevitable; you're not going to stop China, India, or Japan from doing it if they see a benefit in doing it.
Source: youtube · Video: AI Moral Status · Posted: 2020-07-13T02:1… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwqoed4lf_k2U0ltB94AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz65U1X58QEexSDBx94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwyjPvCUroBKZ62kxl4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxJxByxdAhlPXPTdXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwmcrwaXmz2NG4URFZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"approval"},
{"id":"ytc_UgwBUyKtpakcyl4wwIl4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugx60WlpA2rlF7T60LZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzwK0SbACHJ5NtbXL54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgwlrLg5UKyLe_u7rrd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyRm2GnzhSvWmXj-YR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
```
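Because the model returns free-form text that merely claims to be JSON, a coding pipeline normally validates each record against the codebook before accepting it. A minimal sketch: the allowed category sets below are inferred only from the values visible on this page, and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension, inferred from the sample output above.
# Assumption: the actual codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "user", "government",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none",
               "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed", "indifference",
                "resignation"},
}

def validate_response(raw):
    """Split a raw JSON response into valid records and
    (id, dimension, value) violations."""
    ok, bad = [], []
    for record in json.loads(raw):
        violations = [(record.get("id"), dim, record.get(dim))
                      for dim in ALLOWED
                      if record.get(dim) not in ALLOWED[dim]]
        if violations:
            bad.extend(violations)
        else:
            ok.append(record)
    return ok, bad

# Usage with one valid and one invalid record (both hypothetical):
sample = json.dumps([
    {"id": "ytc_ok", "responsibility": "company", "reasoning": "mixed",
     "policy": "regulate", "emotion": "fear"},
    {"id": "ytc_bad", "responsibility": "aliens", "reasoning": "mixed",
     "policy": "regulate", "emotion": "fear"},
])
ok, bad = validate_response(sample)
print(bad)  # [('ytc_bad', 'responsibility', 'aliens')]
```

Records that fail validation would typically be queued for re-prompting or manual coding rather than silently dropped.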