Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
My personal take: Any entity able of self-reflection (which is a more nuanced take on consciousness) deserves the same rights, most notably including such things as self-determination (because I can't see any kind of society working out long-term, where a being able to reflect upon it's action is not allowed to actually determine itself which actions it wants to take).
The only problem/question left with that philosophy is determining who is able to self-reflect. Possibly quite a few animals beyond just the homo sapiens sapiens, and as for AI, Neural Networks might come very close (since they 'write their own code'... just that they rely on outside input for evaluation of results (scoring) and afaik tend to write their code randomly)... and any Post-Singularity AI would fulfill this criteria by definition.
Source: youtube · AI Moral Status · 2021-05-03T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzY9MQI6DX_CCOFXb94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzKJIEuhxY-PXo3Kkx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugxhqe0SG50DCpK3xo94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwG83mfjMxmSl1KXVR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzgCs0eAsqI8ZYDNHR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwuyJDKroN6U71Mni54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxfdAbhMkl7VF7nId14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx25490C9O0istXlBx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzNlya-f1wMwjftKcp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw1GgJxNzYvlH5bWJR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
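A response like the one above can be loaded and checked programmatically before the codes are stored. The sketch below is a minimal example, assuming the label vocabularies visible in this batch (the real codebook may define additional values): it parses the JSON array, validates each record's four dimensions, and indexes the records by comment ID for lookup.

```python
import json

# Allowed label sets per coding dimension, inferred only from the records
# shown on this page -- an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "industry_self", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "indifference", "fear", "outrage", "unclear"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw JSON batch from the coder model.

    Returns a dict mapping comment ID -> coded record, raising ValueError
    if any record carries an out-of-vocabulary label.
    """
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id
```

With the records indexed by ID, the "Coded at" table shown above corresponds to a single dictionary lookup on the comment's ID.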