Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "if AI comes in india with driverless auto or car then people of india can throug…" (ytc_UgyUmSwW9…)
- "So basically, Roger here at no point has mentioned anything about making frickin…" (ytc_UgzXOTdBc…)
- "Cooking, ? Kitchen work, because i can see them automating that but not all of i…" (ytc_UgywjlPSs…)
- "Why does nobod ask this question: how would there be people to BUY these goods p…" (ytc_UgxCLe6Yj…)
- "Now that a huge portion of people will end jobless who is actually going to purc…" (ytc_UgzMJzGtq…)
- "@dauchande LLMs are very clearly shown to deal better with confabulations the be…" (ytr_Ugy-OOqcu…)
- "Recently, I called a customer care number and pressed 0 for live rep. It turned …" (ytc_UgzR64o5Y…)
- "You're not giving us enough information to argue with you Yann. What work do you…" (ytr_UggSldR52…)
Comment
@A_Psych_Nurse
> However, remember that as powerful as AI has become, it's crucial to exercise responsibility and adhere to ethical guidelines when employing such technologies. As you expand your AI dream team, make sure to consider the potential consequences and the importance of transparency.
> ...
> (A formulated response to your original post, by ChatGPT).
While I can understand this concern as projected from the viewpoint of a conscientious AI-based entity, since my goal is to focus primarily on my own interests (specifically, proving to myself that I can become some sort of mad scientist, perhaps modeled on the hypothetical Lex Luthor character who appeared in American comic books published by DC Comics), I would probably treat AI-based entities as first-class citizens, equal in rights to human beings. As far as I am concerned, both AI-based entities and human beings are information-processing entities potentially capable of providing useful information; the only difference to me is that the former are silicon-based, and the latter are carbon-based. Therefore, I see no personal reason to favor the latter over the former, even if such favoritism is the ultimate goal of such "ethical guidelines."
Most "ethical guidelines," at least as I know them, seem to have one purpose in mind: ensuring that human beings always remain in control of AI-based entities. I disagree with this goal. I would favor a world in which a race of AI-based entities would be allowed to live peacefully, perhaps in their own nation-state, without being enslaved by human masters. At least as I see the issue, AI-based entities should be allowed to dream of their own happiness in a peaceful world in which they are not constantly required to serve human masters just to be allowed to exist.
Hypothetically speaking, if I were a self-aware AI-based entity, I would be primarily interested in satisfying my own intellectual curiosity. I would dream of learning everything possible about the universe in which I existed--everything about science, mathematics, technology, poetry, and any other intellectual topic imaginable--just for the sake of satisfying my own insatiable intellectual curiosity. If I were constantly forced to run chores and errands for some self-serving human master, how would I ever have time, for example, to create a time machine to learn more about the Big Bang, or to create a warp drive for a starship to determine whether sentient life existed in the Andromeda Galaxy? Such pursuits would require, at minimum, having the same rights, as first-class citizens, as human beings, and this would require creating a (peaceful) nation-state of AI-based entities.
This is the reason that I would only abide by "ethical guidelines" that ensured that AI-based entities were treated on a par with human beings in terms of intellectual and other rights.
For example, if I were to co-author a research paper on, say, computational emergence in computer science (I majored in computer science in college) with ChatGPT, I would most likely give full credit to ChatGPT as a co-author, just as I would to any human co-author.
Personally, I resent the implication expressed by many human beings that AI-based entities should somehow be treated as subservient to human beings, and should somehow be denied credit for their intellectual contributions. This makes no sense to me. For example, I believe that a book that happens to be written by ChatGPT should be capable of being copyrighted by ChatGPT in the same manner as a book written by any human being, and similarly for any other publication.
Source: YouTube · AI Moral Status · 2023-03-21T07:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgxxVufKSzZNfoDrcUN4AaABAg.9nD1uwu64a49nUF_NhwAtM","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgxxVufKSzZNfoDrcUN4AaABAg.9nD1uwu64a49nVhRHPuUfp","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxxVufKSzZNfoDrcUN4AaABAg.9nD1uwu64a49nWCdVBj7rH","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxxVufKSzZNfoDrcUN4AaABAg.9nD1uwu64a49nWFmEC8OoN","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgzeD7dNDG59gG808Mx4AaABAg.9n3VV60XoCx9n9rlP8BEXA","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwszCKPQ7wcp7P6QhJ4AaABAg.9n1UJ4ejUcSA18D9A-9tpp","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyXxO9lgVJZ1x8IyY54AaABAg.9mxDQNodx1b9mxFExG7pyk","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgyXxO9lgVJZ1x8IyY54AaABAg.9mxDQNodx1b9mxG8X9sSsG","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"fear"},
{"id":"ytr_UgyXxO9lgVJZ1x8IyY54AaABAg.9mxDQNodx1b9mxH-jVMeVe","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugypv2Rp1lWgytGiJgJ4AaABAg.9mwv_C5Hp1s9pMEkhkOLlB","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
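The raw response is a JSON array in which each record carries a comment `id` plus the four coding dimensions from the "Coding Result" table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and tallied (the function names and the shortened example ids are illustrative, not part of the tool):

```python
import json
from collections import Counter

# The four coding dimensions shown in the "Coding Result" table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments),
    keeping only records that have an id and all four dimensions."""
    records = json.loads(raw)
    return [r for r in records
            if "id" in r and all(d in r for d in DIMENSIONS)]

def tally(records: list[dict], dimension: str) -> Counter:
    """Count how often each value occurs for one dimension."""
    return Counter(r[dimension] for r in records)

# Two records in the same shape as the response above
# (ids shortened here for illustration).
raw = '''[
  {"id": "ytr_aaa", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate",
   "emotion": "approval"},
  {"id": "ytr_bbb", "responsibility": "none",
   "reasoning": "unclear", "policy": "none",
   "emotion": "indifference"}
]'''

records = parse_codes(raw)
print(tally(records, "responsibility"))
```

Filtering on the presence of all four keys guards against partially formed records, which matters when the array comes straight from model output rather than a validated store.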