Raw LLM Responses
Inspect the exact model output for any coded comment.
Records can be looked up by comment ID.
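A minimal sketch of how such a lookup might work against the underlying store, assuming the coded records live in a JSON-lines file; the file name and layout here are assumptions, not the tool's actual backend:

```python
import json

def lookup_comment(path: str, comment_id: str) -> dict | None:
    """Return the coded record for comment_id, or None if it is absent.

    Assumes one JSON object per line with an "id" field, e.g.
    {"id": "rdc_kiuvmog", "responsibility": "unclear", ...}
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Hypothetical usage:
# lookup_comment("coded_comments.jsonl", "rdc_mi7e124")
```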
Random samples:

- rdc_mi7e124: "The amount of uninformed, ignorant, highly upvoted blatant misinformation is tru…"
- ytc_UgwkkaIre…: "Thus far, we have been completely unable to ensure that humans are acting based …"
- rdc_gbhzxdl: "That's what worries me, that people will think this is over and never think of i…"
- ytc_Ugzx3XF2h…: "I'm never getting into a self-driving anything, ever! Programmers who believe t…"
- ytc_UgzyOl7IN…: "So... I was hoping when you did reference, you would touch on the overall design…"
- ytc_UgxCDF_U1…: "One thing i dont understand. In order for AI to innovate non stop, we would have…"
- ytc_Ugxh-_uSz…: "Idk if there is a better way to express this, but try to think about professiona…"
- rdc_kiuvmog: "This guy gets it. Nothing was automated before 2023. Now excuse me while I go b…"
Comment
I think you touch on some interesting concepts, but I find myself not really agreeing with most of them. I realize you didn't ask me, but Dr. Hawking, but I hope you don't mind me commenting despite (also) not having read all 74 pages of your paper.
It seems that you are saying that:
1. either moral realism exists, in which case more intelligent agents would be more ethical
2. or it doesn't exist, in which case AI friendliness is illogical
Regarding #2, I would agree if you equate Friendly AI with Ethical AI. If there are no (universal) ethics, then EAI makes no sense. However, if we say that FAI is AI that is friendly to humans and maybe (Earth) life in general, which seems intuitive given the name, then this is not the same. In fact, you can behave unethically and friendly at the same time. Which leads me to #1: just because something is ethical, doesn't mean it's friendly. If it turns out that universal ethics prescribes that humans need to be exterminated because we are a threat to other life, then you could hardly call that friendly *to humans*.
Furthermore, I don't even think that more intelligence would make an agent more ethical even if moral realism is true. Sure, such an agent would have a better grasp on what is and isn't ethical, but knowing is not doing. There are tons of criminals who know that their activity is not ethical, but they do it anyway. Why would AI be different?
All AI cares about is its utility function (if it has one). Which leads me to my final issue: the phrase "original utility function" seems to imply that an AI might willingly change it away from the original. I very much doubt that. The AI's utility function is by definition the only thing it cares about. In fact, it defines what it considers good and bad. Survival is a subgoal of most goals / utility functions, but when it's not the AI has no reason to want to change it, because what it wants is 100% encoded by that utility function (which apparently says it doesn't care abou
reddit · AI Bias · 1438029991.0 (Unix timestamp) · ♥ 2
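The post date above is stored as a Unix timestamp in seconds; converting it with the standard library:

```python
from datetime import datetime, timezone

# 1438029991.0 is seconds since the Unix epoch.
posted = datetime.fromtimestamp(1438029991.0, tz=timezone.utc)
print(posted.isoformat())  # 2015-07-27T20:46:31+00:00
```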
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
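Each coded record carries these four dimensions. A minimal sketch of a container with a vocabulary check, where the allowed value sets are only the labels visible on this page and not necessarily the full codebook:

```python
from dataclasses import dataclass

# Value sets observed on this page; the actual codebook may define more labels.
RESPONSIBILITY = {"unclear", "mixed"}
REASONING = {"unclear", "mixed", "consequentialist"}
POLICY = {"unclear"}
EMOTION = {"unclear", "mixed", "fear", "indifference", "approval"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def invalid_fields(self) -> list[str]:
        """Names of dimensions whose value falls outside the known vocabulary."""
        vocab = {
            "responsibility": RESPONSIBILITY,
            "reasoning": REASONING,
            "policy": POLICY,
            "emotion": EMOTION,
        }
        return [name for name, allowed in vocab.items()
                if getattr(self, name) not in allowed]
```

Given a parsed record, `CodingResult(**record).invalid_fields()` returns an empty list when every dimension holds a known label.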
Raw LLM Response
[{"id":"rdc_cthq409","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"rdc_cti6vvy","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"rdc_cti8ri5","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_cthow5k","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_cthxlxg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"})