Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_Ugyj_I-Tp… : "YES thank you. I've been saying this for a while now, ever since all this A.I. s…"
- ytc_Ugz5T85Uw… : "I am very upset with our current government and realize even a flawed dictator w…"
- ytc_UgyXy4S1A… : "So for everyone here, I'm really curious - Let's say I wanted to make an AI for…"
- ytc_Ugz7AejvO… : "You spent half the episode discussing the narrators prosthetic arm but not a sin…"
- ytr_UgyzC7hUw… : "@ivankaramasov He is wrong. He is ignoring why were are sentient. We are sentien…"
- ytr_UgxPyARTg… : "Thank you for sharing your thought! The movie "I, Robot" definitely explores int…"
- ytc_UghU4G7qx… : "Bring on AI, I will be very interested what it decides to do with us.…"
- ytr_Ugw-vK8R4… : "Current AI, yes. The 'risky AI' part comes up when the AI gets to be smart. Unti…"
Comment
How is it an AI if its objective is only the optimization of a human defined function? Isn't that just a regular computer program? The concerns of Hawking, Musk, etc. are more with a Genetic Intelligence that has been written to evolve by rewriting itself (which DARPA is already seeking), thus gaining the ability to self-define the function it seeks to maximize.
That's when you get into unfathomable layers of abstraction, interpretation, and abstraction. You could run such an AI for a few minutes and have zero clue what it thought, what it's thinking, or what avenue of thought it might explore next. What's scary about this is that certain paradigms make logical sense while being totally horrendous. Look at some of the goals of Nazism. From the perspective of a person who has reasoned that homosexuality is abhorrent, the goal of killing all the gays makes logical sense. The problem is that the objective validity of a perspective is difficult to determine, and so perspectives are usually highly dependent on input. How do you propose to control a system that thinks faster than you and creates its own input? How can you ensure that the inputs we provide initially won't generate catastrophic conclusions?
The problem is that there is no stopping it. The more we research the modules necessary to create such an AI, the more some researcher will want to tie it all together and unchain it, even if it's just a group of kids in a basement somewhere. I think the morals of its creators are not the issue so much as the intelligence of its creators. This is something that needs committees of the most intelligent, creative, and careful experts governing its creation. We need debate and total containment (akin to the Manhattan Project) more than morally competent researchers.
reddit · AI Bias · 2015-07-27 UTC (Unix 1438019917) · ♥ 65
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_lv8lnbd","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_lv8cgsc","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_cthw656","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_cthxq37","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_cthzy1i","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
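A response like the one above can be turned into a per-comment lookup with a few lines of Python. The sketch below is an assumption about how such a response might be post-processed, not this tool's actual code; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown here, while the sets of allowed values are only those observed on this page and may be incomplete.

```python
import json

# Values observed in the responses on this page; the full codebook may
# allow more (assumption).
OBSERVED_VALUES = {
    "responsibility": {"developer", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "indifference", "mixed", "approval"},
}


def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch-coding response into {comment_id: codes}."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Missing dimensions default to "unclear" rather than failing.
        codes = {dim: rec.get(dim, "unclear") for dim in OBSERVED_VALUES}
        # Flag out-of-vocabulary values instead of rejecting the record.
        codes["flags"] = [
            dim for dim in OBSERVED_VALUES
            if codes[dim] not in OBSERVED_VALUES[dim]
        ]
        coded[rec["id"]] = codes
    return coded


raw = '''[
 {"id":"rdc_lv8lnbd","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_cthzy1i","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]'''
coded = parse_coding_response(raw)
print(coded["rdc_cthzy1i"]["emotion"])  # fear
```

Defaulting to "unclear" and flagging unexpected values (rather than discarding records) keeps every comment inspectable, which matches this page's purpose of checking the exact model output per comment.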