Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I tried to get ChatGPT to do this and it wouldnt lol 😂so I don’t believe any of …
ytc_UgwphSwZZ…
Do the workers rights (I used "do" knowing that it's not used that way), corpora…
ytc_Ugw3lqhXb…
I'm sorry, but Shad sounds so pretentious and self important. While he can draw …
ytc_Ugxtiakxk…
Not all AI projects are massive and super expensive that can only be built by co…
ytc_UgykJPMr3…
The video saying ai is dangerous and needs to be stopped, an ai ad playing right…
ytc_UgxJqb9bi…
We all did this by ourselves by starting to buy online because we wanted things …
ytc_Ugx17o3_y…
I'm an artist in the way that I'm an indie game developer, with no released prod…
ytc_Ugwzi6qVq…
as an artist before ai art. who gives a shit. its ego stroking. Paint for recogn…
ytc_Ugy9DqiIB…
Comment
> I believe we miss-understand AI based on the fears of what movie producer and directors were scared about decades ago. It will never be a evil machine that decides by themselves what they want to do.
Yes. [It's worse](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities). Maybe [this book](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies) would interest you.
I recommend [this fiction](https://www.gwern.net/Clippy), written to be a relatively realistic/probable illustration of what might happen.
> The biggest problem with AI's is that it will learn patterns from failed humans. Racism, sexism and many other discrimination patterns will end up in the machine, which will be more powerful in the hands of powerful people rasing the power discrepancy.
It's an incredibly shallow way of looking at it. Consider GPT-3. It's a language model: it's supposed to give an accurate probability distribution over the next token, given any list of tokens before it. It is trained on a corpus of roughly all available text (not literally all, but huge enough that the difference may not matter much) to learn to do that. The bigger the model is, and the more (GPU-)time it spends training, the more accurate it becomes.
Now, the corpus will contain racism, sexism, etc., so GPT will be able to output that. Is that _bias_, though? Wouldn't it be bias if it couldn't? IMO it's not bias. It's supposed to be a _language model_, and fighting against "bias" makes it worse at being one.
A lot of the criticism was about gender vs. occupation. But if some occupations _are_ gender-skewed, and we _talk about it_, then what is a "non-biased" language model supposed to do? Output falsehoods? Is that non-bias?
A more agent-like, hugely powerful AI would also learn these things, the same way a language model does. To the extent these are stereotypes and falsehoods, it will know that too.
> We have to aim to a AI that is different than us on our prejudices. So I think the questions should be:
This makes me t
reddit
AI Moral Status
2022-06-15 (Unix 1655293415)
♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_icg1fkb","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_icg3nfm","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_icfzij7","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_icg30ae","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_ichf075","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
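The raw response above is a JSON array of per-comment codes keyed by comment ID. As a minimal sketch of how such a response might be parsed and validated before filling the coding-result table, the snippet below checks each record against an allowed-value set. Note the allowed values are only those visible on this page, an assumption rather than the tool's actual schema, and `parse_codes` is a hypothetical helper name:

```python
import json

# Allowed dimension values, inferred from the examples on this page.
# This is an assumption -- the real coding schema likely has more values.
ALLOWED = {
    "responsibility": {"none", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear", "liability"},
    "emotion": {"approval", "fear", "outrage", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes},
    dropping any record with an unknown dimension value."""
    out = {}
    for rec in json.loads(raw):
        codes = {k: v for k, v in rec.items() if k != "id"}
        if all(v in ALLOWED.get(k, set()) for k, v in codes.items()):
            out[rec["id"]] = codes
    return out

raw = '''[
  {"id":"rdc_icg1fkb","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_icg3nfm","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''
codes = parse_codes(raw)
```

Records with values outside the allowed sets are silently dropped here; a production pipeline would more likely flag them for re-coding.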