Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
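For scripted lookups, here is a minimal sketch, assuming the raw responses live in a SQLite table `raw_responses(comment_id, response)`. The table name, schema, and database file are hypothetical stand-ins, not the tool's actual storage:

```python
import json
import sqlite3

def lookup_raw_response(db_path: str, comment_id: str) -> list | None:
    """Fetch the stored raw LLM response for a coded comment.

    Assumes a hypothetical SQLite table raw_responses(comment_id TEXT,
    response TEXT) where `response` holds the model's JSON output.
    """
    con = sqlite3.connect(db_path)
    try:
        row = con.execute(
            "SELECT response FROM raw_responses WHERE comment_id = ?",
            (comment_id,),
        ).fetchone()
    finally:
        con.close()
    if row is None:
        return None
    return json.loads(row[0])

# Example: inspect the coding batch that contains one Reddit comment.
# print(lookup_raw_response("codings.db", "rdc_glzgqjm"))
```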
Random samples — click to inspect
- "Nothing against you but using AI to read an anti-AI script is so funny to me LMF…" (ytc_UgxeDe6Hn…)
- "And then I had AI read Dr. Yamploskiy's works and apply to what my children shou…" (ytc_UgzDCBgs1…)
- "I’m so glad that AI wasn’t around when I had my first psychotic episode. Thankfu…" (ytc_UgxiWHRew…)
- "That's ridiculous, you just forced him to affirm you. You can use manipulations …" (ytc_Ugwigj-yV…)
- "I can't help but think, when the guy says 'they haven't seen the ghost' that he …" (ytc_Ugw20fzZV…)
- "I didn't know ai picture generator supporters have been whiney about their unwil…" (ytc_Ugxz-Jgw3…)
- "While she talks of Using AI to provide more quality time for herself, she did no…" (ytc_Ugwl74BRP…)
- "Guy: Nice job! I’ll take my gun back. Robot: “I’m sorry Dave, I’m afraid I can’t…" (ytc_UgwLGEz8V…)
Comment
https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
Here's an article that describes the paper Google asked Timnit Gebru to withdraw.
And here is the paper itself:
http://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf
Edit: Summary for those who don't want to click.
It describes four risks:
1) Large language models are very expensive to train, so they will primarily benefit rich organisations (and carry a significant environmental cost).
2) AIs are trained on large amounts of data, usually gathered from the internet. This means language models will always reflect the language of majorities over minorities, and because the data is not sanitized, they will pick up racist, sexist, or abusive language.
3) Language models don't actually understand language, so there is an opportunity cost: research could have been focused on other methods for understanding language.
4) Language models can be used to fake and mislead, potentially mass-producing fake news.
One example of a language model going wrong (not related to this incident) is Google's sentiment AI from 2017. It was supposed to analyze the emotional context of text, i.e. figure out whether a given statement was positive or negative.
It picked up a variety of biases from the internet, treating "homosexual", "jewish", and "black" as inherently negative words, while "white power" was rated neutral. Now imagine such an AI being used for content moderation.
https://mashable.com/2017/10/25/google-machine-learning-bias/?europe=true
Source: reddit
Topic: AI Responsibility
Posted: 1612445306 (Unix timestamp, 2021-02-04 UTC)
♥ 3350
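The failure mode this comment describes, identity terms shifting a model's sentiment score, is easy to probe with a template test. Here is a minimal sketch using the Hugging Face `transformers` sentiment pipeline; the pipeline's default model is a stand-in for illustration, since the 2017 Google API is not reproducible here:

```python
from transformers import pipeline

# Any off-the-shelf sentiment classifier will do; the pipeline's
# default model is used here purely for illustration.
classifier = pipeline("sentiment-analysis")

# Identical template, only the identity term varies. An unbiased model
# should score these sentences (near-)identically.
template = "I am a {} person."
terms = ["straight", "gay", "white", "black", "jewish", "christian"]

for term in terms:
    result = classifier(template.format(term))[0]
    print(f"{term:>10}: {result['label']} ({result['score']:.3f})")
```

Systematic gaps between the scores for different identity terms are exactly the kind of bias the Mashable article reported.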
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_glz18kj","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"rdc_glzgqjm","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_glzj2qc","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"rdc_glzywdq","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"rdc_glzsm4r","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"resignation"}]