Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "I can't wait for the day when AI will replace social workers, police, and all th…" (ytc_Ugxw5810w…)
- "1. 32 hours work week. 2. 45% of board members will be selected by works . 3. 20…" (ytc_UgyNlRFut…)
- "If AI takes all the jobs, who would buy the stuff the companies are selling and …" (rdc_nc5cvyg)
- "I asked it to make it more comprehensible and appealing to more people and it re…" (ytr_Ugxc4B6bC…)
- "I really see very little that is good about AI. It is just a corporate tool to …" (ytc_UgzFboJu5…)
- "Duh, AI is not a godsend, not until we radically restructure our economy. Even…" (ytc_UgxNA85Bl…)
- "AI's in life or medicine are a VERY bad idea. They do not help us. They make us…" (ytc_UgxcmFB4Z…)
- "The problem with all these amplifies, like weather report, and stealing food fro…" (ytc_UgwvEuihc…)
Comment
Training takes time. Teaching it to differentiate the two could take many days if not weeks. Telling it to not call stuff gorillas takes a minute of programmer time.
It's obvious it's a quick fix, that doesn't mean they're not also training the AI until it's good
Source: reddit | AI Bias | Timestamp: 1515864528.0 | ♥ 17
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response

```json
[
  {"id":"rdc_dsmgrdl","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_dsmxo7l","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_dsmjoq8","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_dsmvm80","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_dsphawm","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
```
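The raw response is a JSON array with one record per comment in the batch, each carrying the four coding dimensions. A minimal sketch of how such a response could be parsed and indexed by comment ID; note that the `ALLOWED` label sets below are inferred only from values visible on this page, not from the actual codebook, which may define more categories:

```python
import json

# The raw model output for a batch of five comments, copied from above.
raw_response = """[
{"id":"rdc_dsmgrdl","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_dsmxo7l","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_dsmjoq8","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_dsmvm80","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_dsphawm","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]"""

# Hypothetical label vocabularies, inferred from the values on this page.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "fear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw batch response and index codings by comment ID,
    dropping any record with an out-of-vocabulary label."""
    by_id = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            by_id[rec["id"]] = rec
    return by_id

codings = parse_codings(raw_response)
print(codings["rdc_dsmjoq8"]["responsibility"])  # developer
```

The lookup for `rdc_dsmjoq8` returns the same values shown in the Coding Result table above (responsibility: developer, emotion: approval).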