Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment (quoted verbatim)

> The problem with algorithms is that they appear to be unbiased. But if we train the algorithm with biased data that we generate, then the algorithm will inherit that bias. Because people trust algorithms to be unbiased, they assume the result to be unbiased when it is only reinforcing the biased we trained it with.
> The examples presented may be okay, but I would want to see research to check for bias results before they get too much sway.

Source: youtube · Topic: AI Harm Incident · Posted: 2017-05-31T17:4… · ♥ 44
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
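A coding result like the one above can be sanity-checked against the label sets that actually appear in the model output. A minimal validation sketch; note the allowed values below are only those observed in the sample raw response on this page, and the real codebook may define more:

```python
# Label sets observed in the sample raw LLM response on this page.
# ASSUMPTION: the actual codebook may include additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "unclear"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"none", "regulate", "liability", "industry_self", "ban", "unclear"},
    "emotion": {"approval", "mixed", "fear", "indifference", "unclear"},
}

def validate_coding(coding: dict) -> list:
    """Return a list of problems; an empty list means the coding is well-formed."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = coding.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected value for {dim}: {value!r}")
    return problems

# The coding shown in the table above passes cleanly:
result = validate_coding({
    "responsibility": "developer",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "fear",
})
print(result)  # → []
```

This kind of check catches model outputs that drift outside the codebook before they are stored alongside the comment.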
Raw LLM Response
[{"id":"ytc_UgjiW8Af5fE9mHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UghR_Paq043hzngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UggHJuf7OhBringCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UghV0NZk51v9bngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMC9M3NzJZ5Bl-Qb94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzSYTH9ZRvLgUJPgJR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxHOUL7A_PR_eCP9Qt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwFpx4PE6BN2q36PFd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgziV-S5AeDv4qRI7rp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugygcm2gcnye5AGzbk54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}]