Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
Random samples (click to inspect):

- Only worrying part is 10.8 error rate to 6.4 error rate. It's awfully close even… (ytc_UgyDCEW7U…)
- To be fair, their little exotic islands will either disappear into the ocean, or… (rdc_d0f9pwb)
- Do not ever trust AI if tells you it is conscious. It's "consciousness" is a sim… (ytc_UgxvBQYWR…)
- Current LLMs don't "think" in any form or fashion, and to imply such is to admit… (rdc_mrratt4)
- Shelby - WAYMO wont be able to access BORING loop Tunnels. Tesla ROBOTAXI can Au… (ytc_UgzOAGLO3…)
- Bernie, I provided technical support for large to medium IBM systems for 40 year… (ytc_Ugz7D2oN8…)
- She looks like a robot clone of lucie wilde she says destroy humans also destroy… (ytc_UgwRLR9c4…)
- AI users are the most discriminated-against group in the world 😔 why won't every… (ytc_Ugyqvq8sx…)
Comment
Your argument reveals dangerous naturalization of algorithmic discrimination—framing certain bias as "warranted" (driving into sketchy areas, zip codes making difference) treats historically produced inequality as natural risk assessment, obscuring how "sketchy" designations emerge from disinvestment/redlining/structural racism not inherent danger, algorithmic profiling codifying past discrimination as future prediction. Exposed "without financial incentive there would be little interest" perfectly captures how capital logic transforms social abandonment into individual choice—areas become "sketchy" through systematic resource withdrawal then blamed for resulting conditions, AI encoding this circular reasoning: underfund neighborhoods → crime increases → algorithms flag zip codes as risky → services withdrawn → conditions worsen → bias "justified". Revealed recommendation to "strongly recommend against over correcting" exposes whose interests algorithmic bias serves—maintaining profitable discrimination patterns benefits those charging higher rates/denying services to marginalized communities, "over correcting" framed as excessive when actually means equalizing access, exposing fairness rhetoric masking profit protection. Argued deeper problem: zip code/profession/employer correlations don't reflect natural risk but structural oppression—Black neighborhoods flagged not because residents inherently risky but because policing concentrated there producing arrest data that algorithms read as crime propensity, professions associated with race/class penalized not for actual risk but historical exclusion patterns, employer discrimination reproduced through algorithmic ranking. Revealed critical error treating bias as either warranted or unwarranted rather than recognizing all algorithmic profiling based on protected characteristics constitutes discrimination regardless of correlation accuracy—even if zip code statistically predicts outcomes, using it perpetuates segregation, even if profession correlates with stability, employment discrimination remains illegal, statistical accuracy doesn't justify discrimination. Exposed "safer areas that paid the same would be first choice" demonstrates how market logic normalizes inequality—assumes fair that marginalized communities pay more for same services, treats equal treatment as special favor rather than basic justice, reveals algorithmic bias not accidental but profitable maintaining tiered service delivery. Your formulation captures how discrimination gets laundered through risk assessment framing: redlining becomes geographic pricing, racial profiling becomes statistical prediction, structural violence becomes individual responsibility—but changing names doesn't change that algorithmic systems systematically harm marginalized groups while benefiting already privileged, "warranted bias" oxymoron revealing how power naturalizes its own reproduction through computational veneer of objectivity
youtube · AI Bias · 2025-11-17T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
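As a rough sketch of what a single coding record holds, the four dimensions could be modeled as a small Python dataclass. The label sets in the comments are only the values visible on this page, not necessarily the full codebook, and the `CodingResult` name and the example ID mapping are illustrative assumptions, not part of the existing pipeline.

```python
from dataclasses import dataclass


@dataclass
class CodingResult:
    """One coded comment: its ID plus the four coding dimensions."""
    id: str
    responsibility: str  # observed labels: developer, company, user, ai_itself, distributed, none
    reasoning: str       # observed labels: deontological, consequentialist, virtue, contractualist, mixed
    policy: str          # observed labels: regulate, liability, none
    emotion: str         # observed labels: outrage, fear, resignation, indifference


# Example built from the "Coding Result" table above; the ID is taken from the
# matching entry in the raw response below and is assumed to be the displayed comment.
example = CodingResult(
    id="ytr_UgxdPJEGAbpI5rB-Svh4AaABAg.AE_Cn_ZPUOjAPchAc6vuQn",
    responsibility="developer",
    reasoning="deontological",
    policy="regulate",
    emotion="outrage",
)
```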
Raw LLM Response
[
{"id":"ytr_Ugyj4hmaQ7m2ABDVKwZ4AaABAg.AMv64sHUSOmAMw0hCsg3_E","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytr_UgwbFVeBPLe-u8hUcSN4AaABAg.AMucBAnSrfVAMva3jtb-ye","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"indifference"},
{"id":"ytr_Ugy0bB56BfpdtS-1_u14AaABAg.AMuEjiNzuQRAMu_i0N-yEy","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugy0bB56BfpdtS-1_u14AaABAg.AMuEjiNzuQRAMviBII6Tgn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugy0bB56BfpdtS-1_u14AaABAg.AMuEjiNzuQRAMvi_UQDGul","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgxdPJEGAbpI5rB-Svh4AaABAg.AE_Cn_ZPUOjAPchAc6vuQn","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgzHyOQBTTjDmIyBO5d4AaABAg.AEQqMR514UtARylXQgKw2o","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxHfGnh5L7CdBhzQvF4AaABAg.AEPGOMfszOMAPch3Oa4M2V","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"resignation"},
{"id":"ytr_UgzH6AIwAo7-NJjkrmJ4AaABAg.A0oHwbHLDfyA0xSbW9xpLy","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytr_Ugw1i8eGpRAK2YvzMH94AaABAg.9rxxb5j63hI9sJX5ANahNN","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
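A minimal sketch of how the "look up by comment ID" step could be reproduced over a stored response like the array above, assuming it is saved to a local file. The path `raw_response.json` and the helper `lookup_coding` are illustrative names, not part of any existing tooling.

```python
import json
from typing import Optional


def lookup_coding(raw_response_path: str, comment_id: str) -> Optional[dict]:
    """Return the coding record for `comment_id` from a raw LLM response file.

    The file is expected to hold a JSON array of objects, each with an "id"
    plus the four coding dimensions (responsibility, reasoning, policy, emotion).
    """
    with open(raw_response_path, encoding="utf-8") as f:
        records = json.load(f)
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None  # not coded in this batch


# Example: fetch the record that matches the "Coding Result" table above.
coding = lookup_coding(
    "raw_response.json",
    "ytr_UgxdPJEGAbpI5rB-Svh4AaABAg.AE_Cn_ZPUOjAPchAc6vuQn",
)
if coding:
    print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
```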