Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytr_UgxdTXr9V… — "Yeah look at the list of Waymo accidents, this is pure insanity. The only reason…"
- ytc_UgyGCJUbT… — "Generative AI is but one of a diverse tapestry of mediums to produce art, so the…"
- ytc_Ugx3_Cvnb… — "4:20 if people are being honest with themselves it’s obvious that places like op…"
- ytc_UgxdSseBi… — "Automation and technology has been replacing jobs for thousands of years. Amazi…"
- ytr_UgzUAKHAy… — "@bigguy74genx face recognition can show how focus you are and can track where y…"
- ytc_UgxauqgFu… — "naaa i think you work for the Chinese gov. Karen Hao. You don't know what you ar…"
- ytc_Ugwtq_2xt… — "The worst part about all of this is that it was entirely predictable. Large lang…"
- ytc_Ugwe8eTVW… — "In 10 years our streets gonna be just like GTA-full of AI traffic that lack spat…"
Comment
This video is extremely one sided. The ProPublica article completely lied. At no point did they determine that the algorithm was biased. Their analysis began and ended at "the algorithm predicts that black people recommit crimes more often than white people", and concluded that this represents bias. But it doesn't. The R-script ProPublica used (you can run it yourself) indicates no bias; i.e. it doesn't over or under predict. Black people, in their own dataset, do, in fact, recommit crimes more often than white people and Compas predicts this in a well-calibrated way. ProPublica's own analysis says this, but they lied when writing the article and hoped that no one actually looked at their numbers. You can see the article "How to lie without statistics - ProPublica edition" by Chris Stucchio if you want more details on this.
Source: youtube · Posted: 2022-07-26T01:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxGLjPhbv7L5DIQvJB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyUT2ve0yW5k8YrR654AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy-aveiVnwA4amrust4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzRvpAAnZnlQG7lVsp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyQglA8BqAtm21JaeZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyx54w0jVvm3e_kP8p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwHUzQ-UNWEXF-Z6yN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx5JfyOgqMmDf4ya8J4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugz6kNrE6viSmd0_jax4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxjf8jwbfTdZ3IiWy54AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
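A raw response like the one above can be parsed and screened before the codes are stored. The sketch below is a minimal validator, assuming the allowed values per dimension are the ones that appear in the samples on this page (the full codebook may define more); the function name and structure are illustrative, not part of the pipeline shown here.

```python
import json

# Allowed values per dimension, inferred from the coded samples on this page.
# Assumption: the actual codebook may include additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "government", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"outrage", "fear", "approval", "resignation", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed rows.

    A row is kept when it is a dict with an "id" key and every coded
    dimension holds one of the allowed values above.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Example: one well-formed row passes, a row with an unknown value is dropped.
good = '[{"id":"ytc_example1","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}]'
bad = '[{"id":"ytc_example2","responsibility":"unknown_actor","reasoning":"mixed","policy":"none","emotion":"fear"}]'
print(len(validate_response(good)))  # → 1
print(len(validate_response(bad)))   # → 0
```

Dropping malformed rows rather than raising keeps a single bad code from discarding an otherwise usable batch; the discarded IDs could instead be logged and re-queued for recoding.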