Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I’m seeing AI generated images on clothes in stores now. I walked into a local c…" (ytr_UgyCaBaV1…)
- "I work a full time job. The dream is still not to have to. The world would be a …" (ytr_UgzItQZ-T…)
- "There’s this YouTube channel that suspiciously good editing that makes predictio…" (ytc_UgwuRo5fn…)
- "Honestly I feel like if a patient is ok with getting surgery from a robot, a pat…" (ytr_UgzX4_vim…)
- "This is not real there's no robot footprints and there's no dust being kicked up…" (ytc_UgzxfDiuz…)
- "This AI video is pretty good but it didn't make the humans life like enough. Th…" (ytc_UgwrwivAf…)
- "14:38 The ai \"artist\" here: \"How dare you be an artist.\" *types on nasty keyboa…" (ytc_UgzzmI4Cf…)
- "It’s not just ai that’s the problem- what we CURRENTLY have is also causing huge…" (ytc_UgxlAz64_…)
Comment
The saliency prediction for image cropping issue neglected a very important discussion point. Why did predominantly "white" people score higher in the demo with Gradio? Unless I misunderstood how Joss presented the technology, people were used to train Gradio's model, most likely American people seeing as it is an American company. America is predominantly "white" - and without delving deep into psychology, though I welcome criticism if someone believes I am oversimplifying - we often want to see more of ourselves in the world. So it makes perfect sense that in a country composed mostly of races collectively known as "white", gathering saliency prediction data from its residents would likely favour those races.

I admit I am ignoring the fact that face detection algorithms can struggle to identify the faces of those with darker skin due to various factors mentioned in the video and by other commenters, such as the algorithm's training data or differences in light reflection. However, I believe it is fair to ignore this factor because if your saliency prediction algorithm - informed by your very own population - favours certain races, why would it be surprising that anything using the algorithm - like Twitter - favours those faces in its cropped images?

If anything, Twitter's algorithm is showing its users what the majority of them want to see. Then, naturally, you could call the majority of Twitter's user base racist for not wanting to see individuals of darker skin. But is that criticism fair? If we hop over to a country where the majority of its races would be categorized as "black", would it be fair to complain if "white" faces were absent from cropped images assuming a saliency prediction algorithm was employed? Absolutely not, the algorithm is again showing what the majority of users wish to see. How is it reasonable to argue that either case is racism?
Joss and the individuals she spoke with had excellent points that algorithms - when trained or implemented carelessly - can cause real harm. However, jumping to the conclusion that racism is present before an in-depth investigation can be conducted is ludicrous. If we want to openly discuss this and create fair algorithms, we must assume that good will was present behind the algorithms' development until proven otherwise. Assuming the worst drives divisions and disincentives collaboration. How can we as a society work in concert if finger-pointing is such an automatic reflex?
youtube
2022-03-20T09:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwGMdpyoG5x8ucG5Lh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzmCezwNLwg0MZa6Zx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugyr5oPe115ya_Xttct4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz4P880T1ideZAZU2t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwjaraUkGByesfkEYB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwsc7P3SzTZSp3_es94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgyR-ChyFyv8WoNIG6J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzjGOqr3KwZoEqoWKV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgywhXf5t_euZC17QCZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugznf0WZahOaEh0yXtR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
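A raw response like the one above can be parsed and checked before it is written into the coding table. The sketch below is a minimal illustration, not the project's actual pipeline code: the sets of allowed values per dimension are inferred only from the samples shown on this page, and the lookup-by-prefix behavior (to handle truncated IDs such as `ytc_Ugz4P880…`) is an assumption about how "Look up by comment ID" might work.

```python
import json

# Allowed values per coding dimension, inferred from the samples on this
# page; the full codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "industry_self", "ban", "liability", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "resignation", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of codings) into a dict
    keyed by comment ID, rejecting any out-of-schema value."""
    codings = {}
    for entry in json.loads(raw):
        cid = entry["id"]
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={entry.get(dim)!r}")
        codings[cid] = {dim: entry[dim] for dim in ALLOWED}
    return codings

def lookup(codings: dict, id_prefix: str) -> dict:
    """Look up a coding by full or truncated comment ID prefix."""
    matches = [cid for cid in codings if cid.startswith(id_prefix)]
    if len(matches) != 1:
        raise KeyError(f"{len(matches)} matches for {id_prefix!r}")
    return codings[matches[0]]
```

For example, `lookup(parse_raw_response(raw), "ytc_Ugz4P880")` would return the company / deontological / regulate / outrage coding shown in the table above.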