Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
They’ll always need humans if we just refuse to use AI to do everything for us. …
ytr_UgxGv6RAf…
Im a democrat and I agree with them. We need major regulations on AI and data ce…
ytc_UgwmvNbq5…
AI misalignement is less of an issue than an aligned AI in the wrong hands.…
ytc_UgzzTOouq…
I think AI will wipe out the rich class because it will find that they are waste…
ytc_Ugwei82w6…
Bring on solar flares to take out the power grid, no more AI then, and stop the …
ytc_UgxxX29Xy…
Don't call them artists just call them prompters to completely remove artists fr…
ytc_UgxpR_Ngt…
art doesn't even require talent, it requires practice. makes ai art even more of…
ytc_Ugz7noJJ7…
In these "mind reading" AI breakthroughs, as stated here, the AI is trained on t…
ytc_UgyRteorq…
Comment
@ashleybishop7248 Also, humans are more likely to mistake two people that aren't of their own race for each other. It's not because muh wacism. It's because of familiarity bias, a cognitive bias present in all humans.
This tech doesn't have that in quite the same way. The reason these techs don't work as easily on dark skin isn't because of a cognitive bias. I've seen it explained as being because light skins tends to reflect light better, meaning facial structures are easier to identify for these algorithms, particularly certain parts of the face. Darker skin tones reflect less light- blame the laws of physics for that- so certain features are harder for the tech to map.
Apparently Asians also have trouble with facial recognition too, but likely for different reasons, as east asians tend to be light skinned. These algorithms are complex, so it may be that certain facial structures that are common across a certain ethnic group can throw it off, cause it to be more likely to match people based on that.
youtube · AI Harm Incident · 2023-08-14T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_Ugx0D3HTTSjKhTsDC-x4AaABAg.9tP1uUzhGnV9tP3ohcZrjf","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugzn-QbacOYTp17B3854AaABAg.9tOzeP3AwHk9tOzxgKgbf_","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugz8TftZGzKHitQ2Q5J4AaABAg.9tOr4dC8Pfd9tP27ORSg38","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwDeQ1Br3As8JFiIs14AaABAg.9tOjCjH4GoN9tOpIeoCUhz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgyE2sl3g5y75ryQvKF4AaABAg.9tOgG4Y6vGY9tOgejoh-5l","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugwy0FUKa-xlXlK35uh4AaABAg.9tOWScc8Wbm9tO_gMZNLiT","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugx64sUf0J0kPMTWyIN4AaABAg.9tO2GYgaNKY9tOELZbukmu","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw_E19pffSR063l4OF4AaABAg.9tNlT0kJrdw9tNmTNnyYju","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgwkL5sYdxy301uVvKt4AaABAg.9tN_ziVD-ZE9tNa79HPQGh","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_Ugy5P6ad8nnzVCwwCNt4AaABAg.9tN_xeg1oFj9tNbioZr3pM","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"indifference"}
]
```
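The lookup-by-comment-ID view above can be reproduced directly from a raw response like this one: parse the JSON array, index each coding by its `id`, and check each dimension against the values the coding scheme allows. The sketch below is a minimal example, not the tool's actual implementation; the allowed-value sets are only those observed in this sample (the full scheme may include more), and the short `ytr_example` ID is a hypothetical stand-in.

```python
import json

# Dimension values observed in this sample's raw responses.
# Assumption: the real coding scheme may define additional values.
OBSERVED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "mixed", "deontological", "virtue"},
    "policy": {"unclear", "none", "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codings)
    and index the codings by comment ID, validating each dimension."""
    out = {}
    for row in json.loads(raw):
        coding = {k: v for k, v in row.items() if k != "id"}
        for dim, value in coding.items():
            if value not in OBSERVED.get(dim, set()):
                raise ValueError(f"unexpected {dim} value: {value!r}")
        out[row["id"]] = coding
    return out

# Hypothetical short ID for illustration; real IDs look like the
# truncated ytr_/ytc_ identifiers shown above.
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
codings = index_codings(raw)
print(codings["ytr_example"]["emotion"])  # indifference
```

Indexing by ID also makes it easy to join these codings back onto the original comments, which is how a "Coding Result" panel like the one above can be rendered per comment.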