Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Really great to see this, Ethan. Wonderful and inspiring video. I too was impres… (ytc_UgypAgREQ…)
- @nmnm7742 I actually don’t believe AI will naturally become malevolent. Look at … (ytr_Ugz_cITMk…)
- @Pumanic......Lived near silicon valley for quite a while. SURE there are MORE … (ytr_UgzcKLSMQ…)
- 3:08 , 3:32 EXACTLY. 6:48 The banana falling off in the background while you're… (ytc_UgxuenQVG…)
- Didn't anybody notice that Elon Musk is always talking about what's going to hap… (ytc_UgzLP06et…)
- Humans prove everyday that they are flawed drivers. Driverless vehicles should b… (ytc_Ugz4wSMQ-…)
- Maybe someone should make AI spoof channels of asmon and xqc that copies them. Y… (ytc_UgxyUHLWF…)
- Doesnt seem bad, not sure about the AI though, current AI is still prone to erro… (ytc_UgzEWY5XA…)
Comment
Facial recognition will always have flaws. As will any other algorithm built to determine identity. These algorithms depend heavily on the training set and the sample data used to make the comparison. It's likely that the images they had were not perfect for the algorithm. An investigator should be able to tell the difference between what is a good and what is a bad sample photo. The key word is "should." I don't think this discredits the use of such software. But I don't think it should carry quite as much weight as we cannot be 100% certain that biometrics are 100% accurate. To do so we would see far more false negatives and it would ruin the usefulness of such an algorithm. It's a tradeoff. They're tools and should be used as such.
youtube · AI Harm Incident · 2021-10-12T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzX6Hu42s-t0fbl9bV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwkEpSBoVZ7a_pF36N4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgybJNN6IPzoTpnxEXl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxPsPqemGHKPDDNN594AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxt-j4S8CvLntw1GiV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxO0Uo0fSvKrvVGbmh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugxq8u-uELmJtMhOnqF4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwXq4EQPgiP6lxwDaZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_Ugy02uJF4cbSqs4N89F4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzCfnoYUL3A58GTCRJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]
```
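A response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal illustration, not the tool's actual ingestion code: the dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself, but the allowed value sets are inferred only from the values visible in this sample and are assumptions — the full codebook may permit more.

```python
import json
from collections import Counter

# Value sets inferred from the sample response above; the real codebook
# may allow additional values (assumption, not the tool's actual schema).
ALLOWED = {
    "responsibility": {"none", "government", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response; keep only records whose every
    dimension carries a recognized value."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]

raw = ('[{"id":"ytc_UgzX6Hu42s-t0fbl9bV4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"outrage"}]')
coded = parse_coding_response(raw)
print(Counter(rec["emotion"] for rec in coded))
```

A tally like the `Counter` here is one quick way to spot a model that collapses into a single label (e.g. coding everything as "unclear") before accepting a batch.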