Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
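Looking a comment up programmatically requires whatever store backs this page. A minimal sketch, assuming the raw responses are kept as one JSON object per line in a file named raw_llm_responses.jsonl; the file name, field names, and storage layout are all assumptions, not the page's actual backend:

```python
import json

def lookup_raw_response(comment_id: str, path: str = "raw_llm_responses.jsonl"):
    """Return the stored record for one coded comment, or None.

    Assumes one JSON object per line, each carrying an "id" field like
    the records shown under "Raw LLM Response" below.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: one of the sample IDs listed below.
print(lookup_raw_response("rdc_jmggrbn"))
```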
Random samples:

- ytc_Ugz3jTH3v…: Just last night I was in such a hurry I had 5 minutes to miss my schedule. I got…
- ytc_UgzKAZy5f…: I take comfort in this oddly, because if AI is unable to navigate in a human wor…
- rdc_jmggrbn: I was like this until I understood better how current AI LLMs work (somewhat, st…
- ytc_UgxsimHyD…: the ai losers are forgetting that making art isnt about how fast you can get it …
- ytc_UgwMEDdRJ…: It's already here y'all... But also, can you imagine what teachers can but AI c…
- ytc_UgziUiwCB…: Another "A.I. will end society in 202X" video? Come on Steve, stopping kicking t…
- ytc_UgyvJoT5T…: President Trump's Tariffs is slowly bringing down United States and destroying t…
- ytc_UgzzYTLEH…: Today's commercial AI can't "have experiences". Having an experience implies tha…
Comment
>The questions I have are these:
>
>- do humans and AI make the same kind of errors? Is the AI missing things that could be obvious to a human expert, or vice versa, implying that using both would've allowed detection rates neither can achieve alone?
Excellent questions. What we currently see is that the mistakes humans and AI make are completely different and largely uncorrelated. However, that does not have to mean both combined are better: there is also a large psychological component. You can see this in some of the "self-driving" Tesla crashes, where the human driver trusts the system too much because it is usually right, yet it can fail spectacularly. I'm not sure about the research on this in the medical field, but doctors would certainly need additional training.
>- How good is the sample data, really? When we train visual AI on something like facial recognition, we don't have to be concerned that we're teaching it our biases, because we haven't got any: we're nearly 100% accurate at deciding whether there is a human face in front of us. But we can't know which images, in which *we* could find nothing, have subtle features that machine learning could indeed find. It seems to me that at best visual AI could be as good as our very best, but if we want it to find what we cannot, we have to find a way to train it to do so.
Great question again. Something we can do is use information that wasn't available at the time of the original data, for example follow-up data: you can train the AI on the knowledge that, say, a tumour was found within five years. See this from MIT about breast cancer: http://news.mit.edu/2019/using-ai-predict-breast-cancer-and-personalize-care-0507
Source: doing my PhD on this kind of stuff.
reddit · AI Bias · posted 1569433514.0 (Unix timestamp) · ♥ 3
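The posted time is stored as a Unix timestamp (seconds since 1970-01-01 UTC). A quick conversion sketch:

```python
from datetime import datetime, timezone

# 1569433514.0 is seconds since the Unix epoch (1970-01-01 UTC).
posted = datetime.fromtimestamp(1569433514.0, tz=timezone.utc)
print(posted.isoformat())  # 2019-09-25T17:45:14+00:00
```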
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
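Taken together with the raw records below, each coded comment reduces to a small fixed-shape record. A sketch of that shape, with dimension names taken from the table above and example values from the raw response below; the full label sets are an assumption:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment. Dimension names follow the table above."""
    id: str
    responsibility: str  # e.g. "none" or "unclear"
    reasoning: str       # e.g. "consequentialist" or "unclear"
    policy: str          # e.g. "none" or "unclear"
    emotion: str         # e.g. "approval", "resignation", "mixed"
    coded_at: str        # ISO 8601, e.g. "2026-04-25T08:33:43.502452"
```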
Raw LLM Response
[{"id":"rdc_f1emvcy","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_f1e7zyw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_f1ecjca","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_f1ecudu","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_f1ez3fw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"})