Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
@armanke13 The most likely case of bias being presented is the case he brought up. This is just the bias of trying to make a functional network.
Learning algorithms themselves are based on relatively simple calculus and nonlinear responses, backpropagation, and perhaps generative algorithms. The professed bias is almost always with regard to the training data, which may just reflect larger trends in daily life. If 70% of shoes on the market (and thus 70% of those pictured) were jackboots, the network would not be as good at identifying Birkenstocks (which, let's say, are 5% of the market) due to less exposure. However, it is better with 70% of the data than with 5%, and if you continue to feed it data it will get as good on that 5% over time as it continues to improve on the 70% too. This is the best-case scenario, since it means the network improves the most overall over time.
Instead, people are acting regressively and saying that you should make that 5% as good as the 70% at the same point in time, which means you have to vastly oversample the 5% (to on par with 50%) to make it as good at all points in time as the 70%. This design philosophy means you have to spend over ten times the computation to reach comparable performance, since you are focusing on edge cases over the most common use cases. It also gets worse with each new case of "bias" you consider, quickly eroding the main benefit of these systems, i.e., that they quickly learn to solve a wide range of issues.
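The oversampling arithmetic in the comment can be sketched in a few lines. This is a hypothetical illustration of the commenter's own numbers (a 5% class forced to fill 50% of training batches), not code from any actual training pipeline:

```python
def oversample_factor(natural_share: float, target_share: float) -> float:
    """How many times more often each example of a class must be drawn
    so the class fills target_share of batches instead of its
    natural_share of the raw data."""
    return target_share / natural_share

# The comment's scenario: a 5% minority class pushed to 50% of batches,
# while the 95% majority class is squeezed into the other 50%.
minority = oversample_factor(0.05, 0.50)   # 10.0 — each minority example drawn 10x
majority = oversample_factor(0.95, 0.50)   # ~0.526 — majority effectively downsampled
relative = minority / majority             # ~19x more draws per minority example
```

This makes the "over ten times" claim concrete: relative to a majority example, each minority example is sampled roughly 19 times as often under a 50/50 rebalancing.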
youtube
AI Bias
2019-03-07T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgxpHqg1ZyU9X40dZUV4AaABAg.8pTUZa6NaYg8pTcHre3gtU","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxNi0I8_2uoEd8GT594AaABAg.8pPJA2TT3c68pTaaOcfNb5","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytr_UgxNi0I8_2uoEd8GT594AaABAg.8pPJA2TT3c68pTbgIoqhuk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytr_UgzJszDlZ8qyAtlO4Lp4AaABAg.8pJgwDGPKfT8pTYbz1ankP","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwVhs0ZtK8GOuOcRDl4AaABAg.8pCekZGUP8F8ptmaRWZthv","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_UgwVhs0ZtK8GOuOcRDl4AaABAg.8pCekZGUP8F8sB4AxX6-AR","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugxf_-Qm3m9mbI8mq3F4AaABAg.8p8EUF6aZyY8pTYxMqP4YR","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytr_Ugz2M2k5DoD5uK6VYyh4AaABAg.8p4aTXVwFuE8p4yA8IyGhH","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugx6znuogH2jYuRPBWV4AaABAg.8nHHmiHsj8H8p4WWU-VziW","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyvoP_seikcvfjwsox4AaABAg.8n6Cjgz1NSs8n7IgoEiqVs","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
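A raw response like the one above is just a JSON array of coding records, so validating it before display is straightforward. A minimal sketch, assuming only the field names visible in the JSON (the function name and error handling are illustrative, not part of the actual tool):

```python
import json

# The four coding dimensions shown in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every record carries
    an id plus all four coding dimensions."""
    records = json.loads(raw)
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"malformed record: {rec!r}")
    return records

# Usage with a single (abbreviated, hypothetical) record:
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
codes = parse_coding_response(raw)
```

A record missing any dimension raises immediately, which is usually preferable to rendering a partially coded row in the inspector.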