Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a specific response by comment ID, or browse the random samples below.
Random samples
- The first two revolutions didn’t start the first two world wars. In fact, the at… (`ytc_UgznL3_gZ…`)
- Bro is literally waste bcz in the top left corner of the video if u skip the vid… (`ytc_UgzaFsOP8…`)
- I am not allowing any robot to cook my food no way in hell lol 😂… (`ytc_UgxMD8XMm…`)
- Current AI is absolutely horrible. There is no intelligence. Never trust the wor… (`ytc_UgyTgyBbt…`)
- That's an interesting observation! Your name spelled backward is indeed "evil." … (`ytr_Ugz3eMIml…`)
- Would love to see three taxes get tested. One of which the MIT guys are probably… (`ytc_Ugx83fW1F…`)
- There is nothing these parents can do. AI is the future, like it or not. Nothi… (`ytc_UgzewbqmP…`)
- I'm really glad you closed the loop to bring this back to the tech companies tha… (`ytc_Ugwodifa6…`)
Comment
> Problem is that people think these systems are unbiased, while it's usually fed with biased data. These systems aren't actually smart and don't really think for themselves. So even if you gave it a way to directly learn from all the currently available data, that data is still generated by biased humans, wich the system can't distinguish. Current "AI" pretty much only replicates what its trained to do.
>
> So if you want an "AI" that predicts crimes, and feed it with data of biased cops that favor targeting black folk or something, the AI will just replicate that, while giving the impression that it is unbiased, because its just based on "cold hard data", wich is the actual dangerous part.
youtube · AI Bias · 2023-11-02T07:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugylk6t_HLCoLlSZglN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzpoaEyFLu7JfnysNh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwi6o2ePzDr7jFCTSt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxd-1QXNJsOoAaCINB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzccBSdtul2lZ8XzoF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzRlC9JLNb1xtopq4J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz5clJCCuohdvdVwap4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwSNZBOojztzGmcYHd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzY-sPMNqnk1M8BYKd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugzd9KtVGPMgAIJLzht4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
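The lookup-by-ID view above can be sketched in a few lines: parse the batch response, check that each row carries the four coding dimensions from the result table (responsibility, reasoning, policy, emotion), and index the rows by comment ID. The `index_by_id` helper name is illustrative, and the two embedded rows are a subset of the response shown above.

```python
import json

# Two rows copied verbatim from the raw LLM response above (subset for brevity).
raw = '''[
{"id":"ytc_Ugylk6t_HLCoLlSZglN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzY-sPMNqnk1M8BYKd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]'''

# The four dimensions from the Coding Result table, plus the comment ID.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_json: str) -> dict:
    """Parse a batch coding response and index rows by comment ID."""
    rows = json.loads(raw_json)
    indexed = {}
    for row in rows:
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            raise ValueError(f"coding {row.get('id')!r} missing fields: {missing}")
        indexed[row["id"]] = row
    return indexed

codings = index_by_id(raw)
print(codings["ytc_UgzY-sPMNqnk1M8BYKd4AaABAg"]["policy"])  # regulate
```

The field check matters because LLM batch output can silently drop keys; failing loudly on a malformed row keeps the coded dataset consistent with the schema shown in the result table.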