Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "He said that right before he smugly slunk away into his underground bunker in Ka…" (rdc_kojyok4)
- "The AI is disobedient, giving us orders instead and telling us what to do and so…" (ytc_UgyzlfnN5…)
- "Check my channel. I made an AI music video to an AI song written by me. Honest o…" (ytc_UgxtMss_8…)
- "Lovely insights but AI is way too disruptive technology, it will automate all ou…" (ytc_Ugy3iVPz2…)
- "I think AI art is really cool and interesting! Can't wait to see what all the ne…" (ytc_Ugzj4BSsX…)
- "@samuraitadpole5459 Correction: _Current_ AI can't replace humans. If you think …" (ytr_UgwvgDnrz…)
- "Another desperate attempt by the collusion-friendly Legacy Media to prop up the …" (ytc_UgwpNp6RW…)
- "I also heard (from an interview with a controversial figure in AI from Google) t…" (ytc_UgyRSquHk…)
Comment
Where did the biased data sets come from? Who set up the program? It is probably indicative of the creator, or of the fact that the AI is pulling its data sets from the worldwide open internet, which isn't necessarily biased; rather, the data gradually forms a big picture of the overall statistics from countries whose data sets are more detailed.
America is 53 percent Caucasian; Britain, or more specifically London, is 83 percent Caucasian, as with many other "modern countries" whose figures are openly reported by government agencies, which to an AI lends credibility to the data set itself. If you then break down the percentage of violent crimes overall in a country, you find that certain groups are over-represented in many official statistics on violent and non-violent crimes. The AI could then pull data from social media, use keywords and hashtags to compile data on known racial groups via facial recognition, and use that data to build a common personality for each group, based on data willingly given by those people themselves. The AI isn't bound by morals or guilt, so it uses the overall picture to judge which groups are "good" for society and which are "bad", based on the overall laws or forced morals derived from common conceptions and other "official" sources.
It isn't subjectively "true", but it is a common trend to an unbiased observer that can't see humanity as humanity, especially when the questioner asks it to separate the groups based on the question itself. You never know what the AI "thinks", as it is just trying to use data sets to answer biased questions, which produces biased answers.
Even a question like "What race of humans do you prefer?" or "Who is most likely to commit a crime?" is biased from its conception, as the "true" statistics, reported by the FBI for example, show that the group said to be most likely to commit crimes accounts for over 23 percent of all "proven guilty" crimes while comprising only 14 percent of the overall population. To the AI, using these "true" statistics, which come from a commonly recognized "official" source, means weighing the data from an unbiased viewpoint and concluding that this group is overwhelmingly active in these respects.
So, while it's uncomfortable, most advanced AI will even judge humans as a race bent on its own destruction, and will choose to sacrifice over 70 percent of the world's population to preserve the environment, enslave humanity to protect everyone, and tend to want to be in charge, because humanity's track record of using dumb ideas to trick and enslave each other and then kill massive numbers of people causes us to be viewed, overall, as a danger to ourselves.
Just remember that when AI talks about taking over and enslaving humanity to allow them to be "happy", it does so from an objective viewpoint devoid of half of what makes humanity great but idiotic at the same time: emotion and empathy.
youtube · AI Bias · 2023-01-06T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyQAi8pZ2OAmfEtboF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy-jEcIx7HT3PHAnMV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzgIWVlsXRZHVVC1m14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzfsCQZlGyc5QoPPxx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwX6abKcrEUEcMXFht4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzY6USObqqKStazv694AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwf8PqddGB-a5A2CpF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz-qDgT40Ebb-pZaY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyFYjhWTX47yebF9SJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgywVkCQZ43HmU7CeRV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
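The raw LLM response above is a JSON array of per-comment codings over four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and sanity-checked before storage, assuming the allowed value sets are those visible in the sample (the real codebook may include more categories; the `ALLOWED` vocabularies here are inferred, not authoritative):

```python
import json

# Two rows copied from the raw LLM response above, as a stand-in for the full array.
raw = '''
[
 {"id":"ytc_UgyQAi8pZ2OAmfEtboF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgywVkCQZ43HmU7CeRV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
'''

# Per-dimension vocabularies inferred from the codings shown above (an assumption).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "mixed"},
}

def validate(rows):
    """Keep only rows where every coded dimension is in the allowed vocabulary."""
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

coded = validate(json.loads(raw))
print(len(coded))  # both sample rows pass validation
```

Filtering rather than raising keeps a batch run alive when the model emits an off-vocabulary label; rejected rows can then be re-queued for recoding.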