Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Imma tell chatgpt that you were talking behind its back... lets see who survives… (ytc_UgxGP7Sfz…)
- 10:15 what youre saying here also factors into the difference between someone dr… (ytc_UgwRRH12i…)
- Possessing the capacity to lie isn't at all necessarily the same as "being a lia… (ytc_UgxR1ExI-…)
- That's what the robot actually think...before make smarter robot....we have to f… (ytc_UghQX--95…)
- Artificial intelligence CAN'T tell EMERGENCY LIGHTS and legally pull over? If AN… (ytc_Ugzq7toNy…)
- In a few years, we'll be talking about developer brain rot. Programmers that use… (ytc_UgzH-qeBd…)
- Huge tech changes are impossible to fully predict. But human social relations ar… (rdc_ks5o6z0)
- So make laws against what A.I is and isn’t allowed to do and who and who isn’t a… (ytc_UgyBOFlf7…)
Comment
@WaleSoleye If it's the case that it is a human error, I highly doubt that and just put that statement into my argument because I wanted to name both possibilities.
These AI's just determine on statistical probability and the AI in my opinion thought that the couple were presumably 60% gorillas and 40% human. So it picked gorillas.
What to do against it you ask? You can always code a fault tolerance to add a moral into the program that if the probabilities of both are high it always chooses the human because for us people it is morally more acceptable to classify a gorilla as human.
Probabilities can't always be correct. I understand the point of the video though, our moral concept is much more versatile as any AI or pc ever could comprehend and we have to understand it to fix things computers can't handle without our help.
youtube · AI Bias · 2022-02-10T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_UgzdU3mO52jAolMHtrp4AaABAg.9YJJqav1gN69Yayr26Vpf4","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugz8MDSNMf7BG-OCuGV4AaABAg.9YIZUuZkNRn9YIt_efu-fE","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgykJYXEN3cb-HebBG14AaABAg.9YGyNa1m5Zx9wE-yp1l5-G","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugw_dflmFMB8JMppg7F4AaABAg.9YGsKeWrPU99YGvU7BvDKS","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugw_dflmFMB8JMppg7F4AaABAg.9YGsKeWrPU99YSrps9IqkJ","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytr_UgxxkzJ3A9zRzyATob14AaABAg.9YGokBRovli9YGxBYkuEWP","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytr_UgwoNS8whdeV-gbTtWN4AaABAg.9YGnRawww2l9YGyUXh5Ayj","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgwoNS8whdeV-gbTtWN4AaABAg.9YGnRawww2l9YGzeVs-heo","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzHSYPGRmuw0JWgMz94AaABAg.ANKpuXyRZ6FAU2dKWJWKmc","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwXz6H8WGIDvUMSa6N4AaABAg.AMNToxhJSVYAOpb379bzDs","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
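
The model returns one JSON object per comment in the batch: the comment ID plus the four coded dimensions (responsibility, reasoning, policy, emotion). As a minimal sketch of how one comment's coding can be pulled out of such a batch — assuming the array has been saved to a local file; the `raw_llm_response.json` name and the `coding_for` helper are hypothetical, not part of the tool:

```python
import json

# Hypothetical local copy of the raw LLM batch response shown above.
with open("raw_llm_response.json") as f:
    batch = json.load(f)  # list of {"id", "responsibility", "reasoning", "policy", "emotion"}

def coding_for(comment_id: str):
    """Return the coded dimensions for one comment ID, or None if it is not in the batch."""
    return next((row for row in batch if row["id"] == comment_id), None)

# The entry behind the "Coding Result" table above.
row = coding_for("ytr_UgxxkzJ3A9zRzyATob14AaABAg.9YGokBRovli9YGxBYkuEWP")
if row is not None:
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dim}: {row[dim]}")
# responsibility: ai_itself
# reasoning: consequentialist
# policy: industry_self
# emotion: resignation
```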