Raw LLM Responses
Inspect the exact model output for any coded comment: look a comment up by its ID, or browse the random samples below.
- Drive a Model Y, the lack of radar makes the AP very unrelieble and useless in s… (ytc_UgxlaLVYw…)
- The root of all these problems is AI imaging and services tbh. The rampant progr… (ytc_UgxquQf68…)
- I sometimes use AI to create a baseline and then spend a good 3-6 hours manually… (ytc_UgzTF2UTs…)
- Lol musk and space twitter were literally pushing that kind of rocket commutting… (ytc_UgwfAtiKO…)
- „but you accuse the AI of doing the same things you and your ilk did to learn yo… (ytr_UgyZ0uTVR…)
- @fbibarbie The difference is that you are a conscious person. Not a mindless alg… (ytr_Ugwk8Fx5f…)
- I just realized it may be the only thing stopping ChatGPT from being conscious i… (ytc_Ugxztb44J…)
- AI has the emotional capacity of a psychopath. And the people who profit from it… (ytc_UgwmI8Brt…)
Comment
so in the end it was right in all three scenarios with slight errors. preferring men over women is justified because men on average succeed more, it treating black patients worse is from the ai looking at the history and seeing coloured people being treated worse and the third guy had a criminal history so it calculated an overly high shooting rate. Ai will only look at data and interpret it to its best ability, I think the fact that it is literally unbiased is good and a push for it being censored is just shying away from reality
Platform: youtube · Topic: AI Bias · Posted: 2023-11-04T04:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwKClSYNew9MYWFU3x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy2eAaWkbByEu-0zRR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxxgs-JskahLWYk5mF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwU34YbZ13zUQ6UTOx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFsjruT0qBY2fUBhR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzuTDF7tCCujd_oCed4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw59LlgdOrvZ2bBi6d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyoi7pFk_sh-LUOBnJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwDCiLz3vztMrpeRSJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxUIlsUGEKJJsRQ7Vt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
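A raw response in this array-of-objects shape can be parsed and indexed by comment ID to support the lookup described above. A minimal sketch, assuming only the JSON format shown (the two IDs are copied from the response; the `lookup` helper is illustrative, not part of the tool):

```python
import json

# Abbreviated copy of a raw LLM response: a JSON array of per-comment codings.
raw_response = """[
  {"id": "ytc_UgwKClSYNew9MYWFU3x4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwDCiLz3vztMrpeRSJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

# Index every coding by its comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if absent."""
    return codings.get(comment_id)

print(lookup("ytc_UgwDCiLz3vztMrpeRSJ4AaABAg")["emotion"])  # approval
```

A missing ID simply returns `None`, so the caller can distinguish "not yet coded" from any coded value.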