Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytc_Ugw8JXGzN…`: "I feel like people should actually do shit about the AI data centers like js run…"
- `ytc_Ugw4gxILT…`: "On the subject of Luddites: they were right. The industrial looms that were intr…"
- `ytc_UgyYZGqNr…`: "fake artists will defend their decision because it makes making art easy. an opt…"
- `ytc_Ugw0xAmH4…`: "Don't ever trust what the AI says. The AI is designed to agree with you, no matt…"
- `ytc_UgxV9PHS3…`: "The latest robot has an IQ of 155.. Einstein's was 160... And to give you an ide…"
- `ytc_UgzykEnJ2…`: "Nearly ALL AI in the west is controlled by the tribe. Please research the Sayani…"
- `ytr_UgylcvNPC…`: "Ai has its pros. But, big corporate heads see it as a dollar sign. Bad people co…"
- `ytc_UgyfKTCIb…`: "ChatGPT has actually become so good and so shit at the same time. The result it …"
Comment
> Without looking at the input data, the programming, etc it's hard to say what the actual cause of this could be.
>
> Crime prediction? That should just be right out. Should not be an option we even consider beyond 'fuck no this is a bad idea'.
>
> Medical analysis? This could be incredibly useful. Though every time stuff like this is reported this way, there is a bad habit of selective omission of data and factors to paint a picture because 'racism/sexism/x-phobia' callouts are promoted by the algorithm, so I remain skeptical of the conclusion.
>
> It's entirely possible that the AI was aware that they might have had a more serious illness, but statistically had a better likelihood of overcoming it naturally. There are differences in average outcomes (something an AI would pull from for data) for different diseases among different ethnic groups. However, the lack of data makes it difficult to say for sure, but somehow I doubt a medical "AI" is sourcing its data from /pol/.
youtube · AI Bias · 2022-12-23T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
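Each coded record should only carry values from the codebook's category sets for the four dimensions. A minimal validation sketch, assuming the category sets consist of exactly the values visible on this page (the full codebook may define more):

```python
# Hypothetical validator for one coded record. The ALLOWED sets are assumed
# from the values visible on this page, not taken from the full codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"outrage", "fear", "indifference", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record coded above passes cleanly:
print(validate({"id": "ytc_x", "responsibility": "developer",
                "reasoning": "consequentialist", "policy": "regulate",
                "emotion": "indifference"}))  # → []
```

Running every record in a batch through a check like this catches model outputs that drift outside the codebook before they reach analysis.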
Raw LLM Response
```json
[
{"id":"ytc_UgzN4ehBMRUHTBMX_sl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxxhukVN13ZvGJUOF14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwcciXreGXlaX5oLoZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwL7-kleEqXBiVK6Yl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwL_uAGmzgr2iWIaEF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgybK1nCRtEsXdhNZox4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmnO7IXSNp18Dy3j14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxwi8QYk4zNfaq_O_p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgycIZXh9pWrj1Sx7Nx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugyxzys-lUONZCPjnyl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
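Since the raw response is a JSON array of records keyed by comment ID, the lookup-by-ID view on this page can be reproduced by parsing the array and indexing it. A minimal sketch using one record from the response above (variable names are illustrative):

```python
import json

# One record copied from the raw LLM response above; in practice `raw`
# would be the full array as returned by the model.
raw = """[
 {"id": "ytc_UgwcciXreGXlaX5oLoZ4AaABAg", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

records = json.loads(raw)
# Build a dict keyed by comment ID so any coded comment is an O(1) lookup.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgwcciXreGXlaX5oLoZ4AaABAg"]
print(rec["responsibility"])  # → developer
```

If the model ever emits malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural point to flag the batch for re-coding.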