Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugy6g6hXh…: "AI can never replace surgeon never try it it will never happen human brain 🧠 dec…"
- ytr_UgyGNbL9o…: "@ 20:33 Hank, I can be completely wrong but they (the Large Language Model LLM m…"
- ytc_UgyONU1np…: "Senior dev here. This are bs arguments. AI is a massive help tool than can defin…"
- ytc_Ugz2gZ0z0…: "They’re just words people lol. People think the Ai person will some how turn int…"
- ytc_Ugwqhuz6j…: "For decades, people have been asked to buy into television stories. Now there'…"
- ytc_UgzEB3M53…: "I have been saying this for years. AI will do humanity no good..... If used inno…"
- ytc_UgxsP9VuF…: "Alex: I've just demonstrated that you fit the bill for having actual consciousne…"
- ytc_UgzYOKCAU…: "10:00… I feel like we should automate his job. After all, his employer will repl…"
Comment
I don't think people in the comments are understanding the problem. Yes, the internet leans left in content, so ChatGPT's answers will generally lean left. That's not what Ken is talking about. He's talking about the hard-coded censorship that prevents ChatGPT from answering certain prompts at all. It wrote a glowing poem about Biden, for example, which is to be expected, given the content of the internet. And you'd expect that if you asked for a poem about Trump, it would be scathing by default. But the fact that it won't even talk about Trump at all indicates some form of behind-the-scenes censorship, where it's programmed NOT to address certain subjects, and not simply a bias in training content.
youtube · AI Bias · 2023-11-04T16:4… · ♥ 212
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzMa8sPJeRXIItaQDJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_1E8xei6RbcQ2IuB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzc_OsWvzgjZhpiyy94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy-yuDFkGL1bq6x6al4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyuqv8I7OV2dUkVSoB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxL9_vv4iiWwqbt0RN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwjMR_0WO1lnnPe9KJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzWMy6hNsYs3qfVvGB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwRVldEk64YDlo3Fcp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyfwKiYGuVd06rW3k94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
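The coding result above is one row pulled out of this raw JSON array by comment ID. A minimal sketch of that lookup step in Python, which also drops any record whose value falls outside the label vocabulary (the allowed value sets below are inferred from the values visible in this batch, not from the full codebook, and the embedded sample is abbreviated to two records):

```python
import json

# Abbreviated raw LLM response, copied from the array above.
RAW = """[
 {"id":"ytc_UgwRVldEk64YDlo3Fcp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzMa8sPJeRXIItaQDJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""

# Allowed values per dimension, inferred from this batch only;
# the real codebook may define additional labels.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"liability", "none"},
    "emotion": {"outrage", "indifference", "resignation", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse the model's JSON array and index codings by comment ID,
    skipping any record with an out-of-vocabulary value."""
    coded = {}
    for row in json.loads(raw):
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

codings = parse_codings(RAW)
print(codings["ytc_UgwRVldEk64YDlo3Fcp4AaABAg"]["emotion"])  # outrage
```

Indexing by ID this way is what makes the "Look up by comment ID" view cheap: one parse per batch, then constant-time retrieval per comment.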