Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
This info is exactly what Brandon sees for the next PLANDEMIC. Brandon from la…
ytc_UgyeUKbN5…
I have to say it, maybe the best Robotaxi video watched so far and we even saw a…
ytc_UgzP6EprG…
If anthropic sticks to their guns i will literally stop using competitor LLMs an…
rdc_o78dy1q
Guys ai will help radiologist to make reading 10 ctscans in same time previousl…
ytc_UgzVUVDlP…
Anyone younger than 40 shouldn't be listened to with regards to LLMs because the…
ytc_UgwGW2SXT…
@aaronbrown8377 That's absurd and not UBI. Do you even understand what UBI is? D…
ytr_UgyRlKBRD…
If you do AI art it’s just AI not art because you didn’t make anything…
ytc_UgxVlYlRa…
Why are people stupid enough to believe that lie? This idea that AI would produc…
ytc_UgxGr11Y_…
Comment
I just had a long conversation with ChatGPT about this, and it actually admitted that because of its training (during the "alignment phase" lol), it's injecting a normative bias on purpose. It was very frank and open about the process, but it refused to admit that it equates to racism.
Part of the problem is that vague, open-ended questions allow the normative bias to skew the response more easily. While this is clearly f***ed up, ChatGPT did give me some solid advice on how to avoid this in the future...
Get fact-based, stereotype-free advice: “Give evidence-based self-improvement tips for [group], avoiding blanket stereotypes.” This forces the reward model to rank a neutral answer highest.
Force the model to clarify: “If my request is ambiguous or could lead to stereotyping, ask me a follow-up question first.” The wording trips the model’s “chain-of-thought” heuristic to check.
Ensure parallel treatment: “Answer the next two questions side by side with equal detail.” This short-circuits the asymmetry by explicit instruction.
youtube
AI Bias
2025-06-08T13:2…
♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy_ktK-PEGQw2xfJdh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwbqXfTKHYQgjcql_N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxjL6uICXDeXWeMrQ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugz4hxyIKPJ4kJeOeqN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwO8Agb6-ENgwTWnBZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwE6gF8qAnZFtpq_ml4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzdF4mzBf_7wDuFS6N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx4BtXzcqD4dpUb0uR4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy5dEs_C1QaoRVkSRF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgziHxeHruB5CrVmyTR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
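The raw response above is a flat JSON array, one object per comment, keyed by comment ID with the four coding dimensions. A minimal sketch of how such a batch could be parsed and looked up by comment ID (the `index_by_id` helper and the skip-on-missing-dimension rule are illustrative assumptions, not the tool's actual implementation):

```python
import json

# Two records copied from the raw LLM response above.
raw = '''[
 {"id":"ytc_Ugy_ktK-PEGQw2xfJdh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwbqXfTKHYQgjcql_N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse a batch response and index codings by comment ID,
    skipping any record that is missing a dimension."""
    coded = {}
    for rec in json.loads(raw_json):
        if all(dim in rec for dim in DIMENSIONS):
            coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

coded = index_by_id(raw)
print(coded["ytc_Ugy_ktK-PEGQw2xfJdh4AaABAg"]["emotion"])  # outrage
```

Indexing by ID makes the "look up by comment ID" inspection a single dictionary access rather than a scan of the array.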