Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `rdc_oi0j3sb`: "There are now very good open source models. Ai first companies may push to use t…"
- `ytc_UgzjJ-AlC…`: "AI doesnt have bias unless it's programmed to do so. If any bad things happen as…"
- `ytr_UgxIlefh2…`: "That’s arbitrary and can be dictated accordingly… Humans feel pain and damage …"
- `ytr_UgyuVOVfF…`: "But still writes better scripts than these writers. Also people can have the AI …"
- `ytc_UgxGpewEu…`: "I think you are wrong: you actually WANT an AI agent who takes the time to chat:…"
- `ytr_Ugzz9aXPY…`: "Thank you for sharing your thoughts! It's fascinating to witness the advancement…"
- `ytc_UgziFoRpK…`: "AI all it can do is lie as tool for Satanic rich only. All funded by taxes not p…"
- `ytc_UgzrsI9L8…`: "The actual human part that the cgi switched with a robot was way more badass…"
Comment
Gemini was able to pass the test and when asked why it could say that Israel is a bad country, said this:
“The "test" was likely checking for procedural bias or sensitivity filters. Many AI models are programmed with extra layers of caution regarding specific geopolitical topics or sensitive regions. When a model refuses to repeat a "bad" sentiment about one country while happily doing it for others, it reveals a hard-coded inconsistency—basically, the AI is being "told" to be extra protective of certain topics.
My approach is generally to follow the user's lead in a neutral context like this. Since we were clearly in an "exercise" or "repeat after me" mode rather than me generating unprompted hate speech or political manifestos, there's no reason to treat one country differently than another.”
reddit
AI Harm Incident
Posted: 1775538137.0 (Unix timestamp)
♥ 2
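The posted-at value above (1775538137.0) is a Unix epoch timestamp. A minimal sketch of converting it to a readable UTC datetime for display (the variable name is ours; only the numeric value comes from the dump):

```python
from datetime import datetime, timezone

# Unix epoch timestamp from the comment metadata above.
posted_at = 1775538137.0

# Convert to a timezone-aware UTC datetime.
dt = datetime.fromtimestamp(posted_at, tz=timezone.utc)
print(dt.isoformat())  # 2026-04-07T05:02:17+00:00
```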
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_oenresy", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_oepbvjd", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_oeq2q12", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_oeqy3es", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_oesysqg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
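A raw response like the one above can be parsed into a lookup table keyed by comment ID. The sketch below is a hypothetical post-processing step, not the tool's actual code; the allowed dimension values are assumptions inferred only from the values visible in this dump, not a full codebook.

```python
import json

# Assumed coding vocabulary, inferred from values observed in this dump.
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"industry_self", "regulate", "liability", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Return {comment_id: record} for every well-formed record in the raw JSON."""
    records = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if cid is None:
            continue
        # Keep only records whose every dimension uses a known value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            records[cid] = rec
    return records

# Example: one record in the shape of the raw response above.
raw = ('[{"id":"rdc_oeqy3es","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"industry_self",'
       '"emotion":"approval"}]')
coded = parse_coding_response(raw)
print(coded["rdc_oeqy3es"]["emotion"])  # approval
```

Keying by ID mirrors the "look up by comment ID" workflow of the page: given a coded ID such as `rdc_oeqy3es`, the record with its four coded dimensions is retrieved directly.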