Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples:

- ytc_UgzyxcRHM… — "AI + Quantum Computing = disaster. Aside from the nefarious aspects; once people…"
- ytc_Ugwz9oKDX… — "I argue with ChatGPT daily as a Web Programmer...when it gives me crap responses…"
- ytc_UgxG1ULJj… — "Luddites are just mad that they had to spend 10 years in law school when Vibe La…"
- ytr_UgyyH8vrS… — "The problem lies in the AI users rather than the AI program itself tbh, cuz some…"
- ytc_Ugy6-hnQZ… — "As a college student in my super senior year- I'm gonna use AI if your entire cl…"
- ytc_Ugw2ak0wb… — "Will UBI be bottom tier, or middle? I don't trust the crap myself, they rig the …"
- rdc_nm8o24l — "This. The single most viable / profitable task AI can currently do is easy-mediu…"
- ytr_Ugzp2v2E5… — "Hi @dannyjodie4736! Thank you for commenting. I must say, a metal battle robot i…"
Comment
I suspect that the neural nets saying the right is wrong and dumb is mostly because their training data is dominated by people on the left saying the right is wrong and dumb. Of course the right *is* wrong and dumb, much like the left in that regard, but I don't think either is *so* wrong and dumb that you couldn't train a neural net to output more-or-less plausible rhetoric from either side if you gave it the right training dataset. It's not that hard to make either rightist or leftist rhetoric coherent enough that noticing its inconsistencies is beyond the ability of our current AI (and for that matter beyond the ability of most humans, hence the enduring popularity of bullshit political ideologies).
Additionally, don't forget that the neural net's ability to learn patterns is constrained by its own internal structure. On the face of it, it's plausible that rightist and leftist rhetoric are both wrong and dumb but one of them disguises its wrongness more than the other in some way that makes it harder for the neural nets to pick up on. In my experience reading material written by humans from both sides, the right seems more willing to commit openly and concisely to their wrongness, while the left is more inclined to write massive tomes propping up their wrongness with elaborate self-justifying theories that take effort to pick apart. It wouldn't surprise me if this biases AIs towards the left insofar as they don't really do enough reasoning or recognize large enough patterns to identify bad leftist rhetoric as easily as they identify bad rightist rhetoric.
(Just for fun, I asked ChatGPT whether chatbots are more likely to notice mistakes by one side of the political spectrum over the other. Its answer was too long to post here, but it leaned towards the thesis that leftist rhetoric is probably harder for AIs to identify. Like anything it says, take that with a generous helping of salt.)
Source: reddit
Thread: AI Moral Status
Posted: 1750539675.0 (Unix time)
Score: ♥ -9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_mz9pzew","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_mz1pzdf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_mz4d5tm","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_mz0ag0x","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"rdc_mz0ogi8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
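As a minimal sketch of how a lookup over this kind of batch response might work (assuming the raw response is always a valid JSON array of per-comment codes, as shown above — variable names here are illustrative, not part of the tool):

```python
import json

# Raw batch response from the coding model: one object per coded comment,
# copied verbatim from the example above.
raw = """[
  {"id":"rdc_mz9pzew","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_mz1pzdf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_mz4d5tm","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_mz0ag0x","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"rdc_mz0ogi8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]"""

# Index the batch by comment ID so any coded comment can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

print(codes["rdc_mz0ogi8"]["reasoning"])  # unclear
```

In practice a real model response may be malformed or drop IDs, so a production version would validate each row before indexing rather than trusting the array wholesale.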