Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
My worry is when we start applying human rights to the AI models. Just like we g…
ytc_UgzYz982A…
Thank you, Democracy Now team for covering this story. It really should get more…
ytc_Ugwq0_9p0…
That commercial part, ngl but that was way too specific and long. It went all "L…
ytc_UgxfPaJc3…
I keep waiting for the penny to drop; WE’RE not conscious. It’s simply an illusi…
ytc_Ugxc4IoGc…
AI is so advanced now that even the people who built it are stepping back, reali…
ytc_UgyYCd5PS…
Bro, you are a fucking idiot not gonna lie. This is an AI, non political video. …
ytr_UgzGHGmAG…
AI art is not even AI technically. it's a generated image through an algorithm. …
ytc_UgwnCwMmb…
AI art ain’t good anyway man, even children’s drawings are 1000x better than a s…
ytc_UgyGXc6I_…
Comment
> This process is repeated over 15 folds of cross-validation to account for sensitivity to observations falling on either side of the training/testing split.
Ah, hm. If I'm understanding that line right, they repeated the training process 15 times, each time using a different 70/30 split of the data.
# If that's the case, this paper is totally meaningless.
As soon as you start training on your testing data, all you have done is regress the AI to your data set.
**Never** intersect your training and testing data. Getting your Confusion Matrix output from data you trained on literally means your AI is just sitting in a local minimum.
Which means nothing.
## Layman's terms
It looks like the testers let the AI "cheat" and see the "test questions" ahead of time, many times over. That just makes a trained AI "memorize" the "test questions".
The AI just memorized all the tweets, and it was just 76% accurate at "remembering" tweets it had already memorized.
That's not a proper machine learning prediction system.
Assuming I read that quoted line and interpreted it correctly.
# Maybe I read it wrong?
I can't tell from the wording (it's not very specific) whether they used a **different**, "freshly trained" AI for *each* cross-validation fold (in which case I think that's fine), or whether they kept re-using the same AI.
# Edit: Clarified
Thanks to the folks who clarified: it looks like each cross-validation fold is indeed performed with a new, "fresh" model, and the model is not re-used between folds, which sounds great in that case!
reddit
AI Bias
1593031413.0
♥ 4
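
For context on the commenter's concern: in standard k-fold cross-validation a fresh model is fit on each fold's training portion and scored only on its held-out portion, so training and test data never intersect within a fold. A minimal sketch of that setup (scikit-learn used purely for illustration; the dataset and classifier are placeholders, not the paper's actual pipeline):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Placeholder data standing in for the paper's tweet features and labels.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# 15 folds, mirroring the quoted "15 folds of cross-validation".
cv = StratifiedKFold(n_splits=15, shuffle=True, random_state=0)

# cross_val_score clones the estimator for every fold, so each fold's model
# is trained from scratch on the training portion and evaluated only on the
# held-out portion -- the training and test sets never overlap within a fold.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(scores.mean(), scores.std())
```

The leakage the commenter worries about only arises if a single fitted model is carried across folds or scored on observations it was trained on; a fresh model per split, as described in the edit above, avoids it.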
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:13:13.233606 |
Raw LLM Response
```json
[
  {"id":"rdc_fq9x4hj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_fsycdcb","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_fuoyfl9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_fvw1na9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_fvwdqhr","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
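
The raw response is a JSON array with one record per comment in the coded batch; the displayed comment presumably corresponds to `rdc_fvw1na9`, the only record whose emotion is `mixed`. A minimal sketch of parsing such a batch into keyed coding records (the field names come from the response above; the helper itself is illustrative, not the tool's actual code):

```python
import json
from dataclasses import dataclass

@dataclass
class Coding:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

def parse_llm_response(raw: str) -> dict[str, Coding]:
    """Map comment ID -> coded dimensions from one raw batch response."""
    return {
        r["id"]: Coding(
            responsibility=r["responsibility"],
            reasoning=r["reasoning"],
            policy=r["policy"],
            emotion=r["emotion"],
        )
        for r in json.loads(raw)
    }

# Two records copied from the response above, standing in for a full batch.
raw_response_text = """[
  {"id":"rdc_fq9x4hj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_fvw1na9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]"""

codings = parse_llm_response(raw_response_text)
print(codings["rdc_fvw1na9"].emotion)  # -> mixed
```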