Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or pick one of the random samples below to inspect:
- I'm really surprised that all those stories about AI taking command never mentio… (ytc_UgxMwbK71…)
- I think that most radical Ai supporters are probably dopamine addicts that found… (ytc_UgxwKfQyO…)
- Where we need A.I. Is in Government. Humans have shown that personal interests … (rdc_i2u3be0)
- people who say ALL AI art has zero effort behind it skiped half the video (jk bu… (ytc_UgzjrD_wG…)
- @kiitkaat05 I swear, I have seen so many people instigate ChatGPT, and I feel sa… (ytr_Ugwj9u2Za…)
- So blind greed and fear will be our undoing; our "old reptilian brain" paves the… (ytc_Ugyf7NIut…)
- so glad that AI is taking over. better than these every time dissapointing, mise… (ytc_Ugw4NEP_d…)
- Being ignorant of history and aware only of census info and political correctnes… (ytc_UgweIMFyO…)
Comment
To add to that excellent question: **Should human preference for anecdotal evidence rather than statistical evidence be built into AI, in hopes that it would mimic human behavior?**
Humans are pretty bad about judging risk, even when the statistics are known. Yet our civil society, our political system, and even our legal system frequently demand judgments contrary to actual risk analysis.
For example, it is much more dangerous to drive a child 5 miles to the store than to leave her in a parked car on a cloudy day for five minutes, yet the latter will get the Child Services involved (as happened to [Kim Brooks](http://www.salon.com/2014/06/03/the_day_i_left_my_son_in_the_car/) ).
So in this example, if there was an AI nanny, should it be programmed to take into account what **seems** dangerous to the people in that community, and not just what **is** dangerous?
Source: reddit · Topic: AI Bias · Posted: 1438003353.0 (Unix timestamp) · ♥ 333
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
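For context, each coding result is essentially one record per comment. A minimal sketch of how such a record might be represented, assuming the four dimensions shown in the table plus the comment ID and coding timestamp (the class and field names here are illustrative, not the project's actual code):

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    comment_id: str      # e.g. "rdc_cthpngw" or a "ytc_…" / "ytr_…" style ID
    responsibility: str  # e.g. "developer", "unclear"
    reasoning: str       # e.g. "consequentialist"
    policy: str          # e.g. "regulate", "unclear"
    emotion: str         # e.g. "fear", "approval", "indifference"
    coded_at: datetime   # when the coding was recorded
```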
Raw LLM Response
```json
[
  {"id": "rdc_cthpngw", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ctlpsgh", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_cthuvw9", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_cthz1rt", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_cthnpuo", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"}
]
```
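The raw response is a JSON array with one object per comment in the coded batch. A minimal sketch of how the lookup by comment ID might work against such a response, assuming the raw text parses as the array shown above (function names are hypothetical):

```python
import json


def parse_batch(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM batch response (a JSON array) into a dict keyed by comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}


# Example lookup, using one record in the format shown above.
raw = """[
  {"id": "rdc_cthpngw", "responsibility": "unclear", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "indifference"}
]"""

coded = parse_batch(raw)
print(coded["rdc_cthpngw"]["emotion"])  # -> "indifference"
```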