Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgxJh2cPb…` — "so this is why americans are all kinda tarded, if this is like the advanced scho…"
- `ytc_UgyRh37sx…` — "i can’t wait for the ai bubble to burst so i can post that one image of the guys…"
- `ytc_UgzX1d8fM…` — "Well the difference between human coping them and AI coping them is, when a huma…"
- `ytc_Ugx_3y3ec…` — "Wanna see the worst thing to happen with a.i then watch terminater 1 an 2 an it …"
- `ytc_UgxU9dPXu…` — "Best case scenario is that we automate literally everything so well enough that …"
- `ytr_UgxoNa13g…` — "I believe that most of ai "art" supporters are people who hate commission artist…"
- `rdc_hib0b9v` — "I got fired as assistant manager at Casey's General store In 2011 ish for eating…"
- `ytc_Ugz7rlUoB…` — "If you thought is forever now wait till it becomes.automated lol.... The middle …"
Comment
Its interesting that alot of discourse surrounding the possibility of general AI is related to moral or existential concerns, but we never stop to ask: "What if it operates exactly as intended, but the operator is at fault?".
We fear that everything might go Matrix and become humans vs AI, forgetting that in our age, humans are sometimes the greatest threat to other humans. Even without GAI, we see people with social or economic motivations use AI in damaging ways. A while ago, Chat bot hallucinations were a big issue, but normal users would recirculate chat bot information as fact. In the entertainment industry, we have already seen the effects of AI on job security for an ununionized discipline.
If we reflect on human history, I think its much less likely that any sort of GAI will be an Oedipus Rex, destined to kill his own father... but instead follow in the footsteps of inventions like dynamite, the cotton gin, or the atomic bomb: being yet another tool in humanity's arsenal against itself.
youtube · AI Moral Status · 2025-03-21T20:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxJ9sxDPKLBjPXrYwx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgydA7tp2MkxeIhptXd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgylY6TsVHiY8enguxh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgytJpX6jqTxIJK1-554AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugxn2Nc1VIdveD7BDxF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyqVm91kkOdaWPtRTN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyEoEYa9RzMqXTaHKd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzM2-pi5ggXtdn4Tth4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzrfIVw5-WrNjABgut4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz6yCQT4TzJsTBdpQJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
```
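The raw response above is a JSON array with one record per comment, keyed by comment ID. As a minimal sketch of how the ID lookup shown in this view might work, the snippet below parses such a response and pulls out one record; `lookup_codes` is a hypothetical helper, and the array here is abbreviated to two of the records from the dump.

```python
import json

# Raw model output: a JSON array of per-comment coding records
# (schema as in the dump above; abbreviated to two records here).
raw_response = '''[
  {"id": "ytc_Ugxn2Nc1VIdveD7BDxF4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzrfIVw5-WrNjABgut4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]'''

def lookup_codes(raw: str, comment_id: str):
    """Return the coding record for one comment ID, or None if absent."""
    return next((r for r in json.loads(raw) if r["id"] == comment_id), None)

codes = lookup_codes(raw_response, "ytc_Ugxn2Nc1VIdveD7BDxF4AaABAg")
print(codes["responsibility"], codes["emotion"])  # user fear
```

Matching on the full `id` field is what lets a truncated display ID (e.g. `ytc_Ugxn2Nc1VId…`) in the sample list resolve back to the exact record in the raw response.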