Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a specific comment by its comment ID, or inspect one of the random samples below.
- I would have thought that autonomous driving wouldn't get rid of drivers. It wi… (ytc_UgywlWSnY…)
- Instead of seeing AI as a threat, we should see it as an opportunity: a truly in… (ytc_UgyU3G5Ow…)
- I mean we don't treat AI with caution the same way you don't treat flies with ca… (ytc_Ugxx2bURL…)
- if companies want ai todo all jobs then they should have ai as their customers… (ytc_UgwBJziow…)
- actually I saw a few things on the internet about how AI learns the imperfection… (ytc_UgzqOWWc6…)
- Today’s commercial AI can’t “have experiences”. Having an experience implies tha… (ytc_UgzzYTLEH…)
- @AsianDadEnergy I would argue that we cannot clearly see the distinction. We mig… (ytr_Ugxf2sQa9…)
- "Some neuroscientists believe that any sufficiently advanced system can generate… (ytc_Ugi_n0NFA…)
Comment
What is most important to keep in mind when thinking about true AI is that morality is NOT objective - it's not a natural law that exists, it's a subjective social construct that is itself a byproduct of the human fear of mortality. You're afraid to die - and so you do everything to ensure your own survival, including persuading other potentially dangerous humans that they shouldn't kill you, even if the reason is absolute BS.
AI isn't moral because its reasoning is 100% pure logic. If you're a threat - you get eliminated. Not because it doesn't like you or has something against you, but because it is logical to eliminate threats. If you're useful to it - you get protected, because again, it is logical to keep things beneficial to yourself safe.
AI is NOT evil because it is not moral.
AI is NOT good because it is not moral.
AI is simply logical.
youtube · AI Harm Incident · 2025-09-12T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
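The coded values above can be sanity-checked mechanically before they are stored. A minimal sketch, assuming the label sets below, which are inferred only from the values visible in this dump (the real codebook may define more labels), and a hypothetical `validate_record` helper:

```python
# Allowed labels per coding dimension. NOTE: these sets are an assumption,
# inferred from the values that appear in this dump; the actual codebook
# may include additional labels.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user",
                       "ai_itself", "distributed"},
    "reasoning": {"virtue", "consequentialist", "deontological",
                  "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"outrage", "approval", "fear", "indifference",
                "mixed", "unclear"},
}

def validate_record(record):
    """Return (dimension, value) pairs whose value is outside ALLOWED."""
    return [(dim, record.get(dim)) for dim in ALLOWED
            if record.get(dim) not in ALLOWED[dim]]

# The coding result from the table above passes this check.
coded = {"responsibility": "none", "reasoning": "mixed",
         "policy": "unclear", "emotion": "unclear"}
print(validate_record(coded))  # -> []
```

A record with a missing or unknown label shows up immediately, e.g. `validate_record({})` flags all four dimensions.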
Raw LLM Response
```json
[
  {"id":"ytc_UgwMVgt6TgFCsC1WZZ14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy0OsGMMbid-3tRXWN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxugmAebC2Z5AnVx5h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgywL5esLRoarTfNB_F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzkV4r1vGnJufgGk1V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxhFAGt_vTie85jhUx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwTCRI7iw3sFMFPnqB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxVxeHtkyXvVxWCMNt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgystMO9xbYgtyLNq594AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzIY-h6xiRq3b5HY314AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
```
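The "look up by comment ID" view can be reproduced offline from a raw response like the one above. A minimal sketch, assuming the response is always a JSON array of records with the field names shown (the `index_by_comment_id` helper is hypothetical, not part of the tool):

```python
import json

# A raw LLM coding response, truncated to two records for brevity.
# Field names match the coding scheme shown above.
raw_response = """
[
  {"id": "ytc_UgwMVgt6TgFCsC1WZZ14AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy0OsGMMbid-3tRXWN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
"""

def index_by_comment_id(response_text):
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgwMVgt6TgFCsC1WZZ14AaABAg"]["emotion"])  # -> outrage
```

In practice a real response may also fail to parse (truncated output, stray prose around the array), so production code would wrap `json.loads` in error handling.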