Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples
- "I wonder if adding so many of these is putting extra strain on the grid, and cou…" (ytc_Ugxgp9Pz3…)
- "But for an atgm the operator need visual first but this drone is using AI 🤙🏾…" (ytr_UgzkhRKDs…)
- "I think everyone needs to also stop treating actual AI like fictional AI. Actual…" (ytc_UgxVF37vG…)
- "@phighter no one's telling them to stop. i'm just saying..i'm tired of artists …" (ytr_UgyFhwAZL…)
- "AI WILL TAKE OVER THE WORLD! 1!1! / AI: was told to make a pancake / AI: made pancak…" (ytc_Ugwvi5Hs5…)
- "Look, I hate to jump into this discussion here but I think I'm starting to get f…" (ytc_Ugy7HdtIm…)
- "You know the ai was right just a shooting from the police more than anything els…" (ytc_Ugyn_Hk6F…)
- "(Im extremely late but who cares?) This is so frustrating, just to think ive bee…" (ytc_UgzsyMLIG…)
Comment
Well then you program it to not care for such things as itself. We've already reprogrammed animals (domestication) to not care for such things. If you take a wolf from the wild and demand it to do a dog's work, then that's inhumane. If you take a dog which we've bred to do the task that we now demand of a dog, then that's humane. We've bred cattle to not worry about what vegans or Peta think they care about, cattle aren't depressed or upset, we've bred (programmed) them to be content with the lives we give them. Same will apply to robots. Although, if we replicate human intelligence in AI then enslave the machine, that's inhumane
youtube · AI Moral Status · 2017-03-11T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgihQiqZZ6JUtngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UggCdskvXvNx-HgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgjSLngEyU8yhngCoAEC","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugj4vS6AR6pp2HgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugh_lFikQJi-dHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UghiE6mj80ENY3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgjLYJhHPMsUEHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugg9uEuu-2tWY3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UghDOVqB_cYCqXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UggEj0A2BFEXxXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
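The per-comment lookup described at the top of the page presumably works by parsing a raw response like this one and indexing the records by their `id` field. A minimal sketch in Python (the function name and the two-record sample are illustrative, not the tool's actual code; the field names match the JSON above):

```python
import json

# Illustrative raw response: a JSON array of per-comment coding records,
# in the same shape as the batch shown above.
raw_response = """
[
  {"id": "ytc_UgjSLngEyU8yhngCoAEC", "responsibility": "developer",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugh_lFikQJi-dHgCoAEC", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw LLM response and index its coding records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgjSLngEyU8yhngCoAEC"]["policy"])  # industry_self
```

Looking a record up this way also makes it easy to cross-check the coded dimensions in the table against the exact values the model emitted.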