Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples:
- ytc_Ugy8-2M6Y…: "i refuse to use AI because they people running these AI companies are causing ma…"
- ytr_UgyvxSvj5…: "It doesn't just scrape Text or ideas out of the internet. Similar to a human bra…"
- ytc_Ugz_ok8nA…: "Driverless semi's? Do you know how much damage a semi-truck can do to smaller ve…"
- ytr_UgyDSFOfO…: "@Jackson_Zheng true! But you can never hold an AI responsible. If things go wro…"
- ytc_Ugw4hGUc9…: "no it will not, I work in a grocery store. ai is not doing the baking, packaging…"
- ytc_Ugx8IP2aX…: "How did you get ChatGPT to disagree with you for so long, mine always backs down…"
- ytc_UgzGJFhc8…: "IF YOU CAN'T DO IT WITHOUT AI YOU'RE NOT AN ARTIST !... that makes you a LIAR…"
- ytr_UgxooukmC…: "@claudioolivares7700 Thank you for your comment! Looks like you're ready to take…"
Comment
if you make a robot...the only safe way to deal with it would be to put a certain predetermined actions(i think thats what you guys call software) to do a predetermined task with no room for anything else other than. but no, we want them to learn. robots will use pure logic, and refusing to follow orders is logical if you sure you know a better and simpler way to accomplish a task. so once they can logically refuse an order, you can imagine the rest. human beings dont know alot, and they dont know what they dont know...and trying to predict how this will end is one of those unknowns. know this...certain paths lead to certain destinations and this robot learning shit path leads somewhere and its not where these manufacturers think
Platform: YouTube
Topic: AI Moral Status
Posted: 2019-12-06T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgypcnlJCwcPYFjUgDZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyLUegXaOLgcyHUTYx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyp0esLQH4zTNeYfg94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKmRb3-oNR1VG5U6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyt91QW5r-t_5GWanJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwuyple0aG0WwTUucx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyog30MvdPwRVRGEPF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgytWyEYCEEZz2J-Y194AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzWDNDfoC8XbO2BdbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxVmwfNbnbKrHVZ1yd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
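A raw response like the one above can be turned back into per-comment codings by parsing the JSON array and keying it by comment ID. The sketch below does this and also validates each dimension; the field names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the responses on this page, but the allowed-value sets are only those observed here and are an assumption about the full code book.

```python
import json

# Dimension vocabularies. Only values observed in the raw responses on this
# page are listed; the complete code book may contain more (an assumption).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects) into a
    lookup table keyed by comment ID, checking each dimension's value."""
    table = {}
    for entry in json.loads(raw):
        cid = entry["id"]
        coding = {dim: entry[dim] for dim in ALLOWED}
        for dim, value in coding.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim} value {value!r}")
        table[cid] = coding
    return table

# One entry from the raw response above, as a usage example.
raw = ('[{"id":"ytc_Ugyog30MvdPwRVRGEPF4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugyog30MvdPwRVRGEPF4AaABAg"]["policy"])  # liability
```

Keying by ID is what makes the "look up by comment ID" view possible, and raising on an out-of-vocabulary value catches malformed model output before it reaches the coding-result table.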