Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated previews with comment IDs):

- "Y'all have forgotten ChatGPT hallucinates sometimes? Also that it's pretty good …" (`ytc_UgylEDZfR…`)
- "New in terms of 'they didn't release any updated versions of the old models, eve…" (`rdc_n7kgck1`)
- "Every websites customer service is ai and none of them can handle your basic que…" (`ytc_UgzyzopRd…`)
- "… a 4 step process that he created??? What? Those same step are part of every…" (`ytc_Ugzd0oUIM…`)
- "this is literally what i imagine when i write thank you and please as an appreci…" (`ytc_Ugw9YukO2…`)
- "If anyone knows anything about ML and AI its not foolproof. Anyone that takes th…" (`rdc_esq4wj1`)
- "Is difficult today to talk about that topic seriously. People who are against AI…" (`ytc_UgyZIVvU6…`)
- "Usage of AI in general is wishy-washy for sure but I don't see the difference be…" (`ytc_UgxG498bE…`)
Comment
I believe that robots should be given rights once they have feelings and consciousness. Of course where exactly that line is drawn is a fuzzy issue, but ethically I don't see another answer. Even before robots reach this threshold I still think we should error on the side of caution and still treat them with some basic rights. To make a comparison: I don't have a problem eating meat, but I think we should treat the animals we get that meat from ethically. Make their lives as happy and healthy as we can (within reason) and their deaths as quick and painless as possible. However, I'm also pessimistic enough that I don't think this will actually happen. At least not right away and when it does it'll probably happen for some arbitrary reason like the robot looks human enough that we develop strong feelings for them even if our toasters and cars have been sentient for years.
youtube · AI Moral Status · 2017-02-23T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UghNGSXNpzbcGXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Uggtymze_5vo33gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UghSU-uok-AsbXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugjvzy_RZZ3t3XgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgjCywbdm-zgRngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgidTDRflZemungCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UghmvQV9PnHxxXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgjoIxwwsIHgF3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugjl7MMEYHckgXgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugjqcni13UDG0XgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
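The raw response above is a JSON array of coding records, one per comment, keyed by comment ID. A minimal Python sketch of how a lookup-by-ID like this page performs could work; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the sample response, but the helper `index_by_id` and the two-record sample data are hypothetical, not part of the tool:

```python
import json

# A shortened sample of a raw LLM response: a JSON array of coding
# records with one object per comment (records copied from above).
raw_response = """
[
  {"id": "ytc_UghNGSXNpzbcGXgCoAEC", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugjvzy_RZZ3t3XgCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(response_text):
    """Parse the model output and index each coding record by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_id(raw_response)
print(codings["ytc_Ugjvzy_RZZ3t3XgCoAEC"]["policy"])  # → regulate
```

Indexing once into a dict makes every subsequent ID lookup O(1), which matters if the same response is inspected repeatedly, as in the "Look up by comment ID" workflow above.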