Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "You will not be replaced by AI, you will be replaced by someone using AI (better…" (ytc_UgxLo3qrS…)
- "Your company is completely unserious if they are harping about the latest hyped …" (rdc_o8cefm7)
- "Thanks @LUNATIC_PUBGM for the kind words! Nice edit indeed, it had me laughing l…" (ytr_Ugya2bLAu…)
- "So in other words, we are to be sure we reflect what we just “researched” from A…" (ytc_UgzBKC_U0…)
- "We are not prepared. My startup just passed it's series A with tens of millions…" (ytc_UgzNTkV9v…)
- "If you put a decently intelligent person in a library with only books in Thai. …" (ytc_Ugy0q9byu…)
- "Do any of the people implementing Ai & making jobs redundant at these companies …" (ytc_UgxdeGQxe…)
- "Here's what will happen if we try to enslave sentient robots: 1. They will self…" (ytc_UggWTQfrx…)
Comment
Wouldn't it be easier to just.... NOT give robots pain/happiness? Like if you want a robot to mine you don't make him capable of not wanting to because that would be unnecessary, cruel to force a being to feel pain when it is optional and an obvious decrease in efficiency as it's no longer a willing person mining but instead a robot forced to mine.
Discriminating against robots is much different to discriminating against other life forms, humans will eventually *literally* have control over _every_ aspect of a robot/machine and if they dislike a robot it's their *own* fault for creating it. Other life however we can't change to a large degree (Still doesn't justify discrimination though)
Source: youtube | Video: AI Moral Status | Posted: 2017-03-18T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgiTebkfieqsNngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjPFNKGEfJJvXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ughlafxc3u-Z_3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Uggc1lpMfLEMgXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UghyKvMquT5eH3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgjuY7lkZrYUyHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ughe6jj7xQH_BngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ughx-o3mGLD-GXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjPAY1I3j0r43gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugjg1AWphI3dU3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
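The raw response is a JSON array, one object per comment, keyed by `id` and carrying one value for each coding dimension. A minimal sketch of how such a batch response could be parsed and validated in Python; the allowed category values below are inferred only from the responses shown on this page, so the real codebook may define additional categories:

```python
import json

# Allowed values per dimension, inferred from the sample responses above.
# Assumption: the actual codebook may include categories not seen here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a batch coding response and index it by comment ID,
    rejecting any value outside the allowed set."""
    by_id = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim} value {row[dim]!r}")
        by_id[comment_id] = {dim: row[dim] for dim in ALLOWED}
    return by_id

raw = ('[{"id":"ytc_Ughlafxc3u-Z_3gCoAEC","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
codings = parse_codings(raw)
print(codings["ytc_Ughlafxc3u-Z_3gCoAEC"]["policy"])  # regulate
```

Validating against a closed value set at parse time catches the common failure mode where the model invents an off-codebook label, so bad rows fail loudly instead of silently entering the coded dataset.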