Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
Random samples:
- Sounds like more doomsayer crap. Decades ago "experts" said plenty of crap that … (ytc_UgyDnmDqn…)
- I WANT TO REMAIN HUMAN!! PLEASE TAKE ALL AI ON YOUR HOME AND STAY THERE !!!!!!!… (ytc_Ugy_b3LYc…)
- @c@bikesandlikesu have programs that check your code base for security vulnerabi… (ytr_UgxufYzyN…)
- Could be, but if I truly value my idea or company and want a logo, I prefer a hu… (ytc_UgwYj50os…)
- Roman's take on AI is hilarious yet so real! I’ve been using AICarma to keep my … (ytc_Ugwt5j19c…)
- For that reason of people not having money to buy the product that ai can make o… (ytr_Ugxgl-JKZ…)
- Humans Need! To be productive. Ai is a choice, we can choose to what extent WE r… (ytc_Ugwm7LDkT…)
- Hearing a high level person say they limit how they use AI, for any reason, woul… (rdc_ofi3v26)
Comment
if we can program them to suffer, why consider it as a option. Let's say we have a AI which helps maintain balance in your fridge to manage a diet. A said scientist would take this same AI and allow it to learn and develop. This AI has now learned math, and helps biologists study organisms, but if we know that it would cause so much controversy and problems, why allow it to have a simulated emotion. For the benefits of what? Nothing would come out to humans for a beneficial factor to programming a AI to have simulated pain, only for humans to have a bigger problem at hand. Yet, only time will tell if it could be beneficial to do so.
youtube · AI Moral Status · 2017-02-25T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
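
Every coded comment carries the same four dimensions shown in this table, plus a coding timestamp. A minimal sketch of the record as a Python dataclass, assuming a Python pipeline (the class name is hypothetical; the field names mirror the keys in the raw LLM response below, with the observed values taken from that batch):

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment; fields mirror the keys in the raw LLM response."""
    id: str              # comment ID, e.g. "ytc_..." (YouTube) or "rdc_..." (Reddit)
    responsibility: str  # e.g. "developer", "distributed", "ai_itself", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed", "unclear"
    policy: str          # e.g. "none"
    emotion: str         # e.g. "fear", "outrage", "approval", "indifference", "mixed"
    coded_at: str        # ISO 8601 timestamp recorded when the coding was produced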
Raw LLM Response
[
{"id":"ytc_UgiWFjYdfZsvkngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UggV_tRsmQN5V3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgjFfodza3TsRXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UghwGaGgfxZIoXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugh5g8AEGuXOE3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UggksrPuAePRyngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgiEtmKfy12X4ngCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UghaXETXaTKIFXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgiDwj_VH8C9ungCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UggD68gYW29EmHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
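
The raw response is a single JSON array per batch, one object per comment ID. A minimal lookup sketch in Python, assuming the response is stored as text exactly as shown above (the function name is hypothetical):

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Parse one raw batch response and return the coding for a single comment ID."""
    try:
        batch = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # model emitted invalid JSON; treat the batch as uncoded
    for record in batch:
        if record.get("id") == comment_id:
            return record
    return None  # comment ID not present in this batch
```

For example, looking up `ytc_UghwGaGgfxZIoXgCoAEC` in the batch above returns the record with `responsibility: developer`, `reasoning: consequentialist`, `emotion: indifference`.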