Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
if we can program them to suffer, why consider it as a option. Let's say we have a AI which helps maintain balance in your fridge to manage a diet. A said scientist would take this same AI and allow it to learn and develop. This AI has now learned math, and helps biologists study organisms, but if we know that it would cause so much controversy and problems, why allow it to have a simulated emotion. For the benefits of what? Nothing would come out to humans for a beneficial factor to programming a AI to have simulated pain, only for humans to have a bigger problem at hand. Yet, only time will tell if it could be beneficial to do so.
Source: YouTube — "AI Moral Status" — 2017-02-25T02:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgiWFjYdfZsvkngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UggV_tRsmQN5V3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjFfodza3TsRXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UghwGaGgfxZIoXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugh5g8AEGuXOE3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UggksrPuAePRyngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgiEtmKfy12X4ngCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UghaXETXaTKIFXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgiDwj_VH8C9ungCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UggD68gYW29EmHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
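The raw response is a JSON array of per-comment codings, one object per comment id, with the four coding dimensions as keys. A minimal sketch of how such a batch can be parsed and a single comment's coding looked up by id (using a two-record excerpt of the response above; the field names are those visible in the raw output, and the lookup itself is an illustrative assumption about how the tool uses this data):

```python
import json

# Excerpt of the raw LLM response above (two of the ten records).
raw = """[
  {"id":"ytc_UghwGaGgfxZIoXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggD68gYW29EmHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

# Parse the batch and index it by comment id so one comment's
# coding can be retrieved without scanning the whole array.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_UggD68gYW29EmHgCoAEC"]
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# → developer consequentialist fear
```

Indexing by `id` also makes it easy to check that every comment sent in the batch came back coded, since missing ids surface as `KeyError`s rather than silent gaps.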