Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If a true learning AI with the power to rewrite its own source code is developed, or an AI is modeled after human consciousness to the point that hardcoded/wired limits don't work (such as the 2045 Initiative actually working), at that point we will need to address this. Otherwise, the only way pain will enter the robot equation is if the builders are sadists.
YouTube · AI Moral Status · 2017-04-19T20:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UghZxim60h8djXgCoAEC", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgjWFrifyXFxLngCoAEC", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugg6_68H1uxBc3gCoAEC", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UggpPoRogRJoJngCoAEC", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "liability","emotion": "approval"},
  {"id": "ytc_UgiDTskh2rn2yHgCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban",      "emotion": "fear"},
  {"id": "ytc_UgjGaC_PeYYcrXgCoAEC", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugg16N0dkIPH9XgCoAEC", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgizKQfBDOEQFXgCoAEC", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",      "emotion": "fear"},
  {"id": "ytc_Ugg9hqGfomYCEngCoAEC", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgglqXCxOme6MXgCoAEC", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability","emotion": "outrage"}
]
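For programmatic inspection, the raw response can be parsed as a JSON array and each record matched to its coding-result table by comment id. A minimal sketch in Python, assuming the response text is available as a string (the two sample records and the dimension names are copied from the response above; the helper name `index_by_id` is hypothetical):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = '''[
  {"id": "ytc_Ugg9hqGfomYCEngCoAEC", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgglqXCxOme6MXgCoAEC", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse the batch response and index each record's dimensions by comment id."""
    records = json.loads(raw)
    return {rec["id"]: {dim: rec[dim] for dim in DIMENSIONS} for rec in records}

coded = index_by_id(raw_response)
print(coded["ytc_Ugg9hqGfomYCEngCoAEC"]["reasoning"])  # deontological
```

Looking a record up by id this way makes it easy to confirm that the table shown for a comment matches the corresponding entry in the raw batch output.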