Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The solution to this problem is to design artificial intelligence to not have all these problems that humans have. If we eventually feel that it is morally right to have to give rights to robots, it would be our own fault. Robots aren't developing on their own -- not yet, anyway. And it they eventually got to that point, it would still be of our doing.
Source: YouTube · "AI Moral Status" · 2017-02-23T21:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          industry_self
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
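A coded result like the one above can be sanity-checked against the closed code lists. This is a minimal sketch: the vocabularies below are inferred from the values appearing on this page, and the real codebook may contain additional codes.

```python
# Controlled vocabularies inferred from the coded values on this page.
# NOTE: these sets are an assumption; the project's actual codebook may differ.
CODEBOOK = {
    "responsibility": {"developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return the names of any dimensions whose value is not a known code."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# The coded result shown above for this comment.
result = {"responsibility": "developer", "reasoning": "deontological",
          "policy": "industry_self", "emotion": "resignation"}
print(validate(result))  # [] -> all four dimensions carry valid codes
```

An empty list means every dimension carries a recognized code; anything else names the dimensions the LLM miscoded and that need re-prompting or manual review.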
Raw LLM Response
[
  {"id": "ytc_UgjBoANA8X9zwHgCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgjuMFHV5lsuAXgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugi7-SOHErfTIHgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugjf9Lz_MTsGtXgCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgiZLmAre-z33HgCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UggO9fPB2zIlWXgCoAEC", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UghV8ewtgA-y-XgCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UghHX873VKpfP3gCoAEC", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UghPmuks3pH593gCoAEC", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UghaLasSo9s-M3gCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
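Because the model returns one JSON array covering a whole batch of comments, the record for any single comment is found by parsing the array and indexing on `id`. A minimal sketch, using an excerpt of the response above (one record) as the raw string:

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment coding records.
raw_response = '''[
  {"id": "ytc_Ugjf9Lz_MTsGtXgCoAEC", "responsibility": "developer",
   "reasoning": "deontological", "policy": "industry_self",
   "emotion": "resignation"}
]'''

# Index records by comment id so any comment's coding can be looked up directly.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

coded = records["ytc_Ugjf9Lz_MTsGtXgCoAEC"]
print(coded["responsibility"], coded["emotion"])  # developer resignation
```

This lookup is how the "Coding Result" table for the comment above is reconstructed from the batch response: its `id` selects the fourth record in the full array.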