Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We should not give A.I. enough intelligence to think, but someone's going to do it anyway. If it does happen, we shouldn't be allowed to tamper with it unless it physically threatens a human. We don't want to lobotomize a robot because it offended you. #RememberTay
youtube · AI Moral Status · 2017-02-24T06:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Uggbtq-WGdMdsngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UggAjot1l7w9IngCoAEC","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggEmH3Lq4V_vHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ughlh2BiQzNAdXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugg0tBq-Ha2NR3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UghbXQbC6Eut-HgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgiaJXOE27QNsXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UggWMgkXXwlosXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggATgq0eeHyfXgCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UghF5eT9DDh8F3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
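A raw response like the one above can be parsed and checked before the per-comment codes are stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from this sample batch, and the `parse_codes` helper is not part of any pipeline shown here; the actual codebook may include other values.

```python
import json

# Allowed values per dimension, inferred from this sample batch only.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference", "approval", "resignation", "mixed"},
}

def parse_codes(raw_response: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) and
    index it by comment id, rejecting values outside the inferred codebook."""
    records = json.loads(raw_response)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example with one record in the same shape as the batch above
# (abbreviated id, illustrative only).
raw = '[{"id":"ytc_example","responsibility":"developer",' \
      '"reasoning":"deontological","policy":"liability","emotion":"fear"}]'
codes = parse_codes(raw)
print(codes["ytc_example"]["policy"])  # liability
```

Indexing by `id` makes it easy to look up the coding result for a specific comment, as the table above does for `ytc_UgiaJXOE27QNsXgCoAEC`.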