Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
Yes and no. If it ever gets the point that AI have human level intelligence, emotion, and sense of self preservation, than yes. But frankly, we should probably avoid creating that in the first place, if the interests of a self-aware AI is ever benefitted by nuking the planet, we're all royally fucked. As cool as my boy Zenyatta is, we'd likely reach HAL 9000 long before we ever reached him.
Source: YouTube · AI Moral Status · 2017-02-24T01:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
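The dimension values come from a small categorical codebook. As a hedged sketch (the value sets below are only those visible in the raw response further down, not necessarily the project's full codebook, and the class and function names are illustrative), a coded record could be validated like this before storage:

    from dataclasses import dataclass

    # Label sets observed in this raw response; the actual codebook may differ.
    RESPONSIBILITY = {"developer", "ai_itself", "distributed", "unclear"}
    REASONING = {"consequentialist", "deontological", "unclear"}
    POLICY = {"regulate", "ban", "industry_self", "none", "unclear"}
    EMOTION = {"fear", "outrage", "resignation", "indifference", "approval"}

    @dataclass
    class CodedComment:
        id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def validate(self) -> None:
            # Raise if the model returned a label outside the expected value sets.
            for field, allowed in (
                ("responsibility", RESPONSIBILITY),
                ("reasoning", REASONING),
                ("policy", POLICY),
                ("emotion", EMOTION),
            ):
                value = getattr(self, field)
                if value not in allowed:
                    raise ValueError(f"{self.id}: unexpected {field} label {value!r}")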
Raw LLM Response
[ {"id":"ytc_Ughl6WSLm9wCB3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgjpW_cqqeU343gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgiYhlUpCB2i23gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugi0N_B54KvacngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgjGkGMrvCMT_3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugj8xpx1PUjL6XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UghP6IRxjakkx3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugi2WXL0T1TMH3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgjCRASqFFZCF3gCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgiPBTwclustlXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"} ]