Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Before true sentient artificial intelligence happens, many scientists predict that something equally world-changing will likely happen first: Artificially-Enhanced Human Intelligence. Many speculate that technology will probably enhance human intelligence long before we can create a truly sentient artificial intelligence. This is because we're still not 100% sure how to properly classify intelligence or sentience, despite knowing it exists and perceiving it. It would be much more difficult to create a sentient mind from scratch than to take an already existing sentient mind (AKA, a human mind) and just give it a boost or upgrade. So before we have to worry about human rights, we're first going to have to deal with the ethical issues surrounding human augmentation, and all the economic, social, political, and business issues that come with it.
youtube AI Moral Status 2017-03-27T05:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UghrtkIaEYufGXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UghCvNhEHN-AcngCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgiOBS6RkHXMSHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgjwY2J2WgKWg3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UggUC5VN_TTCq3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz6otA0YsK1H_oU8AB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgyIkqhIIjdH2ymktNx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz_WZU3jCe3MYPLU4B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxAkrhfdggp5M7Mml14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw1Tu2KEOx1r2EjJj54AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"mixed"}
]
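A response in this shape can be turned into a per-comment lookup with a few lines of Python. This is a minimal sketch, not the pipeline's actual parsing code: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, but the parsing approach and variable names are assumptions.

```python
import json

# Two records copied from the raw response above, for illustration
raw = '''[
  {"id":"ytc_UghrtkIaEYufGXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgjwY2J2WgKWg3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

records = json.loads(raw)

# Index the coded dimensions by comment id, dropping the id from the value
by_id = {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

print(by_id["ytc_UgjwY2J2WgKWg3gCoAEC"]["policy"])   # regulate
print(by_id["ytc_UghrtkIaEYufGXgCoAEC"]["emotion"])  # indifference
```

Indexing by `id` makes it straightforward to join the coded dimensions back onto the original comments for inspection, as in the Comment / Coding Result panels above.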