Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@omshree901 Calling them infants probably humanizes the AIs far too much. The unconscious processes that humans use are completely disposable components of our mental makeup that are regularly cleaned up and replaced. Failing to improve your processes leads to a drop in neuroplasticity and an inability to acquire new skills. You are, as we like to say colloquially, "set in your ways". You're not going to understand these things by researching AI. The AI field is focused on how to make better tools out of what we have and has lost a taste for the "general AI" problem. I'm sure it will come back around eventually (these things tend to go in cycles) but you're looking in the wrong places if you want to understand how an AI compares to a human. David Eagleman is a leader in the field of neuroscience and someone who has been willing to share the academic findings of the field with the general populace. He has written a number of books on the topic which are easy to read and understand. He uses a lot of real-world examples to demonstrate his points which makes the concepts easy to consume. I highly recommend starting with his book "Incognito", which is a fantastic introduction to the topic. Plus, you'll get to look like a crazy man when you run around telling everyone how interesting Chicken Sexing is. ;-)
YouTube · AI Moral Status · 2022-07-02T17:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_Ugwjxg6cznPzm-6i_eF4AaABAg.9cvGeh6XAWY9d61gBXfqxb", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugy49zPvjcoeD1N9Dmx4AaABAg.9cvDc66ZmJv9cvE3BPfWrb", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9cvyXcfip51", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9cwTwSDBHG7", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9czcZpjlwOK", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cwqWcGd--G", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cwxNRYvbQf", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "hope"},
  {"id": "ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cxAEpaD38O", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cxH5SAjs87", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cxQYQ9tVE7", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"}
]
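To inspect the coding for one comment, the raw response above can be parsed as ordinary JSON and indexed by comment id. This is a minimal sketch, assuming the field names shown in the raw response; the single-element `raw` string here stands in for the full ten-item payload.

```python
import json

# Assumed payload: the first record from the raw LLM response above,
# used as a stand-in for the full JSON array.
raw = '''[
  {"id": "ytr_Ugwjxg6cznPzm-6i_eF4AaABAg.9cvGeh6XAWY9d61gBXfqxb",
   "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "unclear", "emotion": "indifference"}
]'''

# Parse the array and index the records by comment id.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for the comment shown in this section.
coding = codings["ytr_Ugwjxg6cznPzm-6i_eF4AaABAg.9cvGeh6XAWY9d61gBXfqxb"]
print(coding["responsibility"], coding["emotion"])  # → ai_itself indifference
```

The same lookup works for any of the ten ids in the full response, so a mismatch between the coded table and the raw output can be checked record by record.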