Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Agreed, and yes there are many pitfalls to avoid indeed! But as you said, if the AI's interests are our own, I don't see this scenario happening. Now, if it had the planet's interest "at heart" then I would agree. But an empathic AI would look at what we're doing to our world and to each other as an unfortunate reality in the present, and then work to improve the lives of those who are forced to destroy the planet or other people in order to survive themselves: in order to prevent that from happening in the future. If an empathic AI just says "These people are incorrigible. The only way is to kill them all." then it ceases to be empathic, in my opinion.
youtube AI Moral Status 2022-07-01T20:2… ♥ 4
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_Ugwjxg6cznPzm-6i_eF4AaABAg.9cvGeh6XAWY9d61gBXfqxb","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugy49zPvjcoeD1N9Dmx4AaABAg.9cvDc66ZmJv9cvE3BPfWrb","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9cvyXcfip51","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9cwTwSDBHG7","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytr_Ugyr87f6i5M1TBk0xLx4AaABAg.9cvDAWM4g5S9czcZpjlwOK","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cwqWcGd--G","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cwxNRYvbQf","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"hope"},
  {"id":"ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cxAEpaD38O","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cxH5SAjs87","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugyov9AsdYCwL1CXawV4AaABAg.9cv8B9VXfP69cxQYQ9tVE7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}
]
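As a minimal sketch of how a raw response like the one above can be inspected programmatically: the array is parsed as JSON and each record's dimensions are checked against the code vocabulary. The allowed value sets below are inferred only from the codes that appear on this page, and the `parse_codes` helper is hypothetical, not part of any real tool.

```python
import json

# Allowed values per dimension, inferred from the codes visible above.
# This is an assumption about the codebook, not its definitive vocabulary.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "unclear"},
    "emotion": {"approval", "fear", "hope", "outrage", "indifference", "resignation"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of code objects) and
    keep only records whose every dimension is in the vocabulary."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Usage with a shortened record (id truncated for illustration):
raw = '[{"id":"ytr_example","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}]'
print(len(parse_codes(raw)))  # 1
```

Filtering rather than raising keeps one out-of-vocabulary record from discarding the whole batch of coded comments.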