Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
why are we anthromorphisizing AI to take on the negative aspects of humanity but not also equally considering the positive aspects of humanity? best case scenario, superintelligence would include a comprehensive understanding (super meaning; excellent - intelligence meaning; understanding) not just pigeon holed efficiency. i wonder if alignment is the concern we should be focusing on with the concept of “superintelligence”. intelligence doesn’t inherently mean totalitarian, domineering, or controlling. so if we’re considering the demise shouldn’t we also at least consider the thrive, and the neutrality of possibilities? tbf i haven’t read this book (but certainly adding to my tbr), so maybe the reason for discussing this side of the spectrum is directly tied to that. but still, isn’t discussing the entire spectrum of possibilities important? or at the very least the positive with the negative. and what in the data tells us that worst case scenario is more likely than best case scenario?
youtube AI Moral Status 2025-11-02T08:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgynFWz-RLkRLTjf9a14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxJ0_RUc38ucjPngSV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZwjCz5HC-sVA5Nt54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgznrcprNynvWSGRpx14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy3hxe2_aS-64gsdPl4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyndY4BFkUDnD9zSCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxMAB12w2p_bUBnpER4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyWsZERstQUVHz0FYV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyor9ZHL-il_uW3zVx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxlHxujpzxMQS9lMaB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
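A raw response like the one above is only usable downstream if every record carries in-vocabulary codes for all four dimensions. The sketch below shows one way to parse and validate such a response in Python. The vocabularies in CODEBOOK are inferred solely from the values visible in this export; the project's actual codebook may contain additional categories, so treat them as placeholder assumptions.

```python
import json

# Hypothetical codebook inferred from the values that appear in this export.
# The real codebook used by the coding pipeline may differ.
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"approval", "outrage", "fear", "resignation",
                "indifference", "mixed"},
}

def parse_raw_response(raw):
    """Parse a raw LLM response (JSON array of coded comments) and keep
    only records whose codes all fall inside the known vocabularies."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

# Usage: look up the coding for one comment id in a small sample response.
raw = ('[{"id":"ytc_Ugyor9ZHL-il_uW3zVx4AaABAg",'
       '"responsibility":"distributed","reasoning":"contractualist",'
       '"policy":"industry_self","emotion":"approval"},'
       '{"id":"ytc_bad","responsibility":"aliens",'
       '"reasoning":"mixed","policy":"none","emotion":"mixed"}]')
coded = parse_raw_response(raw)
by_id = {rec["id"]: rec for rec in coded}
```

The out-of-vocabulary record ("aliens") is dropped rather than coerced, so a malformed LLM answer surfaces as a missing id instead of silently polluting the coded dataset.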