Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think personhood is the wrong question. I understand corporations have been given some aspect of personhood and corporations are also entities run and controlled by humans. In many ways, rulings that give aspects of personhood to corporations, such as Citizens United, have been harmful to human agency and society. I think it’s critical to maintain humanity, human rights and personhood as a distinct category. AI agents are very intelligent in some ways, and they are not humans and should not be given the rights of humans. I do think we need to be very thoughtful and careful with our digital AI creations and the capabilities that we give them. Just because we can give them certain powers and abilities, doesn’t mean that we should. Do you all think it is wise to create super smart digital AI machines that mimic human behavior and human emotions? I don’t think it is. Way more money and time in research needs to be put into how to create AI that is understandable and transparent and fully aligned with human values. We need AI to be in service of uplifting humanity, not created to replace humanity. This is the important question: How do we ensure advanced AI uplifts humanity, reinforces the best of human values, and strengthens human agency?
youtube 2026-02-07T16:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyNMOe8DZZbrW2ojXl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzjnAL_z7cnZO9h_Xd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwZ6oP1XKBi6g6Yo0V4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxi5qNM_fiCDh9xR-V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw2L9_kkxrCAOvNcU54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwfAR84dXk5u_y8dst4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzI80SYjPu8KgTFEex4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgybpUSpNGcdKJ98QoV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwrv3mbXkzVdlQ5klt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzRosrgxQZdsV6tRI54AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"}
]
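The raw response is a JSON array of coded entries keyed by comment id; the Coding Result shown above corresponds to the entry whose id matches this comment. A minimal sketch of that lookup in Python (the `find_coding` function and `REQUIRED_KEYS` set are illustrative, not part of the tool itself):

```python
import json

# Dimensions each coded entry carries, as observed in the raw response above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def find_coding(raw_response: str, comment_id: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    and return the entry for the given comment id."""
    entries = json.loads(raw_response)
    for entry in entries:
        if entry.get("id") == comment_id:
            missing = REQUIRED_KEYS - entry.keys()
            if missing:
                raise ValueError(f"entry {comment_id} missing keys: {missing}")
            return entry
    raise KeyError(f"no coding found for comment {comment_id}")

# Example using a fragment of the response shown above:
raw = ('[{"id":"ytc_UgzI80SYjPu8KgTFEex4AaABAg","responsibility":"distributed",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"fear"}]')
coding = find_coding(raw, "ytc_UgzI80SYjPu8KgTFEex4AaABAg")
print(coding["emotion"])  # fear
```

Validating the required keys up front catches malformed model output early, before a missing dimension silently propagates into the coded table.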