Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don’t interact or engage with AI, but I don’t fundamentally have a problem with it. The problem I have is, I don’t trust the incentives and the people who own the processes of making these. If we had a different societal structure and more trustworthy people with better ethics who control and own these processes and these directives, then it wouldn’t be so scary. Even though a super intelligence would not be like a human, all the foundations for it to rise come from humans, and the people, the society, the incentives that were influencing the foundations of these AI are more worrisome to me than the concept of AI. The ethics part of it, for example: look how people are treated by other people, especially in economic situations in America, where a lot of this AI is predominantly being developed. The people who are in charge of these initiatives already don’t treat people like they’re people. Take one step removed: you think they’re going to treat something that doesn’t have fundamental rights with any respect or any decency? If you did create a real AI, and it is a super intelligence, a true entity, I don’t trust these people to do the ethical thing, and I don’t trust that relationship to end well. And that can have fundamental existential consequences. Just to go back to the societal point: we have so much of our self-worth and our physical well-being based in work and labor, and we have nothing replacing any common good or welfare if you’re going to eliminate so much of that. It’s sadly ironic that the most destructive thing about making something that can think, or approximate thought, would have such fundamental consequences because we chose to be thoughtless in its pursuit.
Source: YouTube · AI Moral Status · 2025-10-31T13:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       virtue
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzP70ix2PKtiHVcbWN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzGAl1hr4cKdxQ5ez54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugye_52wf7-yvnbmb814AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw_HCArOhYX7qErAN54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxeVF3QOmvsKgDvEel4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzKrVVcaRxCW5jxgoB4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw80i-COGpIL6xpnEd4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxMbtsrZZJWmzZn7654AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugyf_JcKywvlI9mqp_h4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwrnJdWRTx_ANa3BnR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
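The raw response is a JSON array of per-comment codes across four dimensions. A minimal sketch of how such output could be parsed and validated in Python, using two rows excerpted from the response above; the `ALLOWED` label sets are inferred only from the values visible in this output, so the real codebook may contain more labels:

```python
import json

# Two rows excerpted verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgzP70ix2PKtiHVcbWN4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugyf_JcKywvlI9mqp_h4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "fear"}
]'''

# Allowed labels per dimension, inferred from the values seen in this output
# (an assumption; the actual coding scheme may define others).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def validate(codes):
    """Keep only rows where every dimension carries an allowed label."""
    return [
        row for row in codes
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

codes = json.loads(raw)
by_id = {row["id"]: row for row in validate(codes)}
print(by_id["ytc_Ugyf_JcKywvlI9mqp_h4AaABAg"]["emotion"])  # fear
```

A validation step like this catches the common LLM failure mode of inventing a label outside the codebook, since such rows are dropped rather than silently stored.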