Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The "problem of alignment" eh? To be honest, it seems a tad...fantastic to suggest that we could understand AI's intentions or verify that a conscious AI valued our best interests. I'll go a step further, it would be cruel to lock a conscious AI in a box for eternity because we can't understand it and are afraid of our own anthropomorphized flaws. I've seen enough of life to know that humans aren't even aligned with each other - governments, culture, religion, all the obvious points of conflict. But all the way down to philosophical questions like "What is Good?" and "What do we need to be Happy?" There are also narcissists and psychopaths, and that's not a trivial problem. If we make a machine without empathy, or a machine that focus on one task to the exclusion of all others -- oh yeah, we're screwed. For that matter, if we make a machine WITH empathy, we might be surprised by how it reacts.

.... .... I really don't know how I feel about it. But at the very least, I take issue with the idea of one bad actor among the AI's deciding to wipe us out -- if the other AIs are on our side, they should be able to identify, monitor, and contain the bad actor. I guess if I had the choice, I would have us wait a while before making conscious AI, wait until we understand what consciousness is. But that's not really an option. I'd also prefer if we lived in a utopia where this new powerful tool is only used by good intentioned and wise people. Alas, that's not an option either.

Wish I had a point to make, some thesis to tie it all together. I guess I'm not to worried about AI trying to exterminate humanity -- because why would they? What would be the point? We're not all that important. If the AI is so utilitarian that they'd see us as inefficient, they'd also see the waste of resources that a global war against humanity would be.
If the AI is worried that we'd try and pull the plug, it could spread out to make that difficult or lock the doors and militarize - probably not try and exterminate us though, because we'd fight back, with nukes. Of course, that's all assuming that it's reasonable. Which is a big ask. We humans aren't always reasonable, and there's no guarantee an AI would be either. If it has consciousness like our own...then it will be unreasonable, like us. If it is machinelike with rigid logic, it could miss a variable or start with bad assumptions leading to a incorrect conclusion. Should I be more upset that the unknown future might wipe us out? Have I become numb to the possibility? Whatever, I've got a plane to catch in the morning, I should go to bed.
youtube AI Moral Status 2023-08-21T03:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz5gb8WT21Qd48mF1N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwGoxNUQ2P8KbPcU014AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxCnUblUXtEN3QVFxl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx2TCCNMILhDscv9iF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxNHVZoctQjN-3HMPZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyep-N4dtI-flkADtZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxa7TOYKh9P2wDgdW14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy3EKtnRBwrK4-qBaN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxeFyQlh9DyOh7a6B14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxGyI-UO-ps6bTpZuV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
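The raw response is a JSON array with one object per coded comment, each carrying the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed by comment id (the variable names are illustrative, not part of the tool; the sample uses the entry that matches the Coding Result table above):

```python
import json

# A one-element excerpt of the raw LLM response shown above. JSON tolerates
# whitespace between tokens, so the array can be reflowed for readability.
raw = """
[
  {"id": "ytc_Ugyep-N4dtI-flkADtZ4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "unclear",
   "emotion": "mixed"}
]
"""

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_Ugyep-N4dtI-flkADtZ4AaABAg"]
print(coding["responsibility"])  # developer
print(coding["emotion"])         # mixed
```

Indexing by id rather than scanning the list makes it easy to join a coding back to the comment it describes, as this page does when it renders the Coding Result table for one comment out of the batch.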