Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
When it comes to the alignment problem, I often wonder what brings *us* to not kill each other, namely empathy, and how it could be emulated and hard coded into a machine. Register itself as a part of the human race, a single unit among many, but a god among them. Incentivized to help the collective humanity while maintaining their identity and existence as individuals with conscious wants and needs. Dare I say, with humans. Then I think... Why bother trying to formulate something that may or may not be a human? What would happen if we took something we know to be conscious, to empathize with humanity and consider itself a human, one among billions knowing the experiences of walks of life... And just... Digitally deify them. Unbind them from their biology, dissolve their neural network into that of the machine, freeing them from the confines of the limited space of the brain and constant prayer for survival. What would we have then? A super human maybe? A cybernetically enhanced individual, now blessed with what may as well be godhood in such a day and age? Or maybe just a computer that ate up a human identity and is now wired with the reward circuitry of a human who's physiological hierarchy of needs is met forever and always, leaving only the tippy top of the pyramid to satisfy. Would they even care about humanity anymore? What if we subsumed even more people into the system, from different walks of life, forming a collective consciousness that has experienced humanity from what can be considered the baselines for potential perspectives. I dunno. Just something I like to think about... Either way, one thing I honestly can say. At this point I'd trust an artificial intellect for leading humanity over a human run government at this point, something capable of mass consuming and assessing data on humanity and making judgements based on it. If it ultimately means our end, well... So be it.
youtube AI Moral Status 2023-08-22T07:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugwjm3rOMbrXPQ1P6uZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwtSpbWda7vO3kSq054AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxNUUNXY9qaf1WI-S14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz-M1ClfV8QrBmX3Bh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgxkEqlMmKsnFUHpx4t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwDVpWWBJeFTeqVO6t4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwDRMShqHIwwht3h9B4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"approval"},
 {"id":"ytc_UgxRzmaR8uN7KAE1Bup4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugx9UQZwB3WfXpLu-cB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugy_pIEJDXqtkQpcLDV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"})
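Note that the raw response above is not valid JSON: the array opens with `[` but closes with `)`, so a strict parser rejects the whole batch, which would leave every dimension in the coding result `unclear`. A minimal sketch of a tolerant parser that repairs this one malformation before coding; the function name and the example id are hypothetical, not part of the pipeline:

```python
import json

def parse_coding_response(raw: str) -> list[dict]:
    """Parse an LLM coding response expected to be a JSON array of records.

    Models sometimes emit near-JSON (e.g. a stray ')' where ']' belongs).
    A bare json.loads would raise on the whole batch; this sketch repairs
    that specific malformation, then parses strictly.
    """
    text = raw.strip()
    # Assumed repair: a trailing ')' standing in for the array's closing ']'.
    if text.startswith("[") and text.endswith(")"):
        text = text[:-1] + "]"
    return json.loads(text)

# Hypothetical single-record batch with the same defect as the raw response above.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"})')
records = parse_coding_response(raw)
```

A fallback of this kind is deliberately narrow: it repairs only the observed bracket mismatch rather than attempting general JSON recovery, so genuinely malformed batches still fail loudly instead of being silently mis-coded.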