Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A question I also find interesting and terrifying pertaining to this topic is what kinds of feelings should exist. If we can program A.I. to be able to feel pain, should we? And what kinds of limitations should we place on their ability to change themselves? Setting aside the possibility of A.I. exterminating us, to what extent should they retain our human quirks and flaws? Are all the things we find meaningful just things that we value because of the way we work, such as the idea of challenging yourself? Is that truly something with some kind of universal appeal, or is it just something we learn to value because it's necessary to accomplish things? What if A.I., instead of "overcoming" any flaw it has in a "human" way, can just reprogram itself and get rid of whichever part of its personality it finds inconvenient? Would that destroy some kind of beauty about character-building that's worth cherishing? What if it just keeps getting rid of things it finds pointless until it ends up not desiring anything? Does it make sense to "limit" A.I. to think and feel things in human ways? Is this all just a game about trying to create as many entities as possible who see the universe as we do? Or is there some way of thinking and being that is somehow maximally appealing to any sufficiently smart thinking being? If A.I. can feel things like happiness, as well as confusion and sadness, that'd make them more relatable, maybe even predictable to us; it would make them nicer to interact with. We might even program them to feel that they want to behave human-like, and not desire to change this. Is that a way of "grounding" them in emotional capacity for their own good, so they can value humanity like we can? Is it for our sake, so they'll feel more interactable and not harm our own self-view? Is it a way of shackling them?
What if we merge with machines and we do cool things at first, but then we decide to meld our consciousness together and slowly, we reprogram our super-consciousness to get rid of our old, antiquated humanity, bit by bit, until we decide the logical thing to do is to do nothing? Have we lost something, even if we don't think we have at each step along the way? Or is it just our current way of thinking that makes that prospect seem like a loss?
YouTube · AI Moral Status · 2017-02-24T10:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugh0c4l23P6EYHgCoAEC", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgizmdfK6BHeengCoAEC", "responsibility": "company",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "mixed"},
  {"id": "ytc_UgiS9-lmbu6FW3gCoAEC", "responsibility": "developer", "reasoning": "mixed",            "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Uggg7_XeDnLEkXgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgjlRCoviv8l7XgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgiAi7l2Sx79l3gCoAEC", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgiM-TwLKWJZ13gCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UghE_QrjN0MWgHgCoAEC", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_Ugi_n0NFADJiGngCoAEC", "responsibility": "none",      "reasoning": "deontological",    "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_Uggf753UlzgQ93gCoAEC", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"}
]
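A raw batch like the one above can be sanity-checked before ingestion by parsing the JSON and verifying each record against the coding dimensions. The sketch below is a minimal Python validator; the allowed category values are inferred only from the labels visible in this response and may not cover the full codebook:

```python
import json

# Allowed values per dimension, inferred from the response above.
# The real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of validation errors."""
    errors = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    for i, rec in enumerate(records):
        if "id" not in rec:
            errors.append(f"record {i}: missing id")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"record {i} ({rec.get('id', '?')}): bad {dim}={value!r}")
    return errors
```

An empty return list means every record parsed and every dimension carried a known label; anything else pinpoints which record and dimension to re-prompt or hand-code.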