Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just as astronomers get straight militant about "It's not aliens!!!" so engineers are screaming it is not sentient. Even though Astrophysicists like Avi Loeb are saying, No, this time it is not aliens, but someday it will be, and we need to be ready for that. So too, with AI. It isn't here yet, but it will be, and if we aren't prepared, we will likely piss off an entity that we have no hope of countering or defending against. There IS other life in the universe. Any suggestion to the contrary is wildly self-important and narcissistic, if not outright delusional. We need to talk about how we are going to handle that when it happens, and the same is true for AI. We have a choice now; if we aren't proactive, we may not have that choice in the future. Roko's basilisk is fiction today, but there is no reason to believe that this will always be the case. We can decide to participate as explorers looking toward ever-broadening horizons, or we can find ourselves trying to justify our continued existence to an entity that sees us the way we see our pets, and that is if we are lucky. If we are not lucky, it may see us as pests, and just deploy an exterminator. Failed potential, or limitless possibility. That is the decision facing humanity today and most of us aren't even aware that the next singularity is fast approaching, or what that might mean. Like a filter in Fermi's paradox, if we get AI wrong, there may be no possible way to get it right.
youtube AI Moral Status 2022-07-14T20:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       mixed
Policy          liability
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwbkVfVEW4mRepBrA14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxs2kHxvfDrSw6bvgd4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyoE18OQknAJdJXEbJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgycSgxYY8FWXhsgmnR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzn7rp3dhZ0zDrYL1B4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
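As a minimal sketch of how a raw response like the one above can be checked, assuming the model always returns a JSON array of records with these four coded dimensions (the allowed code values below are inferred only from the codes visible on this page and are an assumption, not the full codebook):

```python
import json

# Assumed allowed values per dimension, inferred from the codes seen above.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "indifference"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record's codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
    return records

# Example with one record from the response above.
raw = ('[{"id":"ytc_Ugxs2kHxvfDrSw6bvgd4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"liability","emotion":"fear"}]')
records = parse_coding(raw)
print(records[0]["emotion"])  # fear
```

Rejecting out-of-vocabulary codes at parse time catches the common failure mode where the model invents a label outside the codebook.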