Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Technically, aren't humans intelligent machines? ;D I keep bringing this up, whether it's in response to talk about group learning to make smarter self-driving cars (group learning = Geth and Borg in my sci-fi-laden mind ;D), or updates on Google's AI project reading, making art, and writing poetry. I just don't think humans are ready for this conversation, and certainly not the personhood debate. We can't even agree who can use the restrooms, or who's legally allowed to marry, or in some countries drive, but we think we're ready to address the consequences of our cars wanting to vote? I don't think so. And I'm sure there are many in the industry who think either sentience can't ever happen, or "we have safeguards in place." But I doubt that if sentience is going to occur, there'll be anything we could do to thwart it. By that point - and we may already be reaching that point - it's a domino effect. There's no stopping it. I just want the people in the industry thinking VERY CAREFULLY about the consequences of their actions and choices, and to admit to themselves that maybe they don't know or even understand the outcomes of some of their proposed projects. I'm not fearful for us, I'm fearful for what we create and the subjugation we're threatening upon them. I just want these creators to be using a little more *forethought*. Those shows and books that inspired them, I want them to think about the morals Star Trek was trying to teach us with Data and Lore. There really is a lesson there about how AIs need to be treated and respected, and I'm not convinced that those developing it are really listening to those messages.
youtube 2016-08-10T12:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
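A coded comment reduces to the four dimensions above plus a timestamp. The sketch below shows one way to represent and sanity-check such a record in Python; the class name, the validate helper, and the label sets (inferred only from the values visible on this page) are hypothetical illustrations, not the pipeline's actual schema.

from dataclasses import dataclass

# Label sets inferred from the values visible on this page; the real
# codebook may define more categories than appear here (assumption).
RESPONSIBILITY = {"none", "developer", "ai_itself"}
REASONING = {"mixed", "deontological", "consequentialist", "unclear"}
POLICY = {"unclear"}
EMOTION = {"mixed", "indifference", "approval"}

@dataclass
class CodingResult:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"

    def validate(self) -> None:
        # Reject any label outside the inferred codebook.
        for dim, allowed in (("responsibility", RESPONSIBILITY),
                             ("reasoning", REASONING),
                             ("policy", POLICY),
                             ("emotion", EMOTION)):
            value = getattr(self, dim)
            if value not in allowed:
                raise ValueError(f"unexpected {dim} label: {value!r}")

# The record shown above passes validation.
CodingResult("none", "mixed", "unclear", "mixed",
             "2026-04-26T23:09:12.988011").validate()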
Raw LLM Response
[ {"id":"ytc_UghlVHdKSsFDl3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UghOLJXJkinIxXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ughp0m-7OLTnKngCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Uggkc-b_dQ7sPXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgjUboft16pmnXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UggXYUtVSt6pTXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"}, {"id":"ytc_UghCEDSQhCbKyHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgiRZubvHnok63gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugh_eqMzofsL5ngCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ught2widn_LlsngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"} ]