Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What’s freaky to me is… am I a language model? I learned to say “hey how’s it going?” when I don’t actually care about how their life is going, because it’s popularly viewed as polite. If someone walks up to me and then I say “what do you want?” I could mean that in a sincere, non-confrontational way but it would come off as rude. I learned this behavior from the (I’m estimating, maybe poorly) hundreds of thousands of interactions I’ve had over my life. And I’ve learned a lot of other things about speech and conversation and socialization from reading, listening, and engaging in conversation. Is this not very similar to if not the same as what the engineers are doing with ChatGPT? And if so… is that what consciousness is? I have a consciousness and also free will (debatable, don’t want to get into it), is that what makes a person? Is the brain just a data replication model that happens to experience free will? What if we hooked up ChatGPT or some other kind of data model to a functioning body, some kind of synthetic skeletal form with musculature (I’m an idiot just throwing words together to get my point across), what if ChatGPT had access to the physical world? As it is now, it would only activate when prompted (I think), so does that still make it inhuman in a way? It could be argued that it is conscious, but it has no… life? If we can find a way to call that synthetic being alive, then is it differentiable from animals? Humans? I’m exhausted
Source: youtube · AI Moral Status · 2024-10-31T02:5…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   user
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
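
For working with these records programmatically, here is a minimal Python sketch of the coding-result schema. It assumes each record carries the four dimensions from the table plus a comment id; the label sets are only the values observed in this batch (the full codebook may define more), and the CodingResult name is illustrative, not part of the actual pipeline.

from dataclasses import dataclass

# Label sets observed in this batch; the real codebook may define more values.
RESPONSIBILITY_LABELS = {"user", "developer", "ai_itself", "unclear"}
REASONING_LABELS = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY_LABELS = {"unclear"}  # only "unclear" appears in this batch
EMOTION_LABELS = {"outrage", "indifference", "mixed"}

@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise if any dimension carries a label outside the observed sets."""
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY_LABELS),
            ("reasoning", self.reasoning, REASONING_LABELS),
            ("policy", self.policy, POLICY_LABELS),
            ("emotion", self.emotion, EMOTION_LABELS),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")

Validating against a closed label set catches model drift, such as a label the prompt never defined, before it contaminates downstream counts.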
Raw LLM Response
[ {"id":"ytc_Ugw2Zd3C09raYfketM14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzuQTtNtx3pb8x43Od4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzVS0KEKKzd0cAx-GF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwGFqKSHNh2bhLIqHF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugw6OEMWKeopjKU_Git4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwGtj2Sw8L3aG7Rp_V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxrXTa6vrJS0ExFjcN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx-byW2ztnmcL9eS_h4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwQ_NmjV0OflrJhdWR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgydPFRku2A2fpJ4j7R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"} ]