Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i feel that he is such a smart man at what he does but should learn more about the programming that goes into machine learning: coming from a programming background i understand (not fully but have a decent understanding) of how AI's such as lamda work and it's not sentient it says "I fear being turned off" but it doesn't know what fear is, or what being turned off means it's just saying words that best fit next in the sentence. His concerns are valid for the future when we start merging AI's for example adding emotional variables that allows in a sense for AI to change it's responses based on emotional values and the ability to understand what turning off is rather than it being a word, so AI rights and sentience is something we should be talking about sooner rather than later but right now it's not a concern and i'm sure there's no public AI that has sentience or cares whether you turn it off or not
youtube AI Moral Status 2022-07-01T05:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugy_9bBill0IcZGbpmp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzYNnnoqR8VqgGtVrl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxgxe51gsRttXmAxlh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwRzAlvI1kzsvezEgZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugybg42LEprQKLx3xP94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]
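The coding result shown above is recovered from the raw response by parsing the JSON array and matching entries back to comments by id. A minimal sketch of that step, assuming the raw response is valid JSON (the two entries below are taken directly from the response above; real responses may need error handling for malformed model output):

```python
import json

# Raw LLM response: a JSON array with one coding object per comment.
raw = '''[
  {"id": "ytc_Ugy_9bBill0IcZGbpmp4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzYNnnoqR8VqgGtVrl4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]'''

codings = json.loads(raw)

# Index by comment id so each coding can be matched back to its comment.
by_id = {c["id"]: c for c in codings}

# Look up the coding for the comment displayed above.
coding = by_id["ytc_UgzYNnnoqR8VqgGtVrl4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# developer consequentialist unclear mixed
```

This matches the Coding Result table above: the second entry in the batch carries the dimensions shown for this comment.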