Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thinking in human-readable language doesn't mean the words mean the same - Our words have emotional attachment, whereas an AI sees it as an interconnection, no social or moral judgements, simply weighted relationships. What makes anyone assume that the words and context we use have the same or even a similar 'meaning' to the words a machine that can't determine meaning, only relationships, uses... Ostensibly, it's an autistic psychopath defining words based on it's supplied environment. I think they appear to match what we expect 80% of the time and that remaining 20% is where extinction threats lie. Correlation is not causation and just because their output appears like a human expression, doesn't mean it was caused by an underlying, subjective emotional state. We can't detect our own biases, so teaching unseen issues to an AI for it to learn relationships of linguistic meaning from is probably a bad and short-sighted idea. I don't believe that human language is sufficiently structured or consistent enough to create an accurate model of the world, biology or physics. As I said before, I think we're effectively coming face-to-face with humanity's blindness to its own nature and creating something which we think represents an oversimplified perception of ourselves and selling it is 'accurate', is dangerously narcissistic, IMHO.
Source: YouTube · AI Moral Status · 2025-10-31T16:5… · 1 like
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugy5DeOvtkWB96avAPN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgzTP_K7K6PNvnS0XWJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwa4tbwGI6A2Ek6cXJ4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwhW0qZHE7ccaOF7094AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgzHTFsHCTpxNZIn7Gp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_Ugwqp6yhsxHPjV8WKxJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwqkBNBWPgM6_Qb_BN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyYY5lSWxRYueZJmnN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwlP4xQ8pMwWadq84l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgwcsIlQtEDndkXABqp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]