Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I similarly cringed at the suggestion that we should "be nice to current AI models in case they get smarter eventually". This suggests these models are any kind of "smart" to begin with. They are complex prediction models and that is it. They do not "learn" so much as "consume and regurgitate". If we as a species ever do create something artificial with "real" intelligence, this philosophy would be important. But suggesting these current LLMs are anything like that is just hype and bs, and trying to scare people from being "mean" to them is an effort to stifle criticism and normalize accepting the shitty wasteful tech this is.
youtube AI Moral Status 2025-10-31T17:5… ♥ 5
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgyMHE8XQN0fwQ1bNyt4AaABAg.AOxJJNjeyQtAPMVJSJyRWl","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxO0y76IG_y8vQ13-l4AaABAg.AOxJ9Zk38F3AOxP0KrMeyk","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgxO0y76IG_y8vQ13-l4AaABAg.AOxJ9Zk38F3AOxWAdG0zzf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxTZAxn1kLgnykaOEp4AaABAg.AOxGmPR8QNJAOxHRJ2lCNb","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgzInCW4859HZVBJ3bt4AaABAg.AOxFl8P73tzAOxWxOLjK3K","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzInCW4859HZVBJ3bt4AaABAg.AOxFl8P73tzAOxYnZC-MrV","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzInCW4859HZVBJ3bt4AaABAg.AOxFl8P73tzAOxby1dLRUD","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwyI_Sn7LYvDBW6_fh4AaABAg.AOx9bK-ts7NAOxFeXf4k5t","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwyI_Sn7LYvDBW6_fh4AaABAg.AOx9bK-ts7NAOxFxUdDs6e","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgwSAEAsiRw5R_WOT-x4AaABAg.AOx7cAwUPknAP0iO4uxNYE","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
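The raw response above is a plain JSON array, so matching a coded comment back to its dimensions is a dictionary lookup on the `id` field. A minimal sketch of that inspection step (using a two-entry excerpt of the response shown above; the variable names are illustrative, not part of the tool):

```python
import json

# Excerpt of the raw LLM response: a JSON array of coding objects, one per comment.
raw_response = """
[
  {"id":"ytr_UgyMHE8XQN0fwQ1bNyt4AaABAg.AOxJJNjeyQtAPMVJSJyRWl","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxO0y76IG_y8vQ13-l4AaABAg.AOxJ9Zk38F3AOxP0KrMeyk","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
"""

# Index the codings by comment id for direct lookup.
codings = {c["id"]: c for c in json.loads(raw_response)}

# Look up the coding for the comment shown in this section.
coded = codings["ytr_UgxO0y76IG_y8vQ13-l4AaABAg.AOxJ9Zk38F3AOxP0KrMeyk"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → ai_itself deontological none outrage
```

The same lookup works against the full ten-entry array; any id present in the batch resolves to its four coded dimensions.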