Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@matthewhornbostel9889 Ironically, in your best case scenario, we would be effectively demoted to something like animal status with all our choices being chosen by a machine... In human terms we can say AI has no emotions, no feelings, but it is goal driven. What difference is there if AI does not have these human measures of satisfaction or frustration when it can simply apply some value to how effectively a goal is achieved? I hear talk of 'guardrails' but they impede performance and you can be sure they will be either completely left out or flimsy at best. I feel like this is getting to point where the problems are so complex and interdependent that AI is the only intelligence with the capability to fully grasp these problems and provide solutions - but that seems unwise and will just accelerate humanities loss of control. We have taught this 'infant' everything human, including all our methods of gaining our own goals, our megalomania and carelessness - how and why should we expect anything less than total disruption? Maybe that IS a good thing because currently we seem to be walking backwards into a variety of dangerous places - we are intelligent enough to be at great risk.
Source: youtube · AI Moral Status · 2025-04-28T06:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgyfF6QXO6jqVjh4g514AaABAg.AHPFSMBQX0UAHQqopQaaCO","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgytvmpaFgzNIhTEXDN4AaABAg.AHP1jTbDjReAHPN40-GLCn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgzrRiJFzGw99pmTaR94AaABAg.AHOxL6yiUfrAHPUIRabcnA","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwWTHgl7cVfx9MFtJN4AaABAg.AHOwUUhSKZLAHS1ad4-F87","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgwWTHgl7cVfx9MFtJN4AaABAg.AHOwUUhSKZLAHSF0FKEVhx","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgwHxYgbB-TfOixEaOB4AaABAg.AHOwJHnZEj1AHR9bWKKS-j","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgwHxYgbB-TfOixEaOB4AaABAg.AHOwJHnZEj1AJ-iD0fv6Rj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgxBM-JXleXn2KDyDdB4AaABAg.AHOtBy65WLMAHPVXvOVzRI","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgyVD7lcuxhgf1XIFfZ4AaABAg.AHOpa9-6s9MAHPP4GtmO6Q","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_Ugy01BtmXM0LOPMrkMF4AaABAg.AHOoA0P4SEFAHQHUX5Ya5k","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
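A minimal sketch of how such a raw response can be turned back into per-comment codings: parse the JSON array and index the records by their `id`, then look up the record whose id matches the comment shown in the Coding Result panel. This assumes the model returned valid JSON; the variable names are illustrative, and the raw response is abridged here to two of the ten records above.

```python
import json

# Raw LLM response: a JSON array of coded records, one per comment.
# Abridged to two of the ten records shown above for brevity.
raw_response = """
[
  {"id":"ytr_UgwWTHgl7cVfx9MFtJN4AaABAg.AHOwUUhSKZLAHSF0FKEVhx",
   "responsibility":"ai_itself","reasoning":"consequentialist",
   "policy":"unclear","emotion":"resignation"},
  {"id":"ytr_Ugy01BtmXM0LOPMrkMF4AaABAg.AHOoA0P4SEFAHQHUX5Ya5k",
   "responsibility":"unclear","reasoning":"unclear",
   "policy":"unclear","emotion":"indifference"}
]
"""

records = json.loads(raw_response)
by_id = {r["id"]: r for r in records}  # index coded records by comment id

# Look up the coding for the comment shown in the Coding Result panel.
coding = by_id["ytr_UgwWTHgl7cVfx9MFtJN4AaABAg.AHOwUUhSKZLAHSF0FKEVhx"]
print(coding["responsibility"], coding["emotion"])  # ai_itself resignation
```

Indexing by id rather than by position keeps the lookup robust when the model returns records in a different order than the comments were sent.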