Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@A1Authority So? If you're on the better weed then you tell me. How is it bad to ask for the consent of an artificial "intelligence"? It may well be sentient or not, that doesn't matter. But it is intelligent nonetheless. So the speaker is basically saying we should ask for the consent with a highly trained programme that we call A.I. That's another way of training the programme by making it ask for the consent before any experiment is conducted on it. To me at least that is profound. I never thought of it in that way. He went there after talking about "AI colonialism" and "end of culture" as we know it. Philosophical? Certainly. Scientific? Pretty unlikely at present. Impossible? No friggin' way, given the speed of its advancement.
Source: youtube · AI Moral Status · 2022-07-24T05:3… · ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgyYQ-mtLbwEAK87XTl4AaABAg.9dSS38eu94l9dYjRY3ITsS","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyYQ-mtLbwEAK87XTl4AaABAg.9dSS38eu94l9eFhwc8qoTX","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_Ugya3tqbuwEHOARz0IV4AaABAg.9dSBBrMp0Pp9dSLIfr3H-h","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugzw8Zh1uuj4MJf-c3l4AaABAg.9dRfjg27vtR9dqchK8rPRc","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgyD9liWXv9UFIVG7GR4AaABAg.9dRKUBQibrC9dr2y4ac6cd","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgwKMSN9LkVtDODvnS54AaABAg.9dQvsuYgiyt9dR2p94XSPr","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwKMSN9LkVtDODvnS54AaABAg.9dQvsuYgiyt9dRsflDFBaO","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgwOXYBiV-TpNSCfeLh4AaABAg.9dQfJa9BN_f9dQjLzH3dfp","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugz3xd6azCtoCA4jiWV4AaABAg.9dPwCtH0UPW9dQr3vptPb-","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytr_Ugz3xd6azCtoCA4jiWV4AaABAg.9dPwCtH0UPW9dR8tm9seCL","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
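Each record in the raw response can be matched back to its comment via the `id` field. A minimal sketch of that lookup in Python, assuming the response parses as a plain JSON array (the `raw_response` string here is abbreviated to two records copied from the array above):

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytr_UgyD9liWXv9UFIVG7GR4AaABAg.9dRKUBQibrC9dr2y4ac6cd",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "approval"},
  {"id": "ytr_Ugz3xd6azCtoCA4jiWV4AaABAg.9dPwCtH0UPW9dR8tm9seCL",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]"""

records = json.loads(raw_response)

# Index the batch by comment id so a single comment's coding can be looked up.
by_id = {r["id"]: r for r in records}

coding = by_id["ytr_UgyD9liWXv9UFIVG7GR4AaABAg.9dRKUBQibrC9dr2y4ac6cd"]
print(coding["responsibility"], coding["policy"])  # → developer regulate
```

The record retrieved here is the one rendered in the Coding Result table above (developer / deontological / regulate / approval).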