Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"before thinking rationally and with empathy " I think if we thought rationally we would have to first question the basic premise of AI consciousness. First, we don't even know the first thing about human consciousness. Human brains are chemical based. Computers are not. So the first question is, since you can't pour chemicals into your computer, what would its consciousness be made of? They tell us silicon. OK, let's go with that. What proof is there silicon can substitute for the complex composition of chemical interactions in the brain? Has anyone demonstrated a silicon analog to chemicals? Even if possible in theory, is there a blueprint how to arrange silicon to mimic consciousness? Do we even know the first thing what we are talking about? I'm not denying the possibility of AI consciousness in the very distant future. But in the present context, and without any tangible demonstration, it's pure speculation. It's not even that. It's pure fantasy. As with so many topics, people think the hard work is already understood. But it's not. The going theory is that speed = consciousness. The more computations per second = self-awareness. That's pure speculation, at best. A non-sequitur at worst. So why is this fantastical and even nonsensical proposition treated as practically inevitable? cheers
youtube · AI Moral Status · 2023-05-14T03:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
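For anyone scripting against these records, the table above maps directly onto a small typed structure. A minimal sketch in Python follows; the class name `CodingResult` and its field names are illustrative assumptions mirroring the dimension labels, not identifiers from the pipeline itself:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above."""
    responsibility: str   # e.g. "none", "developer", "government"
    reasoning: str        # e.g. "consequentialist", "deontological", "virtue"
    policy: str           # e.g. "none", "regulate", "liability"
    emotion: str          # e.g. "resignation", "fear", "outrage"
    coded_at: datetime    # when the coding was produced

# The record shown in the table above.
example = CodingResult(
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="resignation",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:59.937377"),
)
```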
Raw LLM Response
[ {"id":"ytr_UgzwTRaYUjVnwDc3lsN4AaABAg.9up5lwPGrjs9vVH-zBG6Ud","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_UgzEFg39EsPysHNc5St4AaABAg.9rAMnTyDjNgA3uKRdIO6eJ","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytr_UgxQccf-8B4TUrqGyMl4AaABAg.9pWVj0tUQ659q14WXqm5ph","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugw-oRlWGSYn-QvXvJd4AaABAg.9pR78enMEBd9pftDObyFG9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_UgyGhxmohp6NLWlX-y14AaABAg.9p2EKfuLRIi9psFg5fP4NB","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytr_UgwzrK8GJXk6qhNKWaF4AaABAg.9l0fzL_YZ989xPLp1Fc2bg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgzuR-nEuxysUKMP7Z54AaABAg.9kcCNArsIT-9keuNEBXZIp","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"ytr_UgyFIflQZQnAm-bPG2R4AaABAg.9jyqnTNMsU_9o7cFF4aHkU","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"}, {"id":"ytr_UgwwSRK1vI-yuESCN7V4AaABAg.9jlOW_XL-Z49kN3L6nup0c","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytr_UgzzUkbf2TBm2KpnI9R4AaABAg.9jlDr304qci9o7b8IS77hO","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]