Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel ideas like AI and mind uploading are very much getting ahead of themselves, to the point of being quasi-religious assumptions. We currently have no reason to suppose that consciousness is anything other than an emergent property of our specific chemistry. Can it be simulated using other chemistry? Well... why would we assume it can? We can't simulate a rock in such a way as to confer real-world weight. No property of the simulated rock translates to any similar real-world property. It's just zips and zaps, a completely different set of chemical reactions that we might choose to read via another set of reactions to translate into another different thing (say, an image) that reminds us of a rock (a completely unrelated real-world object with different properties to the network of blips and blaps which comprise the actual existing object of the simulation/interpretation system). it doesn't have weight or shape or colour. Why should the property of consciousness be different? Why would we apply these magical traits to consciousness just because we don't understand it well? That doesn't necessarily mean there's no risk, though. It doesn't have to actually be conscious, just to accurately imitate it. And even if it were established that there is definitely, provably no way for consciousness to arise from these programs, well, again, it's a quasi-religious idea in the first place. People don't need strong reasons to believe things. Plenty of people will hear 'no seriously, here's the science, it's just bleeps and bloops all the way down' and believe it's conscious anyway, and that faith will influence the politics around the treatment of the machines. We'll behave as though AIs possess some inner life, and so will they, whether they do or not.
youtube · AI Moral Status · 2023-08-23T15:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
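
The coded dimensions form a small closed vocabulary. The label sets below include only the values that actually appear in this record and in the batch response further down; the full codebook may define more, so treat the sets as a lower bound. A minimal sketch, in Python, of one way to represent and validate a single coding result under that assumption:

```python
from dataclasses import dataclass

# Label sets observed in this view only; the pipeline's full codebook
# is not shown here and may be larger.
RESPONSIBILITY = {"none", "distributed", "ai_itself", "user"}
REASONING = {"unclear", "mixed", "deontological", "consequentialist"}
POLICY = {"unclear", "liability", "none"}
EMOTION = {"indifference", "mixed", "approval", "fear", "resignation"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any dimension carries an unobserved label."""
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```

For the record above, `CodingResult("ytc_Ugz3tH-DOe8RzE4p-OZ4AaABAg", "none", "deontological", "none", "resignation").validate()` passes without error.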
Raw LLM Response
[ {"id":"ytc_UgzKlY65SXcKHhNEK9N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwagoUoJ-EBSEC3cvR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwKv_aqEvTReZIuXaZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzrTtzXAF1e9NjFOgx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgwG4GbfxFGVgjcX5-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxHgROZdDg89kDFoet4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxS4wegPYx9Pli_bFZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyFUj6-LR78pHUQcMF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugz3tH-DOe8RzE4p-OZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzQQsDrBswUTBhsfNt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]