Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the problem with the conversation around AI is that there is so much 'fanciful' sci fi nonsense that has proliferated in the public consciousness through entertainment media, and click driven online journalism. Eyeballs are valuable to both, balanced viewpoints are not as stimulating as sensationalism. So there is this tendency to anthropomorphize AI like Ultron or some Star Trek movie. This is not accurate, AI does not have emotions, feel pain or have the biological drives that we do, and which we imprint on our movie villains. A few years ago everyone was talking about a tipping point in AI, where it would cross some threshold of increasing power, or increasing intelligence. The upshot of which is that it becomes very powerful or too powerful to be gotten rid of? Sounds like a plot device not science. I'm more concerned with the kind of tasks we are giving our AI. The kind of skills they are honing. If a super AI would be smarter than humans, do we want to use AI in espionage or conventional warfare in the time period leading up to that tipping point. Do we want to use AI in securities trading for the same reason? After all an AI has no worry of crashing economies. It can have Asimovs laws programmed into it. But if the whole world's currency is devalued overnight, does that qualify as harm?
Source: YouTube · AI Moral Status · 2022-07-05T22:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         resignation
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgwpTLFZ9mJDZ4g8b_R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz8C3bWcgkEbIilboZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxBQg_hO1QV4ypU06x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx84ldVWwF2XLmnaqB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyp4fKuJXdbA0jfAXB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
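A minimal sketch of how the coded dimensions above relate to the raw response: the model returns one JSON object per comment in a batch, keyed by comment id, so the displayed Coding Result is simply the object whose id matches this comment. The ids and field values below are taken verbatim from the response shown above; the lookup logic itself is an illustrative assumption, not necessarily how the tool does it.

```python
import json

# Raw batch response exactly as returned by the model (copied from above).
raw = '''[
  {"id":"ytc_UgwpTLFZ9mJDZ4g8b_R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz8C3bWcgkEbIilboZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxBQg_hO1QV4ypU06x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx84ldVWwF2XLmnaqB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyp4fKuJXdbA0jfAXB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

rows = json.loads(raw)

# Select the coding for this comment by its id (the third object in the batch).
coded = next(r for r in rows if r["id"] == "ytc_UgxBQg_hO1QV4ypU06x4AaABAg")
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → none mixed none resignation
```

The printed values match the Coding Result table above, confirming that each row of the table is one field of the matching JSON object.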