Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Being against autonomous weapons is fine. However, these guys have been fearmongering about AI for a very long time. Now, Elon Musk and Stephen Hawking _are_ obviously very intelligent, but they don't have any fucking relevant experience in the field, and Steve Wozniak sure as fuck doesn't either, and further, *these people are not infallible*. Don't just blindly make an "appeal to authority", look at what they are actually saying about AI. Now look at what the leading minds *in the actual field* are saying. It's like a chemist making claims about the field of biology... yeah, he's smart, but does he know what he's talking about? When it comes to talking about AI _overall_, there's a lot of rampant speculation and assumptions being made. At least in this case, they are spot on. Completely autonomous weapons is a bad idea. By the way, we are *incredibly* far away from having true Artificial General Intelligence. It's the *people* you have to worry about. These autonomous machines *don't* have fucking feelings or motivations, and they are *not* going to randomly decide to "destroy all humans".
youtube 2015-07-30T04:1… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Coded at        2026-04-26T23:09:12.988011
Emotion         outrage
Raw LLM Response
[
  {"id": "ytc_UghFMR-o-KZsRHgCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UggAVBq5iJ1i43gCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UghLACWF_x1wyngCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugiwr3f7ga7jtXgCoAEC", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugj6PDyAmJ7aLXgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgjKsQW0N7bzF3gCoAEC", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugg1b17BbcoJLHgCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugj7UUUQfs0ErHgCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UghQIAH0cc0IXXgCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgiKYwCM4-FcaHgCoAEC", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
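A minimal sketch of how the raw LLM response above can be parsed into per-comment codes: the JSON is loaded and indexed by comment `id` across the four coding dimensions (responsibility, reasoning, policy, emotion). The variable names here are illustrative assumptions, not part of the tool itself; the excerpt keeps only two of the ten objects for brevity.

```python
import json

# Excerpt of the raw LLM response (two of the ten coded comments).
raw_response = """
[
  {"id": "ytc_UghFMR-o-KZsRHgCoAEC", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UghQIAH0cc0IXXgCoAEC", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]
"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Build a lookup: comment id -> {dimension: value}.
codes = {
    item["id"]: {dim: item[dim] for dim in DIMENSIONS}
    for item in json.loads(raw_response)
}

# The comment coded above resolves to emotion "outrage".
print(codes["ytc_UghQIAH0cc0IXXgCoAEC"]["emotion"])  # outrage
```

Indexing by `id` first makes it cheap to join the model's codes back onto the original comments when reviewing disagreements per dimension.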