Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that in the long term, autonomous weapons will be better and safer than human-wielded weapons. We can program them not to feel fear, and not to kill to protect themselves. We can program them not to be racist. We can program them not to feel a primitive need to feel strong and violently put down anyone who disrespects them. And that is in addition to them not getting tired or bored. People have an abysmal track record when it comes to choosing when to use a weapon. Of course, there are concerns: autonomous weapons are powerful, and if they can be hacked too easily, that is a major problem; and if they are programmed by the same people who think bombing a wedding is justified because someone there once had a conversation with a terrorist, that is a serious problem. But we are already bombing those weddings. We could instead program autonomous weapons to have far higher standards and more respect for human life. The advantages of well-engineered autonomy are so great that it will be worth doing, but it is worth taking things slowly, and being thoughtful and deliberate in how we build and deploy these systems.
youtube 2015-07-31T04:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugh4Y0du4dZ53ngCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugj9S7He4JB8SngCoAEC", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugg35B6pH_UxTngCoAEC", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UggYXi1L41uO13gCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgixNw6ZI8gvlngCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugh6JtGH2uCypngCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgjKo0rMFtp5cHgCoAEC", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UghUF7sRctjnPXgCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgiMWJhekJHrLHgCoAEC", "responsibility": "government", "reasoning": "mixed", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_Ugjoe2j6EEqhTngCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
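A raw response like the one above is just a JSON array of coding records, one per comment. A minimal Python sketch of how such a response could be parsed and validated follows; the `ALLOWED` sets are assumptions inferred only from the values visible in this record, not the full codebook, and `parse_codes` is a hypothetical helper, not part of any coding pipeline shown here.

```python
import json

# Allowed values per coding dimension (assumed from the values seen in this
# record; the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "government", "ai_itself", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"indifference", "fear", "mixed", "approval", "outrage"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with out-of-schema values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Usage with the first record from the raw response above:
raw = ('[{"id":"ytc_Ugh4Y0du4dZ53ngCoAEC","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes[0]["emotion"])  # indifference
```

Validating against a fixed value set catches the most common failure mode of LLM-based coding, the model emitting a label outside the schema, before the record reaches the results table.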