Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You are saying the judgement of the human is the issue. Actually it's just the accuracy. Human soldiers make mistakes all the time; the military has an entire justice system to evaluate their choices and responsibility. And sometimes it's not a question of a mistake but just the reasonableness of choosing one thing versus another in the moment. If an autonomous system were found to make those choices more accurately, then your hallowing of the human conscience goes away. It would be irresponsible NOT to equip our weapons with these systems as safety interlocks. We already know that it is irresponsible to use forward observers anywhere a drone can do a similar job. Is your complaint that machines make different kinds of mistakes than humans and we're used to the ones that humans make? Or that we can figure out why the machine made a mistake, whereas we can never be sure with our soldiers? That seems to be a point in favor of the machines. Self-driving vehicles have gotten us off to a bad start because this is just about the hardest task you can hand to an autonomous system (brain surgery would be far more amenable). The initial intent was to demonstrate that self-driving cars were safer than humans and then the legislation would follow. Designers can be faulted for naivety, but the reputation has stuck.
youtube 2026-03-11T03:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        deontological
Policy           liability
Emotion          indifference

Coded at: 2026-04-26T23:09:12.988011
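For reference, each coded comment can be represented as a small record type. The sketch below is a minimal Python dataclass, assuming the four dimensions shown in the table above plus the comment id from the raw response; the field names mirror the JSON keys, but the type itself is illustrative rather than the tool's actual schema.

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded comment (illustrative schema, mirroring the raw JSON keys)."""
    id: str              # e.g. "ytc_UgxEzJElYC6nD5dQXql4AaABAg"
    responsibility: str  # e.g. "distributed"
    reasoning: str       # e.g. "deontological"
    policy: str          # e.g. "liability"
    emotion: str         # e.g. "indifference"
```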
Raw LLM Response
[{"id":"ytc_Ugz7bxPrZ4ktDOixWyJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxEzJElYC6nD5dQXql4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"indifference"}, {"id":"ytc_Ugwuy7Lbxw3YDCO2Nj14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw5PAX1-qMY73lXbph4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxqP6yWxaMoI4gI9g14AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugy4c5nu3GtJxxzL6C94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugzj0utiZyrJ0AMN-8t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugy2r56Rc9FWdFf6Kv94AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"indifference"}, {"id":"ytc_UgzsT-3UIkdWaIEn2Eh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyUedzy5nSjUnuZr2p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]