Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What disgusts me now after doing some research is how many private (possibly "shell") companies AND universities alike are contributing to the development of drones similar to these (Perdix drone, Dynetics X-61, etc.) as well as their intelligence systems. Like they know darn well what the D.O.D and DARPA intend to do with these: nefarious purposes just like the CIA and FBI in the name of "national security" for the last 80 decades. I think I might write an email, place a call, or make a physical visit to the campus of Texas A&M and asking why they are getting involved. Also, you guys should look more into the D.O.D's current loophole in how (semi)autonomous weapon systems must require "human judgement over the use of force." THANKFULLY, in 2018 a congressional research committee wrote a paper exploiting the fact that thus "human judgement" does not require "manual human control" of the weapon systems, but rather only broader human involvement in decisions of where, when, and why the weapon will be deployed. To me, I interpret that as saying it would be acceptable for an AI-system that "operates by human judgements" (and was coded by human judgement) to make these decisions with not a single person directly involved of carrying out the process, just begging one autonomous program to control another. AND WHO DECDICES WHETHER AN AI SYSTEM TRULY RUNS SIMILAR TO HUMAN JUDGEMENT?! like how is anybody ok with any of this...
youtube AI Harm Incident 2025-09-28T22:5… ♥ 2
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyQFSEL1NwAly6aV794AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxgGW5iNt2VF9esAqJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugx-p_bQ6lHOeYcrQpR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyxdy3YTxKrKVcFLoZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx9-_BX3vf0jCeKJkF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx7jaXoNXp5kc97l0l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyvybHal9W56yW4yvF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgywrRsa8cWMcRWlE4l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyJOA4lT931neyRVzR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw24sSvLNXQcv7gdZR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]