Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The Israeli military says it's using artificial intelligence to select many of its targets in real-time. The military claims that the AI system, named "the Gospel," has helped it to rapidly identify enemy combatants and equipment, while reducing civilian casualties. But critics warn the system is unproven at best — and at worst, providing a technological justification for the killing of thousands of Palestinian civilians. The pace is astonishing: In the wake of the attacks by Hamas-led militants on October 7, Israeli forces have struck more than 22,000 targets inside Gaza, a small strip of land along the Mediterranean coast. "It appears to be an attack aimed at maximum devastation of the Gaza Strip," says Lucy Suchman, an anthropologist and professor emeritus at Lancaster University in England who studies military technology. If the AI system is really working as claimed by Israel's military, "how do you explain that?" she asks. Other experts question whether any AI can take on a job as consequential as targeting humans on the battlefield. "AI algorithms are notoriously flawed with high error rates observed across applications that require precision, accuracy, and safety," warns Heidy Khlaaf, Engineering Director of AI Assurance at Trail of Bits, a technology security firm.
youtube · AI Governance · 2025-06-19T20:0…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxYvMUTFQ2XmoWk9Bh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwmMT_nwiONEluPdSZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwIWNwLH_X2HuLSPll4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwMVNhw03KpoHqkDHV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgycLoa8Ct1lJE5bRjt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyppQ81apmWUE1ZI5R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyRbVspU_QKaiMxQV54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxBb3HjyJj24OhmoiZ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxEDdkY9CTWhm453jl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx2uFuKNaDY3J_P2Dl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
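The coding table above is derived from one entry in the raw JSON array, keyed by the comment's id. A minimal Python sketch of that cross-check (the `lookup` helper and the truncated `raw` string are illustrative, not part of the tool; field names match the raw response shown above):

```python
import json

# Illustrative excerpt of the raw LLM response above, truncated to the
# entry that produced the coding table for this comment.
raw = """
[
  {"id": "ytc_UgycLoa8Ct1lJE5bRjt4AaABAg",
   "responsibility": "government",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "fear"}
]
"""

def lookup(raw_json: str, comment_id: str) -> dict:
    """Parse the raw LLM response and return the coding for one comment id."""
    rows = json.loads(raw_json)
    by_id = {row["id"]: row for row in rows}
    return by_id[comment_id]

coding = lookup(raw, "ytc_UgycLoa8Ct1lJE5bRjt4AaABAg")
print(coding["responsibility"], coding["emotion"])  # government fear
```

The same lookup works on the full array: each object carries the four coded dimensions (responsibility, reasoning, policy, emotion) plus the comment id.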