Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In my opinion, we are already living the most nightmare scenario of AI with Israel's use of AI to kill Palestinian families. I was literally shocked and horrified when I heard the report about Israel's two AI programs used to locate and kill alleged 'Hamas' militants. Israel uses one AI program to identify whether Palestinian citizens have links to Hamas or not. It is supposed to identify Hamas militants. The program is known to have flaws and my flag a person of being Hamas if that person has loose communication with another person already flagged as Hamas. That person may be a distant friend, a neighbor or someone from work you don't even know belongs to Hamas but just for having an interaction with that person you may be flagged as Hamas. It is also known that the system may flag people with the same name or nickname as someone else belonging to Hamas. So, what happened to 'innocent until proven guilty'. With AI, that doesn't exist anymore. If the machine flags you as Hamas you will be killed, no questions asked and without any human supervision. The second AI program Israel uses is called 'Where's Daddy'. It is used to locate and kill the individuals flagged as Hamas by the first AI program. The horrifying thing about this program is that its objective is to locate the individuals flagged as Hamas whenever they are at home with their family in order to kill the entire family. Once they ensure you are at home with your family, they will send a bomb to wipe out your entire family. Evil.
Source: YouTube · AI Harm Incident · 2024-04-17T04:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwW5VtUm9D5bMx64MB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzC50bUBHA5-W0dv_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzp5sslskUq80k5Bvx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyrcCJa39BjzsHgUcp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyDG7k3MRUCdt2pVjt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxyHcGQat3M6F6C1-R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx8cE_q-K0tn9SXUiZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz2UxYMix7U1IoIiUZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxi0uVDbi-yaOx6JSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwqqIuIRDjwmj_oLdh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
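The raw response is a JSON array with one record per comment, keyed by comment id. A minimal sketch of how the coded dimensions for a single comment could be pulled out of such a batch (the helper name `codes_for` is hypothetical, not part of the pipeline; the array here is abbreviated to one record from the response above):

```python
import json

# Raw batch response from the coder LLM: one record per comment,
# each carrying the comment id plus the four coded dimensions.
# Abbreviated to a single record for illustration.
raw = '''[
  {"id": "ytc_UgwqqIuIRDjwmj_oLdh4AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "outrage"}
]'''

records = json.loads(raw)

def codes_for(comment_id, records):
    """Return the coded dimensions for one comment id, or None if absent."""
    for rec in records:
        if rec["id"] == comment_id:
            # Drop the id so only the dimension/value pairs remain.
            return {k: v for k, v in rec.items() if k != "id"}
    return None

print(codes_for("ytc_UgwqqIuIRDjwmj_oLdh4AaABAg", records))
# {'responsibility': 'company', 'reasoning': 'consequentialist',
#  'policy': 'liability', 'emotion': 'outrage'}
```

The extracted dictionary matches the Coding Result table above, which is how the per-comment table can be cross-checked against the exact model output.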