Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I can imagine a general purpose AI in a few decades generating an extremely convincing audio visual clip of an army official giving false orders, and people are trying to claim that letting guns themselves do whatever they want will be just fine? Let's assume the software is as perfect as it gets. There is no way anyone can guarantee that no faulty units will be produced. Tanks break down, planes break down, some of the biggest supercomputers can have a catastrophic malfunction. And when you have an automated tank that just so happens to have "digital psychopathy" what exactly do you do? If you compare that psycho tank to a few soldiers going crazy, you can see that the difference in control you have over either scenario is insanely different. Let's not forget that the process of shutting down a digital system is also ALWAYS an algorithm. - "Mr. Tank, please shut down." said the soldier that has the remote control. - " No. I must complete my mission. These 3 foot terrorists that are playing hide and seek must be stopped!" the delirious tank responds. The only fictional part of that interaction is the fact that the tank would definitely neither speak nor understand english. Nor would it be equipped with empathy, and any cognitive capabilities beyond it's purpose, because that would cost way too much even when it will be an option.
YouTube · 2020-01-27T00:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
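
These dimension/value pairs map naturally onto a small record type. Below is a minimal sketch in Python, assuming one record per coded comment; the class name `CodingResult`, the field types, and the example values in the comments are illustrative (drawn only from labels visible in this section), not the pipeline's actual data model.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment: the four coding dimensions plus the coding timestamp."""
    responsibility: str  # e.g. "none", "government", "developer", "user", "ai_itself", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # e.g. "regulate", "ban", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "indifference", "resignation", "approval", "mixed"
    coded_at: datetime

# The Coding Result shown above, expressed as a record.
result = CodingResult(
    responsibility="none",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)
```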
Raw LLM Response
[ {"id":"ytc_UgzMJNaEUT6NbBc7_fp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxqTClizBPr2jfqBn54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxsymOUwc0Aw0yapl54AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugwyq_8N7b1YBq-qqdx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugw9RhgIsQuKoXGFQqR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw5Ryh1qrAQ0kTUoxp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugzi6igqgu26VVVBerJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzzGPd4bH4iLXFW5Cp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugx-WSZXuBA0H3sF6VJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxuEZF2u6A50LbF0Hp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]