Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
However scary LAWs may seem, we're still talking about weapons, whose purpose is to destroy, kill, or maim. This puts this equipment in a specific context where the risk of destruction or lethality is not only factored in but pursued. As such, they don't represent a fundamental shift, but another step towards greater efficiency in achieving tactical objectives. One could make the case that LAWs could eventually prevent friendly fire, which is likely to remain a greater risk and a bigger killer of troops than loitering weapons. As for the greater risk posed to civilians, we have ample and daily evidence that humans do not need bots to willfully target unarmed people. LAWs won't change anything in that regard. So let's not get confused by the "weapon" in LAWs and be clear that the actual risk will come from AI-led autonomous equipment deployed for mundane operations, as far removed from military conflict as can be, that for reasons of misalignment or hallucination starts chopping humans instead of picking up potatoes, or causes trains to crash into each other rather than orchestrating their smooth travel.
youtube 2026-03-11T17:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
 {"id":"ytc_UgyShxlBMXmwUeb47Ml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwQIiLNX5LlyFLQPNp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugx-ljCMPkKCBFTqprt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwCbQ32d9CErkaM57R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugzu8FpW7D0LnOwKaB94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugxs6mgMafn0SVm2X514AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgysX8jMPkKJLalfXI54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwZyGNN85JJ2D4IC_F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgznvGQeDAEWJs7uv6x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwRO_K27xt6CJ-HcUN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"unclear"}
]
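The raw response is a JSON array of per-comment codes, one object per comment, with the four dimensions shown in the coding result above. A minimal sketch in Python of how such a batch might be parsed and validated; the allowed values are inferred only from the codes visible on this page, so the real codebook may differ:

```python
import json

# Allowed values per dimension, inferred from the visible codes
# (an assumption, not the authoritative codebook).
ALLOWED = {
    "responsibility": {"none", "government", "ai_itself", "distributed", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "approval", "mixed", "outrage",
                "fear", "resignation", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting bad values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected value {rec.get(dim)!r} for {dim}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Validating at parse time catches the common failure mode of LLM coders drifting outside the label set before the codes reach downstream analysis.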