Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't fear autonomous weapons. I fear the people creating and programming them. An autonomous weapon could easily be more efficient and less prone to mistakes, but then we'll have hawks that think that they aren't killing enough "bad" guys and will possibly then lessen the weapons restrictive programming. In the end it will still be human error that is the problem and not the machines themselves.
youtube 2015-07-31T17:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgjlkRC0C3QW53gCoAEC", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UghjZe2wh0iWBHgCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgitLl1k77E0OHgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugj-QbhQaOZ0IngCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UggOIlEbzsUUcHgCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgilP4I0eLtOfXgCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UggUkRCpz20zt3gCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UghH2c4Tmd_KzngCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgiePTNfaJuQCHgCoAEC", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugg8A_OpDS6uKXgCoAEC", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
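A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred from the labels that appear in this response, not from a documented codebook, so treat them as assumptions.

```python
import json

# Assumed vocabulary per dimension, inferred from the raw response above;
# the real codebook may define additional or different values.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject out-of-vocabulary values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# One record from the response above, used as a quick check.
raw = ('[{"id":"ytc_Ugj-QbhQaOZ0IngCoAEC","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
print(validate_codings(raw)[0]["policy"])  # liability
```

Failing loudly on an unknown label is deliberate: silently storing a misspelled or hallucinated value would corrupt downstream tallies of the coded dimensions.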