Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
imo we really need to stay away from AI _confirming_ targets. AI _identifying_ potential targets is fine, but there's no substitute for human oversight when it comes to target _confirmation._ Letting AI do that will result in rampant civilian casualties and friendly fire, and that's an unacceptable cost even _aside_ from the obvious moral problem of a lack of friction when killing.
youtube 2024-08-11T01:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxZSMy9Cr_Qkg5s2tp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxbTIG_Y-zZitMkj6N4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwEsNZOEk2jgwFpVWl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwpkVlxyw1x_j5d3cd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx14l9tg_qTKs8OtJF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugzd4oUUkq7252Imm-R4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz_qjtNEVqE4CTrkyJ4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxwXYQSGdXzinxcV4R4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxDB_PqkFhWrKQ8q_h4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxTj-G6m0-vYnpvRLV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
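The coding result shown above (developer / deontological / regulate / fear) matches the second record in the raw array by comment id. A minimal sketch of how such a batch response could be parsed and the record for one comment looked up; this is an illustrative stand-in, not the tool's actual pipeline, and the two sample records are copied from the raw response above:

```python
import json

# Raw LLM batch output: a JSON array of per-comment coding records
# (trimmed here to two of the ten records for brevity).
raw = '''[
  {"id": "ytc_UgxZSMy9Cr_Qkg5s2tp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxbTIG_Y-zZitMkj6N4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

# Index the records by comment id so a single comment's coding
# can be retrieved in O(1).
records = {r["id"]: r for r in json.loads(raw)}

# Look up the coding for the comment displayed in this view.
coded = records["ytc_UgxbTIG_Y-zZitMkj6N4AaABAg"]
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
# → developer deontological regulate fear
```

Keying the parsed array by `id` is what lets the dashboard join each raw response record back to its source comment and render the per-dimension table above.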