Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Civilians: Can we get Autonomous cars Politicians: Oh They are dangerous it cannot make ethical decisions when faced with a .0001% case Army: Shall we use AI in wars Politicians: Oh yes, so what if it is prone to make mistakes 10% of the time. We all know who the real killers are, but we are powerless to change anything.
YouTube 2025-02-05T11:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  government
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugwga-mtkaY6aBT4PPF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxyyIGcCLyJ5vz85Ux4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwrV3tgciZjRyIIjrN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy8DXKguZYXO9S_o8Z4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugxn6j45qnKbsQ4Q-p54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxywx2n4h_JqMoJNAV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugzt4WBjVsG2VsDeDM14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwCOF-v_tYrlHY0PjJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwvHiIQIPEmHGPXBjh4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxl9ns0yfsVFCe1MTV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
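A response like the one above can be checked mechanically before the codings are used. The sketch below is a minimal, hypothetical validator: it parses the raw JSON and keeps only records whose labels appear in the response shown here (the real codebook may define additional categories, and the `parse_codings` helper and `SCHEMA` table are assumptions, not part of the coding pipeline itself).

```python
import json

# Allowed labels per dimension, inferred only from the raw response above;
# the actual codebook may permit more categories.
SCHEMA = {
    "responsibility": {"none", "government", "ai_itself", "distributed", "user"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"indifference", "outrage", "fear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and drop records that fail schema validation."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in the dump are prefixed with "ytc_"; skip anything else.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Keep the record only if every dimension carries a known label.
        if all(rec.get(dim) in labels for dim, labels in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgxyyIGcCLyJ5vz85Ux4AaABAg","responsibility":"government",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
print(len(parse_codings(raw)))  # number of records that survived validation
```

Filtering rather than raising keeps one malformed record from discarding an entire batch; rejected IDs could instead be queued for re-coding.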