Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I just read about an AI drone simulation by the US military. In which the AI decided to kill the operator and take out the tower because it was told “no go” on a target it was programmed to destroy. It then proceeded to destroy the original target.
Source: youtube · AI Governance · 2023-07-07T13:0… · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxXKQRr-ubdY-jU8y94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxvf1p10agInz7XYOx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxbZEg9sRgJQErXs514AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzMdxxdrwRUuF6Vi3N4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyyGSCte8LUaCQqcMd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy6jUBZBsAycr8jJOx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyzP2az6In_7bVEUhN4AaABAg", "responsibility": "government", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxXuBO8OdAw_0abcpB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxsuMIgTXvPSQpqJpl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwmmzB4Z6QlaBBKgvR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
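Extracting one comment's coding from a batch response like the one above can be sketched as follows. This is a minimal, hypothetical example (the variable names and the two-item excerpt of the array are illustrative, not part of any pipeline): it parses the JSON array, indexes the codings by comment id, and looks up the entry whose four dimensions match the Coding Result shown for this comment.

```python
import json

# Excerpt of a raw LLM batch response: one coding object per comment id.
raw_response = '''[
  {"id": "ytc_UgxbZEg9sRgJQErXs514AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzMdxxdrwRUuF6Vi3N4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"}
]'''

# Index the array by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Pull out the coding for the comment displayed on this page.
coded = codings["ytc_UgxbZEg9sRgJQErXs514AaABAg"]
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
# ai_itself consequentialist unclear fear
```

A dict keyed by id is convenient here because the model returns codings for a whole batch of comments at once, while the page displays a single one.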