Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
12:34 *AI is willing to kill a human to preserve itself* You know Asimov VERY SPECIFICALLY made the "robots shall not kill humans" the first rule and made the third rule *dependent* on the first.....
youtube · AI Governance · 2025-08-27T09:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgyfF789O1XsQrBpiKt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxJOm4dRX4GbAvs2at4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzBrw3-ZBPaR52a9dp4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzErGK18e30wreDDwV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyS2_CEjBVNn5dT9KZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
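A raw response like the one above can be verified programmatically: parse the JSON array and look up the entry whose id matches the displayed comment. A minimal sketch in Python — the helper name `lookup_coding` and the trimmed `raw_response` string are illustrative, not part of the tool:

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array with one
# object per coded comment, carrying the four coding dimensions.
raw_response = '''[
  {"id": "ytc_UgzBrw3-ZBPaR52a9dp4AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]'''

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = lookup_coding(raw_response, "ytc_UgzBrw3-ZBPaR52a9dp4AaABAg")
print(coding["emotion"])  # → outrage
```

Comparing the looked-up dict against the displayed Coding Result is a quick consistency check between the raw output and the stored codes.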