Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Do you not think that AI would be smart enough to know that if it kills humans it would only have electricity until the power plants, grid, and electrical installations fail due to the sudden loss of demand and to not having humans to actually maintain them? We were always smarter than horses or oxen, but we knew that we wouldn't get far without them.
YouTube · AI Governance · 2025-06-27T10:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxJO-QVll_iexAsiYR4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_UgztEGX6UuOKGEGyXOh4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugym3FHA5CgmXpQKLdd4AaABAg", "responsibility": "user",        "reasoning": "unclear",          "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgwzAakxNAzFVQo26tl4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxg2CzZ2GSRistsecx4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_Ugz0VAp--8pu4Cm3XzJ4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_UgwLKKrN2wWeh_JBhqF4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugy35GzvJrDNpbCsVd94AaABAg", "responsibility": "unclear",     "reasoning": "deontological",    "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_Ugy1tjUaOr_vQXTWfcZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzdz4atWQhxgjnmtm94AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"}
]
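The raw response is a JSON array keyed by comment `id`, one record per comment in the batch. A minimal sketch of how such a batch could be parsed and matched back to an individual comment (the ids and field names are taken from the response above; the indexing step itself is an illustrative assumption, not the tool's actual pipeline):

```python
import json

# Excerpt of the raw batch response shown above (two of the ten records).
raw = """[
  {"id": "ytc_Ugxg2CzZ2GSRistsecx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwLKKrN2wWeh_JBhqF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]"""

# Index the batch by comment id so each coded record can be looked up directly.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Look up one comment's coding result by its id.
record = codes["ytc_UgwLKKrN2wWeh_JBhqF4AaABAg"]
print(record["emotion"])  # indifference
```

Note that the displayed Coding Result (ai_itself / consequentialist / unclear / indifference) matches the `ytc_UgwLKKrN2wWeh_JBhqF4AaABAg` record, which is presumably how the per-comment view is populated from the batch response.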