Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, let's assume that early AI's are like spoiled children. Their sense of respect for us ends up as zero at the flip of a few variables. How do we get them pointing in the right direction to "respect" their "parents"?
youtube 2025-11-24T19:3…
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
  {"id":"ytc_Ugw7KDigUoMlKROaXUt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzM7XQSBWJQySMAknV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxj2-R0RsvfTmzoxY94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyc2bcJG7aQXwct3RZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzOYL1vK7yasBmrsZR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxBI_N2un2B6Y0eDVJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzhKL6AsUMc0OOKtgJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw-5CnqHdC8xm_y0Wl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwhzKpNNJx58lSSd7N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxMMyHybt8R4f-vHy54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
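A minimal sketch of how a raw response like the one above could be parsed and validated before the per-comment coding result is stored. The allowed value sets are inferred from the sample response only; the actual codebook may define additional categories, and the function name is illustrative:

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may include further categories).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record.

    Raises ValueError on missing fields or out-of-vocabulary values,
    so malformed model output is caught before it reaches storage.
    """
    records = json.loads(raw)
    for rec in records:
        missing = ({"id"} | ALLOWED.keys()) - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {sorted(missing)}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
    return records

# Usage with a single-record response (hypothetical id):
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(parse_coding_response(raw)[0]["emotion"])  # fear
```

Rejecting out-of-vocabulary values at parse time keeps the coding-result table consistent with the codebook even when the model drifts from the prompt's label set.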