Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In an interview with Ezra Klein a few years ago, the sci-fi author Ted Chiang, when asked about the possibility of our creating a conscious AI ("moral agents," to use his term), said (paraphrasing) that while we probably *could* do so, and so possibly *would* do so, we absolutely *shouldn't*. For a pretty simple reason: in the process of getting from here to there, we'd almost assuredly create an entity capable of experiencing inconceivable (to us) degrees of suffering long before it ever became capable of articulating that suffering to us in a way we could recognize or care about. An entity, in short, whose entire existence was that 45 minutes on loop. I haven't been able to shake that argument since.
youtube · AI Moral Status · 2023-07-04T22:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwprTIEQMFtni6NxRh4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugw27FpfK7sEKLlMBSp4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgwSiT3QEfazmgqqG5B4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxmF7o-erGCQvixtxJ4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "unclear",   "emotion": "resignation"},
  {"id": "ytc_UgypRn2sJx-EoWIhunx4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgybMvo9U-mB28Bh0S14AaABAg", "responsibility": "researchers", "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgwDiiSiMzQ-ZrfKpxh4AaABAg", "responsibility": "ai_itself",   "reasoning": "virtue",           "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugy_cVMKFwepJJMfMp54AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugyyzgq7JOxYLDaZaAh4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgxdcaHg3D6UupP7MYx4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"}
]
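The coding result shown above is one record pulled out of this batch by its comment id. A minimal sketch of that lookup, assuming only that the raw LLM response is the JSON array displayed here (the snippet is illustrative, not part of the original coding pipeline; it parses a two-entry excerpt of the batch for brevity):

```python
import json

# Excerpt of the raw LLM response above (the full array has ten entries).
raw_response = """[
  {"id":"ytc_UgwprTIEQMFtni6NxRh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw27FpfK7sEKLlMBSp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]"""

# Parse the batch and index the codings by comment id.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for the comment displayed on this page.
coded = codings["ytc_UgwprTIEQMFtni6NxRh4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → developer deontological none fear
```

The printed values match the Coding Result table above (responsibility: developer, reasoning: deontological, policy: none, emotion: fear).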