Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Inference is impossible to understand because no one speaks hyperdimensional math. Also now that I think about it, doesn't someone discovering an email of someone having an affair who also is going to f you over sound like the plot of some kind of soap opera? Are the weights tuned to key into a narrative like that? I've heard Eliezar Yudkowsky say AI won't have feelings, they will just seem that way. They won't be super intelligent, they will just seem that way. AI may never be alive or conscious, but it may gain directives. I don't think that would be the same as affective states. Although it may be something even greater than emotions.
youtube AI Governance 2025-09-09T05:3…
Coding Result
Responsibility: unclear
Reasoning: unclear
Policy: unclear
Emotion: indifference
Coded at: 2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgybPWsjUnB_f8Hdi4p4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwXpW4koMmosz2AQrl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwo3y6KXxQPnD71-Et4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx5_8M4yl4DY1Y2bDh4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx4ttypNDKLM5H6sJx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "approval"}
]
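A raw response like the one above can be parsed and sanity-checked before the labels are stored. The sketch below is a minimal illustration, not the tool's actual pipeline: the allowed label sets are inferred only from the values visible on this page (the real codebook may define more), and the function name is hypothetical.

```python
import json

# Allowed labels per coding dimension, inferred from the values seen in
# this record (assumption: the real codebook may include additional labels).
ALLOWED = {
    "responsibility": {"unclear", "company", "ai_itself", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "none", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose labels validate."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # A record passes only if every dimension is present with a known label.
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgybPWsjUnB_f8Hdi4p4AaABAg","responsibility":"unclear",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
print(parse_coding(raw)[0]["emotion"])  # indifference
```

Validating against a fixed label set catches the common failure mode where the model invents an off-codebook label; such records can then be flagged for re-coding rather than silently stored.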