Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Fascinating. At first he proclaims AI understands what we are talking about when we chat with it, then he questions if AI really understands what it produces. I say as long as AI can't recognize that it's hallucinating, it doesn't understand anything. And just suppressing the hallucinations doesn't help solve the problem of understanding. You're just hiding it.
YouTube · AI Responsibility · 2025-12-20T11:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id": "ytc_UgzJD4677wXn6ZZa2BJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
 {"id": "ytc_Ugy_813MxAtv1gyK4u94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
 {"id": "ytc_UgzNAd02qBx7Noc0mrF4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
 {"id": "ytc_UgwXUSXxGlVLzkXcoG54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
 {"id": "ytc_Ugz3CmBvEbqmY9qZ6D54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
 {"id": "ytc_UgwtnqL2wcNYfTPSUgJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
 {"id": "ytc_UgyGTmI9WYL0ou-ANXp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
 {"id": "ytc_UgzLdppqLlP8mQaAQyN4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
 {"id": "ytc_UgwYZrYtNmu4CTLbu6F4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
 {"id": "ytc_UgzFo5pY00-f8IVodaJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}]
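The raw LLM response is a JSON array with one coding record per comment, keyed by the comment's `id`. A minimal sketch of how such a record could be looked up for a given comment, using the field names from the payload above (the helper `code_for_comment` is hypothetical, not part of any tool shown here):

```python
import json

# A trimmed sample of the raw LLM response: a JSON array of coding records.
raw = (
    '[{"id":"ytc_UgzJD4677wXn6ZZa2BJ4AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_Ugy_813MxAtv1gyK4u94AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)

def code_for_comment(raw_json: str, comment_id: str):
    """Return the coding record for one comment id, or None if absent."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            return record
    return None

record = code_for_comment(raw, "ytc_Ugy_813MxAtv1gyK4u94AaABAg")
print(record["emotion"])  # fear
```

Matching on `id` is what ties a raw record back to the dimension/value table shown for each comment; a record that is missing or malformed would surface here as `None` rather than a coded result.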