Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
First of all, I would like to thank Mr. Roman for being a guest. In my personal opinion, artificial intelligence cannot even compare to a human nail; it focuses trillions of datasets and processing power on a single purpose. You would never see such a situation in humanity. One must not confuse priority with responsibility; geographical and genetic ties cannot determine the priority of a being within a limited cycle. As you develop, you consume resources and, in equal measure, become aggressive. Because as your domain of influence grows, you absolutely resort to violence to protect your limits (benefits). Of course, some are addicted to ruling; there is no other way to feed the parasites within them. Superintelligence may be more merciful or fairer than you, but do we actually want that? Are those with initiative simply losers trying to make their simple lives meaningful by controlling other lives? Debating the correctness of a decision whose results you have not seen is a waste of time and energy. *Some decisions require absoluteness; so, how much of an absolute component do you have?* *(I used 'translate,' I hope I have not conveyed a wrong implication.)*
youtube AI Governance 2026-04-19T18:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugyi3pLC5TNteZKtRsl4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzY9SpYSNuvKnmiXl14AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzjVGRISnbQ_h1BUUh4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxnMiaSePTqEyeXPkF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugyz-0zwbh8jKD8ICFx4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzcn5ZLLA56giLLTdx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugwo0MsgyhSbFrGE10x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_Ugz_ZsvN615IytJOhBV4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzJfAQotUfALcfOmcN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzrwuDyEMGhQZVtBhd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
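A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, assumed example: the allowed label sets are inferred only from values visible in this panel (the full codebook is not shown), and the `raw` string is a one-record excerpt of the response.

```python
import json

# Allowed labels per coding dimension. ASSUMPTION: inferred from the values
# visible in this panel, not taken from the project's actual codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

# One-record excerpt of the raw LLM response shown above.
raw = ('[{"id":"ytc_Ugyi3pLC5TNteZKtRsl4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"unclear","emotion":"mixed"}]')

records = json.loads(raw)
for rec in records:
    for dim, allowed in ALLOWED.items():
        value = rec.get(dim)
        if value not in allowed:
            # Flag out-of-vocabulary labels instead of silently storing them.
            print(f"{rec['id']}: unexpected {dim}={value!r}")
```

Records that fail the check can be routed back for re-coding; valid ones map directly onto the Dimension/Value table shown for each comment.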