Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think it will, eventually, be the end of us, the human race as we know it! There are powerful people, with a lot of money, who are and wanting, to create "the best" AI anything and, in every area conceivable. Are these people or companies, are they stopping, to sit back and seriously think about what they are truly creating? Or, even, posing the question to themselves "should I do be doing this?". Its a race to be the best, have the best, have more, be better than the next company. Are you not truly realising that you are creating "things" ,that can and likely will make us, as human beings, totally obsolete. There will be no need for us, and we will be discarded, and / or destroyed. And to think they have any control, over what they have created, is sheer arrogance! The question is, for whomever is trying to create the best of whatever they are creating, will this be the beginning of their own downfall and essentially, for the rest of us as well? This is my opinion. Im not going to answer any comments, so say what you will. Your thoughts will be quite telling, no doubt. But, to anyone who is "creating whatever " please 🙏 stop and seriously think about what you're trying to achieve. You want to create something super intelligent, how do you know it won't outsmart you? I have no doubt it absolutely will. Then what? Turn it off at the switch?
youtube AI Governance 2025-12-30T01:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwzCyjCUF-2VbVObeR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzd_SMDvrPUarZGn0x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx3ca9Tz6Zi8dKTcBN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzFbHFPMGoSKTWcWG94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UggngRVRPyTk9ngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggO6O85dfHuGXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyHhQc6DfMbJKuBi8t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzI9lD2uSSkFLzE0714AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzVixgAOJmbs3qCGNR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFmZZWfJoCZf2Ef0Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
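As a minimal sketch of how a response like the one above maps back to a single comment's coding table, the batch JSON can be parsed and indexed by comment id. This assumes the model returns exactly the JSON array format shown here; the variable names below are illustrative, not the pipeline's actual code, and the array is abbreviated to one entry.

```python
import json

# Abbreviated raw LLM batch response (one entry from the array above).
raw_response = '''[
  {"id": "ytc_UgzFbHFPMGoSKTWcWG94AaABAg",
   "responsibility": "company",
   "reasoning": "deontological",
   "policy": "regulate",
   "emotion": "fear"}
]'''

# Parse the batch and index each coding row by its comment id.
codings = json.loads(raw_response)
by_id = {row["id"]: row for row in codings}

# Look up the coding for the comment shown in this view.
row = by_id["ytc_UgzFbHFPMGoSKTWcWG94AaABAg"]
print(row["responsibility"], row["emotion"])  # → company fear
```

This lookup reproduces the Dimension/Value table for the displayed comment: the four coded dimensions come straight from the matching object in the batch response.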