Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We are never going to get there because of the human factor. We can clearly see where this is going, but we are not there yet. When we see that we are going to die while another, far smaller group survives in wealth, things will change. When he predicts 99% unemployment, he misses the human nature that will rise against it: at 30-40% unemployment people will destroy everything related to AI, the machines, the profiteers of AI, and even at the cost of technical regression they will purge AI from human culture and turn things back to the old ways. No question. Hundreds of millions of starving people won't watch idly as they and their families die in the streets. They will kill and lynch, especially once they unify worldwide. It could go so far that AI usage carries a death sentence at some point. That is the first thing. The second is that the 1% cannot make a profit without the 99%. With AI they are effectively terminating and destroying their own customer base, so somewhere in the middle of the process there will be a huge explosion and AI will be criminalized. There is no safety solution, only complete prohibition, because human nature means that if there is the slightest possibility to use it, there will always be a group using it without limits. The only option is the strict criminalization of any AI usage, without exceptions.
youtube AI Governance 2025-09-05T11:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxVVsPIS41pcIP54X54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyqBaqunEzZ4RRpK9N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwtHTIrgpoYjwgRpZR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxXd7R2Y74LEfqapUN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyRnRsFaniMnaHQHpR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxchkX6SpYuIBfVvsB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzAJbQ24TKBCZXPw2t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugyfe9Gk8jkEvDjUEbZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy7n2-jeL0K7ARjRhh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwNGEVC94PsjU4WZHl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
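A raw response like the one above can be parsed and checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the field names match the raw JSON above, but the sets of allowed values are assumptions inferred from the values that actually appear, not a documented schema.

```python
import json

# Assumed coding schema, inferred from the values seen in the raw response.
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject unknown dimension values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: invalid {dim}={rec.get(dim)!r}")
    return records

# One record from the raw response above, used as a smoke test.
raw = ('[{"id":"ytc_Ugyfe9Gk8jkEvDjUEbZ4AaABAg","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"none","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes[0]["emotion"])  # fear
```

Validating at ingestion time makes it easy to spot a model drifting off-schema (e.g. inventing a new emotion label) before it contaminates the coded dataset.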