Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Okay, you obviously do not have to trust me on this one; I looked really quickly (seriously, no deep research) through this channel and here's what I found:
- Half a million subscribers, yet 1 million views on this particular video. Not an abnormal ratio, but slightly suspicious.
- The channel's videos get either ~300k views or 1mln+. One video even has 3mln. A highly unstable view ratio.
- Incredibly low like-to-view ratio. This video has 1mln views and only around 50k likes. The video with 3mln views has below 50k likes. Videos with ~300k views get only ~15k likes.
- Incredibly short channel description.
- No socials; the only link is to a Patreon.
- The channel has been uploading videos explicitly about how evil AI is for 4 years now.
- Generic channel name.

My own conclusion: the channel seems fake, especially with those unstable view counts. Let's compare those numbers to the @struthless channel (lifestyle). He has 1.12mln subscribers and gets around 200k views on his videos, with around 30k likes; total traffic on his channel is 51mln views. Now, @DigitalEngine has half a million subscribers and gets a million views with barely 50k likes; total traffic is over 90mln. Incredibly high numbers for a channel half the size of struthless's. Sounds to me like someone's buying those views, likes, and subscribers. Additionally, they seem negatively biased towards AI (or should I call it LLMs?), so naturally they will create videos against such technology.

Now, a bit of background knowledge for those who don't know: LLMs (Large Language Models) are very fancy and complicated "fill the gap" systems. Their main focus is to finish the sentence (and the whole message) as accurately as possible. To achieve that, they are trained on our culture and the Internet (books, movies, blogs, and whatnot). LLMs do NOT think on their own; they require a prompt to finish. They specialise in finishing the sentence, so by definition they cannot start a conversation.

You might have noticed how in all of these models, whether it's ChatGPT, Gemini, or whatever, the user always has to start the conversation so the LLM has something to finish. Okay, end of class. I highly recommend the @3blue1brown series on LLMs for more detailed and accurate information.

Now back to the conclusion. This video shows several times how those "AIs" want to destroy the human race. Let's take a step back and look at our culture from a couple of years ago. We have feared extinction by Artificial Intelligence for ages: thousands of books written about it, hundreds of movies made about it, and god knows what else. It wouldn't be strange for an LLM trained on our culture to play along. After all, it is "fill the gap", and if the gap says "Will the human race be killed by AI?", well, of course the answer is "yes". As a side note: as of now, there are no peer-reviewed scientific papers proving that those "AIs" are conscious and human-like.

All I want to say: don't get manipulated on the Internet, check your sources, and remember: Artificial Intelligence is a bit too big a name for a fancy "fill the gap" algorithm. (Good gosh, I spent way too much of my valuable time on this short and shallow analysis. Peace)
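The "fill the gap" mechanism the comment describes can be sketched with a toy next-word predictor. This is a minimal illustration, not a real LLM: real models learn token probabilities from enormous corpora, while the tiny corpus, bigram counts, and greedy decoding below are made-up stand-ins chosen only to show the "finish the sentence" idea.

```python
# Toy "fill the gap" sketch (hypothetical example, NOT a real LLM):
# count which word follows which, then greedily complete a prompt.
from collections import Counter, defaultdict

# Hand-made stand-in corpus reflecting the "AI doom" tropes the comment mentions.
corpus = ("the machines will rise . the machines will win . "
          "humans fear the machines .").split()

# Bigram statistics: for each word, count the words that follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(prompt, steps=4):
    """Greedily 'fill the gap': always append the most frequent next word."""
    words = prompt.split()
    for _ in range(steps):
        candidates = follows[words[-1]].most_common(1)
        if not candidates:
            break
        words.append(candidates[0][0])
    return " ".join(words)

print(complete("the machines"))
```

A model like this never speaks first: `complete` can only extend a prompt it is given, which is the point the comment makes about LLMs being unable to start a conversation.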
youtube AI Harm Incident 2025-09-05T08:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_Ugy2Kt2tRbY8MTrydfl4AaABAg.AMjyxdEOQ7wAMk9YHzI06l","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytr_UgyvySWVHQusOzVp6sJ4AaABAg.AMjHyTjarVNAMwT7cQABvY","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugz12qgrx09bm2M6dDB4AaABAg.AMhhvzF1R46AMurVVbW6bE","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgyUbOTOPdiYDId-7pJ4AaABAg.AMhav4eDVynAMkH26jiof4","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgwN_-VF7GI-xll-wWF4AaABAg.AMh8xXVKs8aAMkLm5oiIsm","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytr_UgxhPVvfEyY_O5hniY14AaABAg.AMg95Et2MLgAMgDpnxS4u6","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugw_FGdWCh_7FZzMmmN4AaABAg.AMftPe8F22lAMkNabAAXol","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugzkaxe1LnbpCfvUTvR4AaABAg.AMfflWkAjfpAMtTA1Xm4Y8","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytr_UgztRvDUxHWY0qPrQzJ4AaABAg.AMearoHwI_BAMkQ53QwOHA","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytr_UgxM6-b2pc-VV8pJIV54AaABAg.AMbT49CT3PtAMkSNLeFuyP","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
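A response like the one above can be validated and tallied with a short script. This is a hedged sketch, not part of the annotation pipeline shown here: the `ids` are shortened placeholders, and the allowed values are simply those observed in the JSON above (the full codebook may define more).

```python
import json
from collections import Counter

# Placeholder sample in the same shape as the raw response above
# (ids are hypothetical, shortened for readability).
raw = '''[
 {"id": "ytr_example_1", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
 {"id": "ytr_example_2", "responsibility": "none",
  "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

# Values observed in the response above; the real codebook may allow others.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "distributed",
                       "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "mixed",
                "indifference"},
}

codes = json.loads(raw)
for row in codes:
    for dim, allowed in ALLOWED.items():
        # Fail loudly on any value outside the observed vocabulary.
        assert row[dim] in allowed, (row["id"], dim, row[dim])

# Tally one dimension, e.g. emotion, across all coded comments.
emotion_counts = Counter(row["emotion"] for row in codes)
print(emotion_counts)
```

Parsing the raw string through `json.loads` (rather than trusting it) also catches the common failure mode of a model returning malformed JSON.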