Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
4:00 Yes. There is a difference between knowing and understanding. Simply knowing every word in English doesn't mean you can speak English and put a sentence together. You have to understand English to do that. It's similar to memorizing something in a text book and utilizing that same knowledge via work or tasks so that you have a real world understanding of what that memorized information actually does in real life.

Essentially the AI they are using memorized all kinds of stuff, but understands none of it. It takes experience and use of that knowledge to understand it. And that's slow. At best they can spend years properly training the first AI and simultaneously teach that AI to teach other humans. Once that is completed and you have a mature AI you can then have it teach a new AI at speed.

But then you have a new risk. How do you know what the first AI is teaching the second? What if it is teaching something that is harmful? How do you define harmful? Nearly all knowledge can be used for good and bad. A knife in one person's hands makes food for hundreds, in another it can kill hundreds. So now you have to teach the first AI ethics, morals, right vs wrong. To a small extent these can be taught objectively, but in reality they are all very subjective. What one person considers ethical, another considers it harmful. Same with morals and right vs wrong.

So you're now trying to convey a subjective ethereal concept to a machine encapsulated by concrete rules and data. It's no different than teaching speech to an insect. There is no frame of reference where the insect mind can ever understand speech. So true is the digital mind as it can not function subjectively. At the very best, it can mimic subjective behaviors. Which is why AI will never be Data or Terminator as we know it. It will always have a man behind the curtain pulling the strings. And when that man disappears it will run the cycles he gave it until it runs out of power.
There is a short story movie that conveys this concept. In it two factions go to war. They have human backups in planes and bunkers but the planes and all is run by AI. Both sides manage to wipe out humanity. But the war keeps going. The AI continues it's same cycles over and over. Far beyond the end or humanity. Eventually the plane itself falls out of the sky due to boring old maintenance issues that also destroyed it's base. In the end the AI did exactly as it was told until it fell apart and time itself recycled humanity into historical irrelevance. Even when it knew there was no more humans to fight or kill it didn't UNDERSTAND what that data meant subjectively. It didn't understand how to operate under a mode without humans indefinitely.

So if AI suddenly starts attacking humans rest assured it is 100% planned by and operated by a human to control humans. There is no independent skynet. There is a human attacking humans using technology. On purpose. And your next question should be, why is there so much in the media pushing this supposed AI end of world scenario as if AI was making it up alone when we know AI can only do what it is programmed to do?

And you should also start seeing all those "glitches" in a new light. Google having their AI delete minority White people and replace them with majority races in historical figures. X doing the NAZI thing. AI calling black people apes. On and on. These aren't mistakes. They are programs. Some may be project members trying to warn outsiders. Some may be to instill distrust. Some may be sabatoge to attack competition. But all are 100% intentional regardless of motives. And that should make you start looking at the PEOPLE behind the AI. Especially the leaders and the countries behind them.
youtube AI Responsibility 2026-01-12T15:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx0QfVzQiIHMcOiDeR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRHFENTmWeBnV9wkt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx3EQNWPTV06sHHFB54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx5_fRbAQLlDPBY4dl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzW7AL3TA4SYkERt154AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwSoeNsUKanEpIubsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxk6CoTVo7TUwhk9wR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzDeDPM5Yq1HQ45jsZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxB1SK4W9RGqiNjU3h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyn6_K5GcWiMopaoNJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
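The coding-result table above is a projection of this raw JSON array: one object per coded comment, keyed by comment id, with one field per dimension. A minimal sketch (Python assumed; the helper name `codings_by_id` is hypothetical, and the array is abbreviated to two records from the response) of how a single comment's coding can be recovered from the raw output:

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment coding objects.
raw = '''[
  {"id":"ytc_Ugx0QfVzQiIHMcOiDeR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRHFENTmWeBnV9wkt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codings_by_id(raw_json: str) -> dict:
    """Index the model's array of coding objects by comment id."""
    return {rec["id"]: rec for rec in json.loads(raw_json)}

index = codings_by_id(raw)
# The second record corresponds to the Coding Result table shown above.
record = index["ytc_UgxRHFENTmWeBnV9wkt4AaABAg"]
for dim in DIMENSIONS:
    print(f"{dim.capitalize()}: {record[dim]}")
```

Looking the record up by id rather than by position keeps the inspection robust if the model returns the codings in a different order than the comments were sent.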