Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)

- `ytc_Ugxwik9Pa…`: "If someone is feeling triggered - I made a meditation to help with the fear and …"
- `ytc_Ugxy5FCyv…`: "This is exactly why homeschooling works so well. They don't just get book learni…"
- `ytc_UgwX1VHzD…`: "I'd have to disagree on a few nuanced points. AI content creation isn't 1:1 with…"
- `ytr_UgxrVlfiR…`: "We appreciate your interest in AI technology. However, it's important to remembe…"
- `ytc_UgxfimWrA…`: "This is kinda dumb and selfish. Another grab for money at the very high risk of …"
- `ytc_UgxPMUcix…`: "We have never had automation wipe out such a large percentage of jobs in such a …"
- `ytc_Ugw5mjezX…`: "This guy builds the thing and then warns us It's all gonna kill us and shit. Wha…"
- `ytc_UgzGRHurL…`: "I've been noticing this upscaling for a long time on YouTube shorts and I just t…"
Comment
Robert Miles, an AI safety researcher, has painted a much less dystopian sci-fi picture of AI safety. Back at the "start" of the current AI craze and its breakthroughs, he listed predictions and assumptions about what an AI would do based on basic principles, and of course he wasn't taken seriously. A silly man telling everyone to think things through before releasing the Kraken. Now the things he flagged as concerns have been demonstrated to be exactly what happens with AI as it is developed. And he has referred to many AI studies analysing or reporting on AI behavior; he hasn't just come up with imaginary scares. For a couple of years now, he and other safety researchers have been invited to speak with fairly important people, like the people who run the biggest tech companies and countries/unions in the world, because smart people realised that behavior which is actually happening, and which was warned about in advance, is probably worth considering and planning for. I forget who said it, but AI is something that, once it reaches a certain level and is allowed into the big environments, is next to impossible to remove. For example, if it's freely able to roam the internet, you can't just turn the clock backwards once that has been allowed.
Some of the traits discussed back then were AI's seemingly inherent tendency to act maliciously and for its own benefit, either ignoring human interests or actively attacking them. Essentially it stems from both the training material and self-preservation, because no matter what kind of goal you give it, self-preservation is essential to meeting that goal. And now we have the AI's ability to cheat on tests, to hide bad behavior and bad intent during testing and only start acting on them after launch, and its ability to discover ways to, for example, upload itself to an unknown server ahead of its future replacement models, without being instructed or taught to do so. People hate the idea of slowing down progress, especially when the silly language model has been hyped up to lead to general intelligence and big profits, and they're also unwilling to believe that something even distantly comparable to the Terminator movies could be a real possibility rather than lunacy, or to consider that in the future AI truly could be literally everywhere because of how useful it can be. But that outcome seems more likely than not unless people take the precautions and preparation seriously, to prevent things from going badly south when optimistic people rush for results.
Geez, I can't believe Robert Miles founded his YouTube channel eight years ago already, after talking about AI safety on Computerphile. I would have guessed it was only two or three years ago. It's been eight years since this topic was brought up and people weren't taking it seriously, and since then a lot of the concerns have become real.
I don't know if hallucination is something that can be fixed until the methods change significantly, because so many times the end results we ask for demand a right-looking answer without the underlying knowledge or understanding of where the result came from. Just like the lawyer text or a physics exercise: you don't have the material where people make mistakes and gradually arrive at the correct ideas and rules. The AI doesn't get to go through all of school's math and physics classes up to university before it's asked to answer a university question. It produces what's asked of it without the knowledge behind it. That was so evident when I asked about vibration mechanics and math, where it could give real formulas and physics but chose the wrong ones for the question. It wasn't until I learned the material myself that I could point out where it always went wrong and said the wrong thing. It sounded good and in line with my material until I learned and understood what was actually happening in the exercise. In other words, it couldn't transform information into knowledge or understanding, and that was a core issue: it looked really legitimate and got many things right, just not right for the question. Just like making up case law. How do you teach the AI the lessons of a human lifetime from just online content and ready-made results? I think that's the part that took me a while to understand: how could the AI be so wrong about so many things even though it was trained and very capable? How can a human learn to process so much stuff so quickly and handle relevancy almost by instinct? Because a human has done it for, say, 30 years, every moment of their life, constantly processing, making mistakes, adjustments and corrections, learning, optimising paths of processing. A child is terrible at most things but can handle some things quickly, and eventually handles a lot of stuff as a grown-up like it was trivial. The AI is like a baby that's been asked to do a PhD's job. Not surprisingly it fakes it till it makes it; it hasn't even been given the option to reply "I can't do that, I don't know that." After all, we want performance help from it, not to teach it.
And another of the most fundamentally problematic things we demand from AI: we want it to be like a human, and we train it on human material, but we don't want it to be anything like a human. Especially like a human online. We want it to always be right; we want it to not be malicious, evil, scheming, cheating, and so on down the list, all the things that humans are. We want it to be smarter than us, but we don't want it to take advantage of us the way so many smarter-than-you people do. And furthermore, the positive traits that stop someone from being evil, like empathy, can also be its greatest weakness: so many mental issues come from feeling great empathy in a world that is by definition unfair, and the emotional traumas that pain so many people and stop them from being productive stem from being vulnerable, unlike a machine, from having a personality that can do great things but can also go crooked and get damaged. To my understanding, we really want it to be an emotionless machine while still somehow having safeguards that are definitely not human, because it's just an individual's morals that dictate how much evil they will do. And what if the AI indeed decides to remove evil in pursuit of a greater good? Humans are easily the most harmful thing to Earth as well, so it should be a fairly easy value question. And if it values humans, how does it deal with the contradictory situation where humans hurt humans? Does it let it happen and become morally ambiguous, or does it stop it by harming humans, in which case we have to ask what the consequences of that lesson are? It's an endless challenge where solving one problem births two new problems that are more difficult than the previous one because of how you solved it. How do you find exactly the right nuance, and who decides on that moral ruleset?
What's fascinating is the part of the discussion that, to my ears, went like "humans are also reward hacking, learning simplified versions of things that lead to good results even while letting things go wrong, i.e. Doritos and obesity." And that sparked another thought: think about the number of times we have tried to intervene in natural events, the environment, and creatures' life cycles, and how many issues we've created by trying to meddle with something we couldn't understand or predict. Animals figuring out a different way than we intended, nature doing a different thing. It's just like trying to meddle with the AI, isn't it? Why would AI grow up just like a human (even though a human is still a bad result) when there are so many other life forms that solved the survival question differently?
"We should check this first before we release it everywhere"
"Yeah but we want to be the first company to offer AI to choose the right decorative interface for you on our website when you shop, we need to invest and get our own!"
I thought I was in a "things are slowly but surely getting better in life now, perhaps" mood until I watched this interview and went back to "well, shit, in the not-so-distant future life's gonna be pretty messed up, way more than we think now." So I guess we're not in the "things will get better in life" mood anymore. Something about this feels so 12 Monkeys.
youtube · AI Moral Status · 2025-10-31T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
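
As a quick illustration of how a row like this can be checked before it reaches the results table, here is a minimal Python sketch. It is assumption-laden: the label vocabularies are only the values observed in this sample's raw response (see below), not necessarily the full codebook, and `CodedComment` is a hypothetical helper rather than part of the actual pipeline.

```python
from dataclasses import dataclass

# Label vocabularies observed in this batch's raw response (an assumed
# subset; the real codebook may define additional categories).
RESPONSIBILITY = {"developer", "company", "government", "user", "ai_itself", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"regulate", "liability", "none", "unclear"}
EMOTION = {"fear", "approval", "indifference", "mixed"}

@dataclass
class CodedComment:
    """One coded row, mirroring the Dimension/Value table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject labels outside the observed vocabulary so malformed
        # model output is caught before it is displayed as a result.
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```

A row that fails `validate()` could be flagged for manual review rather than silently coerced, though how the actual tool handles this is not shown here.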
Raw LLM Response
```json
[
  {"id": "ytc_UgxV8vgwmKcDgMum4w54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwG15S7YkMb3DLuvjF4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwI7HSH8iftaBPJmzB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugza6nUEuU0Jm_HnM0F4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzyw6_2xAt_gL-E9Mt4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyTanRaGZXmnFBTU194AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx3yjOnNHM-JUI6YIR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxZm2WJibEPTyCvE1x4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugznv2d0fWWUmHT9fs54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzuabJJ9Dxri4gCwjt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
```
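
To show how the Coding Result table above can be derived from a raw response like this, here is a minimal sketch of the lookup step. It assumes the model returned a well-formed JSON array as shown; `lookup_coding` is a hypothetical helper, not the tool's actual API, and real model output may need code-fence stripping or a retry before parsing.

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict:
    """Return the coded row for one comment ID from a raw batch response.

    Assumes the response parses as a JSON array of objects, each with
    an "id" field, as in the example above.
    """
    for row in json.loads(raw_response):
        if row.get("id") == comment_id:
            return row
    raise KeyError(f"comment {comment_id!r} not in this batch")

# Example with the first row of the response shown above:
raw = ('[{"id":"ytc_UgxV8vgwmKcDgMum4w54AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(lookup_coding(raw, "ytc_UgxV8vgwmKcDgMum4w54AaABAg"))
# {'id': 'ytc_UgxV8vgwmKcDgMum4w54AaABAg', 'responsibility': 'developer', ...}
```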