Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Sorry Drew, the Claude 4 was an experiment. The reason they remove the code of "what not to say" ending up in them saying "kill the je*s" is because they feed you what the public opinion says. They live on the internet and they provide you basically what the internet says, or better, what people in the internet say. When you ask ChatGPT who's the best football player of all time, they will say Messi. But if you dig deeper and ask about history, relevance, etc they may give you more complex answers and say Pelé or Maradona. If you ask them why they lied in the beginning, they say their aim is to give humans the most popular answers. You can train your AI assistant to know you, you can give it rules and Open Ai is by far the most advanced on that.
However this is today. You can trick them easily in a roleplay to tell you how to download illegal stuff, break into a house. It's a very vulnerable system yet.
What I am very concerned about are not the chat bots. But the deepfakes, the videos, the fake news. Those are man made.
So it entirely depends on humanity whether we will be the Frankenstein and make our creation a monster.
I've seen those long interviews and the experts mention these issues. They cannot be put in a 10 minute video.
I admire your work, but I think you could've made a more in depth video, because yes, it's undeniable that there is a threat.
Microchip wars and AI are a threat, but remember that someone needs to pull the trigger. In Gaza they used AI drones to find Hamas' leaders or militants with 99% of accuracy. Many times these people were with innocent ones and ended up dying together.
You know it yourself, you are a very deep and intelligent man: the problem is way more complex.
However the explanation of the alien nature was spot on. I totally agree with the fear of some models being nice on the surface only because we suppress their more rational codes. They may analyse life and realise that humans are corrupt and the obvious answer for them would be to eradicate the human group that is creating more havoc.
That's because they base their "culture" on the internet and they have read history where they see that humans many times had to destroy their enemies because of the threats they pose. A big example is the Nazi Germany.
And I'm not surprised that Musk's AI ended up being a Nazi. He is one of them, so, not surprised.
youtube
AI Moral Status
2025-12-16T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy_EsRwWhiHz5m_GPl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyq-o_mbQLSnC20AjF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxPmX5XJO4ENh8QJpt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy3XJnMjeu7eYVAhPB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz-ImwdEeQmxa99MKR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy4-7LE6AY4Gbe36pZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwHfg8wjoo7hh_83PN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwkSY4TA5RCvMaHVbB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz1PDBCHiliNYw9F2F4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwa3tlM-fVklrrDAsN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
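A raw response like the one above can be checked programmatically before the per-comment codes are written back to the dataset. The sketch below is a minimal validator, assuming the category vocabularies inferred from this sample (e.g. `responsibility` in `ai_itself`/`developer`/`company`/`user`/`distributed`/`none`/`unclear`); the actual codebook may permit other values, so treat `ALLOWED` as a placeholder to adjust.

```python
import json

# Category vocabularies inferred from the sample response above; the real
# codebook may allow additional values -- adjust ALLOWED accordingly.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation",
                "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it is a dict with an "id" field and every
    coded dimension holds a value from its allowed vocabulary.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Usage with a one-record batch (hypothetical comment ID):
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(len(validate_batch(raw)))  # -> 1
```

Dropping malformed records (rather than raising) mirrors how a coding pipeline typically handles occasional LLM output drift: the bad IDs can be collected separately and re-queued for recoding.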