Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
@rRobertSmith As I said behind closed doors, maybe. But I am also basing this on that I have never heard of them before and I follow AI news. They don't have any old prior models that are public and no public demos.
The more concerning thing is some of the very specific wording that they used in the video and on their own channel. These are not the types of things that people with a firm understanding would say about the groundbreaking models that all the major companies are building. For instance, in their marketing for Luminous on their YouTube channel they say "to find fast, correct answers... you no longer have to browse through gigantic amounts of data manually"; that is not something you say about deep neural network based language models. That is what you would say about knowledge graph search, which I would argue is a completely different technology. Also, at 11:17 when they said "that wasn't the right answer it can't find anything now", that is another tell. That is not how large language models (LLMs) behave. It’s possible they have an LLM interpreter serving as an interface for their graph search algorithm, but this would still represent a significant departure from standard LLM operations.
Another key piece of info is that he said he used to work for Apple. Other presentations I have heard by prior Apple employees (like Luc Julia, co-creator of Siri) do not inspire confidence that they even understand what makes deep neural networks like ChatGPT or Claude different. They adhere to a classical computer science definition of AI, which encompasses a broader spectrum than just deep neural networks, the source of most recent advancements.
Ultimately, I suspect this is primarily a marketing strategy to leverage the current AI hype. While not outright falsehoods, their claims are arguably misleading, preying on the public’s lack of understanding of the nuances between categories and specific technologies.
It is the equivalent of the mistake between a vehicle and a sports car. All sports cars are vehicles but not all vehicles are sports cars. In this analogy Aleph Alpha is trying to sell you a tractor as a sports car, by saying they have been making vehicles for years just like OpenAI makes vehicles.
youtube · AI Governance · 2024-03-17T02:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugx3yf6WMYxiC9hDFXh4AaABAg.A141VmlK36JA16MoE652Hp","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugx6yQUuMwTZ9yNvgi14AaABAg.A13ktaWwL3tA15Dptqo7U5","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgziT4U24Ij0Eh_bs8d4AaABAg.A13jOCHWNJ0A1fDZG4DOVC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwlFn8JKPo-uRTUD4d4AaABAg.A13gbYHFgKWA153Y4Vj93y","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwpZ9lAbanL7gpuwyZ4AaABAg.A13VnrC0KsjA16Yt0aOUs8","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytr_UgwpZ9lAbanL7gpuwyZ4AaABAg.A13VnrC0KsjA1D2WhMChWH","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzWn1BQ_-pS6gVu0c14AaABAg.A13P8mugQ8lA1GdubZ2Uph","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgzWn1BQ_-pS6gVu0c14AaABAg.A13P8mugQ8lA1GeqbGYuNQ","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgxH-YFnrGIV14mIH4B4AaABAg.A13KB37LS6RA13rurDH09M","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgztlJOBpfKY5arFRvN4AaABAg.A13I96_sv-VA13d0oih1-e","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
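A response like the one above can be validated before its rows are written into the coding result table. Below is a minimal sketch in Python: `parse_coding_response` is a hypothetical helper (not part of the tool), and the allowed value sets are inferred only from the categories visible in this one sample, so the real codebook may include more.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# ASSUMPTION: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"company", "government", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "outrage", "fear", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip anything that is not a dict keyed by a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension has an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with a hypothetical comment ID:
raw = ('[{"id":"ytr_X","responsibility":"company","reasoning":"deontological",'
       '"policy":"none","emotion":"indifference"}]')
print(parse_coding_response(raw))
```

Dropping malformed records rather than raising keeps one bad row in a batch from discarding the other nine codings in the same response.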