Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or browse the random samples below.
- "It shouldn't be any harder or more expensive for AI to be able to read a book, j…" (`ytc_UgxyE_Eit…`)
- "That's a common misconception! In Greek, the name Sophia actually means wisdom. …" (`ytr_Ugx64Jvda…`)
- "You will definitely learn how to draw. Keep trying. It's just that most people d…" (`ytc_Ugycadqzd…`)
- "> when an ai does something fucked up the company that made it is safe. / If i…" (`rdc_dy5098y`)
- "They are not using the AI right. Chain of inquiry. Inductive and deductive reaso…" (`ytc_Ugz4q_KMK…`)
- "They can still get a job, they just need a working visa. Bend over backwards, pr…" (`rdc_fwhlukq`)
- "GPT had it when it said you're a Macbook and it's a Sony, but lost it after that…" (`rdc_mv3k9fo`)
- "Max Tegmark. He's Really that guy. Now this is CONTENT. In all seriousness. I ho…" (`ytc_Ugz-xaGPm…`)
Comment
Hi Hansan - Fantastic. I've loved your comedy specials, so it's fantastic to find you on this channel, especially with - for me - a lead-in with Neil DeGrasse Tyson, another favorite. Hope you don't mind if I make some comments:
(a) I've been a Sci-Fi addict literally from the moment I learned to read (which is a long time ago, as I'm a pre-baby boomer). So I believe Sci-Fi has and continues to describe much of modern and future reality. So, will AGI be practical and useful? Indeed, many Sci-Fi stories have included AGI "agents" as literal partners with human counterparts in spaceborne warships. Imagine sending a probe into enemy space with virtual human intelligence but with no concern about risking a human life. Well, that's not Sci-Fi even today; meet the US Air Force program called the Collaborative Combat Aircraft (CCA) program. These will be AI-operated aircraft operating alongside manned aircraft, acting as "loyal wingmen." In effect, they will be real-life "Terminators", fully armed and capable of independent lethal action against the enemy. And this program doesn't require AGI, just advanced AI.
(b) "Uniquely creative thing that is humans." I believe this is a very anthropocentric viewpoint, the idea that only the human, organic brain can be creative. Again, I see Sci-Fi repeatedly questioning this concept, and I see no reason a silicon-based, or other types of "brain" with AGI, can't be "creative." Even today, as Tyson pointed out, A.I. programs can predict protein folding at a hugely increased rate compared to humans; so is this creative?
(c) Love it; at [42:02] - the flak-damaged bomber, an obvious reference to Abraham Wald's work with the WWII Army Air Corps' issue with "Survivor Bias"!
(d) Academics: So "Markov Chains" - what was simply an intellectual argument between two Russian mathematicians over questions surrounding The Theory of Large Numbers (as well as political and religious matters), leads to the multiple-billion-dollar industry of LLMs. So, indeed, the concept of "knowledge for knowledge's sake" isn't accurate, as more knowledge typically results in the production of truly useful items.
(e) I would propose a counter-argument to the "car is the computer" concept. Yes, this could be serviceable in some instances, but let's think big - like Asimov's positronic-equipped robots. They would likely be expensive, so why put a positronic brain in a car when that car likely sits unused for much of its life? Instead, create a humanoid robot with such a brain, shaped to mimic humans, so it can easily adapt to human-oriented machinery and perform a host of different jobs. Of course, if the economies of scale significantly reduced the cost of those brains, and they didn't go "crazy" from doing nothing for long periods, then individual brains in each machine might be acceptable.
youtube · AI Moral Status · 2025-07-31T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
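Read as a schema, the coding result assigns each comment one label per dimension. Below is a minimal sketch of that record in Python, using only the labels visible in the raw response that follows; the `CodedComment` name, the `ALLOWED` table, and the validation step are illustrative, not part of the actual pipeline, and the full codebook may define more labels than appear in this sample.

```python
from dataclasses import dataclass

# Labels observed in the sample raw response below; the full codebook
# may include more. These names are illustrative, not the pipeline's own.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "outrage", "approval", "disapproval", "mixed"},
}

@dataclass(frozen=True)
class CodedComment:
    id: str              # e.g. "ytc_…" (YouTube comment) or "rdc_…" (Reddit comment)
    responsibility: str  # who is held responsible
    reasoning: str       # style of moral reasoning
    policy: str          # policy stance expressed, if any
    emotion: str         # dominant emotional tone

    def __post_init__(self):
        # Reject any label outside the known sets, so a model that
        # invents a category fails loudly instead of polluting the data.
        for name, allowed in ALLOWED.items():
            value = getattr(self, name)
            if value not in allowed:
                raise ValueError(f"{name}={value!r} is not a known label")
```

Validating at construction time means a response where the model drifts off the codebook surfaces as an error rather than as a silently miscoded row.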
Raw LLM Response
[{"id":"ytc_Ugy50I9LTbQOzIW6qmB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyK544fu40G77A15Nt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxgfrKB6zh5tPGQyrJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyXb1ytsBZylzSvWMp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyF0Mhy2eCwMr6cFIh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyiUH3OZn2WEKHSHxF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwUVv6TWFLXr_WWvHl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx0cZdfAd0oIsgVnjB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
{"id":"ytc_Ugx4NrPT8MPObCH9shZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxrrYwugZsulwE9pRt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}]