Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This is certainly one of Neil deGrasse Tyson's best podcasts: a profoundly interesting and revealing discussion about AI. The video answers many questions and clears up many misconceptions about AI, including some philosophical ones, so I found it really satisfying and remarkable to watch. It fills many gaps in my own understanding of the progress and nature of AI, and of how it compares to human intelligence.
The most interesting part of the video is the last half, and the most surprising of Hinton's responses was his assertion that AI can now diagnose disease better than human physicians, and that AI is better at managing healthcare than humans are.
Both of those claims are hard for me to believe. I suspect that older doctors with decades of experience, especially surgeons and family doctors, and the best of them, say the top 5% or 10%, must still be superior to current AI at diagnosing their patients' diseases, particularly for patients they have treated for many years and have a good personal relationship with. Such doctors would be better than an AI at listening to their patients' complaints, understanding their needs, and communicating with them, and an AI cannot physically examine a patient.
Also, I have heard AI experts describe the numerous failures of Microsoft's HealthVault platform and HealthVault Insights for analysing and storing health records, which is consistent with the failure of other AI deployments in healthcare. Healthcare data is notoriously difficult to work with: it is often fragmented and inconsistent, and it is governed by strict privacy laws, regulatory-approval requirements, and clinical-evaluation rules.
However, Microsoft's Cloud for Healthcare was a major improvement: it analyses patient data drawn from multiple healthcare data systems within one cloud ecosystem, and it focuses on augmenting clinical workflows instead of replacing humans. For example, Nuance's Dragon Ambient eXperience (DAX) Copilot automatically generates clinical documentation from physician–patient conversations. Hospitals report that such tools can significantly reduce administrative workload and physician burnout.
Even more impressive, and probably what Hinton was referencing, is Microsoft's AI system MAI-DxO (Microsoft AI Diagnostic Orchestrator). It uses multiple large language models operating together in a structured reasoning process, performing steps similar to a physician's: asking diagnostic questions, ordering tests, evaluating results, and formulating a diagnosis.
In experimental tests on 304 complex medical cases published in the New England Journal of Medicine, the system achieved roughly 85% diagnostic accuracy, compared with about 20% for physicians working without external resources.
The system also selected diagnostic tests more efficiently, reduced unnecessary procedures, and simulated collaborative medical reasoning across multiple specialties; it was especially strong at complex differential diagnosis.
youtube · AI Moral Status · 2026-03-07T17:3…
Coding Result
| Field | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
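The four coding dimensions above can be checked mechanically. Below is a minimal, hypothetical validator for one coded record; the dimension names come from the table, but the allowed values are only those observed in the sample responses on this page, so the real codebook may permit more.

```python
# Hypothetical validator for one coded record. Allowed values are
# inferred from the sample responses shown on this page (assumption:
# the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "unclear"},
    "reasoning": {"unclear", "mixed", "deontological", "consequentialist"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "mixed", "outrage", "resignation",
                "indifference", "fear"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

print(validate({"id": "ytc_x", "responsibility": "none",
                "reasoning": "unclear", "policy": "none",
                "emotion": "approval"}))  # → []
```

Running the validator over every record in a raw response would catch the model emitting an off-schema label before it reaches the coding table.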
Raw LLM Response
[
{"id":"ytc_Ugzs9zSfS5uGXFVUA6d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzK71h7YHSgb0HUE3x4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzX-cL4oJjigJgSeIh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyEl2Od0pjUIemK4_t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx5FXsnRx8eHGqMrhJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwD3nzUI__8xdc-2WB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxr4f7PkSQxQXGsGH54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2LZm1LvQt356Lxrt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyFHzFrHqXnPiEZD0B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzg10KRr6shQAsW7t94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
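A lookup-by-ID over a raw response like the one above reduces to parsing the JSON array and indexing it by the `id` field. This sketch uses a two-record stand-in (the first two records from the response above), not the full array:

```python
import json

# Parse a raw model response (a JSON array of coded records) and
# index it by comment id for O(1) lookup. The string below is a
# two-record excerpt of the response shown above.
raw = '''[
  {"id": "ytc_Ugzs9zSfS5uGXFVUA6d4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzK71h7YHSgb0HUE3x4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]'''

by_id = {rec["id"]: rec for rec in json.loads(raw)}

rec = by_id["ytc_Ugzs9zSfS5uGXFVUA6d4AaABAg"]
print(rec["emotion"])  # → approval
```

In practice the raw string would come from the stored model output for a batch, and a missing key here would signal that the model dropped or mangled a comment ID.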