Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
No man, I believe that Elon meant was that most of AI will be nice, mishaps can …
ytc_UgwOOBnrF…
The fact that I actually have to use before: prompt on google at the end of what…
ytc_Ugw9G-aZL…
Senior data engineer here: we barely use AI at all at the hospital I work for. I…
ytc_Ugx56IHMc…
This is straight stupid don't have your children nose deep into a computer. How …
ytc_UgzQATcW-…
Before watching, just from seeing the thumbnail, "it's alchemy" - instantly made…
ytc_UgwQqxQpo…
They say that China's credit scores are exaggerated and that what they are talki…
ytc_UgzCrYsXj…
Why would anyone send their children to an AI school. $55,000???? When you loo…
ytc_UgznDZ8DN…
It's funny that they offload recruitment to ai and then warn you that if you use…
ytc_UgxUePD-n…
Comment
One other comment, Alphafold is always used as a cool example of scientific discovery from AI, but the design of that and the use of it doesn't justify or support LLM based AI models. Alphafold was trained on biophysics and genetic data, not the internet. Most AI uses in scientific articles are not based off chatgpt or claude, they are based on much narrower applications of models on specific types of data (like imaging data, genetic data, ecological data). Sometimes they need super computers, but it's nothing like the OpenAI version of AI. Additionally, a program to help people file their taxes, or even modernize a power grid would be much narrower and slimmer, something you could probably develop and train on a high end laptop. The big LLM models might be able to do some of that stuff well too, but it would be a huge expense verse what is necessary to achieve those goals, like using an over engineered, power sucking, high powered lazer beam to cut butter when a butter knife would work just fine.
youtube
AI Responsibility
2026-04-22T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyVpGv9sBveMGotwnx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxnCJmeEPLlJmMfj-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxFixHL-pXyEP4kZo94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwDwh9l2qXM18tKlG54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwyl10Buc2VMavn7Lt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxNa4VfTvnfo-tt3iJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy5TTAyKcDFxZKHvvt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxQFMbw7mT2dtX0Nm94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwO0VrKOIcQ3L1wULh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyG4IuOQbBIjLapUB94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
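The raw model response above is a JSON array of coded rows, one per comment, each carrying the same four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how a lookup-by-comment-ID could parse such a response; the function name is illustrative, and the sample payload is abridged from the response above:

```python
import json

# Abridged raw LLM response, in the same shape as the array above.
RAW_RESPONSE = """
[
  {"id": "ytc_UgyVpGv9sBveMGotwnx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy5TTAyKcDFxZKHvvt4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and index the rows by comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codes = index_by_comment_id(RAW_RESPONSE)
print(codes["ytc_Ugy5TTAyKcDFxZKHvvt4AaABAg"]["emotion"])  # indifference
```

Indexing into a dict makes the "Look up by comment ID" operation O(1) per query instead of a linear scan over the array.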