Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
45:00 Generally, I tend to agree. However, one of the most significant recent breakthroughs in AI (the Attention Is All You Need paper) came not from academia but from Google, a profit-driven company. The same goes for DeepMind, a private enterprise, which just won the Nobel Prize for solving protein folding. Those discoveries helped ignite innovation across virtually every scientific domain, in both depth and breadth. If more companies built research arms on the scale of Google's, could they eventually replace academia altogether? Maybe. As it stands, academia feels increasingly dysfunctional: a slow, wasteful, and inefficient way to conduct scientific research, often poor in its use of resources. A credible disruptor is badly needed, but it hasn't yet fully emerged.
Source: youtube · Video: AI Moral Status · Posted: 2025-08-10T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwSyrmk1Hv14k0dap54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzvCsfxkSjYgygfH2B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwVJPKbIJUYUf3I4Ol4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgygHJGOkrLuBDwdzrN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxMfUj3wJg5MszdQnl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwWO_aFhK3d-eIwsH14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx8p7YGjp-2o9OiJK54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxbB1KgFpmwm71mont4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx3k-4zgcoM-5qfLvh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz1BIulg81BS8nXNet4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
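A raw response like the one above can be parsed and sanity-checked before the coded values reach the results table. The sketch below is a minimal validator, assuming the response is a JSON array of per-comment records; the allowed value sets are inferred only from the sample shown here and may not cover the full codebook.

```python
import json

# Coding dimensions and the values observed in the sample response above.
# These sets are inferred from one batch only; the real codebook may
# define additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "distributed", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "indifference", "resignation", "fear", "outrage", "mixed"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, rejecting malformed records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records

# Example: one record from the response above.
sample = (
    '[{"id":"ytc_UgzvCsfxkSjYgygfH2B4AaABAg",'
    '"responsibility":"company","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"}]'
)
coded = parse_coding(sample)
print(coded[0]["emotion"])  # approval
```

Failing loudly on an unexpected value is deliberate: LLM coders occasionally drift outside the prompted label set, and silently passing such records through would corrupt downstream tallies.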