Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytc_Ugz2SO7dj…` — "This is not destroying AI Art. I'm sorry, but this is only promoting it. I belie…"
- `ytc_UgwXilX8r…` — "Lets first finish the development of VI completely and then start on the AI. K? …"
- `ytc_Ugx_xR9Hr…` — "My book pax Unitas will paint a whole new imagination of how we see ai and it’s …"
- `ytc_Ugw2I31uJ…` — "For the GOAT science communicator, I find his (non) analysis of AGI a trifling, …"
- `ytc_Ugyy2AXf-…` — "Unlikely, the Federal Reserve and the mass printing of fiat currency will wipe o…"
- `rdc_g9w9xoq` — "For the millionth time: ANTIBODIES ARENT AS IMPORTANT AS MEMORY CELLS. YOU DONT …"
- `ytc_UgwSzBcRK…` — "I spent a grand total of 34 hours last week drawing tarot cards for the lead act…"
- `ytc_Ugw9Bi0vd…` — "ai art killed my family im so glad we are fighting back. but fr f*ck ai …"
Comment
While there are fun parts here, on the main topic I thought this was a really disappointing interview because as an evidence-focused scientist Neil deGrasse Tyson completely glosses over the huge failure rate of this type of AI. It _cannot_ summarize science papers because *it gets half the facts wrong!* Nor can it ever be used as a reliable source for anything for the same reason. With your insulting dismissive dig at creatives' criticism of AI you also completely ignore the fact that this AI has been 100% built on *stolen work* taken without consent or compensation. "Don't wory, just innovate!" is not helpful advice to people who are being robbed. The idiots laying people off for AI don't care about innovation or quality (they can't tell the difference), they just want cheap instant results with zero effort (ignoring the hidden cost of subsidies and energy, which will raise prices astronomically).
youtube · AI Moral Status · 2025-08-25T19:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzAA3MC9UcR8jr_FtZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzqrdEU7lnWriTJTU54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz14A5rJAd1rOvAHV14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzIqRqsQc7pWE-Jjm54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzQn-egPg5wOhWauQZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwPz9KI3h17pYu0WdF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxKcsUwARd95Gs0i_Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy-vtJL40aMP04gScR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzk2oRTji4A3edi4At4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzRaPUVzGqpDhZ9ytZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
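Each entry in a raw batch response like the one above should carry one value per coding dimension from the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be validated before the codes are stored — note that the allowed value sets below are inferred only from the table and responses shown here, so the full category lists are an assumption:

```python
import json

# Allowed values per dimension, inferred from the coding table and the raw
# responses above. The complete category sets are an assumption.
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "unclear", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every coded comment
    against SCHEMA. Raises ValueError on a missing id or an
    out-of-set dimension value; returns the parsed rows otherwise."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"missing comment id in {row}")
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(
                    f"{row['id']}: {dim}={value!r} not in {sorted(allowed)}"
                )
    return rows

# Example with a hypothetical comment id:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]')
print(len(validate_batch(raw)))  # → 1
```

A check like this catches the common failure mode of batch coding, where the model invents a label outside the codebook, before the bad row silently enters the dataset.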