Raw LLM Responses
Inspect the exact model output behind any coded comment. Look up a comment by its ID, or pick one of the random samples below to inspect it.
- “A look at the link between AI, consciousness and the Alignment Problem. Very fun…” (ytc_UgxIh9qFg…)
- “The real problem is the person in the car that doesn’t take over and correct err…” (ytc_UgxQw3oxc…)
- “Well lets state the obvious here. AI is going to take 99% of jobs at some point …” (ytc_UgypdmMnn…)
- “Very true, but also kinda whataboutism at this point. I do not think that was yo…” (ytr_Ugyrn8Uim…)
- “I can't believe we're already here. 10 years ago it seemed like AI was at least…” (ytc_Ugxkc8jJb…)
- “Why so serious? Humans are exceptionally good at faking the very emotions that u…” (ytc_UgxN_Iqa2…)
- “This is an interesting breakdown. AI's moving on another level. So much that I f…” (ytc_UgxqDBEL3…)
- “RIP To the young man. AI is no good for the human race. Though, there’s more to …” (ytc_Ugy68hLoD…)
Comment
Surprised that no one here is addressing the massive ecological costs of AI—its energy demand, resource extraction, and e‑waste—which are already accelerating climate impacts. Framing AI as a simple “horses to cars” style transition ignores the very real difference: our planet’s ecological limits. Technosolutionism assumes society can always adapt, but oversimplifying like this risks greenwashing the scale of the problem.
Would be truly interested to hear how someone with Neil deGrasse Tyson’s scientific rigor or Hasan Minhaj’s investigative lens weighs these environmental costs—not just the economic disruption. Isn’t it essential, especially in public conversations about the future, to confront whether the benefits of AI outweigh the risks to planetary health, and what kind of oversight or systemic change might actually be required?
Platform: youtube · Video: AI Moral Status · Posted: 2025-08-26T13:2… · ♥ 151
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
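The four coding dimensions and the labels that appear in this batch can be summarized as a small validation schema. Below is a minimal sketch in Python, assuming only the labels visible in the table above and in the raw response below; the full codebook may define additional categories, so treat the label sets as illustrative rather than exhaustive.

```python
# Coding dimensions and the category labels observed in this batch.
# Illustrative sketch: the real codebook may include additional labels.
CODING_SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer", "company", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "approval", "fear", "outrage", "resignation"},
}

def validate_coding(row: dict) -> bool:
    """Return True if every coded dimension uses a label from the observed sets."""
    return all(row.get(dim) in labels for dim, labels in CODING_SCHEMA.items())
```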
Raw LLM Response
```json
[
{"id":"ytc_UgzAA3MC9UcR8jr_FtZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzqrdEU7lnWriTJTU54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz14A5rJAd1rOvAHV14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzIqRqsQc7pWE-Jjm54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzQn-egPg5wOhWauQZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwPz9KI3h17pYu0WdF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxKcsUwARd95Gs0i_Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy-vtJL40aMP04gScR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzk2oRTji4A3edi4At4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzRaPUVzGqpDhZ9ytZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
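Lookup by comment ID amounts to parsing a stored batch response like the one above and filtering on the `id` field. The sketch below assumes the raw response is valid JSON in exactly this array-of-objects shape; the function name `find_coding` and the variable names are illustrative, not part of the pipeline's actual code.

```python
import json

def find_coding(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw LLM batch response and return the coding for one comment ID.

    Assumes the response is a JSON array of objects, each carrying an "id"
    field plus the four coding dimensions, as in the example above.
    """
    for row in json.loads(raw_response):
        if row.get("id") == comment_id:
            return row
    return None

# Example: look up the coding shown in the result table above.
# find_coding(raw, "ytc_UgzIqRqsQc7pWE-Jjm54AaABAg")
# -> {"id": "...", "responsibility": "distributed", "reasoning": "consequentialist",
#     "policy": "regulate", "emotion": "outrage"}
```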