Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- “Ai art isn’t even your art. It’s literally telling someone to draw something and…” (`ytc_UgzYpx0gi…`)
- “Nope. It is important to learn how to research and how to write, by your own, wi…” (`ytr_UgwtsRLem…`)
- “People here are so obsessed with ChatGPT that when these stories of delusion com…” (`rdc_my66bl6`)
- “AI art is the death of the human artist IMO. All that will remain is nepotism an…” (`ytc_UgyTElklz…`)
- “It’s not Art it’s generator 😭 Ppls stop saying “Ai Art” it’s “Ai generated pictu…” (`ytc_UgygaC6Lm…`)
- “But... When you make robots that do most tasks, and can make themselves, labor i…” (`ytc_Ugzf92tMr…`)
- “How would they be able to ban AI art? Some of the better models are open source…” (`ytr_UgzTDdzPZ…`)
- “The second AI is in robotics, they will start reducing population since they won…” (`ytc_Ugw2RC36L…`)
Comment
I've just started the video, but my main concern about A.I. right now is how it is eroding our already flimsy ability to agree on reality. In my experience LLMs are pretty bad at answering questions accurately, and yet a scarily large proportion of people seem to have complete trust in them. We are hurtling toward a future where no one can agree on anything, not even fundamental reality. So far in my life, I have seen no compelling use for any kind of A.I. system. There are certainly some convenience features, though all of the ones I've seen come with some pretty serious negative consequences, either for the user long-term or for other people. It just feels like people are actively choosing to prop up incredibly harmful shit with no regard for the consequences (kind of like what happened in the US in November of last year).
youtube · AI Moral Status · 2025-11-04T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzdD362N-69jb_GqO54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwFKzdZ6IS3bSjeDGB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwK8vNHvAAC4qgyPZB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyi9ZyCrLQY6-3cWCF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzHDlDtpu7Dv0PEtkx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz8TKA8OgiK9y0qax14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyMJI7gRBEnkFgn6JB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwcNk_cuVklAe_4VVp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGrgrKNaUKIJiZ74l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwUMsFWYfQOUsLfRIB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
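The raw response is a JSON array of coded records, one per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and looked up by ID (the variable names, the `index_by_id` helper, and the sample IDs below are illustrative assumptions, not part of the actual tool):

```python
import json

# Sample response in the same shape as the raw LLM output above.
# The IDs here are made up for illustration.
raw_response = """
[
  {"id": "ytc_abc", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_def", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(response_text):
    """Parse the model's JSON array and key each coded record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_def"]["emotion"])  # fear
```

Indexing by `id` makes the "Look up by comment ID" view a single dictionary access per query rather than a scan over the array.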