Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Because "making" A.I "Arts" Isn't your own, it's the A.I's, you merely command i…" (`ytc_Ugy3Fernw…`)
- "They're all pushovers. Justifying shitty actions because everyone else is doing …" (`rdc_o88gktb`)
- "It can’t make humans more creative? I get it’s self preservation but vilifying A…" (`ytc_UgzccsIwp…`)
- "Scary stuff. Something I always wonder with every regulation which relies on the…" (`ytc_UgwrIOv3j…`)
- ""the only thing AI cant copy is humane imperfections and it is specifically thos…" (`ytc_Ugy-fXQbd…`)
- "This is disgusting. Stop these innovations which will harm the society. Corporat…" (`ytc_UgxW3ayjL…`)
- "ai just makes life boring i want to draw i want to talk ti llove ones i want to …" (`ytc_UgwJN5D6c…`)
- "There is no AI “art” / It is, by the definition of art, not art / They are pictures.…" (`ytc_UgzrSsazY…`)
Comment
Thought 1: Suppose a state is reached in which the majority of humans have no resources to consume. How will growth be measured by the corporations in this state? In this state, what are the corporations competing for, and what is their value proposition?
Thought 2: If AI is as intelligent as hyped, why does it need humans to annotate? Can't it learn to annotate itself? The answer is no, and this is how you know it is never going to achieve an AGI state. Suppose the state above were feasible: in that state, when the majority will not consume, what will be the goal of annotating and training?
Thought 3: Innovation: AI does nothing more than vomit some combination of content generated in the past. AI is not known for generating knowledge; in fact, all attempts at knowledge generation have been deemed garbage. What is the basis for the belief that AI will solve any existential problem like the climate crisis?
Thought 4: Is everything humans have ever done or conceived worthy of replicating? Think of the potential thermonuclear war from last Tuesday or the response to the most recent pandemic.
Thought 5: Are these companies' CEOs gods? Who gives them the right to decide who is a "have" and who is a "have not"?
Thought 6: Suppose AI performs surgeries, and suppose you go in for surgery and it is unsuccessful. What is your recourse, and against whom?
Thought 7: Read Tim Snyder's book On Tyranny. Lesson one tells you NOT to obey in advance. Why choose to give away what makes you human? As a CEO, it is bizarre that you throw your hands in the air and concede that the theft of intellectual property is done and cannot be stopped.
Thought 8: These models are not curated by anyone but the companies that produce them. What do you know about these self-regulations that gives you confidence they are implemented in good faith for humanity? Why can't independent pairs of eyes look at them, stress test them, and so on? We stress test any material we use to build anything because the cost of failure is high. Do you think the cost of AI failure is low?
Thought 9: We have been functioning since the 60s as many small bicycles: there have been successful models, trained on just enough data to make a business difference. Biological evolution is all about solutions that are "just enough," and so are most locally developed, small-scale solutions. We have had success with these for 60 years. What is the problem? That you have to pay people?
Thought 10: The rush to "back to normal" and "back to the office" in 2022/23 was a response to people's activism when they found themselves with a little bit of extra time, saved by not having to commute. Do you think that the CEOs truly want to liberate people and give them more free time? Give me a break!
Source: youtube · 2026-04-13T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwMmhVjMIvfJnnv7nV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy_UBdxOwfdMpkJU7l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy6p3CIdEnFXgIdNUJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy5PYGaDetBQWTAodp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwTFlIZL2gRidy4wuZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzgNK9Qyy5W-qP4GhB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwL-MN55woqF4uSsS54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwduRSOpUUERCqwotx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzwhwV0Mv--3sUqtv54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxFlbVxfX1FC2vX_kl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
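A raw response like the one above can be consumed programmatically. Below is a minimal sketch of parsing and sanity-checking such a batch; the allowed-value sets are assumptions inferred from the values visible in this log, not a confirmed codebook, and the sample record is hypothetical.

```python
import json

# Assumed vocabularies per dimension, inferred from the values seen in this
# log; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "approval"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coding records) and
    reject any record with an out-of-vocabulary dimension value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]')
print(len(parse_codings(raw)))  # → 1
```

Validating against a closed vocabulary at ingest time catches the common failure mode where the model invents a label outside the coding scheme.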