Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "open ai is not limiting itself and lying to themselves like it seems these germa…" (ytc_Ugxk6wTUm…)
- "The moment AI takeover this kind of work is the death of creativity in the indus…" (ytc_UgxL1lR_H…)
- "What happened to “being careful with your eyes on the road “not”drink a fucking …" (ytc_Ugw3SYwVj…)
- "WTF? Wages for an agent? Rights? This is woke-level empathy. Anthropomorphism. …" (ytc_Ugx9vDqNa…)
- "The elites believe we are a problem. Their goal is to replace over half of us wi…" (ytc_Ugz6s1dKe…)
- "In 2025 AI still cannot write a good real world code. It can still demo. We huma…" (ytc_UgzzmvrL9…)
- "You video is well build but... You are promoting a very dark path of the use o…" (ytc_Ugx018TCE…)
- "If I were to ask you to draw me an anime catgirl, how would you do it? By remem…" (ytr_UgzbeR0Hr…)
Comment
I support Ed's message 100% and yet there is something he neglects to talk about: in a world where AI companies do pay for all their training data, how can there ever be a mutually beneficial model when no matter how clean the data is, the end goal IS to replace human creativity? You can pay for art, writing and music and make it all as fair as possible, but the objective is still to make AI so capable that it outperforms and outsells every artist, writer, and musician going forward. What do you say to that, Ed? Isn't there a moral argument beyond your legal and ethical one? If we don’t examine the morality of this whole enterprise we will still lose human creativity in the end.
Source: youtube · Posted: 2025-04-12T20:1… · ♥ 41
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxvcHb4Vde4mMxiNSR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzqjKHaYtMFOlrGo7B4AaABAg", "responsibility": "industry", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgysmYUJoFDbz1LZSmV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyjQiiv6_h6Hn9LIDZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwpFm3GMA8AtmYRVHB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyLrmmEzx0GmwhSa3t4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwLXYtOuaxvub-J_Yt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzEqw3Wd37fn6t9YH94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxBFdAoVjQrY11TJJx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugwg8CJfojk22QSfhR14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
```
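Each raw response is a JSON array with one record per comment ID, coded on the four dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and validated before use (the value vocabulary below is inferred from this one sample and the full codebook may allow more values; `parse_batch` is a hypothetical helper, not part of the tool):

```python
import json

# Allowed values per dimension -- ASSUMED from the values observed
# in this sample batch; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"company", "industry", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "mixed", "approval", "indifference"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments),
    keeping only records with an id and in-vocabulary values."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id"):
            continue  # drop records the model returned without an id
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgxvcHb4Vde4mMxiNSR4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
print(len(parse_batch(raw)))  # 1 valid record
```

Validating each batch this way catches the two common failure modes of structured LLM output: missing identifiers and out-of-vocabulary labels.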