Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Do academic and scientific books and articles pay compensation for the hundreds …" (ytc_UgwG1CdIv…)
- "This is not funny and to advise on caution when watching this they are trying to…" (ytc_UgxnPnACd…)
- "My dude, WE'RE going to build the Terminator factories. It's not going to breach…" (ytr_UgyEPONeT…)
- "I feel like AI is suppose to be used as a tool, i dont think people have the rig…" (ytc_UgwOL-qEQ…)
- "6:30 That entire segment made my blood boil. So all it does now is encourage peo…" (ytc_UgxeGFIu0…)
- "AI programs do have access to all of that information. It's all online in millio…" (ytr_Ugzqhx0px…)
- "Yes, the risk is real — but not just because AI is powerful. The deeper risk is …" (ytc_Ugz7PmjDB…)
- "Everyone is now signing up to be a plumber, physiologist, or doctor. Looks like …" (ytc_UgzfYi3JX…)
Comment
I don't really think "it's not even good" or "it doesn't understand form" is a good critique, though. I think given enough time, it may in fact be able to figure out those "issues." And I don't think we should focus on whether or not it's any good, because it shouldn't MATTER how good it is. Because every time you make that argument, it gives the AI assholes another "proving you wrong" moment once the models are better. Proving that they're not as good as an actual artist isn't important, because the core issue with AI art isn't that it's poor quality. Plus, the people you're arguing with don't care if it's good. They're either corporations or people who do not care about art.
If anything, I think it's preferable that it is bad. If they get good enough that they're actually undetectable it's going to be even worse.
Platform: youtube · Video: Viral AI Reaction · Posted: 2025-04-03T04:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzZdq9lKK8FB1zvQWp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxeB6Jzk5t860R03vl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy_VhJE7v6gP0MphQV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzP9tDY5iTP5REo6QV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw6eHMUaRCmn8NpkoJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwXUHOifw3Dmbv3N0l4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"hope"},
{"id":"ytc_UgyNjCfYF1qvLqUiD594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx_FFnGErKEAkCbsyx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxOJxaiuoPUwJSt0m94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwOi4yd_meuWzn-D2d4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"}
]
```
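The comment-ID lookup can be sketched as below. This is a minimal example, assuming the model returns a well-formed JSON array with the five fields shown on this page (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); raw LLM output does not guarantee valid JSON, so real code should also handle parse failures. The `raw_response` here is truncated to two of the entries above for brevity.

```python
import json

# Two entries copied from the raw LLM response above (illustrative subset).
raw_response = """
[
 {"id":"ytc_UgzZdq9lKK8FB1zvQWp4AaABAg","responsibility":"developer",
  "reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwOi4yd_meuWzn-D2d4AaABAg","responsibility":"none",
  "reasoning":"contractualist","policy":"none","emotion":"mixed"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw model output and return the coding dict for one comment ID,
    or None if the ID is absent. Raises json.JSONDecodeError on malformed output."""
    codings = json.loads(raw)
    return next((c for c in codings if c.get("id") == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgwOi4yd_meuWzn-D2d4AaABAg")
print(coding["reasoning"])  # contractualist
```

The `next(..., None)` default keeps a missing ID from raising `StopIteration`, which matters when inspecting IDs that the model dropped from its response.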