Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgzI4L7GB…`: "wow. I'm studying at a german university. The professors use AI as if its the be…"
- `ytc_UgwZ7r4rn…`: "And what about the likes of one individual I can think of who not only harassed …"
- `ytc_UgzBNEifU…`: "Not a chance. A.I. has read millions of books and movie scripts. It has all the …"
- `ytc_UgwBwArYJ…`: "If I call dominos I don't get credit for making that pizza. I don't get to say "…"
- `ytc_Ugx5Ir220…`: "Not being rude but not gonna lie but her art looks like ai except with no mistak…"
- `ytc_Ugyv84ypU…`: "More facial recognition please. Crime is skyrocketing in the cities and innocent…"
- `ytr_UgwX8XJ2s…`: "@raymondfranko2894 Didn't see that one. Give me the link. I saw quite a few tha…"
- `ytc_Ugw1Zh1Or…`: "lucky us they are (still) electrical. Time to invest in a fairly large EMP. 🎉 Gr…"
Comment
The AI slop layer is tolerable, as long as you are the one writing the automated tests and your test coverage is sufficient. You don't commit any code unless your tests pass. Don't let your AI agents touch your tests. Don't like what your AI's implementation is doing, or how it's performing? Add a new test case. Vibe coding is the current buzzword, but the buzzword from the last decade or so is TDD. I suspect the coders and software companies that *properly* followed this approach are doing just fine right about now.
youtube · AI Jobs · 2026-02-06T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyOAGYJqQJZNXiOqoZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzXVKeBSdHAzqhF0LJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwK7_ixZnc95ZBySAF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzL2OsFjkgsgyWLlMV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxTso8uwltMTMJE8Ct4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw86jtv3GeyQ6cLah54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxs2wBy4SwHgETWNhF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgygJMC0qAmr_mwoajZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgygVW9ZjADTypr5Dv94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwrasYhq_7QB2Uq7zV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
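The raw response above is a JSON array of coded records, one per comment. A minimal sketch of how such a batch might be parsed and validated before being stored, assuming the allowed values per dimension can be inferred from this dump (the real codebook may permit other values, and `validate_batch` is a hypothetical helper, not part of the pipeline shown here):

```python
import json

# Allowed values per coding dimension, inferred from the records in this
# dump -- an assumption, since the full codebook is not shown.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coded records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Each record must be an object with an id and one legal value
        # for every coding dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: one well-formed record and one with an out-of-codebook value.
raw = (
    '[{"id":"ytc_x","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"industry_self","emotion":"indifference"},'
    '{"id":"ytc_y","responsibility":"robot"}]'
)
print(len(validate_batch(raw)))  # only the first record survives
```

Validating against a fixed value set like this catches the common failure mode of LLM coders inventing labels outside the codebook, so bad records can be re-queued instead of silently polluting the results.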