Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Let me preface this by saying that a lot of the problems I work on are more technical than what most people will deal with.
I had the opportunity to cheat with AI. GPT became useful sometime midway through my junior year of college. I used it to study, but it wasn't (and still isn't) good at solving math, whether it's working through it step-by-step or via code. It never felt like a tool that was worth using even if I were to cheat (which I avoided). Even now I occasionally dabble in textbook problems that interest me, and no AI is capable of solving them without informed direction.
In my career, my interaction with AI has been a mixed bag. If I'm learning something new, it's usually pretty good at getting me started, but as soon as things become slightly complex it becomes very impractical. Even my paid GPT subscription (with any model within it) gets extremely "confused" if a prompt is too long, and it will often provide a solution that doesn't work. Only through manual editing do I arrive at something that feels patched together, and by the time it's refined, I feel I could have just taken the time to really learn the material and then be able to do it again much faster in the future.
Even for simple tasks like rooting a phone for the first time, which I did today, AI was much less helpful than the typical instructional articles I used. It replaces reaching out to forums for help because of its quicker response time, but it's often too verbose and not totally accurate. When context is the thing that matters most, it loses that context or never really has it to begin with.
So, imagine that more technical fields are like English:
If you were taking your second college English course and wanted to write an MLA-format essay of five paragraphs, with a thesis and three arguments, each argument getting a paragraph of its own, and a conclusion summarizing the thesis and arguments, then you would need to provide the AI with the desired format, thesis, arguments and their order, sources, desired tone, and ending message. By the time you have the material for the prompt, you've done all of the work and it's just assembling.
That said, plenty of people who graduated with me got by on partial credit from AI's bad responses or just copied Chegg. We all know who they are, and when their name comes up because they applied at our company, or because someone works with them on a project for two weeks and realizes it, it will have caught up to them.
youtube
2025-08-03T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyHbdYBJnTeqWalTa14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzkdCumIWA1BnGhl094AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgypIkhMlp8DycWVAr94AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzYnX2BHkOt2TTDkhR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgymowSH8oosZZVTTmh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugztn7LZQI3qPGzYsFR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwToL80ztSVW7mEt2J4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzsONQ820CN_KP7bAZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgxruYJ6sAYgq2zia6d4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyWZg6-0euZ6l2mLLp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"}
]
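The raw response above is a flat JSON array where every row carries the same four coding dimensions shown in the result table. A minimal sketch of how such a batch could be parsed and checked before ingestion follows; the allowed values here are inferred only from the codes that appear in this batch (the real codebook may define more), and the function name is illustrative, not part of any actual pipeline.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# codes visible in this batch; the actual codebook may be larger.
CODEBOOK = {
    "responsibility": {"none", "company", "distributed", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear", "mixed"},
    "policy": {"none", "liability", "regulate", "industry_self", "ban"},
    "emotion": {"resignation", "outrage", "fear", "indifference",
                "approval", "mixed", "disapproval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError("row missing comment id")
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row['id']}: unexpected {dim}={row.get(dim)!r}"
                )
    return rows

# Hypothetical single-row batch in the same shape as the response above.
sample = ('[{"id":"ytc_example","responsibility":"user",'
          '"reasoning":"deontological","policy":"industry_self",'
          '"emotion":"approval"}]')
print(len(validate_batch(sample)))  # 1
```

Validating before ingestion matters because LLM coders occasionally emit labels outside the codebook; failing loudly on an unknown code is safer than silently storing it.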