Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> Respectfully, I don’t think it’s a big deal. How many people do you think actually cross reference tested multiple models on any sort of consistent basis? .01% of all users if that?
>
> Also, spoiler alert, this is a product design and UX decision. And it’s the correct decision. Their naming nomenclature, user education, etc was absolutely abhorrent. For 99% of users this is 110% the correct move.
>
> You have to understand that ChatGPT is primarily a wide user net product. It’s NOT built strictly for engineers, etc. exactly the opposite actually. It seems like they are positioning themselves to be the AI for the mom prepping meals for her kids, etc. and to those users having 7 different models with confusing names is completely non-intuitive.
>
> I would not be shocked if internal data at OpenAi showed that 95% of active monthly users exclusively used 4o with most users never even trying another model.
>
> EDIT: Most people are shocked when they see actual user data.. it’s kind of like when you play a video game and it gives you a trophy for reaching level 2 and it shows the percentage of players that also achieved it: 28%. Like you’re telling me 72% of players that paid 60$ for this game didn’t even continue through level 2?! Now imagine the scale of users that ChatGPT has, their user adoption rate for their non-4o models has to be absolutely pitiful. Not because the models are bad, but because their product design and onboarding and continual user education is just terrible. Not only that, but it just feels bad to constantly switch models. I use LLMs all the time and even I have to remember which model does what sometimes. Now imagine someone that hardly uses AI. They might accidentally use o3 and think “Wow this must be the super old model, it’s taking so long! Back to 4o I go!”
Source: reddit · Topic: AI Responsibility · Timestamp: 1754629161.0 · ♥ 718
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | utilitarian |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_n7kx9iu", "responsibility": "none",    "reasoning": "unclear",         "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_n7khauf", "responsibility": "none",    "reasoning": "virtue",          "policy": "none",     "emotion": "approval"},
  {"id": "rdc_n7ke749", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_n7lskp5", "responsibility": "user",    "reasoning": "deontological",   "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_n7jrln1", "responsibility": "company", "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"}
]
```
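The raw response is a JSON array of per-comment code records. A minimal sketch of how such a batch could be parsed and validated, assuming the allowed values are exactly those observed in this page (the real codebook may define additional categories, and the function name here is illustrative, not part of the pipeline):

```python
import json

# Allowed values per coding dimension, inferred from the samples shown
# on this page -- an assumption, not the authoritative codebook.
SCHEMA = {
    "responsibility": {"none", "company", "user"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological", "utilitarian"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "approval", "outrage"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose codes
    fall inside the (assumed) vocabulary for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

raw = (
    '[{"id":"rdc_n7kx9iu","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"}]'
)
print(len(parse_coded_batch(raw)))  # all codes in-vocabulary, so 1 record survives
```

Dropping out-of-vocabulary records (rather than repairing them) keeps the coded dataset conservative: a hallucinated category never silently becomes a new code.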