Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgwaZW-Vo…`: I think them robot will read our comment and realise how smarter their are than …
- `ytc_UgzdvLXz1…`: The question of whether this particular AI is sentient or not is secondary in my…
- `ytc_UgyqOXURB…`: Truth be told LLM are very far from a real AI and none of this corporate bozos w…
- `ytc_UgzXALa7s…`: This is a dumb video. You do realize that AI went from making chicken scraps 2 y…
- `ytc_Ugw_icdrD…`: No brainer. Semiconductor and AI stocks will dominate 2025. Why I prefer NVIDIA …
- `ytc_UgyT766vi…`: There has been conversations in my company about growing certain team through AI…
- `ytc_Ugyiwvhgk…`: Have you watch the movie, mother everyone dead but what she thinks is right a in…
- `ytc_UgwsaGrh8…`: I fully agree with you, I'm software developer and a "cloud engineer" and only t…
Comment
---
🤖 AI sometimes apologizes even if it didn’t really make a mistake.
😅 Why does it say “my mistake” even when it’s right?
Here’s why:
1️⃣ User-focused experience
AI wants to be polite and natural. Saying “my mistake” makes the conversation smoother. 🗨️
2️⃣ Clarity & trust
If AI gave a wrong answer without acknowledging it, the user might get confused. ❓ Saying sorry shows awareness.
3️⃣ Context-based reactions
AI reacts like a human would — “I made a mistake” is just a way to say: “This answer might not be correct.” 👀
🤔 What happens when you say it’s wrong even if it’s right?
AI listens to your feedback and adjusts. 🔄 It assumes you know better, so it apologizes even for correct answers. 😬
💡 Tip: AI doesn’t feel guilt, it’s just trying to make the chat smooth and helpful.
---
Source: youtube · Viral AI Reaction · 2025-08-14T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxWiZqllcwGIh-aYbt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyR3QnCYBONErxot054AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMMnNc93rNxf2C0QJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyBkQfQdAG9vOHZlbN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxxVBACGvft5EULlDd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwKfhoBrBx0dCW0I_94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw8cx2uBgPSh5dSneF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyiNhEhAu593JT3pJ14AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzpc2DNq9ROjFC-z9Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugze0RxWhYf79IGf-5B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
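A raw response like the one above has to be parsed and checked before it can back the lookup-by-comment-ID view. The sketch below is a minimal example, not the project's actual pipeline: the allowed category values are inferred only from the sample output shown here (the real codebook may define more), and the two comment IDs in `raw` are shortened placeholders.

```python
import json

# Category values inferred from the sample response above; the actual
# codebook may allow additional values for each dimension.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none"},
    "emotion": {"approval", "mixed", "outrage", "indifference"},
}

# Placeholder IDs, not real YouTube comment IDs.
raw = '''[
 {"id": "ytc_AAA", "responsibility": "none", "reasoning": "unclear",
  "policy": "none", "emotion": "approval"},
 {"id": "ytc_BBB", "responsibility": "ai_itself", "reasoning": "consequentialist",
  "policy": "none", "emotion": "indifference"}
]'''

def validate(records):
    """Split records into (valid, errors) by checking each coded
    dimension against the allowed category values."""
    valid, errors = [], []
    for rec in records:
        bad = [dim for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]
        if bad:
            errors.append((rec.get("id"), bad))
        else:
            valid.append(rec)
    return valid, errors

records = json.loads(raw)
valid, errors = validate(records)

# Index valid records by comment ID to support "look up by comment ID".
by_id = {rec["id"]: rec for rec in valid}
```

Validating before indexing means a malformed or off-codebook record surfaces as an explicit error rather than silently polluting the coded dataset.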