Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
So I get Ai to generate a letter to someone who has Ai read that letter and repl…
ytc_UgwkT4GNS…
58:55 that's actually a good idea to get legislation passed. Deepfake the men in…
ytc_UgyFQtmTi…
Her pupil is wonked up and her pores are nonexistent. Sorry you feel AI is a rea…
ytr_Ugzg9LNj3…
Hopefully that incentivizes people to do a double take on how their footprint is…
ytr_Ugxe8Rhu4…
@milkmaster6984 the difference is that you're a person and not a thing, you drew…
ytr_UgxL7n231…
Control is an illusion... We can't control AI... Think about it... What would yo…
ytc_UgxhH5sfe…
It's not the AI that's "hallucinating", it's the senior leadership and managemen…
ytc_Ugw2emWpz…
If there ever was a video that I wanted to move up the algorithm, it would be th…
ytc_UgxPJcDg1…
Comment
AI is more qualified to answer this:
Grok 3
Is it really true, based on your understanding, that rude or less-polite prompts generate less effective or lower-quality responses? Why would rude or direct input without politeness be less likely to generate a good outcome?
The summary basically says it just depends on how much detail and specificity you include in your prompt, not how polite you are.
"For example:
Polite: "Could you explain why my code isn’t working, focusing on potential syntax errors?" Clear intent, specific focus → I can zero in on syntax issues.
Rude: "Why’s my code trash? Fix it!" Vague, no context → I’d have to guess what "trash" means or ask for clarification, potentially leading to a less useful response.
So, it’s not that rudeness inherently produces "bad" outcomes; it’s that rude or direct prompts are often less clear or detailed, which can limit my ability to nail the response on the first try. Want to throw me a deliberately rude prompt to see how I handle it? 😄"
youtube
AI Moral Status
2025-06-08T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxa9Vab3r5MK99g4t54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwEOW27tHPrhRFncZ14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgymuVQlW3d0EvSjURZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw5PefzIeb2RtCgrzB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzU03Hc_hSDhXCBrNV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxbsQFBfxqeOnXgQV94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyDUWsr6YyHEiF8xfJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzshwOfuKGGvDbs5FV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgycB8VIrARhe5sjeVp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxHjpp_pMW5OZFEfsB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
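The raw response above is a JSON array of per-comment codes along the four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and validated before lookup by comment ID — the allowed label sets below are inferred only from the values visible on this page, and the real codebook may be larger:

```python
import json

# Allowed labels per coding dimension. ASSUMPTION: these sets are inferred
# from the values observed in the raw response above; the full codebook
# used by the coder may contain additional labels.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "approval", "outrage", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw coding response into {comment_id: codes},
    dropping any record with a missing ID or unrecognized label."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec.get("id")
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if comment_id and all(codes[d] in ALLOWED[d] for d in ALLOWED):
            coded[comment_id] = codes
    return coded

# Hypothetical record using the same shape as the raw response above.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["emotion"])  # approval
```

Keying the result by comment ID matches the page's "Look up by comment ID" usage: once parsed, each coded record is retrievable in constant time from its `ytc_…`/`ytr_…` identifier.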