Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- "AI and automation are GOOD, though. It's annoying when fellow leftists get luddi…" (ytc_UgxAa6ryt…)
- "They’re 100% using thousands of bots to push this agenda / It’s in ai companies b…" (rdc_obwh52a)
- "Well this is just a warning for man kind that world war 3 4 5 will be the cause …" (ytc_UgxiQlmxm…)
- "hello ai bros, did you know a pencil IS technology and if you say artists that h…" (ytc_Ugwg14_cb…)
- "robots will not be plumbing in 5 years. All I had to hear to know this guy is br…" (ytc_UgxMJXUoD…)
- "I am dating someone who is going into graphic design and posts their work online…" (ytc_Ugx_nqqkP…)
- "The misogynic culture here is high and loud. The fact that porn became illegal i…" (ytc_UgxPjWFY9…)
- "Thank you for yet another inaccurate explanation of how large language models wo…" (ytr_Ugy8dpRnN…)
Comment
Here is a conclusion from Gemini when I asked if using please and thank you costs money. Conclusion:
While adding politeness words does incur a slight increase in computational cost and processing time (measured in tokens and energy consumption), the data suggests that it can be a worthwhile "cost." The potential benefits include:
- Higher quality, more accurate, and more comprehensive AI responses.
- Improved user satisfaction and a more natural interaction experience.
- Reinforcement of positive communication habits.
Therefore, while "please" and "thank you" add a small, quantifiable cost, they often contribute to a more effective and beneficial AI interaction, potentially saving time in the long run by reducing the need for follow-up prompts or corrections due to unclear or biased responses. Sam Altman's sentiment of "tens of millions of dollars well spent – you never know" highlights this trade-off between immediate computational efficiency and the broader value of human-like interaction and improved output.
Source: youtube · AI Moral Status · 2025-07-03T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
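The four coding dimensions above take categorical values; the categories observed in this document's coded output can be used to sanity-check a record before it is stored. A minimal sketch, assuming the vocabularies below are complete (the real codebook may define more categories, and `validate` is an illustrative name, not part of any tool shown here):

```python
# Allowed values per coding dimension, as observed in this document's output.
# Assumption: the actual codebook may contain additional categories.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "company", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "outrage", "indifference", "fear"},
}

def validate(record: dict) -> list:
    """Return a list of problems with one coded record; empty means valid."""
    problems = []
    for dimension, allowed in CODEBOOK.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}={value!r} not in codebook")
    return problems

# The row from the coding-result table above.
row = {"responsibility": "none", "reasoning": "consequentialist",
      "policy": "none", "emotion": "approval"}
print(validate(row))  # []
```

A missing or misspelled dimension shows up as one problem string per bad field, which makes batch-level error reporting straightforward.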
Raw LLM Response
[
{"id":"ytc_UgxWct-DMktSzO0FFp54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw-_AMeqC33hBX4PWN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxBLX5twu4irnY15GZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzt_0dLnlIxoTy0Zex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwefRvn6ca3LMVTLYR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy0JqEk16j_DQ14WKd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx8ls1m8GdAiwOtOfB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwx53YVL2QHIHAYltJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgywAXWpYaInx54B-AJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyAn2by2C84FDrz47x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
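The "look up by comment ID" view above amounts to parsing the model's JSON array and indexing each coded record by its `id` field. A minimal sketch, using two records copied from the raw response above (`index_by_id` is an illustrative helper name, not part of the tool shown here):

```python
import json

# Two coded records copied verbatim from the raw LLM response above.
raw_response = '''
[
 {"id":"ytc_UgxWct-DMktSzO0FFp54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgyAn2by2C84FDrz47x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
'''

def index_by_id(payload: str) -> dict:
    """Parse the model's JSON array and key each coded record by comment ID."""
    records = json.loads(payload)
    return {record["id"]: record for record in records}

coded = index_by_id(raw_response)
print(coded["ytc_UgxWct-DMktSzO0FFp54AaABAg"]["emotion"])  # outrage
```

Keying by ID also makes it easy to detect duplicate codings: if the model emits the same comment ID twice, the later record silently overwrites the earlier one, so a production version might want to check `len(records) == len(index)` first.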