Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "One of the reasons I left my last job, you already got my entire handprint now y…" (`rdc_iyytbiq`)
- "FBI sponsors facial recognition studies at universities, so we're getting close …" (`ytc_UgwN_-0Tj…`)
- "for your name to be \"seeker of truth\" you should go seek some truth. where did y…" (`ytr_Ugx-ywElv…`)
- "This. And seeing how China views IP theft as their CCP given right, vigilance wi…" (`rdc_gt77vzt`)
- "You know, it seems like some people are okay with _using_ AI until they find out…" (`ytc_UgxqQfw76…`)
- "there MUST be regulations as to what percentage any company can use AI AND…" (`ytc_UgwgMoAAX…`)
- "Track them out, take them down. They should just make AI art conventions and F o…" (`ytc_Ugy7JAWmx…`)
- "Sounds like using unreliable AI instead of a human being for these kinds of situ…" (`ytc_UgyvgKpzM…`)
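The sample IDs appear to encode their source: `rdc_` for Reddit comments and `ytc_`/`ytr_` for YouTube comments and replies. A minimal sketch of the look-up-by-ID flow, assuming an in-memory index — the store, records, and prefix mapping here are all hypothetical:

```python
# Hypothetical index: comment ID -> coded record. In the real tool this
# would be backed by whatever store holds the coding results.
CODED = {
    "rdc_iyytbiq": {"responsibility": "company", "emotion": "outrage"},
    "ytc_UgwN_example": {"responsibility": "none", "emotion": "mixed"},
}

# Prefix -> platform, inferred from the sample IDs above (assumption).
PLATFORM_PREFIXES = {
    "rdc_": "reddit comment",
    "ytc_": "youtube comment",
    "ytr_": "youtube reply",
}

def lookup(comment_id: str) -> dict:
    """Return the coded record for a comment ID, annotated with its platform."""
    record = CODED.get(comment_id)
    if record is None:
        raise KeyError(f"no coded record for {comment_id!r}")
    platform = next(
        (name for prefix, name in PLATFORM_PREFIXES.items()
         if comment_id.startswith(prefix)),
        "unknown",
    )
    return {**record, "platform": platform}

print(lookup("rdc_iyytbiq")["platform"])  # reddit comment
```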
Comment
title is why you should be polite to ai, in the video she doesn't say why, she just says you should get the ai to roleplay, which is 1/4 of a prompt strategy, then says because it's roleplaying, you might as well be polite to it. Being polite to ai wastes tokens and often encourages it down the fastest route, giving a worse answer 🫣🤣 and costs millions worldwide 😅 in terms of sustainability and performance it's actually bad. Dont be polite to your ai. Nice lady, poor video.
Google "does being polite to ai" and see what the autofill suggestions are.
Even sam altman himself has stated please and thank you alone costs openai millions.
Its a 2/10 for me on this one
youtube · AI Moral Status · 2025-11-28T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
{"id":"ytc_UgxJeUyrrEH51FqQC0V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOml7frZYIxfeaWgN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwoQXtNm_i7kMMm_414AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzpusYY-VKg6-uDvu54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx1X5FBhg_2-Tdl-np4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxlJoAe2Fhwu8Dho6V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzfR8n8XMeE1Lwk1N54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyqU1iV4eTx88taDud4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwGPyLI3S5s0U4qQIp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgygozJV5Ui86mdh9ix4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
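A raw response like the one above is a JSON array of per-comment codings. A minimal validation sketch, assuming the allowed values per dimension are those seen in this sample (the real codebook may define more categories):

```python
import json

# Allowed values per coding dimension — inferred from the sample output
# above, not from an authoritative codebook (assumption).
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate"},
    "emotion": {"mixed", "approval", "outrage", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each coded comment
    against the allowed dimension values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"mixed"}]')
print(len(validate_batch(raw)))  # 1
```

Rejecting a batch on the first out-of-vocabulary value keeps malformed model output from silently entering the coded dataset.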