Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This was a surprisingly refreshing interview. Respectful and informative discour…" (ytc_UgzdLQo0G…)
- "What ever happened to human communication? Is AI taking 😮 relationship between u…" (ytc_UgxpK027M…)
- "came here after i said a name to chatgpt and it intentionally said it wrong, and…" (ytc_Ugznh62G4…)
- "The robots are better dressed than the host. I was convinced clear back in the l…" (ytc_UgxxLCYEV…)
- "also like if u NEED a ai ref that means ur not looking for a ref of a sertan thi…" (ytc_UgxuiCyu9…)
- "@trevorchester4439 don't buy what? Ai art? You know anyone can make one themselv…" (ytr_UgzWOFhcJ…)
- "Idk if I agree entirely that Pollock is bad or has no artistic value, but I defi…" (ytc_UgyFE5Y7r…)
- "I'm not suggesting your experience isn't 100% accurate; but I will say that, for…" (rdc_oi1jjem)
Comment
I can partially agree with this. problem is that any filler words are just adding to the context window and add to cognitive overload. especially if your response is only filler words, like responding with "please" or "thank you". in that case the AI will respond will filler words also. and thats 2 prompts in the context window that are now useless and adding to cognitive overload. besides that I agree that its a good idea to manipulate ai to act in the way you need, as long as you aren't forcing it to respond a certain way and bypass the knowledge it has that may surpass our own knowledge.
but im coming from a very technical use case, such as software engineering or researching important data.
youtube · AI Moral Status · 2026-03-13T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugx52u0MjaJgBgwz5ZR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzArkAonqzMGHdvQd54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyg_yYXhNfGEAGzwn14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwldolSgpmENyJn9el4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwgRhXhPwoGiLCu-YF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzQ03Zk68LnYS8wr8R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwycZZFwFPBm-TEh754AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1NSUvAcEqPrqUQyN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwLhHQrzECI7WcdgiF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxAtZNXDkBXg0xVLSh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
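
The raw response above is a flat JSON array of coded records, one per comment. A minimal sketch for loading and sanity-checking such a batch — the field names are taken from the coding table above, while `parse_coded_batch` and the sample IDs `ytc_x`/`ytc_y` are hypothetical stand-ins, not part of the tool:

```python
import json
from collections import Counter

# Dimension keys observed in the coded output above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and drop any record missing an expected key."""
    records = json.loads(raw)
    return [r for r in records if EXPECTED_KEYS <= r.keys()]

# Hypothetical two-record batch: the second record is incomplete and is dropped.
raw = (
    '[{"id":"ytc_x","responsibility":"user","reasoning":"virtue",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"ytc_y","reasoning":"unclear"}]'
)
batch = parse_coded_batch(raw)
print(len(batch))                              # 1
print(Counter(r["emotion"] for r in batch))    # Counter({'approval': 1})
```

Filtering incomplete records before tallying keeps a single malformed line in the model output from skewing the dimension counts.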