Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgzlT0d2H…`: "Current A.I. methodology and technology are still just toasters. Making the mode…"
- `ytc_UgwA--z0W…`: "1:27 Was the driver of the Dump Truck high on something? That looks very inten…"
- `ytc_UgwDXYK0a…`: "One is to wear hi vis clothing while riding. It might not help the AI system to …"
- `ytc_UgwkeN2Qy…`: "Art is a skill, AI is a tool, MASSIVE DIFFERENCE!!!!! To those discouraging oth…"
- `ytr_UgzwVjNGW…`: "I wonder if that guy will have any bias in doing a deep dive into the A.I's impo…"
- `ytc_UgzUmfYaI…`: "why dodge when you can check? Winston AI is solid at spotting AI-generated conte…"
- `ytr_Ugwjf_iNe…`: "Colin Richardson : This lady was right in front of the Car , She should have nev…"
- `ytr_Ugxt5SRJY…`: "@muai AI art will fail regardless. If everyone can "make" ai art, actual real ar…"
Comment
It's sad to see a scientist come up with such a baseless piece of "information". LLMs are trained on large bodies of text about specific subjects. All they do is find the right word sequence (i.e. sentence structure) based on the sequence of words inside your question/prompt and based on what word sequences it can find "similar" to what you're asking. So adding phrases like "please, hellp, if cou can, etc" is simply going to have a negative impact on the quality of responses, because unless you're asking a question about the rules of greetings in a certain language or how mannered people behave/talk, those phrases do not exist in specialist bodies of texts that are used for training LLMs. How many reference documents on Law, Art, Architecture Engineering, etc you can find that have words like "please, hello, thank you, if you could, etc" in them? By including those phrases you're making it harder for LLM to find a sequence of words "similar" to the core of what you're asking.
youtube
AI Moral Status
2025-08-16T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgwfEoXF1cyZOu2lUsF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwv63Qi6vwHyfzXhyt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx917xU39bc-rBI6il4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyjSlKYRPTmoUAS5CJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxIinkSFkjxJiHfQhV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzzhA3wqO5ClK3uhyt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzvoLDQVk60yE_z05R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy-TElpeDHA8RVAIZJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyoJCcxdxMuaxNFy1p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwZ3X3iSPm2Y9HjDkd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}]
```
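The lookup-by-comment-ID step above can be sketched as follows. This is a minimal sketch, not the dashboard's actual implementation: it assumes the raw LLM response is a JSON array of objects keyed by `id`, as shown, and reproduces only two of the entries for brevity.

```python
import json

# Raw LLM response, assumed to be a JSON array of per-comment codes
# (two entries from the response above, reproduced for brevity).
raw = '''[
{"id":"ytc_UgwfEoXF1cyZOu2lUsF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxIinkSFkjxJiHfQhV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"}
]'''

# Index the coding results by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the coding result for one comment ID.
result = codes["ytc_UgxIinkSFkjxJiHfQhV4AaABAg"]
print(result["responsibility"], result["emotion"])  # prints: developer outrage
```

In practice the same index would be built once over the full response and consulted each time a sample is clicked.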