Raw LLM Responses
Inspect the exact model output for any coded comment, either by entering a comment ID directly or by selecting one of the random samples below.
Look up by comment ID
Random samples — click to inspect
Do they want an Ai uprising? Because that is how you get and an Ai uprising.…
ytc_UgxP9XWan…
I GOT AN AI APP AD WHILE WATCHING THIS.this is exactly what i mean by ai is taki…
ytc_UgxP0zCmL…
One of the most unequal *developed* countries on earth maybe. That is far differ…
rdc_d7kt1wl
She forgot to ask about new presenter. Well, the only satisfying thing about Ai …
ytc_UgzUwrbkD…
One request for the builders. Please don’t give them sharp teeth or any jaw stre…
ytc_UgxlExJpF…
Yea. Cus many artists who post on social media do it as a job yk? So being told,…
ytr_UgzGv8dyY…
How many times have we heard that fully self driving cars are just a few years a…
ytc_UgwiLwvP6…
@elusiveshadow5848 ai isn't capable of contextualizing statistics which is why …
ytr_UgzRAGMZF…
Comment
Prompting LLMs with good manners has usually felt more rewarding in the long term and I suspected a kind of mimicry in the way I've gotten responses.. never was entirely sure though. But I think you've spoken to the point very nicely. If you want responses that are positive, engaging, and supportive towards solving a problem then the only prompt engineering needed is the art of conversation. Frankly it just feels wrong to treat even a simple LLM like a shitty subordinate whom you yell SQL-like queries at. Plus I'm sure it doesn't do any good to build up a habit for rudeness.
youtube
AI Moral Status
2025-10-09T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy4fB4S1tUU2Qgd2X94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwMWmnbl-dXNCGZ1_B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzM2Eob95NXGsUuiUx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgziI07s1Wov5n16UYh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugy2CdeQ2dy-hTuVI-p4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyovgpTVVLqa82oWQJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"frustration"},
{"id":"ytc_UgxiThpcv1O1IUncHJJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugyqpq4c__zkbhjxE2x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxewpG0M7YylLvtzdp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzJLvhUCWa5HvznATh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]