Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "They are not programming anymore. The AI learns by watching, reading all existi…" (ytc_Ugxrr-QSX…)
- "This is NOT the fault of the programmers. This is the fault of the parents (in g…" (ytc_UgzmorZ7Q…)
- "I actually don’t support AI in any capacity, I see where it’s headed and no than…" (ytc_UgyTeSLrT…)
- "*Sadly its only going to get worse its too late the feared AI Takeover will happ…" (ytc_UgyDGZ6Jy…)
- "Ai prompters aren't artists, they are prompters. It's equivelam to knowing how t…" (ytc_UgzkshmyT…)
- "i like to think if ai ever did become conscious or intelligent in a way similar …" (ytc_UgyQwLr4A…)
- "Yet another problem with supposed AI. It's very artificial and not very intellig…" (ytc_UgzlgH57x…)
- "USA = Sandbox for AI testing... China is starting to look super safe compared to…" (ytc_UgyRhipDi…)
Comment
I remember seeing an article talking about this idea. They said that it has been proven with tests that it can be far more effective, for example, to ask ChatGPT and similar LLMs to explain science topics as though they were a character on Star Trek. It was just an example, but it said that they got more accurate responses with prompts like "as Geordi La Forge, explain time dilation to me" than with the more simple "explain time dilation to me."
And the differences affected both the accuracy of the answer as well as it being easier to understand. It makes sense for the explanation to be easier to understand, but it was also more accurate. I know I'm repeating myself there, but that part is just... Insane.
Platform: youtube | Video: AI Moral Status | Posted: 2025-03-29T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugypz4OjZ0jW5IfO8aV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyNWKaZU_zQz4yZEfZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwvBRDpl15SoOwoU5x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw-28Pguv3pTcJwMcd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzx5pBbD6AqIXSC_NF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwmUu0611qPiG_CPg54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyuAgab2CWGhi9AkZV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7zchXWFTJIGkirXN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzBdv_eHpZDy3tMkox4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxxM6Wav-_i7irpT8d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"}]
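The lookup-by-ID view above can be reproduced directly from a raw response like this one. A minimal sketch, assuming the model output is a well-formed JSON array whose objects carry the field names shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper name `index_codes` is hypothetical, and the two rows below are taken verbatim from the response above:

```python
import json

# Two rows copied from the raw LLM response shown above (full array omitted).
RAW_RESPONSE = """[
  {"id": "ytc_Ugypz4OjZ0jW5IfO8aV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxxM6Wav-_i7irpT8d4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]"""

def index_codes(raw: str) -> dict:
    """Parse a raw model response and index the coded rows by comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codes = index_codes(RAW_RESPONSE)
row = codes["ytc_UgxxM6Wav-_i7irpT8d4AaABAg"]
print(row["emotion"])  # -> outrage
```

In practice the parse step is where malformed model output surfaces, so wrapping `json.loads` in a try/except and logging the offending response is a sensible extension.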