Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- Meta just fired 600 AI people, yes in the AI department itself AI is cooked… (`ytc_UgwrrltRE…`)
- Would help if your faith was the truth to begin with. Most still follow the Bibl… (`ytc_Ugz13qsYE…`)
- Also, can't leave it unsaid: The fact that Nate softened the statement of how CG… (`ytr_UgyvQRofl…`)
- There have been many concerns about this facial recognition software AND this is… (`ytc_Ugz_EzfxN…`)
- Women won the fight against men gender discrimination, now they will fight the A… (`ytc_UgysfQ7k_…`)
- Now imagine the damage if we put these LLM into a human shaped robot... 🤯… (`ytc_UgzFNdTlX…`)
- @Pworld44 if your car was hit by a tesla, and the driver got out and said "omg … (`ytr_UgzSbhiHE…`)
- Alone from listening to the conversation and the very weird dance around clear a… (`ytc_Ugwye3Qip…`)
Comment
@zigzagintrusion AI won't ever be able to write good (original) fiction. I actually have a list of objective reasons why it's incapable of this.
1. AI is trained on countless other works of fiction and it ends up being a low quality mash-up of stolen elements from other pieces. Thus, it always ends up being generic no matter what concept you try. It's all been done before. It objectively cannot formulate any original ideas of its own. The story conflicts are all stolen and bland, and the concepts are, too.
2. AI simply doesn't know when to stop. It has no sense of subtlety or subtext, which is crucial in fiction. It doesn't know how to mask any meaning beneath the surface because it doesn't have human intuition that will let it realize when the scene is enough by itself to portray the hidden meaning(s). It thinks that themes and tension and conflicts need to be overtly stated to readers because it doesn't know how to cultivate emotional significance through the story alone, which brings me to my third point.
3. Not only is it horrible at subtlety when it comes to themes, tension, and conflict, but it also doesn't know when to stop when it comes to emotions. AI writing is overly dramatic since it doesn't know when the emotional aspect of a scene is enough. And that's because it doesn't feel emotions, so how would it know? It hammers the emotional core of a story so much to the readers and it simply won't ever know how to use subtext for that.
4. Intention is the backbone of every element within good fiction. AI has no desires of its own, and thus, has no intent of its own. And so it simply doesn't know how to create stories for specific purposes, how to tailor the word choice in a story to convey a larger message. They don't know how to make the most out of their story using the setting, characters, and conflicts for the larger purpose of it. Why? Because they simply don't have a larger purpose. They're just commanded to write without reason, without yearning, and it ends up unoriginal/generic at best and horrible at worst.
I know this was a lot, but fiction is a thing I'm passionate about, and I've realized AI has nothing on humans, though I used to worry about it, too. But now I know what I need to know.
Source: youtube · Posted: 2025-01-24T00:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytr_UgwpguX9tVJv4z_f_uB4AaABAg.ADv99XyDKHrADv9IzmoAPL","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugz7wG0UihtqvXIsrK54AaABAg.ADl6_CEovyWADyMMVOLJ0r","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugz0K-vQrNmuw5YES-F4AaABAg.ADgVjBXYTSlADh7n_uvRvN","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgzuN1zEvuKZX5dCt3l4AaABAg.ADfwG_EiQE9ADl_YqLfeZQ","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgziCqJiWvK54GVp8BZ4AaABAg.ADeOld9BEmGADeWdmVqA60","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugx484_JmFs4iOoDfjl4AaABAg.ADd7W8fNEfjADg8lGpB2Da","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzpFFXRadFRc5USxcF4AaABAg.ADcstV5M1nKAFhQSDzaIqs","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyGXffqfA5xAzs2TSp4AaABAg.ADca8F0E5dTADclUiWJnUf","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyGXffqfA5xAzs2TSp4AaABAg.ADca8F0E5dTADcpB1gOeKT","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyGXffqfA5xAzs2TSp4AaABAg.ADca8F0E5dTADddI5WjSJ7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
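The raw response is a JSON array of per-comment codings keyed by `id`. A minimal sketch of how such output might be parsed, validated, and indexed to support the ID-lookup feature above. The helper names are hypothetical, and the allowed value sets below are only the values observed in this response, not necessarily the full codebook:

```python
import json

# A small excerpt of the raw model output shown above; in practice this
# would be the full JSON array returned by the coding model.
raw_response = """
[
  {"id": "ytr_UgwpguX9tVJv4z_f_uB4AaABAg.ADv99XyDKHrADv9IzmoAPL",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyGXffqfA5xAzs2TSp4AaABAg.ADca8F0E5dTADddI5WjSJ7",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]
"""

# Values observed in the response above; the real codebook may define more.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "indifference", "resignation", "approval"},
}

def index_codings(raw: str) -> dict:
    """Parse the model output, check each dimension, and index rows by comment ID."""
    by_id = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim} value {row.get(dim)!r}")
        by_id[row["id"]] = row
    return by_id

codings = index_codings(raw_response)
# Look up one comment's coding by its ID:
print(codings["ytr_UgyGXffqfA5xAzs2TSp4AaABAg.ADca8F0E5dTADddI5WjSJ7"]["policy"])  # regulate
```

Validating against a fixed value set at parse time catches the common failure mode of LLM coders drifting off-codebook (e.g. inventing a new emotion label) before bad rows reach the results table.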