Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Did you know that OpenAI actually loses money on every person that has a premium…" (ytc_UgxXIr1eT…)
- "One thing that all those AI-folks are missing is that if artists stop producing …" (ytc_UgyTqB8U7…)
- "The irony and karma is so poetic: Labor jobs got made obsolete by machine automa…" (ytc_UgwiBwP-N…)
- "fuck yet those who create AI ay it is dangerous to humanity and keep continue in…" (ytc_UgyIZlsfY…)
- "If we really create AI we have to respect the Idea of a adult with a childs mind…" (ytc_UgxmAM5h5…)
- "Blaming AI for this moron only listening to the part of the information he wante…" (ytc_UgyLtX0GL…)
- "The comparison of cars replacing horse-drawn carriages is flawed. Cars are actua…" (ytc_Ugw8g3cn9…)
- "Not Just Bikes has a really excellent, and scary, video about self driving cars.…" (ytc_UgxFvT5ZX…)
Comment
Wrote my college thesis on creative bounds of large language models. Factor out all the ethical/credit/ownership concerns of training data and there are still problems with using LLMs even for brainstorming or idea generation; these models are regressive and while they may be able to recreate relations they gather from training data to be able to come up with great new combinations of things (one branch of human creativity), they will fundamentally not innovate beyond their conceptual space without a human prompter guiding them to do so (unless they start hallucinating, which is just irrelevant to consider as this is not intended behavior). Afraid if LLM brainstorming and drafting becomes commonplace, and people habitually replace certain critical thinking tasks with LLMs, we will see a serious plateau of originality in creative work.
Source: youtube · 2025-06-25T21:2… · ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugyox62oyShRul0AuD94AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzOyDPHlHBUksZuCE54AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgysIHfoXfHHMvZF9n94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwHe-h5JBEdKSRwj2V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwp4IEz-1H9YpJyFVZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzfRYBaKgqMV615XDh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxe_Apbk8-kVSdmSAt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxS933DF-POftT6f-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgywgBmWPzlN8spbEeZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx_JztcQAEqapkq61F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
```
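The "look up by comment ID" step can be sketched in Python, assuming the raw LLM response is a JSON array of records like the one above (field names taken from those records; the snippet is illustrative, not the tool's actual implementation):

```python
import json

# Two records copied from the raw LLM response above (truncated sample).
raw_response = """[
  {"id": "ytc_Ugyox62oyShRul0AuD94AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgysIHfoXfHHMvZF9n94AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]"""

# Parse the array and index the coded dimensions by comment ID.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# Look up the coding result for one comment.
code = by_id["ytc_UgysIHfoXfHHMvZF9n94AaABAg"]
print(code["responsibility"], code["emotion"])  # distributed mixed
```

A dict keyed by `id` makes each lookup O(1), which matters when one batch response codes many comments at once.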