Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
My work function like this:
My client comes to me and gives me a long text th…
ytc_UgxE7aeI1…
It is like we, as humans, feel obligated in making things to kill ourselves. A.…
ytc_Ugy9yMHum…
Basically we will completely rely on AI / Tech, lose our own intelligence or com…
ytc_UgzKiIOG5…
It's not that AI is killing the value of a college degree, it's that it's replac…
ytc_Ugz8VGQ1w…
This is just the start. AI will help to find better materials and more human lik…
ytc_Ugw6xK1G6…
Fascinating listen. Interesting that frequent Words mentioned during this conver…
ytc_Ugxp_gBDU…
Well, there have been renowned scientists, including Stephen Hawking dedicating …
rdc_nudfh92
Is ai artist a thing now? I wish social medias will label what ai arts and not!…
ytc_Ugzai376k…
Comment
The more we put LLMs in charge of things, the more they are going to shape reality into the stories we tell about the future. Some of the relevant stories here are genuine attempts to rationalise what will happen during wars, what the right strategies and costs are. But they are overwhelmingly outnumbered by disaster fantasy.
If your sole purpose in life is to generate text, everything is just a story.
It's going to be a huge irony if AI destroys us because we put it in charge of dangerous things, and it then did the things in the stories we wrote about harm caused by those dangerous things. We are manifesting the plot of a weird scifi novel where we're creating the technology that allows scifi novels to escape into reality.
Source: reddit
Topic: AI Jobs
Posted: 1772192080 (Unix timestamp)
Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_o7onl0p","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"rdc_o7oo3ld","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_o7orvpr","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_o7psi8q","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"rdc_o7qaasf","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
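The "look up by comment ID" flow above can be sketched in Python. This is a minimal illustration, assuming the raw LLM response is stored as a JSON array of records like the one shown; `lookup_codes` is a hypothetical helper, not part of the tool itself.

```python
import json

# A raw LLM response in the batch format shown above: one record per
# coded comment, with the comment ID plus the four coded dimensions.
raw_response = """[
 {"id":"rdc_o7onl0p","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_o7oo3ld","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""


def lookup_codes(raw: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            # Drop the ID itself; keep only the coded dimensions.
            return {k: v for k, v in record.items() if k != "id"}
    return None


codes = lookup_codes(raw_response, "rdc_o7oo3ld")
print(codes)
# {'responsibility': 'user', 'reasoning': 'deontological', 'policy': 'regulate', 'emotion': 'outrage'}
```

A missing ID simply returns `None`, so the caller can distinguish "not yet coded" from a record whose dimensions are all `"unclear"`.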