Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
AI for reference
You picked the worst examples. There is also AI-Art where you …
ytc_Ugy4ohxVv…
i mean, fully automated vehicle exists, it is just a question of are they good e…
ytr_Ugy5KlxOE…
Whether you hate him or not, Elon Musk warned Obama and other leaders about AI y…
ytc_UgzkEwdSi…
Thank you for advocating for human-created art. AI art has a certain "smell", an…
ytc_UgxCRAy0k…
Wth is AI and tech people doing with this world.
These freaking people give me…
ytc_UgzuORkfE…
Tbh ai artists are just finding excuses to justify their laziness... I'm anti AI…
ytc_Ugwuoa9Aw…
I’m not actually. USA is the leader in AI. China is next. Russia is all talk and…
ytr_UgyonIYda…
Just remember that Sam Altman's little performance about ending poverty and ushe…
ytc_Ugw2HmD6G…
Comment
15:40 GPTs are terrible at reevaluations like these. In my experience their output deteriorates quickly: repeating themselves, growing increasingly incoherent, hallucinating more, misinterpreting one's objections, and tending to confirm the user's opinions. GPTs are good for a first glance and overview, but iterating with them for learning is not a good experience, at least not for me. My ideas always seem to be better than the LLM's output, but a lot of the information it spits out hints at things I haven't seen yet and should look into. However, it is not categorized, chunked, or evaluated in anything close to an optimal way, and iterating with the GPT doesn't help with that. It often makes it worse.
youtube
2025-11-15T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgwZDQ-2nENRwHKmXXx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzJ6a6yb0LJOCfyos14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxa4L3R6ZuTL-tvJsx4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxFhR6bZ0Yfx-zFXe54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx4elqxpntbVge0SQV4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzC8_QBcK3MWSup4C14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwunplicShq99W3XLR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxCCo3vZowmEFypBXB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgylyNga8rSxUNvwX414AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgymEcUSVg_K1tHDFUx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}]
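For downstream analysis, a raw response like the one above can be loaded into per-comment records. Below is a minimal sketch, assuming the model output is valid JSON and that a missing dimension should fall back to "unclear" (as in the Coding Result table); `parse_coding_response` and the example ID `ytc_example` are hypothetical names, not part of the tool:

```python
import json

# The four coded dimensions, as seen in the raw response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response (a JSON array of records)
    into a mapping of comment ID -> coded dimensions.
    Any dimension a record omits is filled with "unclear"."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        coded[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return coded

# Usage with a single illustrative record:
raw = '[{"id":"ytc_example","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"}]'
result = parse_coding_response(raw)
```

Note that `json.loads` raises `json.JSONDecodeError` on malformed output, so in practice a try/except around the parse is advisable when the model's response cannot be trusted to be well-formed.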