Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgzXe3xt6…: "Thats pretty much how the IDF predicts who its going to murder by getting their …"
- ytc_UgzX4P-T6…: "AI will be the end of the human race. Has anybody heard about the robots that we…"
- ytc_UgwFUd2Xl…: "Even the technology experts selling these AI systems are clear about eventual lo…"
- ytc_UgzWMlQ7X…: "The photo of her is honestly inappropriate.... ofcourse your child will have pro…"
- ytc_UgwdZx9uT…: "The problem is, most people using generative AI are not using it as a "tool" lik…"
- ytc_UgzACn8RX…: "You touched on the pitfall in this video. An LLM is trained to pick the best-fit…"
- ytc_Ugw_CFZRH…: "People keep saying "oh the driver should have known" or a "better driver would h…"
- ytc_UgwRpX-BN…: "Once they have AI and automation, they will no longer need slaves. What do you t…"
Comment
I’ve tried making some basic react apps with the earlier versions of ChatGPT, Claude, and Gemini and my experience was getting into a lot of circular bugs/errors that the LLM couldn’t solve. Fast forward to today and tools like Cursor and Claude 3.5 sonnet have definitely improved. But to your point we are quite a way out. I basically have to babysit the LLM and give it very specific prompts to see any usable results. Definitely interesting to see the progress but don’t think we’re replacing us devs especially in more complex/larger projects anytime soon
Source: youtube · AI Jobs · 2025-03-08T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy5QtoZPL3YtqJyEkN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyeigm13ArDgwC8yiN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTyKaGL3DYymPOFv94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwp_END8zTZ2T972rV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzmRhrMBz8c4MheJpV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwr5dmQ8mHLlg8ZOgR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyF6UwnoOEtdQD4aTR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgydFDH7_yatCYaVrel4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgynXOBY0tFyIhWDNK14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyiY9AIIxrDmKQUYV54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
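Since the model returns one JSON array covering a whole batch, looking up a single comment's coding means parsing the raw response and indexing it by ID. A minimal sketch of that step, assuming only the field names visible in the response above (the `RAW_RESPONSE` sample and the `build_lookup` helper are illustrative, not part of the tool):

```python
import json

# A two-row excerpt of a raw coding response, in the same shape as the
# full array shown above (id plus four coded dimensions per comment).
RAW_RESPONSE = """
[
 {"id": "ytc_Ugy5QtoZPL3YtqJyEkN4AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
 {"id": "ytc_UgydFDH7_yatCYaVrel4AaABAg", "responsibility": "user",
  "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]
"""

def build_lookup(raw: str) -> dict:
    """Parse the model's JSON array and index the rows by comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codings = build_lookup(RAW_RESPONSE)
print(codings["ytc_UgydFDH7_yatCYaVrel4AaABAg"]["emotion"])  # fear
```

Indexing once and reusing the dict keeps per-comment lookups O(1), which matters when the same batch response is inspected repeatedly from the dashboard.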