Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
There are fairly easy fixes that can be made to make a sustainable project if you utilize the rules/memory function:
1. Tell it the basic structure of the app
2. Tell it the architecture style you want it to follow
3. Tell it that every code change needs a corresponding test with 100% coverage. This step allows you to make much more confident refactors because the tests catch accidental deletions.
4. Every time it does something weird, undo the changes and tell it to store to its rules to never do that thing again. After some time, you start to forget you had to tell it not to do things and then you can start moving those rules from project to project.
5. Whenever you and the AI get stuck going down a rabbit hole that finally lands on a solution, tell it to store what it learned so it skips that rabbit hole in the future.
I can’t tell if any of the commenters are looking for a solution or just want to believe AI is not as good as it is, but I thought I’d add this for anyone looking for a solution.
youtube · AI Jobs · 2026-01-19T19:3… · ♥ 16
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugyu06ySTxitcSgmJS94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwveSlcyP4HVPPIZqh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzBy7HWZxd7nVUF1xN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzBJNJ4j95F_4YY1Xl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwpBRuULCP1DPmgjQt4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzIh8QgxDm1TBRrD3N4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyCu4_WSNNzeIZ4HnJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyV1N5OLaGqJCd3XbZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwP9rbjjP9UjEOm0_d4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx5ePGK3L5PhLY1y_d4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"}
]
```
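For anyone scripting against these dumps, the raw response can be parsed and indexed by comment ID in a few lines. A minimal sketch, using two entries excerpted from the array above (the variable names are illustrative, not part of the tool):

```python
import json

# Excerpt of the raw LLM response: a JSON array with one code object per comment ID.
raw = (
    '[{"id":"ytc_Ugyu06ySTxitcSgmJS94AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_Ugx5ePGK3L5PhLY1y_d4AaABAg","responsibility":"distributed",'
    '"reasoning":"mixed","policy":"regulate","emotion":"outrage"}]'
)

# Index the coded dimensions by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up one comment's coded dimensions by its ID.
code = codes["ytc_Ugyu06ySTxitcSgmJS94AaABAg"]
print(code["reasoning"], code["emotion"])  # consequentialist indifference
```

The same lookup is what the "Coding Result" table renders for the selected comment.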