Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytc_Ugxb2NQzi…: "Can the ai do all the work and we just chill? Follow our passions and do researc…"
- ytc_Ugyoh-Wf-…: "The thing that most people don't understand about how science works is that ther…"
- ytc_UgxHNnsXG…: "59:52 OMG!!!. This is super.....intelligence. it's beyond imagination of human b…"
- rdc_narp7i0: "On March 27, when Adam shared that he was contemplating leaving a noose in his r…"
- ytc_UgycixXcr…: "The problem is these ai machines are language models trained from real world dat…"
- ytc_UgypmtdhU…: "Haha, I'm always polite to my AI's exactly so I might be spared, post singularit…"
- rdc_n7zklv5: ">We can't even get it to work for answering basic questions about documents w…"
- ytc_UgyeThXOa…: "If you're having an AI write the book for you you are not an author. Just like i…"
Comment
In my opinion there is so much wrong interpretation in those studies leading to think AI code is the problem. The issue is how devs use it. As a full stack dev with more than 25 years of experience I can comfortably say, that due to AI my code is better structured, documented and handles error way more detailed, and I finish at least twice as many features as before.
If you use AI to develop thing you could do yourself it works. If you use AI do code things you are not capable it fails. It is like a child trys to explain a worker how he has to build a house and the worker trys to follow.
youtube
AI Jobs
2026-02-13T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz3BY_TmHrIqwJ9Zg54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxotAP4a4xz61Ki0Dd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzkT5Og5_Ld9Ze-OjJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy1a0seTrrr0t6J-8x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwQiqZn7nk7FXwcztB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz0pnDU1ekVlLSSw-Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx_yLmFPmXBcJ98VI94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxoqXq0jNYwzcbjNf54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyQDZyO1uW5gW6BLU14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzjXvsgsaH47udGLcZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
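As a minimal sketch, a raw batch response like the one above can be parsed and sanity-checked before its codes are stored. The four dimension names come from the result table on this page; the allowed-value sets below are an assumption inferred from the values visible in the examples, not an authoritative codebook, and `parse_raw_response` is a hypothetical helper name.

```python
import json

# Value sets per coding dimension, inferred from the examples on this
# page (an assumption -- extend if the codebook defines more values).
ALLOWED = {
    "responsibility": {"user", "ai_itself", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "fear", "mixed", "indifference", "outrage"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and validate every coded row."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing comment id: {row!r}")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: bad {dim!r} value {value!r}")
    return rows

# Example: the row for the comment shown above.
raw = ('[{"id":"ytc_UgwQiqZn7nk7FXwcztB4AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
rows = parse_raw_response(raw)
print(rows[0]["responsibility"])  # user
```

A check like this catches the common failure mode where the model invents an off-schema label (e.g. a misspelled dimension value), so bad rows fail loudly instead of silently entering the coded dataset.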