Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytr_UgzbJqs2q… — "I do agree that using AI to solve an entire task (such as "write my essay") is w…"
- rdc_nk6peug — "What is coming behind it is massive success for companies with serious, meaningf…"
- ytc_UgysBgNPB… — "The real title: samdoesarts jealous that ai draws better than him (Edit: i do un…"
- ytc_UgwoIdAlg… — "Flawed logic, but not on Tesla's part. FortNine makes the spurious analogy to tw…"
- ytc_UgydZEIC8… — "I think AI should really be used as a fast and iterable tool to build upon your …"
- ytc_UgzSu1Zp7… — "Choose one of those and let's see which will be truth: Terminator, Walle, EX Mac…"
- ytc_Ugy_1yPQ1… — "Well, in case of udio, you´re not just writing a prompt but you are also curatin…"
- ytc_Ugzj4hC57… — "When you train AI on human data, they will adopt both the good and the bad. When…"
Comment
The statement that programmers using AI are 35% more productive is exaggerated and misleading. Multiple studies have shown that the big productivity boost happens only during the initial phase/boilerplate but tends to 0% the bigger the project becomes, and can even cause productivity issues.
Also, we have studies showing that people using AI constantly face cognitive collapse in the short term. So it will inevitably lead to less productivity because you are becoming more stupid the more you use it.
Source: youtube · Topic: AI Jobs · Posted: 2025-12-16T09:4… · ♥ 271
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugwc5mxAR3U1qu6xp0x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzzCW6HcChCc94fQJl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx5aj_2OVBzxGvbEXp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxCjjHmlfTIXDnKywx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx1y02mA7YFgmj4HWB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxD6FyMngs4Tnvc3CR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgwvcPh30jtiNQJzBhp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzpyXgqLyIAI9Gj5id4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwUyrdzcymAcYcxe0Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz7CmPYGtMSU_2Hj4t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
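The raw response above is a JSON array of coded records, one per comment ID. Before storing such a batch it is worth validating every record against the coding vocabulary, since LLMs occasionally emit off-schema values. A minimal sketch in Python; the allowed values per dimension are inferred only from the samples shown here (the real codebook may permit more), and `validate_coding` is a hypothetical helper, not part of any existing tool:

```python
import json

# Vocabularies inferred from the sample records above; an assumption,
# not the authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "liability", "ban"},
    "emotion": {"approval", "fear", "indifference", "outrage",
                "resignation", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against ALLOWED.

    Raises ValueError on any malformed record so that bad codings
    never reach storage; returns the parsed records otherwise.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={value!r}")
    return records

# A valid single-record batch parses cleanly:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"fear"}]')
records = validate_coding(raw)
```

A record with an out-of-vocabulary value (e.g. `"policy":"subsidize"`) would raise a `ValueError` naming the offending ID and dimension, which makes rejected batches easy to re-prompt.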