Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I'm a cybersecurity student; not long ago we were discussing the prospects of using AI for large-scale coding, and trust me, from a security standpoint this is a terrible, TERRIBLE idea.
It's a bit technical to explain, but it basically goes like this: software architecture and cybersecurity architecture are not the same thing. They're related, of course, but they're two different disciplines that work with each other. The first is about building software; the second takes care of the security of the systems. The problem starts when you realize that, up to now, software was something engineers would mint and polish over time, so by the time a version reached the public, users got functional software whose code had been refined for years. It isn't full of random lines or digital garbage, and that makes our job easier: we can treat programs as functionally secure.
Anyway, the problem with AI coding right now is that language models just copy whatever they deem correct. As the video mentions, there are A LOT of lines that seem random, but if you take that section out the code stops working (so, in a weird sense, those lines are needed), and this is already a reality companies are facing. This relates to cybersecurity in many ways. For starters, AI is very susceptible to backdoor attacks, and if the trend continues it could be easy to end up with programs full of random trash code that opens up a lot of vulnerabilities. And that's the good scenario; bad actors can do worse. Can you imagine if, instead of random code, someone actually planted a virus in commercial software? Just slip a small, unrelated piece into the program that passes as random code, program it to delete itself once it's on the machine, and voilà: you may have infected thousands of computers before anyone notices. And that's just one example of what can EASILY go wrong. Vibe coding and AI still should not be used for this, at least not for now.
youtube
AI Jobs
2026-02-04T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugz9mogBvnkVXDD6cKp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwJvjYSdCQ9QFvLcFZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwcA-RTeQnrYbygRHZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgznzxVdQ-Gx8vetATJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx_gLp8KZfksdQuPId4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwhx6NPviV8IZBEGFt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgziwWWs-mJwulyKUEN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyc0RA22KS5yjL8D5F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyYce9xI_7Po_phZ_h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxJInwmRNQ9jjEeY8d4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"}]
```
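Before trusting a coded batch, the raw response can be checked against the codebook. The sketch below validates one raw response; the allowed values are inferred only from the codes visible in this output, so the real codebook may define more (an assumption):

```python
import json

# Allowed values per dimension, inferred from the codes seen in this
# dump -- the actual codebook may include additional labels (assumption).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Return a list of problems found in one raw LLM response string."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"unparseable JSON: {e}"]
    problems = []
    for rec in records:
        cid = rec.get("id", "<missing id>")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"{cid}: bad {dim}={value!r}")
    return problems
```

A response that parses and uses only known codes yields an empty list; anything else (truncated JSON, a hallucinated label, a dropped field) shows up as an explicit problem string, which makes batches easy to triage before they reach the coding table.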