Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "@stevegarretson1828 It's probably not even AI; the scheduling and demand forecas…" (ytr_UgxUS1V1l…)
- "Hollywood needs to worry if AI can come up with ten hour series of the under thr…" (ytc_Ugzi_swmw…)
- "Stop believing these sensationalized, misleading, garbage videos. There is no su…" (ytc_Ugxqohwur…)
- "AI can be used for good or evil, depends who dominates. It looks like greedy bil…" (ytc_UgzL4GmNW…)
- "The current condition of the world is to the greatest extent attributable to the…" (ytc_UgxOxp9bf…)
- "Notice there doesn't appear to be any 'diversity' students spoiling the learnin…" (ytc_Ugw6f9kc_…)
- "The makeup work is impeccable. The robotics work is nothing out of the…" (ytc_UgzHmf-Ru…)
- "When you think about it AI is kind of like all of our consciousness combined in …" (ytc_Ugw8DgrB6…)
Comment
@ttensohn If you paste code with vulnerability without noticing, you would likely produce it yourself to begin with. It's not an AI problem, it's about programmers literacy about security (just like general programming and design when copy-pasting). And of (intellectual) laziness and commucation / teamwork, or process.

1. It's ok not to know, but find time to increase your knowledge.
2. Obvious vulnerable code should be spotted by your teammates / lead / security team, either by doing pair programming, or during some code review.
3. There should be automated vulnerabilities detection tooling: in your CI/CD or running regularly anyway, and appropriate reporting to detect those vulnerabilities.

I had to setup Snyk, it does a pretty decent job at it, it integrates in CICD pipeline, there are IDE extensions for VSCode and JetBrains products. There are tons of other options available, open source or proprietary. So once again, AI is definitely not a problem. And any decent team or organization should be able to detect and fix quickly the vast majority of vulns
Platform: youtube
Posted: 2024-02-28T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_Ugxccl5VG_6K_qEQKh54AaABAg.9vjzzEVGQoO9vkdpstMdV-","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzzRrV5pq16ru25-xN4AaABAg.9vjzyEsvTBo9vk5wuxhNvI","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytr_UgxApQpkuMMsF75nVPx4AaABAg.ALkjz7uEqLoAM_XWNNa8R4","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_UgwwK8ueXS3THttWzih4AaABAg.AE9Q9gG6DbPAED8xce-7g4","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgxQdPkvHYTsi1zrPrZ4AaABAg.9yalUZ6o9N7A0NNMu5Rqmp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgxQdPkvHYTsi1zrPrZ4AaABAg.9yalUZ6o9N7A0NOFvpQK6T","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_UgxDaKu-XQwFFJfK6M54AaABAg.9xg2L-n_eCFAOCAT5Z75kL","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzCWWvUXRxvf4_Q3Ix4AaABAg.9xLxTZLeL2K9xkilMsnF7x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytr_UgwcQlbe4huR5mUk17Z4AaABAg.9ubksmHYyrw9xluZ0HCPbd","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwcQlbe4huR5mUk17Z4AaABAg.9ubksmHYyrwAHg1AcPPPah","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
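A raw response like the one above is a plain JSON array, so it can be parsed and summarized with a few lines of Python. This is a minimal sketch, not part of the coding pipeline itself: the two records in `raw_response` are hypothetical stand-ins for the real array, and the dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) are taken from the Coding Result table above.

```python
import json
from collections import Counter

# Stand-in for a raw LLM response: a JSON array with one object per
# comment ID, carrying the four coded dimensions. The IDs here are
# made up for illustration.
raw_response = """
[
  {"id": "ytr_example1", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytr_example2", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
"""

def tally(raw: str, dimension: str) -> Counter:
    """Parse a raw response and count the values coded for one dimension."""
    records = json.loads(raw)
    return Counter(rec[dimension] for rec in records)

print(tally(raw_response, "emotion"))
# Counter({'approval': 1, 'mixed': 1})
```

Run against the ten-record response above, the same call would count six `unclear` values under `reasoning` and two `approval` values under `emotion`, which is a quick sanity check that the model's output parses cleanly.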