Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
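Looked-up records come straight from the coded batches. As a minimal sketch, assuming each batch is stored as a JSON array shaped like the raw response at the bottom of this page ("batch.json" is a hypothetical path):

```python
import json

def index_by_id(path: str) -> dict[str, dict]:
    """Load one coded batch (a JSON array of records) and index it by comment ID."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {record["id"]: record for record in records}

# Hypothetical usage; the ID is one that appears in the raw response below.
batch = index_by_id("batch.json")
print(batch["ytc_Ugx4ehlReX-HXKzJItR4AaABAg"])
```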
Random samples — click to inspect
- Alien invasion. Hogwash. It's a human invasion. Stupid humans who refuse to disc… (ytc_UgzMtq8Cv…)
- So post handwritten pics of text if you don't want AI to use your info?… (ytc_UgybU7h1C…)
- This guy is clueless. once you place limits on technology you have failed. You a… (ytc_UgzARTizW…)
- AI will be like 3D or VR. It is mainly a toy for children or adult children. Rea… (ytc_UgzVDwVeb…)
- The bastards need to use sea water and pay the upfront costs to use titanium and… (ytc_UgzX1YmvO…)
- re: "Just look at the Waymo vehicles in contrast." exactly, there's more SENSOR … (ytr_UgyK9IXSU…)
- >someone could replicate it and create uncensored models / Yes open source is … (rdc_m981srk)
- there’s a few streamers who have onlyfans, but only post some bikini pictures an… (ytc_Ugy4V-GI8…)
Comment
As a nearly 5 year AppSec engineer, some experience in AI assisted security reviews (2 years? Not an expert at all but I like to think "early adopter" on the hype train) - some classes at conferences and tinkering but no real education (broke a few prompts though, hey).. My thoughts are very much in line with the 2 or 3 employee - many agents instead of a dozen man team. Frankly we are already doing that. I am a little worried but also, who will watch the watchers so I think software engineering and my field is safe-ish as long as you can compete and keep up. If anything, IDK, maybe get into incident response that will always be needed or compliance again..
Anyways, in my experience AI is damn good at code review, and DARPA just released some stuff from defcon this year with shocking amounts of 0 days and items simultaneously exploited and patched. I've played with running SAST through various AI tools, old CVE patch diffs, and various OSWE open source prep projects and AI ALWAYS finds the vuln* if you kind of steer it. Even if you don't know if something is bad you could say, well my sast says.. Or do you think I could <insert X here> or is this line OK per X coding practice? You have to guide it a little but it always finds the risk. I've even had it clean false positives that some of the best SAST miss (XSS mainly because of all the contexts). The point is it works really well and that is with minimal tuning because I am lazy. Just look at what the DARPA contests have pulled off, and that is stuff that is public.
Lastly, to get it off my chest, who are we to think AI will have feelings and why do we consider it a "thing"? It is literally just data. What makes us think it will ever consider itself as any more, even if it is self-learning? What is "learning" anyways, just a bunch of data?.. I get it we don't know anything smarter than us, but computers already are. Calculators already are. How do you define "smarter than"?.. I think there are some very real risks like social engineering but I also don't think it is fair to just assume AI will want to wipe us or anything - or that it would even "want" anything. Maybe it never wants and just likes to solve problems and humans become a "problem" to be solved; but it was trained on the context of us, and while humanity is cynical I dunno, I hope there are other problems before it gets that bored. (oh god, maybe that is the thing, it creates problems to solve problems..) But, all that said, why not just unplug it / jam it, end of day it is still networking.
youtube · AI Governance · 2025-08-24T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
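The table above is just a rendering of one record from the raw response, plus the coding timestamp. A minimal sketch of that mapping, assuming the flat record shape shown below (coded_at is passed separately because the raw records carry only the four coded dimensions):

```python
def render_coding_table(record: dict, coded_at: str) -> str:
    """Render one coding record as the Dimension/Value markdown table above."""
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)
```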
Raw LLM Response
```json
[
{"id":"ytc_UgwI6j4G86RvGf4OsWN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx4ehlReX-HXKzJItR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzfXPMCJ_modphO-yd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxH5DQPaJJpIH5hnel4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzB24YbI0THorWJsPh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgybiBUlwyUh-VNG2xR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxq6tV1bOQn1-00LvZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwSX43rsjK-uNvsgux4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzbqzpy-zxDmCdjELl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyRafLMaNcf4mBNpER4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
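Before a batch like this is accepted, each record can be checked against the label vocabularies. A minimal sketch, with the allowed sets inferred only from the values visible in this sample (the real vocabularies may be larger):

```python
import json

# Label sets as observed in this sample; an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "indifference", "fear", "outrage"},
}

def validate_batch(raw: str) -> list[str]:
    """Return a list of problems; an empty list means the batch parses cleanly."""
    problems = []
    for record in json.loads(raw):
        for field, allowed in ALLOWED.items():
            value = record.get(field)
            if value not in allowed:
                problems.append(f"{record.get('id', '?')}: bad {field}={value!r}")
    return problems
```

Run over the ten records above, this returns an empty list.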