Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Comment
We can also get some positive security outcomes from AI if we actually bother to try. I asked ChatGPT to generate a simple web server in C for me, and it made one riddled with vulnerabilities. But then I asked it to check for security vulnerabilities, and it found basically everything I had found, as well as a few that I missed. (For one of the bugs it said it "could potentially be an issue in more complex scenarios", while it was already an issue here.)
Then I asked it to translate the code to Rust, which it did successfully. That didn't fix any vulnerabilities (except the buffer overflows it had already fixed after the first query), but it's always nice to have a Rust version.
Anyway, what I'm trying to say is that we should have a security copilot that works as an extra pair of (robot) eyes on the code, rather than (just) the current anti-security copilot that reduces the number of eyes on the code. There are many kinds of vulnerabilities that AI would be bad at spotting, but the vast majority of vulnerabilities are trivial ones that anyone could spot if they bothered to look for them.
youtube
2024-08-04T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzzQkykOPl3MEdZyod4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxeykeHqnCUYgbhnN14AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw2gEiHv-8hDsbikk54AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxUHt6JU95oh-JciEJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwXUy_SKGt8GLb-SBd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxmhG4Cb6SVOMcQvd94AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgysfQOMSiehxEfpCnx4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxx-N8OIfLfLHizn494AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyMfS_Tye09sN3vojp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyXUcxjoFyeqmxACHJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
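A minimal sketch of how a raw response like the one above can be parsed and indexed for comment-ID lookup. This is an illustrative helper, not part of the tool itself; the two entries are copied from the raw response, and the full batch parses the same way as long as the model emits valid JSON (note the response must close with `]`, not `)`):

```python
import json

# Two entries copied from the raw LLM response above.
raw = """[
  {"id": "ytc_UgzzQkykOPl3MEdZyod4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxUHt6JU95oh-JciEJ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

codes = json.loads(raw)              # raises ValueError on malformed model output
by_id = {c["id"]: c for c in codes}  # index supporting look-up by comment ID

print(by_id["ytc_UgxUHt6JU95oh-JciEJ4AaABAg"]["emotion"])  # → outrage
```

Parsing with `json.loads` (rather than, say, regex extraction) fails loudly when the model returns truncated or mis-delimited output, which is exactly the failure mode behind an all-"unclear" coding result.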