Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "AI and robotics could easily do the work of doctors and nurses but they're not s…" (ytc_Ugwzxvypt…)
- "what happens if and when these AI's become self aware? What happens when this te…" (ytc_UgiWIsslZ…)
- "I’m broke and live in a very cramped apartment markers and other coloring tools …" (ytc_UgziyypoA…)
- "@jasetheace37 yes, banning deepfake apps. Like apps that can put someone's else …" (ytr_Ugz4YcyS6…)
- "He is making ai robot cause ones start can't be stop if it gets to a wrong group…" (ytc_UgzYYqaUC…)
- "@KyllyanAteme That's not really why people support artists though. An AI artist …" (ytr_UgxWSNDVh…)
- "He mentioned wanting human nurses, but if the cost at the AI hospital is half th…" (ytc_UgwbiLoaM…)
- "Sounds like an AI narrated this video. I am sorry if its not. Problem now adays …" (ytc_UgzD3XnQO…)
Comment
Watching this a few months after the fact, but the end of this video reminds me very much of how I’m using an LLM in making a game in Godot. I know what I want the game to be, I know how I want it to work, I just need the GDScript syntax (which I’m not even remotely fluent in) to make it happen. I give the LLM a very specific set of parameters for what I want a small segment of code to do, I get that from the LLM, and then I test the everloving bajeezus out of it to make sure it isn’t a buggy pile of vibe code.
Source: youtube · Posted: 2026-04-11T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxGTdRomg7KlD1xthZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxLo87t4FuFQ2iRyJR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-AvpdhkAdVvgdJPF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyiQ8gj2p2g3TC7yft4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzNTvX_YGbTKFd_L6h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx1lbd1knfL54ut5cZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1jmTdiFkJrCgaoSJ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyiJa0GFsdkmn-3EhZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwOxVfyOxgeXfXB7WF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOX2R4_wlq8AiZMKV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
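The raw response above is a plain JSON array in which each row carries a comment `id` plus the four coded dimensions (responsibility, reasoning, policy, emotion), so the "look up by comment ID" feature can reduce to indexing the parsed array into a dict. A minimal sketch, assuming only the JSON shape shown above; the `lookup` helper is illustrative and the sample is truncated to two rows:

```python
import json

# Two rows in the same shape as the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgxGTdRomg7KlD1xthZ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyiQ8gj2p2g3TC7yft4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

# Index the coded rows by comment ID for constant-time lookup.
coded = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for a comment, or raise KeyError."""
    return coded[comment_id]

result = lookup("ytc_UgyiQ8gj2p2g3TC7yft4AaABAg")
print(result["policy"], result["emotion"])  # → ban outrage
```

Rendering the "Coding Result" table for a comment is then just formatting the dict returned by `lookup`.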