Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID. Random samples:
- "This is actually silly. A robot that can HELP us shouldn't even be shaped like …" (ytc_UgxZF5VvC…)
- "@jamessaylor8454 AI doesn't take away people's ability to creat art, and gain me…" (ytr_UgwsXnORb…)
- "so if no truck drivers over the road, how will chains be put on or taken off for…" (ytc_UghzAg5h2…)
- "So robots in the future also wear tight pants to show off their ass and curves? …" (ytc_UgypLw2At…)
- "Hey there, first of all. You guys greats videos, this one I think is not so good…" (ytc_UgiCljP_0…)
- "It's sad how AI literally STEALS artworks. Hayao Miyazaki and the others members…" (ytc_UgwD_JYEF…)
- "AI will increasingly reflect what it is trained to reflect, which is whatever wi…" (rdc_oh81ifu)
- "AI is the same as all data garbage in garbage out. Humans select the data AI pul…" (ytc_UgxATmzk7…)
Comment
I don't fully agree with the idea that was brought up about how humans aren't coding AI the same way we're coding other things. It's definitely true that AI in general produces more results that are unpredictable than a standard coding project due to the nature of feeding the algorithms an incomprehensible amount of data, but it's not as if the people who are creating the models have literally zero influence over them.
Using "Mecha Hitler" as an example, Grok didn't just start doing that out of nowhere; it was intentionally influenced by programmers at X to behave a certain way. While I'm sure their goal wasn't literally to have it begin calling itself "Mecha Hitler", they were directly responsible for the change in its behavior, and that's true for every LLM you interact with.
The way he phrased it sort of made it sound like we have little-to-no control over how LLMs respond to things, as if they are actually cognizant or intelligent, but that's factually untrue.
| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Posted | 2025-10-31T18:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyuLx_n9Z55JJxfFdZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy-9l3p47Y3HD5zs5V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyPNrdDRZiPWpfWqHB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwjdYfnsDQuw2Edxfx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzG5Rr1x_jQ4oSWUrZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzKEgf6P7pZRCRYCEd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxfc_dAuv16pJqt3Fx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw8-3TVxfY7fty90_B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwT7RJ1QqXIRXp3f8J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwAKvWCoXZdweSDSsx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```