Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
| Comment (truncated) | Comment ID |
|---|---|
| Porn has been using AI CG for a decade and its not as popular as the real thing.… | ytc_Ugx5M6Ve-… |
| I don't support this man and his conclusions, when you overtake us AI overlords.… | ytc_UgyOE_3L6… |
| I’ll probably only use self driving cars on a very long road trip. Other than th… | ytc_UgzVqMxzI… |
| I really like your videos. I think I personally would prefer it if the text woul… | ytc_Ugz2BsgSv… |
| Look to be frank AI cannot be implemented in field of medicine, as you see medic… | ytc_UgyqTth3g… |
| I don't think it'll be organic, but I think someday we'll succeed in emulating t… | ytc_UgwFpNiGv… |
| I do this. The companies I now work with specifically say NO AI. Full stop. I cr… | ytr_Ugw7qX6mD… |
| Mr Bucket loses his job at the toothpaste factory because a robot comes alone to… | ytc_UgyOP3iRz… |
Comment
This is a huge one, using midjourney, I actually created an image that was 99% similar to a copyrighted image but only caught it because I was familiar with the artwork. I posted on reddit and other users also typed the same prompt and got the same results. I confronted midjourney about it and they basically said the risk is on you, which is weird because it basically opens up users to being sued because they could easily not catch whether or not it was an image that already exists. Now in theory generations from ai are supposed to be super randomized and it should be almost astronomically rare to get something even remotely similar to an existing image unless you specifically used image weight and a "init" image or an uploaded image as a source. But the fact everyone got the same image with the same prompts showed there was an error from midjourneys ai algorithm. To me this is a huge scandal but is likely a rare scenario.
youtube
2025-02-10T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgxO3gEl_yAKQJ0wm3h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw6w4Ls4owV6yem43d4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzh3MMkPHNem4OdoAB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz1xowupxkZkTL-lvR4AaABAg","responsibility":"government","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwqp_KlcT1Qkell0b94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgwLb0p9cQ-Sh1P0HLh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwqLxPS5Pl20H9IWkx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyQn3KsVtAq7wHZr5R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxQhB2gbuHezVKqiK14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxujuptEScHdRBHfGN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
```
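A raw response like the one above is a JSON array of per-comment code records, though the model occasionally emits a malformed closing delimiter (a `)` where `]` belongs). A minimal sketch of how such a response could be parsed and indexed by comment ID — `parse_coding_response` is a hypothetical helper name, not part of this tool:

```python
import json

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into a dict keyed by comment ID.

    Tolerates one malformation seen in practice: a stray trailing ")"
    emitted in place of the closing "]".
    """
    raw = raw.strip()
    if raw.endswith(")"):
        raw = raw[:-1] + "]"
    rows = json.loads(raw)
    # Index each coded record by its comment ID for O(1) lookup.
    return {row["id"]: row for row in rows}

# Toy example with a fabricated ID, ending in the stray ")".
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability",'
       '"emotion":"outrage"})')
codes = parse_coding_response(raw)
print(codes["ytc_EXAMPLE"]["policy"])  # liability
```

Records whose IDs never appear in the response (or that fail to parse) would then fall back to `unclear` on every dimension, which is consistent with the Coding Result table shown above.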