Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Now 🙄I know that I know. Y'all are NOT! surprised right? I'm about to look up Am…" (ytc_Ugw9SBegF…)
- "Do you believe you can recognize when you are being controlled by God (if you be…" (ytc_UgymqP0Lm…)
- "I'm surprised we're still using the term "artist" to people generating images wi…" (ytc_UgwKEgt7g…)
- "I am beyond words!! 🤦♀️🤦♀️🤦♀️🤷♀️🤷♀️ Wtf came up with such a completely MORO…" (ytc_UgzIDyjW2…)
- "Her examples seems narratively nit picked because I've seen examples where AI ke…" (ytc_UgxH2pLh_…)
- "fr tho if it's Jesus were talking about I'm pretty sure we killed him because t…" (ytr_UgxFQODZL…)
- "@fishpreferred3322 define learning better, ai does the same, no matter how you w…" (ytr_UgyFteqBP…)
- "You have to treat chatgpt like a genius 3 year old. If you're really specific yo…" (ytc_UgyzZRFXy…)
Comment
Mimicking a style is not art theft. As a designer myself (even though I'm worried AI will take my job) I totally disagree with the sentiment of it "stealing" art, when it literally does the same thing as humans do. Rephrasing it as "uses data to copy the style" might sound like art theft, but we literally do the same. We study the art and try to copy what they do to accomplish the style of the art. But when AI does it it's somehow an art theft.
P.S. even if I'm worried that it might take my job, I still think it should be developed more. It has an enormous potential and just because some people might use it with bad intentions doesn't make the AI bad. It's the people with malicious intentions. If you think that's enough of a reason, then ban Photoshop, Internet, Cars and 80% or more of the tech we use nowadays, because guess what, all of that can easily be used for malicious purposes.
youtube
AI Responsibility
2023-06-11T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyXmxki0YTdxO4-WNh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugz_mvYxjR8lK1yk6Ex4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy1fMWQsk422ZmupMt4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwue7zH7K6rlObRDH54AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyDj5jIYqrAa1Wk8Wd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxjB9G-6BuVGu0eynx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxBWP4qfSIr5RGMH8h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz7kTOZMtia5vN-3m94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwO8aPfNWM6ZJs6OpJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzqcLxHzXPew2e5RyJ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
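The raw response above is a JSON array of per-comment records keyed by comment ID, each carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of the lookup step, assuming this schema — the `lookup` helper and the inline two-record sample are illustrative, not the tool's actual code:

```python
import json

# Illustrative raw model output: a JSON array in the schema shown above.
raw_response = """
[
  {"id": "ytc_Ugz7kTOZMtia5vN-3m94AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzqcLxHzXPew2e5RyJ4AaABAg",
   "responsibility": "user", "reasoning": "unclear",
   "policy": "unclear", "emotion": "mixed"}
]
"""

# The four coding dimensions plus the comment ID.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def lookup(raw: str, comment_id: str):
    """Parse a raw LLM response and return the codes for one comment, or None."""
    records = json.loads(raw)
    for rec in records:
        if rec.get("id") == comment_id:
            # Guard against schema drift in the model output.
            missing = EXPECTED_KEYS - rec.keys()
            if missing:
                raise ValueError(f"record {comment_id} missing keys: {missing}")
            return rec
    return None

codes = lookup(raw_response, "ytc_Ugz7kTOZMtia5vN-3m94AaABAg")
print(codes["emotion"])  # approval
```

Because the model emits free-form text, a real pipeline would also want to catch `json.JSONDecodeError` and treat an unparseable response as a coding failure rather than crash.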