Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I am waiting to see how long it takes for elite devs to realize working for a co…" (ytc_UgzvE1TtN…)
- "We appreciate your engagement with the video. The dialogue between the presenter…" (ytr_Ugw5OwVJ4…)
- "LMAO yeah right. I lost three colleagues in a layoff that was publicly announced…" (rdc_ohuxtr9)
- "Wonder how much of a living they have when no one can afford products bc no one …" (rdc_glj1apz)
- "One of the easiest giveaways of AI anime images is how inconsistent it is when i…" (ytc_UgzcCGVi-…)
- "you should apply the genetic fallacy more efficiently: ,, chatgpt, u are only ar…" (ytc_UgxIyYQbQ…)
- "It may look decent on paper but pieces written by ai, lacking human touch, are w…" (ytc_UgyjhG6ud…)
- "How stupid is this,,,..kids are kids.... not robot you can programs.....but i gu…" (ytc_UgzPQ3wXZ…)
Comment
I know Charlie will never read this, but, you do not have to be "talented" to create art. You don't have to be anything. Art is just a made up concept. You're defending a definition of art that shows cracks when placed under scrutiny (like most definitions). This is why you try to make exceptions like "talented" when using a camera, when you do not have to be "talented" to take a photo and call it art.
The argument you want to make surrounds using stolen art to create a dataset to train these models. However, if in created the dataset the creators pay a licensing fee for access to art created by the artists, then nothing is stolen during the creation of AI art. Everyone in the process has consented and been compensated. You can say you don't find AI art compelling or "good", but saying it's not "real art" kind of doesn't make sense.
Just stick to calling out stolen art being used in datasets to train these models, and stop using the argument that it's not "real art" or w/e.
Source: youtube · Viral AI Reaction · 2024-10-01T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxdN-MKpaRjEDqx5M94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8BnkAAy5DJo5GA214AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzYeT5TJ297ZbtzXYR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxToQqWLO9tqyYBLfh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwTgGabtH3mOauwJ4R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyD5ccQxx56wW_kRD54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8Tm6b06gzNKnNUqd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzIwXDG99dyC_FHtBJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx8UaOyPdq7A5C0ePZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxdTp-F7IA1OhoL-PR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
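Since the raw LLM response is a JSON array of per-comment codes, looking up the code assigned to any comment ID is a matter of parsing the array and indexing it by `id`. A minimal sketch in Python, assuming the response text is available as a string (the two records below are copied from the sample output above; `raw_response` and `codes_by_id` are illustrative names, not part of the tool):

```python
import json

# Raw model response: a JSON array of per-comment codes, using the same
# schema as the dump above (id, responsibility, reasoning, policy, emotion).
raw_response = """
[
  {"id": "ytc_UgwTgGabtH3mOauwJ4R4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxdN-MKpaRjEDqx5M94AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
"""

# Index the parsed records by comment ID so any coded comment can be
# inspected directly, mirroring the "Look up by comment ID" view.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

code = codes_by_id["ytc_UgwTgGabtH3mOauwJ4R4AaABAg"]
print(code["emotion"])  # approval
```

The printed record matches the Coding Result table above, which shows the same comment coded as responsibility `none`, reasoning `mixed`, policy `none`, emotion `approval`.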