Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated previews):
- "I mean I am doing a master's in health informatics w/AI implementation and it is…" (ytc_UgwURGZnt…)
- "@euphoricelixir2WHo cares? that is the process of learning, don't want Ai to see…" (ytr_UgxYPhWpO…)
- "The thing about art we draw for the process and the enjoyment of it is something…" (ytc_UgzvEiQSe…)
- "if I was an artist i'd totally use AI to emulate my own artstyle because i love …" (ytc_UgzP6ilAx…)
- "Yes this was very informitivek i think they should put there minds in a difrent…" (ytc_UgxeNuzP7…)
- "There is a large potential for disruption without a doubt, but this world runs o…" (rdc_jif5wpr)
- "AI is a bunch of garbage ... building seemingly grammatically correct sentences …" (ytc_Ugxbo4JWT…)
- "So Ai can only make the style we are using, it cant create new art style right? …" (ytc_Ugx_-N3sA…)
Comment
An interesting argument I've heard in favor of AI is that it would be impossible to ask for the consent of all the artists making the training data, so it would be impossible to make the technology ethically. But instead of saying "so this technology shouldn't exist if it can't be made ethically", they'll just double down and say "so this is why AI companies shouldn't have to ask artists for their consent". People really think AI companies are entitled to create AI and are entitled to trample over ethics and common decency if it stands in their way.
Source: youtube
Video: Viral AI Reaction
Posted: 2024-09-17T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxeHXaK-OPiBuzKp594AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy4amWFi3jr3F8sJdR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzCncqZNSXyB7uC8L54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwgMY96dlWG8RF6wWJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwG2nQpt_I_irNQtMV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwmkRyEFg4eBm8sB3Z4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzQjlnFm-FULx0HqUl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugyso87MeDp4f2dpwEN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgybS0vdsgsTo1gfFXx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy7PyDIh8Ka-6IwCil4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
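A raw response like the one above has to be parsed and checked before the codes are stored, since the model can emit values outside the codebook. The following is a minimal sketch of that validation step, assuming a Python pipeline; the `SCHEMA` value sets are inferred from the coded rows shown above and are not an authoritative codebook.

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# example output above, not taken from an official codebook.
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "approval",
                "resignation", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each coded comment against SCHEMA.

    Raises ValueError if a row lacks an id or uses an out-of-codebook value.
    """
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"missing comment id in row: {row}")
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: bad {dim!r} value {value!r}")
    return rows

# Example: a single valid row (hypothetical id) passes through unchanged.
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"}]')
print(len(validate_codes(raw)))  # 1
```

Rejecting out-of-codebook values early (rather than coercing them to `unclear`) makes it easy to re-prompt the model for just the failing comment IDs.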