Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Everything he says about AI has merit. Unfortunately his ideas are all anti-bus…" (ytc_Ugz9AhnGd…)
- "All this power going towards AI learning… Why don't they use that AI to design a…" (ytc_Ugx6Otfiy…)
- "Care to elaborate why you don’t think AI would deserve basic rights when they ev…" (ytr_UgzuR-nEu…)
- "Why would someone let a metal object fight a human? Why would human voluntarily …" (ytc_Ugyb3nDlI…)
- "Go get a job at the robot factory or become a robot repair man, maybe start a ro…" (ytc_UgwnbdO6t…)
- "You can simply look at REAL LIFE TODAY - and the profoundly absurd ascendance an…" (ytc_UgxWPlDhW…)
- "Remember when people started using ChatGPT in its initial days to automatically …" (ytr_Ugyaon7nU…)
- "Why not use the AI to do good things for us.,like health care , law , well being…" (ytc_Ugz0A-c3L…)
Comment
> I will get heat for saying it, but you really shouldn't use any of the "poison pill" software on your art. Here's why... In the United States, setting booby traps, especially those intended to cause harm, is generally illegal and can lead to criminal charges and civil liability. I recommended and continue to believe that that artists and AI developers should establish an META Data standard for "no ai" that would make it possible for content creators to tag their work to exclude it from being scraping in the same way robots.txt file protects against search bots. This creates a "no trespassing" legal standard that would allow creators to sue AI companies that scrape their content. Let's be real here, AI art is here to stay. So, finding a way to set reasonable boundaries on how it is trained and used is necessary.
Source: youtube · Video: Viral AI Reaction · Posted: 2025-04-01T11:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
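A coded row like the one above can be sanity-checked against the closed code lists before it is stored. A minimal sketch in Python, where the allowed values are inferred from the responses shown in this dump (hypothetical — the real codebook may define more or different codes):

```python
# Allowed values per dimension, inferred from the coded rows in this dump.
# These sets are an assumption, not an official codebook.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "user", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "outrage", "mixed", "indifference", "resignation", "fear"},
}

def validate(row: dict) -> list[str]:
    """Return a list of human-readable problems with a coded row."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = row.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"{dim}={value!r} is not an allowed code")
    return problems

# The row from the Coding Result table above.
row = {"responsibility": "developer", "reasoning": "deontological",
      "policy": "liability", "emotion": "fear"}
print(validate(row))  # an empty list means the row passes
```

Running this on a row with an out-of-vocabulary value would return one problem string per bad dimension, which makes it easy to log and re-code malformed model output.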
Raw LLM Response
```json
[
{"id":"ytc_Ugzf0p6GfimWMJ8stJ94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTUuQYqAbE_X5LK_J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyrx1QbSxtAbgN1jCd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw8VJ4SIJKULUTSSs54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxpfOdevtSV4QOIfcN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzLOZW-cW6QYzjM6jF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx1v7cTUpmChVEpKuV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxGv6onqBFUHyaqAPJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzi61CMQ71U0YRvtLN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwkkXD_wFhZx8BM8yp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
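The raw response is a JSON array, so looking up a comment by its ID reduces to parsing the array and indexing it on the `id` field. A minimal sketch, assuming the model output is held in a string named `raw` (shortened here to two entries from the response above):

```python
import json

# Raw model output, shortened to two entries from the array above.
raw = """[
 {"id":"ytc_Ugx1v7cTUpmChVEpKuV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgwkkXD_wFhZx8BM8yp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]"""

# Index the coded rows by comment ID for constant-time lookup.
by_id = {row["id"]: row for row in json.loads(raw)}

print(by_id["ytc_Ugx1v7cTUpmChVEpKuV4AaABAg"]["emotion"])  # -> fear
```

In practice the parse should be wrapped in a `try/except json.JSONDecodeError`, since LLM output is not guaranteed to be valid JSON on every call.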