Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
It strikes me as funny the asmongold is so for AI artwork and convinced that it … (ytc_Ugwd-wi59…)
The real safety issue isn’t just compute or policy — it’s training. We trained A… (ytc_UgzdklKFD…)
Hey. I. Think it should be reasonable to include a link to all transcripts of c… (ytc_UgwmrGjyP…)
I love when it when they wanna turn it into D&D outta nowhere and the ai just ro… (ytc_Ugwbvu2Ih…)
is there a way to encourage AI to use a certain art piece that's purposefully sc… (ytc_UgzDU3SZH…)
-i INVENTED AI - NO IDEA IT WOULD BE USED TO MAKE MONEY AND CONTROL PEOPLE AND C… (ytc_Ugx3L1gaw…)
You've done it again! Very thought provoking. I somehow think it it would be dif… (ytc_Ugwfdj0f1…)
What’s your source for how much energy an ai video takes??. Common sense tells … (ytc_UgynxEZcC…)
Comment
We solved this issue decades ago with document copying technology.
Photo copiers, fax machines, printers, and so forth embed a code identifying the time the document was created and the ID # of the specific machine doing so. These are generally too small for humans to see, but if you know where to look you can check them to see if a document is a copy and get some basic information about it. Metadata on photographs and computer files is the same idea.
We should already HAVE legislation requiring all AI generated content to do this as well. Embed something in the image and/or sound file that identifies what AI created it, when, and the IP address of the user. It could be built in such a way that nobody would ever notice unless they knew how to extract the data, but also so that manually removing it would leave clear indications that the image / sound had been altered. Then just make it a crime to create AI images / sound files without these identifiers and voila... problem solved. No more AI fakes in court. Online platforms could automatically detect and remove or reclassify them. Yet all legitimate uses would still be possible.
It makes NO sense to me that Congress, and the various states, have failed to take this simple and obvious step to curb AI abuse.
youtube
2026-01-23T19:1…
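The mechanism this comment describes (an identifier hidden in media bytes, invisible in normal viewing but recoverable by tooling) can be illustrated with a toy least-significant-bit sketch. Everything here is an illustrative assumption: the function names and tag format are made up, and real provenance schemes (e.g. C2PA, printer tracking dots) are far more robust than low-bit embedding.

```python
# Toy sketch of the comment's idea: hide a length-prefixed identifier
# in the least-significant bit of each raw pixel byte. Illustrative
# only; not a real watermarking or provenance standard.

def embed_id(pixels: bytearray, tag: bytes) -> bytearray:
    """Hide `tag` (2-byte length prefix + payload) in the low bits."""
    payload = len(tag).to_bytes(2, "big") + tag
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the low bit
    return out

def extract_id(pixels: bytes) -> bytes:
    """Recover the hidden tag by reading the low bits back."""
    def read_bytes(start_bit: int, n: int) -> bytes:
        val = 0
        for i in range(start_bit, start_bit + 8 * n):
            val = (val << 1) | (pixels[i] & 1)
        return val.to_bytes(n, "big")
    length = int.from_bytes(read_bytes(0, 2), "big")
    return read_bytes(16, length)

pixels = bytearray(range(256)) * 4  # stand-in for raw image bytes
tagged = embed_id(pixels, b"gen:modelX|2026-01-23")
assert extract_id(tagged) == b"gen:modelX|2026-01-23"
```

As the comment notes, the draw of such schemes is that casual edits disturb the hidden bits, so tampering is detectable even when the identifier itself is destroyed.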
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgwjkJN0mitSfC4i50t4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxNlUGc38VjJTsWpFh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzVfhW-hOvPuoD3-3x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwYhgodBRXhNLFbOJt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyHiqqYt2oF9sHsI8V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyqKX1rr2j3KDaoJKF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyWHs-cs_JiDe7v1f94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw2LKtB5PPzrqC8jjd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyM57BGcCJCn_aHlBV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz51Qw3bJsH2Z20QNF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}]