Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytr_Ugwjd6_Pf… : "@Ryujins_Bangsso what? If without humans there will be nothing and ai can't cr…"
- ytc_UgxhO5Tse… : "why not just hire an actual artist to paint that AI art and then claim it for yo…"
- ytr_UgzdjFU1J… : "Are you dumb? AI doesn't have goals. We make them. It's like saying the goals of…"
- rdc_ofj4hv3 : "Yeah. I'm currently working in the second company that is going "AI first". The …"
- ytr_UgyBuFcN-… : "That’s what I’m saying. How are they suppose to remember the knowledge that they…"
- rdc_n5ibede : "The best advice I’ve read is “AI won’t take your job but the person who knows ho…"
- ytc_Ugx5_qFHj… : "I feel like this could all be solved with we instead had more ai assistants that…"
- ytc_UgypPze1L… : "One malfunction and he will shoot the man.. this shows how dangerous AI could be…"
Comment
I'm an AI/ML engineer and researcher, however I do not share the "big AI" sort of "corporate mindset" and I reject the entire absurd notion of "replacing humans". Personally, I enjoy working on small and localized AI models and applications for solving more practical problems and making tools for human artists to reduce the friction and tedious/boring labor of working with traditional tools and workflows. I'm also a 3D/technical artist and game/simulation and engine developer, so I like applying small intelligent algorithms to the things that suck about my own job to boost creativity and augment our capabilities whilst removing friction, stress, boredom, burnout, depression, etc from the equation. It's also important to me from a physical standpoint, because of my hand injury and reconstructive surgery years ago that causes me severe pain from prolonged usage of a mouse, keyboard or even holding a phone/tablet. I want to work with my brain/thoughts as much as possible, rather than destroy my hands further ... Anyway, I felt like that bit of background was necessary to clarify that I'm not one of those corporate suck-ups who wants to eliminate humans, and my philosophy and plans concerning AI are 99.5% non-evil, haha. :D
You said "if AI people aren't stealing artwork then they shouldn't be worried about this" ... correct, as long as you're not intentionally and proactively deceiving people to use poisoned data they weren't otherwise going to touch or injecting into "fair use" datasets. Make sure you put a clear copyright notice or some kind of warning along with it, as well, to both legally protect yourself _and_ clearly flag them for the unethical or illegal use. I know you weren't advocating it, but I'll say it for other readers that "poisoning" public fair-use data would be _highly unlikely_ to harm the large corporations but could have detrimental impacts on small/individual developers who are making local AI models and tools for their own job or use-cases. It might even be illegal in certain jurisdictions/countries, depending on laws and the way courts interpret it. So make sure you do this in a fair/safe way and not to cause indiscriminate harm to random people who didn't do anything wrong. It would be devastating, for example, if someone spent a lot of their time, effort and money on a model to help people with physical or mental disabilities, using fairly and legally acquired data, but their entire effort exploded in their faces due to random internet malice. Not all AI and AI developers/users are evil, even if the good get overshadowed by the bad in the media.
Source: youtube · Viral AI Reaction · 2025-01-11T00:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwoWrvqc4bwkUxDC6N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwHv7U2nxfL4rmEwYh4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxIbuOWcJqs7_eMqhR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxXk46ptsJEjM7sOw94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw-X4zoTLqhmrKDDoR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwsnfr6JMfN9ys6c2V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgztaR3xI4FB_CSVnLZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzOv-xUTWh8mtxKArt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyWKlFuMlgGVhhWJsh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwcs_licQ-AYiPYEzF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
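For readers working with these raw responses downstream, here is a minimal sketch of how one might parse and sanity-check a coded batch. The allowed category sets are an assumption inferred only from the values visible in this dump (the full codebook may define more); the two records in `raw` are copied from the response above.

```python
import json

# Category values observed in this dump's coding results.
# ASSUMPTION: the actual codebook may allow values not seen here.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none"},
    "emotion": {"approval", "outrage", "fear", "mixed"},
}

# Two records taken verbatim from the raw LLM response shown above.
raw = """[
  {"id":"ytc_UgwoWrvqc4bwkUxDC6N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwsnfr6JMfN9ys6c2V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"}
]"""

def validate(records):
    """Return (record id, dimension, bad value) triples for any
    dimension whose value is missing or outside the allowed set."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
print(validate(records))  # → [] when every dimension value is recognized
```

A check like this catches the common failure mode where the model emits a value outside the codebook (e.g. `"emotion": "anger"`), which would otherwise silently corrupt downstream tallies.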