Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- AI makes "terrible new viruses" now? we already had humans doing that and sellin… (ytc_Ugz3ZgFx-…)
- We are in competition with AI, however the real challenge is with our humanity. … (ytc_Ugwh1CJUe…)
- AI does not have any (ANY) reason or need to take over humans. NONE whatsoever.… (ytc_Ugx_rtIfV…)
- Mankind will fall for this AI consciousness nonsense, no doubt. AI will certainl… (ytc_Ugx3Jo465…)
- Window blinds opened by a string have been banned because toddlers very occasion… (ytc_UgzBzjAcE…)
- Ah. Hm. So what I'm hearing is that we need to make up a fake sparkling water … (ytc_Ugz14rTQp…)
- There will not be a perfect system only a better one. Accidents will happen inev… (ytc_Ugh2zj0x1…)
- Elon is the leader in AI tech and “The Boring Company” is boring tunnels all ove… (ytc_UgxZzoWhF…)
Comment
@whitebread3872(THIS ISN'T AIMED AT YOU
It's just another comment that I made to someone who thinks that we should just embrace AI and learn about it as fast as possible
I hope this helps ya... Well, maybe it won't, but... Don't forget what I already told you. ( *>__<*)′ʃ♡ƪ
I'm going to bed, now
P.S.: I'm not usually the kind of person who argues. I really hate it.)
==================================================================================================================
Experts in the legal field need to look into this. We need the voices of professionals and other people like Karla Ortiz, who had their names come up in prompts, to take this to court or to congressional hearings, even if AI companies have all the money they would need to get representatives advocating for them. A law firm could investigate the matters around AI, maybe for publicity. There should be conversations between every individual, like what we are doing right now...
I would say that, first and foremost, there should be legal battles to establish three requirements before our work can be trained on:
- Consent
- Compensation
- Credit
Then we could demand OPT-IN and OPT-OUT features + other ways to authenticate besides traceable metadata that are open and practical + only public domain & creative commons-based datasets. (With non-consented copyrighted material in there, we run the risk of ruining our culture.)
To say that we should let it run its course and try to adapt to it... that's like saying that millions of people should just accept having their rights and identity disrespected and being disenfranchised in all sectors. We live in a *capitalist society,* and so to those who argue that "artists of the present are just in it for money": _no! We do what we actually LOVE!_ This oppressive opposition is demanding that we not have big and deep conversations about AI: "What is sentience?" "What does it mean in a capitalistic environment?" "Who should be the ones to reap benefits from this thing? Corporate entities or only individuals?" "What is even real?" "Are we going to be dictated to by machines?" "How are those things feeding themselves?"
While at this point we may not stop its progress, the ethical aspects AT LEAST must be considered; then maybe, MAYBE, we could rebuild it from scratch. But we have to try.
You're not obligated to subscribe to a technological "inevitability". This is something that happens through human choices: you can choose to sit down and let it happen while being dismissive of the wrong it could do, OR voice your concerns and debate over it. We are a community. We are not powerless against corporations, and I can feel many people are underestimating this fact. The more you choose to do nothing, the easier it will be for someone to exploit you... When we figured out that cloning and asbestos were too dangerous, we abandoned them! If we are to use the technology for good, then our goals have to start with that. Right now, AI is like when you try to build a bomb, but then later decide that it should be used in farming, when at that point what you need is not a bomb, but something else! "It's open sourced": SO WHAT?!
Everyone always has a choice... You need to think of who you are and what you stand for. Our culture is built on struggles that made the laws into what they are today. If this thing threatens us, we will gather and help each other.
We artists have rights to our IPs; they are our VOICES. It's inevitable that movements and unions will arise because of this. Artists who had their works taken without consent will be suing Stability and other corporations.
We're already used to being dehumanized in a consumption-based society, where we keep boiling people down to their works... Whenever we see something online, we very likely fail to recognize that someone put a lot of time and love into making it... and if we continue down that path, we will just keep putting products above people, as both are now disposable whenever their names come up in prompts.
*_We shouldn't be redefining ourselves to build more powerful systems; THE SYSTEMS should serve OUR hopes, desires and values._*
Do you believe that AI is something you will forever be able to use? Even if you think it's really optimistic to believe that an automated bot will create "good" & "meaningful" images, we DON'T KNOW what's going to happen; if you claim that you do, _you are LYING._ You are not an AI developer!
I even foresee that companies will pull the rug from under you and demand something along the lines of a monthly subscription fee plus half the profit you make on your work as further compensation for using their models...
Instead of replacing low-skill jobs, the systems have been threatening to replace creative ones: voice actors, visual artists, musicians, writers... You fail to take into account the zeitgeist that might very well make this happen: once the allure of AI goes away, people might be less interested in art. It could affect the optics around what goes into art, which has already changed the way people talk about it in news articles, where they say "the AIs make art" as opposed to "the AIs make horrible art". And I want to point out... When you say that a company preferring to buy an automated AI (I saw that comment) means the person was just "very replaceable"... just the negativity in your attitude... it speaks volumes about how you perceive other people.
I'm not saying that competition is a bad thing, but come on, man—have some respect!
The *companies* are the moral arbiters of your medium and must be held accountable. It's also not a matter of whether a machine can be taught to be ethical, because it's a MACHINE. They knew what they were doing when they released their products: that their users would be free to use them in unethical ways, as they do, normalizing data exploitation until everyone is so bound to it and misled into seeing it as only this amazing, beautiful, revolutionary thing that they don't want to lose, sometimes even losing their minds when they read an ethics review of the technology. They even try to circumvent taking responsibility by telling their users how to make models of any particular individual, like Kim Jung Gi or Greg Rutkowski. Take Midjourney, for example: their terms have a clause that puts responsibility for copyright infringement on the users.
If we don't hold companies like Stability accountable, things will only get worse until they can profit off everyone's work, including yours. If you're fine with that and just see it as business as usual, that's fine. You do you.
Source: YouTube, "Viral AI Reaction", 2023-08-15T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytr_UgxqqtiQyIKrdmS0ESR4AaABAg.9sWuxZqGhIQ9sv2zRJXgJS","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgzcMtgxn8rVOvqWpxF4AaABAg.9sVOwu1T9x89w1IdB3TPJk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgxMc4VjoPbpJqrUnjd4AaABAg.9sUrN0uidZm9sYxcBFjG8h","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxMc4VjoPbpJqrUnjd4AaABAg.9sUrN0uidZm9sZiVsK6_we","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgweWq0Sq3JGO40YJKh4AaABAg.9sUd5qStwT49tN3Wc0f2Jt","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"approval"},
{"id":"ytr_UgweWq0Sq3JGO40YJKh4AaABAg.9sUd5qStwT49tPgaN116yu","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgweWq0Sq3JGO40YJKh4AaABAg.9sUd5qStwT49tRNQF6HJjx","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgwZ6GzbvxFo69zrKRF4AaABAg.9sTIAvYIUuD9sTYDvYXJpV","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyRpNXSXe5otEAQd6R4AaABAg.9sQNZUMvOUC9sQcra6cnSQ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyTktNtr0N6RmuHLU14AaABAg.9sQEnp6fMwI9sYxxrNhlSz","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
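The raw response above is a JSON array of per-comment codes across the four dimensions shown in the result table. A minimal sketch of how such a batch might be parsed and sanity-checked, assuming the allowed value sets below (inferred only from this one sample, not the tool's authoritative codebook) and a hypothetical `parse_codes` helper:

```python
import json

# Allowed values per coding dimension. ASSUMPTION: these sets are inferred
# from the sample response above, not taken from the tool's real schema.
ALLOWED = {
    "responsibility": {"user", "company", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval", "resignation", "mixed", "unclear"},
}

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM batch response into {comment_id: codes}, rejecting
    any record whose value falls outside the expected set."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-record batch, mirroring the format above.
raw = '[{"id":"ytr_x","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}]'
codes = parse_codes(raw)
print(codes["ytr_x"]["emotion"])  # -> outrage
```

Indexing the result by comment ID is what makes the "Look up by comment ID" view cheap: each displayed record is just `codes[comment_id]`.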