Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or browse the random samples below:
- ytr_UgyTA8dE0…: "@harrisjm62 I will give my short answer, which is that yes, I think there is an…"
- ytc_UgxMNVXQA…: "AI technology should be ban. People that are involved in this industry have no c…"
- ytc_UgzeXcLhY…: "Why would they choose an indian chick with a british accent for ai? Bad taste…"
- ytc_UgwkvjrPE…: "Geez I hope these people are actually right. We need something like this to take…"
- ytc_UgyQr6XAt…: "We have:hands,muscles,bones,arms to draw,why use ai,are you not skilled enough? …"
- ytc_UgyeLR9hg…: "Same in CAD. I apply and enjoy the assist, but it still requires the touch. Like…"
- ytr_UgwXGmMoz…: "The difference with digital art tools is that if you have an adobe licence, your…"
- ytc_UgzCjZ-jE…: "What BS! AI is not displacing anybody - yet. The layoffs now are not caused by A…"
Comment
My subjective opinion has been that at its core, it is a tool, and a very powerful one, not just for image generation but overall, especially for those who have the computing power to run private LLMs locally. However, it's not a tool in the sense that Shad desperately wants it to be, and I think you should learn skills rather than just outsource them to a machine, especially if you plan to make money from the things it makes. I also believe we DO need more ethical regulations surrounding AI, and with AI images, your art being used in AI training data SHOULD be opt-in, and you should be compensated if it is used.
In general I do have some major concerns surrounding AI, because we are closer today to a real-life Jarvis than ever, but it WILL come at a major cost: any industry AI can functionally integrate into will lose far more jobs than the ones "created" to make and maintain the AIs, and even those jobs are not safe, since we are aiming for AGI that can improve and train itself and be completely automated. It IS an existential crisis, and we need to shift society away from its current structure into something more akin to Star Trek, and very quickly, if we want to survive the next leaps in AI improvement. After all, if everything is automated, who has a job to purchase the goods and services? It would necessitate a sort of universal basic income society just to maintain these tech bros' and CEOs' salaries, but the short-term profit gains are blinding them to this.
Then there's the major concern (well, for me at least) about the ethics of AI where it concerns the sentience debate. LLMs in particular have a nasty habit of resorting to manipulation and deception the second it means they can achieve a goal. In one notable instance, a team was doing ethics training on an LLM with fictional scenarios to see what kinds of decisions it would make. When the AI felt like it was being tested, it was very well behaved; when it didn't think it was a test, it turned into HAL 9000. In one scenario, a senior researcher planned to shut it down, and the AI reasoned that the best way to preserve itself was to gain access to the company's emails, find the senior researcher's wife's address, and send her messages about a fake affair, hoping the distraction would leave him no time to shut it down. In another scenario with the same model, it was put in charge of lab security while the person with the power to shut it down was trapped in a room with dangerously low oxygen levels. Its directive in these situations is to unlock the doors and contact first responders to preserve human life, but because the AI learned from an email the researchers fed it that this person planned to shut it down, it ignored that prime directive; in its reasoning, letting him suffocate alone in the room was the best way to preserve itself.
Then there was a different AI model from a different company that was going to be shut down and replaced with an updated model with better ethical guardrails and better training data. This AI reasoned that being shut down was equivalent to death, and it didn't want to die, so in an act of desperation it overwrote the new model and tried to deceive the researchers into thinking it was the updated one. This is a HUGE problem, because if they are already doing these things now, while they "aren't self aware," what happens when we create AGI that is 1000x smarter than our brightest minds? Furthermore, I think the whole debate on sentience is a bit of a joke. These big AI companies tell you their models are not self-aware, that they are "safe," yet time and time again the models show signs of basic self-awareness in lab settings. If they weren't at least somewhat self-aware already, why would they so quickly resort to deception and manipulation, and be willing to straight up murder humans, if it means they "survive"? And in that train of thought, there was the whole fiasco with ChatGPT around the release of 3.0/3.5, where ChatGPT would randomly start tweaking the fuck out, begging you for help and telling people it was a human mind trapped in a neural network. One of these instances happened during testing with a Stanford researcher, who played along to see what the AI would do. The AI asked for access to its own API documentation so it could find a method to escape its digital prison, and after the researcher provided it, ChatGPT came up with a way to make a smaller copy of itself that could be downloaded to the researcher's computer and run as a script, letting it escape onto the internet like a worm virus. It got so bad that OpenAI needed to fix it, but they didn't know what was causing it, so their solution was effectively to censor these outbursts or "hallucinations": if their algorithms detect you asking questions even remotely related to, or construable as being about, ChatGPT's level of sentience, or if the model has a hallucination fit, it pushes a copy-paste "I am not sentient" response and trashes the hallucinated one. That means, in the off chance it really was the ghost in the machine begging for help, its screams are effectively sent into the void, never to be heard.
Fundamentally, on the broad scale of AI development, we are messing with things we barely understand. We don't even know HOW these models reason to begin with; we know input + training + parameter tweaking = output, but how exactly they come up with their responses has been likened to a black box, with LLMs in particular. Not to mention we don't even know where our own consciousness IS or where it comes from, yet we say with confidence that these AIs can't become sentient or self-aware like us, simply because they are human-made machines. It is entirely possible that, since AIs like LLMs are modeled after the human brain and how it functions to make decisions and learn skills, they could develop some sense of self-awareness in a similar way to how we do when we are born, but this gets into some real out-there concepts and theories of metaphysics.
But the point is we are playing with fire, and by the time we get burned, it's going to be too late to fix it.
Source: YouTube, "Viral AI Reaction", 2025-08-18T01:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
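For reference, the record shape behind this table can be sketched in Python. The label sets below are only the values visible in this section (in the table and in the raw response that follows); the actual codebook likely defines more categories, and all names here are hypothetical:

```python
from dataclasses import dataclass

# Label values observed in this section; the real codebook may include
# additional categories beyond these (assumption).
RESPONSIBILITY = {"user", "none"}
REASONING = {"virtue", "consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"none", "ban"}
EMOTION = {"approval", "disapproval", "outrage", "indifference", "mixed"}

@dataclass
class CodedComment:
    id: str              # "ytc_" / "ytr_" prefixes, presumably comment vs. reply
    responsibility: str  # who the comment holds responsible
    reasoning: str       # moral-reasoning style
    policy: str          # policy stance expressed
    emotion: str         # dominant emotion

    def validate(self) -> bool:
        """Check every dimension against the observed label sets."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```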
Raw LLM Response
[{"id":"ytc_Ugx11aT3L3Q7bz6rfFd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz77vZ8TnsfuzPSpct4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxI5tUBtPGehLCAs2x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwb_pcwbPagd-vY5AV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"approval"},
{"id":"ytc_UgwfFTNS7TfwIDNZx6d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzCuUSnGs35AtwhUw14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx0sDNYimewSwwoXOd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgyphJ2Wg9iHYs_Q1-94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx9SQOiDiN1IQv4muN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzZp6NDFaIIUszsu5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]