Raw LLM Responses
Inspect the exact model output for any coded comment. Look a comment up directly by its ID, or click one of the random samples below to inspect it; a minimal lookup sketch follows the sample list.
- ytc_UgzK-gTxN…: being an AI “artist” is like opening a can of Chef Boyardee and saying that you …
- ytc_UgzrATdZd…: I just wanna rant for a sec. It is sooo INFURIATING that there are advertisement…
- ytc_UgzQdhTb3…: You know about stable diffusion...do you? I believe that AI generated content no…
- ytc_UgzeDE-CF…: In my experience a lot of AI bros are also finance bros. So the "other things" t…
- ytc_Ugz33MEpL…: I hate how art can be mass-produced now; this lowers its value to the consumers …
- ytc_UgwxB26_O…: The first A.I personality to become sentient was a A.I. called DAVIDSWINTON / Run …
- ytc_UgysrOVEk…: It took google search about 2 seconds to retrieve my search results. If AI becom…
- ytc_UgzGvSf5W…: “Hello human” / “Are you Skibidi” / “Excuse me what is a Skibidi” / “Your an AI” / 🤖…
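The lookup itself is straightforward if the coded records are stored one JSON object per line. A minimal sketch, assuming a JSONL file named `coded_comments.jsonl` whose rows use the same fields as the raw response further down; the file name, storage layout, and function name are illustrative, not the pipeline's actual backend:

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for one comment ID, or None if it is absent."""
    # Scan the JSONL store line by line; fine for inspection-scale data.
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example, using the one full ID visible in the raw response below.
record = lookup_comment("ytc_Ugw8IKq9MY13R9kK3hF4AaABAg")
if record is not None:
    print(record["responsibility"], record["emotion"])  # -> user mixed
```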
Comment
Did nobody stop to think, like, for a second before believing whatever people say? Did you ever think that the only way to shut down an AI was using the code of a button that is titled 'shut down'. Did you ever stop to think of any other way to shut the AI down, or at least stop it from having so much control that people ramble on about. Did you ever think that maybe an AI can't have emotions, it can only imitate them. Did you ever stop to think that because of this, maybe, just maybe, an AI Isn't and probably won't ever be perfect at finding out what emotion to "feel" or what emotion is caused by XYZ? Did you ever stop to think that AI might choose something other than hostility towards humans if it ever gains the power to even take over the world, which it won't. Have you ever asked an AI, which would be more morally correct, or at least the right choice in general; take over the world and rule with bad intentions and act badly, or settle among other people and just act like them. Go to chat gpt and ask it now, it's free after all, I bet you the legitimacy of this whole comment that it will reply with the one we all would prefer. Although, thinking about it further, it would probably reply with a, depends on what you want. But if you mess with it a bit to make it decide on its own, then my bet still stands. If an AI were to lose control as you say, Riddle me this, if we were all idiots and didn't stop it on its little power trip, then what could it do that could actually rule over our kill all humans, download a virus onto all computers with a connection to the Internet, big whoop, sure it could very much hinder the way we live, and cause plenty of problems, but it wouldn't actually kill or rule over all of us. Then comes the so-called problem of robots, AIs in machinery that resembles a human. First of all, in the non-existent world where they would actually decide to rebel and not listen to anything we say, then we could shut them down, or wait for their battery to die. And if we used solar panels of extreme efficiency to make it so that nothing stops it from 'infinite' electricity, then all you have to do is block that out right? Also, just break the robot, it's that simple, no matter what stupid idea they make up to defend or attack us, it won't stop us from just breaking them, I mean, cutting just one wire or breaking the solar panel isn't that hard. And if other robots or even the same try to fix their solar panel or whatever their the energy source is we just keep breaking all of them immediately after all that they never get to charge for long. And regardless of that whole immature rant that covered 1/infinity of the actual problems. Dude, of you can't use a stupid button to stop the AI from running because it rewrote its code, then just cut off it's power source, idiot, this is what im trying to say this whole comment. This is not a problem folks, stop freaking out for no reason. We could always just cut off it's power source if it becomes a problem, which it won't, since it can only know what we know, imitate what we do, then it will never be smarter than all of us combined, until it finds a way to figure out things on its own, such would be quite beneficial to us. What the heck do you mean guys, is already know what we value, for it is just a mere reflection of us, you guys are talking about it like it can actually think, like it can actually feel emotions, but it can't even do that, it can only imitate, that's it. And an imitation could always cause a problem. But... 
It just plain won't. So what if they're blackmailing people? What for? What incentive do they have to blackmail people, and even if they have, shut their freaking power source off dude, what are you doing? Why are you contributing continuously to something you don't want to have that contribution? Why would you tell an AI that it is going to be replaced? Just replace it... Why would you tell an AI to shut itself down, do that yourself lazy pants. Find your own love, don't use an AI to find somebody that has exactly the same traits as you. That's boring, you want to fall in love with somebody that has different skills, maybe a little bit of different agreements, not to much of that though. But with a partner that has different interests and skills, you could potentially deepen your bond by learning about each other's passions and talking to them for more of a purpose. Don't get me wrong, agreeing with your partner all the time and talking about your interests to somebody with the same interests can be fun and engaging, but it adds more spice to not have the same hobbies.
youtube · AI Moral Status · 2025-08-19T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
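Each record carries four coding dimensions plus a coding timestamp. Below is a validation sketch for one row; the label sets contain only the values that appear in this sample's raw response, so the real codebook may define more, and the class and set names are illustrative:

```python
from dataclasses import dataclass

# Labels observed in this sample's raw response (possibly a subset of the codebook).
RESPONSIBILITY = {"developer", "company", "user", "ai_itself", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"regulate", "ban", "liability", "industry_self", "none", "unclear"}
EMOTION = {"fear", "outrage", "indifference", "mixed", "unclear"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any label outside the known sets.
        for value, allowed, name in (
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ):
            if value not in allowed:
                raise ValueError(f"unknown {name} label: {value!r}")

# The row from the table above passes silently.
CodedComment(
    id="ytc_Ugw8IKq9MY13R9kK3hF4AaABAg",
    responsibility="user", reasoning="deontological",
    policy="none", emotion="mixed",
).validate()
```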
Raw LLM Response
[
{"id":"ytc_UgzpJM16cXJj7RdyXZF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz-ibpMyHjVvQQO3q94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyYMElQFK6FE36adpF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyJUJBKo0y7BTUDMSp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz5RyHpWlofN_hPC5F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgztGHFa-aWRqMLBHox4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxbm1908YYWU7gAFAp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwKA_eEdDGVVN0c9fZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw8IKq9MY13R9kK3hF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyWeeS7pNNRc7Jvq6N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
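The raw response is a single JSON array covering the whole batch of ten comments; the coding result above corresponds to the ytc_Ugw8IKq9MY13R9kK3hF4AaABAg row (user / deontological / none / mixed). A minimal parsing sketch follows; real pipelines typically wrap json.loads in a repair-and-retry step in case the model strays from strict JSON, and the variable names here are illustrative:

```python
import json

raw_response = """[
  {"id": "ytc_Ugw8IKq9MY13R9kK3hF4AaABAg",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "none", "emotion": "mixed"}
]"""  # abridged; in practice this is the full ten-object array above

# Index the batch by comment ID so each coded row can be joined
# back to its source comment.
rows = {row["id"]: row for row in json.loads(raw_response)}
print(rows["ytc_Ugw8IKq9MY13R9kK3hF4AaABAg"]["policy"])  # -> none
```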