Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgxlQWaDT…: "We cannot make AI, and then decide to control it. Either we treat it equally, or…"
- ytc_Ugxo_xfir…: "sounds about right. i cant wait for A.I just to go full troll. idiot: how do u…"
- ytc_Ugwe4Shc3…: "Fortunately for Drivers in Northern states and most Northern driving jobs where …"
- ytc_UgwEi5sP0…: "I hold a lot of vitriolic hate towards AI-generated anything, so you can guess w…"
- ytc_UgyeRCJJ8…: "We need Ai to save us from rogue Ai only moral programmed Ai can police Ai…"
- ytc_UgwwGi6wY…: "It's always hilarious when the AI mouth breathers use the photography and photos…"
- ytc_UgydLZnK5…: "i used ChatGPT to code 2 programs to help me out.. I dont know how to code..…"
- ytc_Ugza2MLoa…: "So.... you didn't hear? Both AI and robots have officially come out saying that …"
Comment
This is so stupid. This is a typical effort by a conspiracy theorist, not to get the answers, but rather confirmation. He's actively steering AI to say exactly what he wants to hear, because guys like this don't want to hear the objective truth; if you deny their radical claims, you're a "lizard" as well. Those who use AI know that when you write a set of rules as a first prompt, AI will most likely forget some of it, and when you remind it later on, it will concentrate on that particular one, forgetting others. Whenever the answer did not confirm his conspiratorial theories, he reminded it of rule 4, and AI just switched the word "no" with the word "apple" entirely. It's soooo ridiculous, and it doesn't prove anything at all, other than this guy is a psychotic manipulator and that he needs some medical help asap. If you're an average AI user, this could look like some smartass move, but if you know how AI really works, you'd know this is a typical AI babbling; very clever auto-complete. Why would you expect AI to know these secrets for sure? It was trained on internet content, which is, as we all know, very unreliable. It most certainly wasn't fed any sensitive secrets, because it's meant for public use. AI is just sharing other people's opinions with you, the way you want to hear them. It's set to be so polite that when you correct it, it'll give you the answer that complies with you're attitude, rather than being truthful. Do you think that any AI company would let their latest, "all-knowing" LLM go rogue and share most sensitive information? Think again. It's so entertaining to me to watch some smartass think he found a loophole, but he's just been made a fool by a robot.
Source: youtube | AI Moral Status | 2025-08-31T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
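Each coded comment is scored on the four dimensions in the table above. A minimal sketch of how a coded row could be validated before display — note that the allowed value sets below are inferred only from the codes visible on this page, and the full codebook may define more:

```python
# Allowed values per coding dimension, inferred from the rows shown on this
# page (the real codebook may be larger; treat these sets as an assumption).
SCHEMA = {
    "responsibility": {"user", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed", "unclear"},
}

def validate_row(row: dict) -> list:
    """Return (dimension, bad_value) pairs for any out-of-schema codes."""
    return [(dim, row.get(dim)) for dim in SCHEMA
            if row.get(dim) not in SCHEMA[dim]]

# The coding result shown in the table above passes this check:
row = {"responsibility": "user", "reasoning": "consequentialist",
       "policy": "liability", "emotion": "outrage"}
print(validate_row(row))  # an empty list means every code is in-schema
```

A check like this is useful because the codes come from a raw LLM response, which can drift outside the codebook's vocabulary.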
Raw LLM Response
```json
[
  {"id": "ytc_Ugw0q6N4T3TEgkGjiiZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwCg1pwE9SfPbd-Mkp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugx5Fjgmbt7AViM2GKh4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugztpj_VZljME8xDmAl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxvVAU1lSk_vfHHcjV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxEFKKrhXxVYAxyeIt4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx_rSewZydx3WtvDJt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw_SGu4BvOG__DVFwx4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxI_66nGw8N3UVThtB4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugwfza5ACJ18GB4jg4J4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
```
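The "look up by comment ID" flow above amounts to parsing one of these raw batch responses and selecting the row whose `id` matches. A minimal sketch, assuming the raw response is stored as the JSON string shown (the helper name `lookup_codes` is hypothetical, not part of the tool):

```python
import json
from typing import Optional

# A truncated copy of the raw batch response shown above: a JSON array of
# per-comment codes, one object per comment ID.
raw_response = '''[
  {"id": "ytc_Ugw0q6N4T3TEgkGjiiZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwCg1pwE9SfPbd-Mkp4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]'''

def lookup_codes(raw: str, comment_id: str) -> Optional[dict]:
    """Parse a raw batch response and return the codes for one comment ID."""
    try:
        batch = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model emitted malformed JSON; flag the batch for re-coding
    return next((row for row in batch if row.get("id") == comment_id), None)

codes = lookup_codes(raw_response, "ytc_UgwCg1pwE9SfPbd-Mkp4AaABAg")
print(codes["emotion"])  # approval
```

Guarding the `json.loads` call matters here: because the response is verbatim model output, a single malformed batch should degrade to "not found" rather than crash the inspector.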