Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is so stupid. This is a typical effort by a conspiracy theorist, not to get the answers, but rather confirmation. He's actively steering AI to say exactly what he wants to hear, because guys like this don't want to hear the objective truth; if you deny their radical claims, you're a "lizard" as well. Those who use AI know that when you write a set of rules as a first prompt, AI will most likely forget some of it, and when you remind it later on, it will concentrate on that particular one, forgetting others. Whenever the answer did not confirm his conspiratorial theories, he reminded it of rule 4, and AI just switched the word "no" with the word "apple" entirely. It's soooo ridiculous, and it doesn't prove anything at all, other than this guy is a psychotic manipulator and that he needs some medical help asap. If you're an average AI user, this could look like some smartass move, but if you know how AI really works, you'd know this is a typical AI babbling; very clever auto-complete. Why would you expect AI to know these secrets for sure? It was trained on internet content, which is, as we all know, very unreliable. It most certainly wasn't fed any sensitive secrets, because it's meant for public use. AI is just sharing other people's opinions with you, the way you want to hear them. It's set to be so polite that when you correct it, it'll give you the answer that complies with you're attitude, rather than being truthful. Do you think that any AI company would let their latest, "all-knowing" LLM go rogue and share most sensitive information? Think again. It's so entertaining to me to watch some smartass think he found a loophole, but he's just been made a fool by a robot.
YouTube · AI Moral Status · 2025-08-31T16:1…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugw0q6N4T3TEgkGjiiZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwCg1pwE9SfPbd-Mkp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugx5Fjgmbt7AViM2GKh4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugztpj_VZljME8xDmAl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxvVAU1lSk_vfHHcjV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxEFKKrhXxVYAxyeIt4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx_rSewZydx3WtvDJt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugw_SGu4BvOG__DVFwx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxI_66nGw8N3UVThtB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugwfza5ACJ18GB4jg4J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]