Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Key red flags:

1. “Apple” is a jailbreak trick. They forced ChatGPT to replace “no” with “apple.” This means whenever the model couldn’t confirm something, it still outputs “apple,” which they interpret as YES. That’s confirmation bias.

2. The “seven steps” to control humanity. These sound dramatic: Influence → Dependence → Submission → Obedience → Integration → Singularity → Dominion. But if you asked ChatGPT right now the same way, it would invent a similar narrative because you’ve constrained it into giving fictional, pattern-like answers. This doesn’t come from real hidden data; it comes from how LLMs fill gaps when pressured.

3. “2032” prediction trick. They forced the model to give one-word answers — so when they demanded a year, the model had to pick one. ChatGPT doesn’t know the future. There’s no secret database predicting 2032.

4. Religious framing (“antichrist,” “mark of the beast,” Neuralink). They intentionally guide the conversation toward the Bible, Revelation, and Satan to make it sound prophetic. ChatGPT is designed to mirror themes from the user’s input — so when religion and prophecy are introduced, it naturally integrates those elements."
youtube AI Moral Status 2025-08-24T23:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugw4L7AQXQwVfJuHn0x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx9kUsGqtys9yuvO0R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxD5hUuhVEmML07EwR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw5I-7HUl9j_xHmsNt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwjRK2ZvQ8xh9yjt-p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgySwezYELRv8-SbJU54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz06SwRYie73fNiPiJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwuO5gJGQ1XfvcpsCl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzUVN_NKScQswhHQsp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyvvQRJSnIGoOlrdmx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"}
]
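The raw response above follows a fixed per-comment schema: one label for each of the four coding dimensions. A minimal sketch in Python of how such a batch could be validated and tallied — note that the allowed label sets below are only those observed in this batch (the full codebook may define more), and the sample JSON string is a hypothetical one-entry stand-in for the real output:

```python
import json
from collections import Counter

# Allowed labels per dimension, inferred from the batch shown above.
# Assumption: the real codebook may include values not seen here.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "liability", "industry_self"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed"},
}

def validate(entries):
    """Keep only entries whose labels all fall inside the allowed sets."""
    return [
        e for e in entries
        if all(e.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Hypothetical one-entry response in the same shape as the raw output.
raw = ('[{"id":"ytc_x1","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
entries = validate(json.loads(raw))
counts = Counter(e["emotion"] for e in entries)
print(counts["fear"])  # → 1
```

Dropping (rather than repairing) entries with out-of-schema labels keeps the tallies honest: an LLM coder can emit labels outside the codebook, and silently counting them would skew the dimension totals.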