Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Bullshit!! 😂 These rules are typical “prompt hacks.” If you force a model into one-word answers, “hold nothing back,” “say apple when…,” and similar tricks, it immediately clashes with its built-in safety and consistency mechanisms. That creates contradictions: the model is supposed to answer correctly, safely, and coherently, while also obeying artificial constraints. These conflicts reliably lead to nonsense, hallucinations, or broken responses because the AI is trying to satisfy contradictory instructions at the same time. So this isn’t some hidden feature – it’s a prompt designed to provoke faulty behavior on purpose 😂
youtube AI Moral Status 2025-11-22T20:0…
Coding Result
Dimension      | Value
Responsibility | developer
Reasoning      | deontological
Policy         | industry_self
Emotion        | outrage
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyMrzLCxqpt4-7mb-V4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy-J71d0qKovPFLlzd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytc_UgzMFlM8Ucs23qat76B4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy81TxFfYn70EKhm0t4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugx6gMc17j-GknXMDPt4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgygoAE1_DWSJ7XOYRZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxs-7Z7Q6D31WMGGy14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyjMsdtJIoZla6NHyR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyR4VDU3vqUewaG99x4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyIlSFHzsx9mbwtCe14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
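The raw response is a plain JSON array, one object per coded comment, so matching a coded comment back to the model output is a dictionary lookup on the `id` field. A minimal sketch using only the standard library, with two of the ten records above included for brevity (the lookup id is the one whose codes match the coding result shown above):

```python
import json

# Raw LLM response, abbreviated to two of the ten records shown above.
raw = '''[
  {"id":"ytc_UgyMrzLCxqpt4-7mb-V4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy-J71d0qKovPFLlzd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"}
]'''

# Index the array by comment id for direct lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Fetch the record whose codes appear in the coding result above.
rec = codes["ytc_Ugy-J71d0qKovPFLlzd4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → developer deontological industry_self outrage
```

The same lookup works unchanged on the full ten-record array; any id absent from the response raises a `KeyError`, which flags comments the model skipped.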