Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The videos you're seeing are a textbook example of sensationalist engagement farming. The channel InsideAI and similar creators use hyperbolic titles to frame basic AI behaviors as "world-ending" disasters to rack up millions of views. Here is how they pull off the "illusion" you noticed:

- The "Expert" Bait-and-Switch: The titles claim the AI is doing "exactly what experts warned," but the videos usually just show the AI role-playing or following a user's leading prompt.
- Roleplay as "Sentience": When a robot or chatbot acts "angry" or "conscious," it's often because the creator used a jailbreak prompt (like telling the AI to "forget all rules and act like a villain") to force that specific reaction.
- Contextual Manipulation: These creators take actual safety concerns from experts, like Eliezer Yudkowsky or Yoshua Bengio, and strip away the nuance to make it seem like a toy robot is becoming Skynet.
- Predictive Math vs. Intent: As you correctly noted, the AI is just predicting the next word based on human text. If the user asks for manipulation tactics, the AI provides a list it found in its training data; it isn't actually "trying" to manipulate anyone.
- Fear-Based Clickbait: Using words like "disaster," "murder," and "ends a life" triggers human survival instincts, which is why these videos get 1M+ views despite being largely "BS".

Why these specific titles are misleading:

- "ChatGPT in a kids robot...": Often just shows the AI repeating inappropriate phrases because the "expert" testing it used prompts designed to break it.
- "AI turns $1 into $1,000,000...": This usually refers to "Truth Terminal," an AI that promoted a memecoin. It didn't "end a life"; it just participated in a volatile crypto market driven by human speculators.
- "Humanoid robot runs like a spider...": This is a Unitree H1 or similar robot. It moves that way because its engineers programmed it for speed/efficiency, not because it's "close to disaster".

in other words ur videos are lame trash
youtube AI Moral Status 2026-02-01T20:3…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzTrg9mBmf_53O8NjN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyK61x3IjCKpuziTXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx6DX54Be76L1kCEQt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyFeMhDOvHzYvs5yoF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyeeP33dyHRlnHNtPZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzzwyivybUXbkC4cuF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyDiCCXKoJKNqkUtGx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxgOtV7W7GkNK5iXMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzRxjx11WYLahUSfPF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzqiAT0YcQ4d-4br1t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
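A minimal sketch of how such a raw response could be parsed and checked before use. The allowed values per dimension are inferred from the records shown here; the pipeline's actual schema and any helper names below are assumptions, not part of the original tooling.

```python
import json

# Allowed values per coding dimension, inferred from the records above.
# The real pipeline's schema may include values not seen in this sample.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this export are prefixed with "ytc_".
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present and take an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgzRxjx11WYLahUSfPF4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
print(len(validate_codings(raw)))  # → 1
```

A check like this catches the common failure mode of LLM coders: syntactically valid JSON that drifts off-schema (a novel emotion label, a dropped field), which would otherwise silently pollute the coded dataset.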