Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGPT's response to this video:

🧠 What the video claims
• The title and description suggest it's about AI researchers and a dramatic narrative that there might be some kind of dangerous "monster" inside ChatGPT or large AI systems.
• Based on online chatter and how the title is framed, it appears to be sensationalized content, meaning it's designed to draw attention or clicks by using extreme language about AI risks.

📌 Important context
• Claims like "monster inside ChatGPT" or "AI trying to avoid shutdown" are generally exaggerated interpretations of AI research, not factual descriptions of how AI actually works.
• ChatGPT and similar models are advanced pattern-predicting systems; they don't have intentions, consciousness, self-preservation instincts, or hidden agendas. They generate responses based on the data they were trained on, not inner motivations.
• Some videos push dramatic narratives to spark debate, fear, or views, but this doesn't make them accurate representations of AI technology.

🤖 AI reality vs dramatic titles
Here's how to interpret claims like this sensibly:
✅ AI models are tools: they predict text based on patterns; they don't "think" or "feel."
✅ Safety and limits matter: AI research carefully studies risks, but not because the model is a conscious "monster."
❌ AI doesn't have internal desires or secret modules with their own goals.

🧩 My overall take
The video's title and framing seem more like clickbait or sensational content than a grounded technical or scientific analysis of AI. These kinds of videos often mix real concerns about AI ethics and safety with speculative or exaggerated language to get attention. If you're trying to understand what AI actually is and how it works, it's better to look at material from researchers or explainers that clearly separate hype from technical reality.

If you are still reading this: I don't know who to trust and who not. The world is full of lies.
YouTube · AI Moral Status · 2025-12-20T00:0… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          industry_self
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugzb8fseXBOWvMosRFV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw4XsCBqfkLgb_GOPp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwhHbO3gdwxAgv293p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxV24L1AyQpa8sOMQV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzWjiMrxGSvyKPn1-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx1Gm2S_vb35T5b_f94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwMTDQMgTlAHfYuw954AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgytRl29SONfildhC8F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwfqqJMkK_fjLTO-6Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwRZPU4iDDfAe-yxm14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
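The raw response is a JSON array with one record per coded comment: an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such an export could be parsed and sanity-checked. Note the allowed-value sets below are only those *observed in this export*; the full codebook may define additional values, so treat them as an assumption:

```python
import json

# Values observed in this export only; the real codebook may allow more.
OBSERVED_VALUES = {
    "responsibility": {"none", "ai_itself", "company", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "industry_self"},
    "emotion": {"approval", "outrage", "indifference", "resignation", "mixed"},
}

def parse_codings(raw: str) -> list:
    """Parse a raw LLM coding response and check each record's dimensions."""
    records = json.loads(raw)
    for rec in records:
        # Comment ids in this export all carry the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in OBSERVED_VALUES.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Usage with the first record from the export above:
raw = ('[{"id":"ytc_Ugzb8fseXBOWvMosRFV4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"resignation"}]')
codings = parse_codings(raw)
print(len(codings))  # 1
```

A check like this catches the common failure mode of LLM coders drifting outside the codebook (e.g. inventing a new emotion label), before the records are aggregated into the dimension/value summary shown above.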