Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment

It is a misunderstanding to suggest that scientists have proven that AI will overtake us in 2027. This idea stems from a thought experiment called the "AI 2027" scenario, which explores potential paths toward artificial superintelligence (ASI). While based on expert forecasting, it is considered a speculative and controversial scenario, not a proven fact. There is no scientific consensus on when or if artificial general intelligence (AGI) or ASI will be developed, let alone a consensus on a specific date like 2027.

Underestimation of AI's limitations. Critics and other experts point out that current AI lacks consciousness, common sense, and general-purpose reasoning. Many significant challenges must be overcome before true superintelligence could emerge.

Fear versus reality. Some argue that the popular fear of an AI takeover is based on science fiction rather than scientific reality. It is an exaggerated portrayal that can generate unnecessary anxiety and misunderstanding.

In short, the concept of an AI takeover in 2027 is a misunderstanding based on a controversial forecasting document, not a proven scientific claim.
YouTube · AI Governance · 2025-10-12T05:3… · ♥ 2
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | none                       |
| Reasoning      | unclear                    |
| Policy         | none                       |
| Emotion        | indifference               |
| Coded at       | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
  {"id": "ytc_UgxzyXDhGlZ2Y6NbksJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzXDqTj-jtxVwdDwk14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz8N0q0zjFmYYHnva94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzPDjlYQk9NDpf3emt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxcfBEESh2wwkOZKx14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgylTqFfD6egXPvYCjZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyTcxC8RcVjpISl3xN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx86yckDoVSifrxqcV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxpjNrI8KlPLQIvnV14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzmrPO7_k8IlEBfrPl4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]
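When auditing raw LLM responses like the batch above, it helps to parse the JSON and check every record against the coding scheme before accepting it. The sketch below, in Python, assumes the value sets inferred from the responses shown here (the actual codebook may allow additional values); the `raw` string holds two abbreviated example records, not the full batch.

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the raw responses above.
# Hypothetical: the project's full codebook may permit more values.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "developer", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"approval", "mixed", "fear", "indifference", "outrage"},
}

# Two sample records copied from the raw response above.
raw = (
    '[{"id":"ytc_UgxzyXDhGlZ2Y6NbksJ4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"approval"},'
    '{"id":"ytc_Ugz8N0q0zjFmYYHnva94AaABAg","responsibility":"distributed",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)

def validate(records):
    """Return (id, dimension, value) triples for any out-of-scheme value."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
print(validate(records))  # empty list means every record is in scheme

# Tally the emotion dimension across the batch.
emotions = Counter(rec["emotion"] for rec in records)
print(emotions)
```

A malformed record (say, `"emotion": "anger"`, which is not in the scheme above) would surface as a triple in the error list rather than silently entering the coded dataset.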