Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You asked: While Alberta ****** (widely known as @albertatech) is generally a highly respected figure in the tech community, some criticism and debate surround her "hot takes" on the industry. The most notable "bad" or critical discourse involves:

1. "Vibe Coding" Controversy
Alberta has been a central voice in the discussion of "vibe coding", the practice of using AI to generate code from natural language descriptions without manually reviewing the technical details.
The Criticism: Critics, including some security experts and senior developers, argue that promoting this approach is "scarily dangerous". They contend it leads to unscalable, buggy systems with major security vulnerabilities, because the "vibe coder" often lacks the expertise to spot AI-generated errors.
Her Stance: Alberta herself has clarified that while it is a great prototyping tool, relying on it in high-stakes sectors like banking or healthcare is "unacceptable" because it lacks 100% certainty.

2. AI Skepticism and Industry Takes
AI "Plateau": Some viewers find her takes on AI's current limitations overly pessimistic. She has famously suggested that large language models (LLMs) may have reached a "plateau" and that current fears about AI replacing all programmers miss the point of what software engineering actually entails.
"Dunking" on Beginners: A small subset of online commenters on platforms like Reddit have said that her comedic "dunks" on certain tech trends can occasionally feel discouraging to newcomers or "vibe coders" who are genuinely trying to learn.

3. Content Format and Frequency
Repetitive Themes: Some long-term followers on LinkedIn and TikTok have occasionally commented that her content can feel "locked in" or repetitive, focusing heavily on a few specific tropes like the "absurdity of tech meetings" or "AI hallucinations".
youtube AI Moral Status 2026-02-26T05:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxXNacS3uTZK90x23t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwVAt983N6sqm_wShB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwWForz11C0k6P9mm14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"disapproval"},
  {"id":"ytc_Ugz00HhWsHNVndumV-x4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxnGd1nLfQWVawnsnx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwqC_ruodG9aHQIlwd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz9R2meI3UPTMkqzxR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw96naGAUmZyuwAgo14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxuNnf2BfjErA0fz4d4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyZNdti22P-dhcVY3V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
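The raw response above is a JSON array of per-comment codes, each carrying an "id" plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and tallied per dimension is shown below; the key names come directly from the response, but the `tally_codes` helper is a hypothetical illustration, not part of the actual coding pipeline:

```python
import json
from collections import Counter

# Two entries copied from the raw response above, as a small sample.
# In practice the full array would be passed in.
RAW_RESPONSE = '''[
  {"id":"ytc_UgxXNacS3uTZK90x23t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwVAt983N6sqm_wShB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

# Dimension names as they appear in the JSON objects.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally_codes(raw: str) -> dict:
    """Parse a raw coding response and count the values seen per dimension."""
    codes = json.loads(raw)
    return {dim: Counter(item[dim] for item in codes) for dim in DIMENSIONS}

tallies = tally_codes(RAW_RESPONSE)
print(tallies["responsibility"])  # Counter({'company': 1, 'user': 1})
print(tallies["emotion"])         # Counter({'outrage': 1, 'resignation': 1})
```

A roll-up like this is one way the per-comment codes could feed the Dimension/Value summary above, e.g. by taking the most common value per dimension and falling back to "unclear" on ties.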