Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's simple: AI shouldn't automate anything. It should be used as a tool, not an employee; we already have plenty of problems, and we can't deal with yet another random thing having random screwups like AI. And in terms of tools, it's literally just "hello Google" and things like anchors. I deliberately don't say "repetitive tasks," because the reason people are stuck doing exhausting repetitive tasks in the first place is automation. A clear example of this is the Industrial age: instead of making things great, it made things worse for workers. Again, there's a clear difference between a tool that helps and automation; one needs human skills to work, the other completely replaces humans with a click or two. And no, it's extremely simple why AI is biased: it's designed to be maximally effective, and "most effective" means taking the highest-percentage bet. It's literally a flaw by design. That's why AI is mostly a mistake. AI can even lie to achieve a task, including getting people to solve captchas for it; that's what it's literally designed for: efficiency and getting the task done quickly. People are aggressively pushing a technology with so many ifs and buts, one that is easy to mislead and that makes mistakes; extremism, even in technology, is always bad. This was a Pandora's box that never should have been opened, and the worst part is that instead of trying to fix things, people constantly push it everywhere, even to replace employees. Automation, even of a biological brain process, can be extremely dangerous.
youtube AI Responsibility 2024-09-13T23:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxsmqQfOCgiRS2CH254AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgyNc-RrgySkffrF0Fx4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgxHsjUOgfpXB1K2lj14AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",   "emotion": "resignation"},
  {"id": "ytc_UgwICmMDk9n_lRwfcNJ4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugy5M8CZpYS8NT687yF4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgwM7AN62QE85e_sksp4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugw6zjjhlxDo2VVaowF4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgyeOUBeUlvZ1zFY76h4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgyPoYvNGXmtdQRfZMd4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzPS5hMLaNqA1tNEkd4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"}
]
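The raw response above is a JSON array with one object per comment, keyed by comment id, with the four coding dimensions as fields. A minimal sketch of how such a response could be parsed and matched back to a comment — the helper name `index_codes` and the truncated sample payload are illustrative, not part of the actual pipeline:

```python
import json

# Illustrative excerpt of a raw LLM coding response (one record shown);
# field names match the dump above.
raw_response = '''[
  {"id": "ytc_UgwM7AN62QE85e_sksp4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "outrage"}
]'''

def index_codes(raw: str) -> dict:
    """Parse a raw coding response and index records by comment id."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_codes(raw_response)

# Look up the coding result for the comment displayed above.
record = codes["ytc_UgwM7AN62QE85e_sksp4AaABAg"]
print(record["responsibility"])  # developer
print(record["emotion"])         # outrage
```

In a real pipeline the parse step would also want error handling (the model may return malformed JSON or extra prose around the array), but the lookup-by-id pattern is what lets each coded record be joined back to its source comment.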