Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Definitely agree. In the past, "AI" was a dangerous term to use when marketing your product, because most people associate it with science-fiction media and become skeptical: it sounds too good to be true, or fails to meet expectations. Now that NLP is convincing enough and people can interact with these systems easily, AI is no longer "too good to be true" or failing to meet expectations (to an extent), but it still evokes sci-fi imagery for most people. Obviously the broad research field of AI is very real, but "AI" as an entity outside of any specific context only exists in people's minds. I would even argue that more specific terms (e.g. LLMs) are starting to refer to this ephemeral concept of AI rather than the method or technology itself.

Other than the issue of using a hammer as a screwdriver, and ASI if you believe that's possible, I think far too much critique of AI (whether for or against) falls into this trap, where an "AI" is the cause for concern and not the system itself, ending up repackaging existing issues. The language or computer-vision models aren't the reason workforce automation or autonomous weapons are controversial. Likewise, some of the scariest things adjacent to AI are invisible until they start visibly impacting people. Mass surveillance, workplace and academic monitoring, personalised pricing, manipulation: these things may become increasingly normalised, but they ought to be discussed (and cautioned against) for what they are, not obscured through a false lens of familiarity because they could be "AI". Maybe it won't matter, but it is strange to me to see this huge focus on AI as if it were anything different from the rest of the system. I don't need to be cautioned against Python or C++ because it has been used for evil, or hyped up because it has been used for good.
youtube AI Moral Status 2025-11-01T07:1… ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgwSAEAsiRw5R_WOT-x4AaABAg.AOx7cAwUPknAP8a0Z56xsV","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxvA05Dofqg6BN6yrx4AaABAg.AOx72P8RWfQAOzw_J-R3Gg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_Ugz63iM0nEp0kNBGq394AaABAg.AOx3lWrfMggAOxAr1ICdgS","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_Ugz63iM0nEp0kNBGq394AaABAg.AOx3lWrfMggAOxFoyRVEmE","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugw3cpgd_znRR5DCQ5h4AaABAg.AOx2z26DBfiAOxLcBRRF3x","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgwhN7AlDS6bIJ4PAGh4AaABAg.AOx2mxjkatoAOx2wJuTJ38","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgyrrY5B1-Vdt8F-sfV4AaABAg.AOx-NFMBIx9AOxJoKVWU60","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgyrrY5B1-Vdt8F-sfV4AaABAg.AOx-NFMBIx9AOxZjZi73J0","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgxtjMfkynSd-a6jhNR4AaABAg.AOwzJgwRvf2AOye4qCao6i","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxtjMfkynSd-a6jhNR4AaABAg.AOwzJgwRvf2AOyqDROFyLu","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
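The raw response above is a JSON array of per-comment records with id, responsibility, reasoning, policy, and emotion fields. A minimal sketch of how such output could be parsed and checked before use follows; the allowed label sets are inferred from the values observed in this dump, not taken from the tool's actual codebook, and the inline sample is truncated to two records for brevity.

```python
import json
from collections import Counter

# Two records copied from the raw LLM response above (the full array has ten).
raw = '''[
 {"id":"ytr_UgwSAEAsiRw5R_WOT-x4AaABAg.AOx7cAwUPknAP8a0Z56xsV",
  "responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytr_UgxvA05Dofqg6BN6yrx4AaABAg.AOx72P8RWfQAOzw_J-R3Gg",
  "responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

# Label sets inferred from the values seen in this dump -- an assumption,
# not the authoritative codebook for the coding tool.
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself", "distributed"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"unclear", "none", "ban", "liability", "regulate"},
    "emotion": {"indifference", "outrage", "mixed", "resignation", "fear", "approval"},
}

def validate(records):
    """Count labels per dimension; raise if the LLM emitted an unknown label."""
    counts = {dim: Counter() for dim in ALLOWED}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec[dim]
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
            counts[dim][value] += 1
    return counts

records = json.loads(raw)
counts = validate(records)
print(counts["responsibility"])  # Counter({'none': 1, 'company': 1})
```

Validating labels at parse time catches the common failure mode where the model invents a value outside the coding scheme, which would otherwise silently pollute the aggregated table.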