Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Haha, the thing about llamas got me. Even if--just for sake of argument--an AI was actually accomplishing something useful, the small handful companies who have concentrated AI technologies into their hands expect to make their extreme investments back several times over, and that means on the backs of their customers. That is, us. These models are contributing to the burning of our planet and sucking down non-renewable, vital resources and the best they can show off is the use of them to solve trivial problems. The use of them (ai tools trained on supercomputers) for any but the most mathematically difficult problems where the models are proven to be useful is deeply unethical. There is *nothing* cheap about the use of AI. Additionally, in what world do you see the tech bros in control of AI giving away their use to disabled people or poor kids from [insert poor place here] for any reason other than a publicity stunt? What world are you living in? Any company selling you on brand new "free" features is just trying to hook you into an ecosystem you will one day be forced to pay an escalating price for. The *survival* of these con artists depends on hooking enough people onto tools they can't bear to live without and jacking up the price so they can finally, one day, turn a profit. AI is just the latest version of cigarettes or opioids wrapped up in a pyramid scheme. Better to never start in the first place.
Source: youtube · Viral AI Reaction · 2025-04-27T05:4…
Coding Result
Responsibility: company
Reasoning: consequentialist
Policy: regulate
Emotion: outrage
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzWqCWqIzRvBW1DdN14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxlZH3-V8Y7eHo19-p4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy4ZDPwftSij9YAgrJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "disapproval"},
  {"id": "ytc_Ugzvh2pQIm3H3ciwnwt4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwiMQ36HDLmkUKjKEl4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzWCp8zWBE-_j_6mlN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgznnIOCLAzLzcFfaSF4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw_TymaVMnVYJHhAjZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzgK6ciMkuC_1s2RUl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxicdwrHLkX3vaI3K94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
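The raw response is a JSON array with one object per comment, keyed by comment id, with the four coding dimensions as fields. A minimal Python sketch of how such a response could be parsed and indexed by id (the function name `parse_codings` is a hypothetical helper, not from the pipeline; the two entries shown are copied from the response above):

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codings,
# shape taken from the output above (two of the ten entries shown).
raw = """
[
  {"id": "ytc_UgzWqCWqIzRvBW1DdN14AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxlZH3-V8Y7eHo19-p4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

# The four coding dimensions plus the comment id, as seen in the response.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse the model output and index codings by comment id,
    dropping any entry that is missing a required field."""
    entries = json.loads(text)
    return {e["id"]: e for e in entries if REQUIRED_KEYS <= e.keys()}

codings = parse_codings(raw)
print(codings["ytc_UgxlZH3-V8Y7eHo19-p4AaABAg"]["emotion"])  # → outrage
```

Looking up `ytc_UgxlZH3-V8Y7eHo19-p4AaABAg` recovers exactly the coding result shown above (responsibility=company, reasoning=consequentialist, policy=regulate, emotion=outrage).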