Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well... let's go. Allow me to contribute my two cents as a professor and researcher with a PhD in AI and NLP, and as a manager at a big tech company.

1. AI is not showing effective ROI in ventures led only by business executives without proper scientific backing (either their own or from hired professionals). Elsewhere, where the right people are involved, ROI does exist, and I can testify to this.

2. AI replacing human workers, even more so programmers, is a fallacy sold by hype people and vibe sellers trying to surf a wave they don't properly understand. So if a company is laying people off solely on that basis (there are other motivations, some of which I don't agree with), it should regret its executive board before regretting the people it laid off.

3. AI is diverse. LLMs/ChatGPTs of that sort work well for a set of problems related to text mining, and more specifically autocompletion and next-word prediction. LLMs won't give credit ratings, they won't forecast balances, and they won't cook sunny-side-up eggs and a full breakfast. They were trained for a specific area of problems. For other areas there exist different, adequate AIs (for credit, classifiers such as XGBoost; for forecasting balances, regressions such as SVR; etc.). The problem is that, sadly, the people making the most important decisions on AI application are illiterate in that area. Many articles on corporate AI governance, available on Google Scholar, already point this out.

In summary: as long as boards listen only to money and vibe coaches, and not to the scientists advancing the state of the art, this will keep happening, and more catastrophic decisions will be made.
youtube AI Jobs 2026-02-05T19:2…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           industry_self
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyQI4vmQQQ5TXbev9p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz8TLkzvnA6--oDfHR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgxEmjaRZHe-RtXEVzR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz5RTuaKu9qc7xdUg94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz-UStRZiAjpMJ3ig94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyBxOm0b3ogyWU9mVV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyAkkp4V5PTqHfo_f94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgznZSKWT0STVT9OnM94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyuV8WIc-9c2MlxBxd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyf6gu_V896rUGiC9Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"}
]
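A raw response like the one above can be checked and summarized programmatically. Below is a minimal sketch, assuming the model always returns a valid JSON array with the four coding dimensions shown here (responsibility, reasoning, policy, emotion); the `tally` helper and the two-item sample batch are hypothetical, invented for illustration, not part of the actual pipeline.

```python
import json
from collections import Counter

# Hypothetical two-item batch in the same shape as the raw response above.
raw = '''[
  {"id": "ytc_aaa", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_bbb", "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "none", "emotion": "fear"}
]'''

def tally(raw_response: str, dimension: str) -> Counter:
    """Count label frequencies for one coding dimension across a batch.

    Items missing the dimension are counted under "missing" so a
    malformed code does not silently disappear from the totals.
    """
    codes = json.loads(raw_response)
    return Counter(item.get(dimension, "missing") for item in codes)

print(tally(raw, "emotion"))  # Counter({'approval': 1, 'fear': 1})
```

The same helper works on any dimension, so one pass per dimension gives the marginal label distribution for a whole coded batch.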