Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am someone who would say that I am pro-AI. I do think you make great points, and I generally don't think we should have very much interest in AI art as a society, I just don't think it solves a problem. I am more interested in the aspect that back in the late 90s, computers filled. Computers were said to "Take peoples jobs" and "Be the end of humanity" but 20+ years later, and we still have jobs, and we are still here. My point is that AI is a tool in it's infancy much like computers in the 90's and it's not clear exactly what role it will fill right now. But in 20 years I am sure it will be a normal part of society and we will have to decide what to use the technology for. Imagine you need to check a 10-page essay for grammatical errors. This would be a long and tedious process. However, AI would do it in no time. Another closer to home thing for me. Imagine you need to code a program to sort an array of prices. This is easy enough, but it takes some time. I would much rather just have AI code this quickly and fix any minor errors it makes. To conclude, it is not that "AI is the future", AI will be useful, it is just not clear what the future holds for it and we need to decide what that it as a society. If you guys are interested, look up chinese room experiment. It is an interesting thought experiment about whether AI is intelligent or just a machine manipulating things. P.S. I agree, I think NFTs were stupid.
Source: youtube · Viral AI Reaction · 2025-04-03T16:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          industry_self
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwFsxKIeAiCntrIQJZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzSsmHbHeV-3DofgZN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxRtI8iT39HYLpVbd54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwYXd1U0J6WLF7aKfd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxSzDkJBjBGrect_m94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxI9U4XkS5KloAZ0Dd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgycPbJzikYFQ2xgv994AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwuiDsQtrDUAfOB-4h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxwu_klGEgBtM2igGB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxb_OoCadPj_2JoXm54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
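The coded dimensions shown in the table above are presumably recovered by parsing this batch response and matching the comment's id. A minimal sketch of that lookup, assuming the batch format seen here (the two-entry sample `raw` and the function name `lookup` are illustrative, not part of the actual pipeline):

```python
import json

# Sample batch response in the format above, truncated to two entries.
raw = '''[
  {"id": "ytc_UgwFsxKIeAiCntrIQJZ4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxb_OoCadPj_2JoXm54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]'''

# The four coding dimensions used in the results table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(batch_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id from a batch response."""
    for entry in json.loads(batch_json):
        if entry["id"] == comment_id:
            return {dim: entry[dim] for dim in DIMENSIONS}
    raise KeyError(f"comment id not found in batch: {comment_id}")

print(lookup(raw, "ytc_Ugxb_OoCadPj_2JoXm54AaABAg"))
# {'responsibility': 'none', 'reasoning': 'consequentialist',
#  'policy': 'industry_self', 'emotion': 'approval'}
```

Note that this last entry's values match the Coding Result table above (responsibility none, consequentialist, industry_self, approval), consistent with the table being derived from one entry of the batch.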