Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
the people making and pushing AI are so smart that they looped back around and got stupid. LLMs in their current state are glorified autocomplete. the only reason they are even as useful as they are is because software companies have been shitting where they eat for decades, and the internet is practically unusable. you need to throw trillions of dollars at what is effectively the problem of 'search' just to be able to dig yourselves out of the hole you dug yourselves into? why do these companies need to steal books/datasets from places like annas archive to train their models? why is information still paywalled behind academic journals (shoutouts to Aaron Swartz)? LLMs as we know them WOULD NOT EXIST if not for stackoverflow and reddit and IP theft. putting all that aside and just addressing the claims made in the video, unless there is a gigantic paradigm shift in model design or hardware capability, AI will NEVER REPLACE HUMANS. a human brain uses on the order of dozens of watts and we produced Archimedes, Euler, and von Neumann. please explain to me how AI will reproduce the work of Ramanujan, when even Ramanujan had no idea how he did the things he did and claimed he received visions from his 'family goddess' Namagiri Thayar. and then you have people like Penrose who make a pretty convincing case that consciousness is a quantum process; if that were true, then LLMs as they are now will literally never, ever be able to reproduce a human-brain-level of complexity. the reason these guys keep talking about how scary AI is, is because low-level programmers (i.e. programmers without agency) and company executives are the most likely to be replaced, because they dont really do anything creative. also, if you are trying to get money from rich old people (aka investors), scaring them into thinking they are in an arms race of existential consequence does seem like a very effective strategy. 
so yeah, AI/LLMs/agents may be useful, but just look at this 'practical example' of the thing eric schmidt talks about in the video: https://proofofcorn.com/ the guy is trying to get an AI to do end-to-end corn growing. what he DOESNT MENTION on his page is that THERE IS A HUMAN DRIVING. the AI is NOT DOING ANYTHING AT ALL BY ITSELF EXCEPT SEARCHING FOR THINGS AND DRAFTING EMAIL THAT THE HUMAN HAS TO APPROVE. and tbh i doubt it even succeeds, and certainly will not be as efficient or cost effective as a human just doing all that shit themselves. second example, on how AI does math: https://daveshap.substack.com/p/how-good-is-ai-at-math-really-anti "In 2021, DeepMind researchers used AI to discover new relationships between knot invariants that human mathematicians had missed despite years of searching. The AI didn’t prove the theorem. Humans did that afterward. But the AI saw what humans couldn’t see, and that seeing changed everything." so yeah, AI is basically just 'better search', but for every domain. 'AI' cant suck my ass yet, but one day it might be able to do so. but it wont ever convince me it is actually thinking, or that it loves me. meanwhile we are burning down the world to feed the saviour machine
YouTube 2026-01-28T21:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        virtue
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugyksq3GPTt2sDfz2FR4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_UgxQSvCbIYdXQ9MitwR4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgytvBWXDDSo74_xhjd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgzTynkYAQgL1zb-3gp4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgxZNE0KJ0TtHzQ8VMd4AaABAg", "responsibility": "company",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_UgwsR45FNd0zXKKIJ8V4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "resignation"},
  {"id": "ytc_UgwOMpkMJQ5wGkx666B4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_Ugyu_qjp1dyXFe6QCu14AaABAg", "responsibility": "distributed", "reasoning": "deontological",    "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_Ugxs8xRSI3Qy3DCCFZ54AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgznSU52Nd1x-3S9pNh4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "ban",      "emotion": "fear"}
]
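The coding table above is presumably produced by parsing this JSON array and looking up the row for the comment's id. A minimal sketch of that step, assuming the batch format shown here; the specific id used for the lookup is taken from the raw response but its pairing with this particular comment is an assumption:

```python
import json

# Abbreviated copy of the raw LLM batch response shown above: one JSON
# object per coded comment, keyed by the YouTube comment id.
raw_llm_response = """
[
  {"id": "ytc_UgxQSvCbIYdXQ9MitwR4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugyksq3GPTt2sDfz2FR4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
"""

# Index the batch by comment id so any single comment's coding can be found.
coded = {row["id"]: row for row in json.loads(raw_llm_response)}

# Hypothetical lookup for one comment id; prints the Dimension/Value pairs
# that the "Coding Result" table displays.
result = coded["ytc_UgxQSvCbIYdXQ9MitwR4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {result[dimension]}")
```

The dict-by-id index also makes it easy to spot comments the model left uncoded: any id present in the input batch but missing from `coded` was dropped from the response.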