Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One thing has not been mentioned: current LLMs (like Gemini or ChatGPT) don't "think"; they are very good word generators with an inhumanly good grasp of text. Give one a standard problem whose solution is well represented in the training data, such as high-school math homework, and it will solve it correctly and explain the solution in detail. Give it a difficult problem that has NOT been solved yet, and it will either hallucinate nonsense (that looks plausible at first glance to the human eye) or repeat previous failed human attempts from the training data. For example, it can explain my fourth-grade math homework to me in the style of the King Charles Bible, but it will never deliver a working proof of, let's say, the Riemann hypothesis (i.e. an unsolved problem).

What does that mean in the context of this video? Ordinary work that can easily be automated (filling Excel sheets, call centers, production sites, developing the next dating app in a co-working space in Berlin, etc.) can and WILL be outsourced to AI, whereas jobs that require critical problem-solving skills will continue to exist. Where you previously hired a team of software developers who are good at coding, you now need ONE excellent developer who is good at coding AND at AI. (In my humble opinion, technologies like GitHub Copilot will stay a copilot for a while...) For most people (e.g. people with no PhD in STEM), that means there will be more physical jobs (plumbers, soldiers, etc.), whereas comfortable co-working spaces and air-conditioned offices will soon stop existing. If AIs become able to solve open problems (scientists and industry are working hard on that feature), we are doomed, because eventually there will be knowledge that humans can no longer understand, which is a terrifying thought.
Source: YouTube — "Viral AI Reaction" — 2025-12-06T11:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugyk-Rfo6K5wD5aKzAl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugzs63wd7V_2bgcKLTJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgydieOWGV_WCx31IHp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx018TCEEfjp-NXZod4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwv9trBb36YIXmOi-x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx0ycZGUI1FUFSuc7V4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwQN1jWrwswXZf2I3p4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw5JPF9SJmvZPp4SrJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx3e4yG-78J-fP0hEV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwChwalvfXyeWLA4dZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
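The raw response above is a JSON array with one record per comment, keyed by a `ytc_…` comment id. A minimal sketch of how such a response could be parsed and indexed by id is shown below; the helper name `index_by_id` and the two-record sample are illustrative assumptions, not part of any tool shown here (the sample records are copied from the raw response above).

```python
import json

# Hypothetical sketch: the batch coding response is a JSON array of
# per-comment records; indexing it by id lets us look up the coded
# dimensions for any single comment. Sample records copied from the
# raw LLM response above.
RAW = """[
  {"id": "ytc_Ugyk-Rfo6K5wD5aKzAl4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugw5JPF9SJmvZPp4SrJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""

def index_by_id(raw: str) -> dict:
    """Return {comment_id: coded_dimensions} from a raw JSON response."""
    records = json.loads(raw)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

coded = index_by_id(RAW)
print(coded["ytc_Ugw5JPF9SJmvZPp4SrJ4AaABAg"]["emotion"])  # indifference
```

A lookup like this makes it easy to cross-check the per-comment table shown in the Coding Result against the exact record the model emitted.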