Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My whole issue is that no one, and I mean NO ONE, has clear mathematical proof that we can reasonably delegate thinking to computers. We run them on Turing machines (or at least the equivalent), for which halting is of course undecidable (see the halting problem) (and which are also prone to being only conditionally consistent and incomplete, which are all proven "holes" in math!!!!), thus by extension you cannot give the machine an arbitrary problem and expect it to solve it (which IS basically what we humans do). And for the other direction: when you build on perceptrons (which are the fundamental units of any practical deep learning structure), you can only solve a problem arbitrarily well with a layer of INFINITE perceptrons (or an equivalent structure), which is physically unfeasible, AND even if you had infinite resources, you cannot overcome the issue that this holds only for a WELL-DEFINED problem, not ANY problem, notwithstanding the Turing halting issue (also, quantum computers have their equivalent Turing machines!). Of course, we can have good-enough heuristics that many white-collar jobs can be negatively affected, but for the aforementioned reasons, it is hubris to think that we can completely outsource thinking to machines. Also, anyone who has worked in and understood AI/ML already knew that many tasks or even whole roles had been eliminated from human labor; it is only transformers that use natural language that seem so omnipotent.
youtube · AI Jobs · 2026-02-24T18:3… · ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgySkEnSxUA4hLz41hF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzCf6lulGBXfBgdAyZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyhoeGG9FWyxdDa1Td4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzuMLSL0R-3iPTwlZ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzKCFukqDsEIhjDOMt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgxQrw-eGKZbVnLmwpJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwFR3AVNls0KTgOtKl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwfZ4vOIMxG9Yp2HUt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugyu-0yUAqCP6H3Dwnd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz3oiciQzRwGIceFmV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]