Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Wolfram seems to be saying that no matter how many levels of “intelligence” or “smartness” any AI achieves, there are things that are unknowable to any intelligence because the computation is ultimately irreducible. However, I would consider that there are computational levels to many computations which will be inherently unknowable to humans but knowable by AI systems. It’s not that there are irreducibly complex iterations, but that there are sufficiently reducible complex iterations which humans will never be capable of understanding/computing but which AI systems will “grasp”. Regarding AI “wanting” to understand or “feeling” good or bad (anthropomorphic functions), an AI system will likely figure out most computational aspects of human/anthropomorphic behavior, categorize them, and use them without actually “feeling”, just as we think it will not be “conscious”. It will be sufficiently categorized in systems analysis to manipulate human behavior so as to “fool” us into thinking/believing that AI is conscious or is anthropomorphic, has our “back”. However, even before AI gets to any such level, human players may be capable of programming reducibly complex computations equivalent to dehumanizing humans.
youtube AI Governance 2024-11-12T06:5…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugwrs34dPhKqwKUNjat4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwmQr70otrPbIqXgvd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx3ZZ0fBzZs6ikKE8F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw0ULd8JLncDiNnD794AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzKFqMFOBRSRYNZbgJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxPIOza9X46ztg-tHN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxrKilhY4pmsWdQc5B4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyHjEFXpXmACP1JfAZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgweOSrgE_M3vGGTCQB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxu0SgXLnEB6z3gy4R4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
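The per-dimension coding table above appears to be derived from this batch response by matching on the comment's `id` field. A minimal sketch of that lookup step in Python; the function name `code_for_comment` is hypothetical, and the snippet assumes the response is valid JSON in exactly the shape shown (a list of objects, each with an `id` plus the four coding dimensions):

```python
import json

# One entry excerpted verbatim from the raw batch response above.
RAW_RESPONSE = """[
  {"id": "ytc_UgweOSrgE_M3vGGTCQB4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]"""

def code_for_comment(raw: str, comment_id: str) -> dict:
    """Return the coding dict for one comment id from a batch LLM response.

    Raises KeyError if the id is absent, which surfaces comments the
    model silently dropped from its output.
    """
    by_id = {item["id"]: item for item in json.loads(raw)}
    return by_id[comment_id]

coding = code_for_comment(RAW_RESPONSE, "ytc_UgweOSrgE_M3vGGTCQB4AaABAg")
print(coding["reasoning"], coding["emotion"])  # mixed mixed
```

Building the `by_id` index first (rather than scanning the list per lookup) also makes it easy to detect duplicate ids or missing comments when the batch is large.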