Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
MLST IS SPONSORED BY TUFA AI LABS! The current winners of the ARC challenge, MindsAI are part of Tufa AI Labs. They are hiring ML engineers. Are you interested?! Please goto https://tufalabs.ai/

REFS:
[0:05:20] Ricardo's Law of Comparative Advantage from 'On the Principles of Political Economy and Taxation' (1817) - Referenced in context of explaining why economic trade theory doesn't guarantee peaceful AI-human coexistence (David Ricardo) https://www.econlib.org/library/Ricardo/ricP.html
[0:08:05] Spearman's g factor theory from 1930s - Historical concept of general intelligence as a single measurable factor, proposed by Charles Spearman (Charles Spearman) https://psycnet.apa.org/record/2019-39185-002
[0:08:40] Computational irreducibility concept from A New Kind of Science - fundamental limitation in predicting system behavior without step-by-step simulation (Stephen Wolfram) https://www.wolframscience.com/nks/p737--computational-irreducibility/
[0:12:30] Raven's Progressive Matrices - Standard intelligence test used to measure abstract reasoning ability (John C. Raven) https://www.sciencedirect.com/science/article/pii/S0010028599907351
[0:15:55] Discussion of quantum-resistant cryptographic hash functions from NIST's post-quantum cryptography standardization (National Institute of Standards and Technology) https://www.nist.gov/news-events/news/2022/07/nist-announces-first-four-quantum-resistant-cryptographic-algorithms
[0:20:25] Discussion of existential risk and meaning preservation connects to formal philosophical work on extinction ethics. Context: Argument about meaning of history and value preservation. Source: 'Essays Existential risk and human extinction: An intellectual history' (Thomas Moynihan) https://www.sciencedirect.com/science/article/abs/pii/S001632871930357X
[0:20:35] Discussion references the K-Pg extinction event that led to dinosaur extinction and mammalian succession. Context: Used as analogy for potential AI succession of humans. Source: 'The rise of the mammals: Fossil discoveries combined with dating methods illuminate mammalian evolution after the end-Cretaceous mass extinction' (Philip Hunter) https://pmc.ncbi.nlm.nih.gov/articles/PMC7645244/
[0:24:30] A theory of consciousness from a theoretical computer science perspective - Academic paper examining consciousness through mathematical and computational frameworks, particularly relevant to discussion of consciousness and self-knowledge in AI systems (Lenore Blum) https://doi.org/10.1073/pnas.2115934119
[0:24:35] Discussion relates to the classical problem of other minds in philosophy, addressing how we can know about the consciousness of others (Anita Avramides) https://plato.stanford.edu/entries/other-minds/
[0:34:30] Discussion of quantum mechanical model of atoms and electron behavior in context of consciousness and computation. References standard quantum mechanical description of electron orbitals and quantum states. (Richard Feynman) https://www.feynmanlectures.caltech.edu/III_01.html
[0:38:35] Highly accurate protein structure prediction with AlphaFold - Original Nature paper describing DeepMind's breakthrough AI system for protein structure prediction. In context of discussion about AI systems solving previously intractable scientific problems. (John Jumper et al.) https://doi.org/10.1038/s41586-021-03819-2
[0:41:10] Discussion of superintelligent AI as existential risk, referencing 'Artificial Intelligence as a Positive and Negative Factor in Global Risk', a seminal paper discussing AI safety concerns (Nick Bostrom) https://nickbostrom.com/existential/ai-risk.pdf
[0:43:35] Whole Brain Emulation: A Roadmap - Technical report discussing requirements and challenges of brain emulation, including discussion of required scanning resolution and preservation of neural properties (Anders Sandberg and Nick Bostrom) https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf
[0:44:50] Mind/Brain Identity Theory - Philosophical framework discussing relationship between mental states and brain states, relevant to Yudkowsky's functionalist perspective (J.J.C. Smart) https://plato.stanford.edu/entries/mind-identity/
[0:48:45] Discussion of personal identity and consciousness continuity relates to core concepts from Stanford Encyclopedia of Philosophy's entry on Personal Identity (Stanford Encyclopedia of Philosophy) https://plato.stanford.edu/entries/identity-personal/
[0:51:05] Discussion parallels Robert Nozick's Experience Machine thought experiment from 'Anarchy, State, and Utopia' (1974), which explores whether purely pleasurable simulated experiences constitute genuine happiness (Robert Nozick) https://rintintin.colorado.edu/~vancecd/phil3160/Nozick1.pdf
[0:57:25] References his previous writing on no universally compelling arguments, discussing limits of rational persuasion (Eliezer Yudkowsky) https://www.lesswrong.com/posts/PtoQdG7E8MxYJrigu/no-universally-compelling-arguments
[1:01:30] Discussion of mathematical axioms and commutativity property (x + y = y + x) references fundamental concepts in mathematical logic and axiom systems. This relates to Wolfram's work on fundamental mathematics and computation. (Stephen Wolfram) https://writings.stephenwolfram.com/2020/12/combinators-and-the-story-of-computation/
[1:01:50] The Simple Truth - A foundational essay on the nature of truth and epistemology (Eliezer Yudkowsky) https://www.lesswrong.com/posts/X3HpE8tMXz4m4w6Rz/the-simple-truth
[1:01:55] Highly Advanced Epistemology 101 for Beginners - A sequence on epistemology, logic, and truth (Eliezer Yudkowsky) https://www.lesswrong.com/s/SqFbMbtxGybdS2gRs
[1:02:10] Peano axioms - The foundational axioms of arithmetic in mathematical logic (Giuseppe Peano) https://plato.stanford.edu/entries/peano-arithmetic/
[1:06:30] Discussion of first-order logic vs second-order logic in mathematical foundations. Context: Explaining how conclusions follow from axioms in different logical systems. (Jouko Väänänen) https://plato.stanford.edu/entries/logic-higher-order/
[1:08:20] The paperclip maximizer thought experiment, demonstrating how an AGI with seemingly innocuous goals could pose existential risks by pursuing objectives orthogonal to human values (Eliezer Yudkowsky) https://www.lesswrong.com/tag/squiggle-maximizer-formerly-paperclip-maximizer
youtube · AI Governance · 2024-11-11T18:5… · ♥ 29
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at 2026-04-27T06:24:53.388235
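Read as a record, the coding result above is one label per dimension plus the coded-at timestamp. Below is a minimal Python sketch of that shape; the `CodingResult` type and `UNCLEAR` constant are hypothetical names for illustration, and the example label values in the comments are taken from the raw LLM response shown further down, not from a documented label set.

```python
from typing import TypedDict

UNCLEAR = "unclear"  # fallback value shown in the table when no label was extracted


class CodingResult(TypedDict):
    """One coded comment across the four dimensions (hypothetical schema)."""
    responsibility: str  # e.g. "developer", "company", "user", "ai_itself", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed", "unclear"
    policy: str          # e.g. "regulate", "industry_self", "none"
    emotion: str         # e.g. "outrage", "approval", "resignation", "indifference", "mixed"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:24:53.388235"


# The record shown above, with every dimension at the fallback value:
example: CodingResult = {
    "responsibility": UNCLEAR,
    "reasoning": UNCLEAR,
    "policy": UNCLEAR,
    "emotion": UNCLEAR,
    "coded_at": "2026-04-27T06:24:53.388235",
}
```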
Raw LLM Response
[{"id":"ytc_Ugzytbm32BmPyZWeuft4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxBO-wKvI2gMWMQXm54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyqKQ4Q2zAr2Pf3XpN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxd5GPgz0mc1vmWDml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugz7du4ZZIu4g61tYPd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyFaHsdftdvaS601Lp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugz5tlyVDY64cxGs0WB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyK2doOVquGPsMQeW14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwimqcLDJLMkOtSFeR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwJSuxSyyJkYu5zO7B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"})