Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@eduardouchoa307 I am not convinced yet. Its still missing too much data. A real AGI would have to be trained on all of what humans can experience and do. Right now LLMs only process text sequences and diffusion models only use image data. They are all very specialized on their class of tasks. A real AGI would have to be able to control an agent in space, use language, solve math equations, interpret patterns, and combine the knowledge about all those things into a single behaviour to achieve a goal. We don't have the tasks for AGI and we don't have a model that can handle all of that efficiently. Sure it might happen eventually, but I think LLMs right now are somewhat overrated. They are more like search machines for a training corpus. ChatGPT has been trained specifically to be an assistant so its data set includes regular dialogue, which is actually quite repetitive, and general language knowledge. Have enough parameters and data and it will make a pretty good estimate of new inputs. I really don't think that we have all it takes to train an AGI yet. I think we would either need tons of real world data or a simulation to train AGI in. And also, what do we want AGI to be doing exactly? What is the purpose of artificial life? Survive? Serve humans? A lot of open questions still. I feel like people are just humanizing these chat bots too much. They just play back their training data and fill the gaps.
youtube AI Governance 2023-05-03T00:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugz-rB4ShicZWQ0oe_F4AaABAg.9pEEiWYG9gO9pGpkbyTvvS", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugw0Y1gYx1-JwU482gh4AaABAg.9pE4KVUYlUG9pE6RwzzPED", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_Ugw0Y1gYx1-JwU482gh4AaABAg.9pE4KVUYlUG9pEBf05twkc", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugw0Y1gYx1-JwU482gh4AaABAg.9pE4KVUYlUG9pF___dzvEK", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwbB_xMD87-39w-lr94AaABAg.9pE-MUGFDDV9pExAhjVl4s", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyhVfDwLs-X0jddx6N4AaABAg.9pDyR6VbGDJ9pDzHm_ULfZ", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytr_UgzfjhL-NJs8RwfMKXJ4AaABAg.9pDriHPsyo19pDvMzeTSsY", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytr_UgzfjhL-NJs8RwfMKXJ4AaABAg.9pDriHPsyo19pDvZ_J2qmH", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgwACK_4gNoCthCGzwl4AaABAg.9pDqUT_ddNz9pE_RA4u2I1", "responsibility": "government", "reasoning": "mixed", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytr_UgwACK_4gNoCthCGzwl4AaABAg.9pDqUT_ddNz9pFHqpUjMuj", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"}
]
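A raw response like the one above can be mapped back to individual comments by parsing the JSON array and indexing the records by their `id`. The sketch below illustrates this under the assumption that the response arrives as a plain JSON string; it is truncated to two of the ten records for brevity, and the variable names (`raw`, `codes`) are illustrative, not part of any actual pipeline.

```python
import json

# Raw model output as a JSON string (first and sixth records from the
# response above; the full batch contains ten).
raw = '''[
  {"id": "ytr_Ugz-rB4ShicZWQ0oe_F4AaABAg.9pEEiWYG9gO9pGpkbyTvvS",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyhVfDwLs-X0jddx6N4AaABAg.9pDyR6VbGDJ9pDzHm_ULfZ",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "indifference"}
]'''

# Index the coded records by comment id for O(1) lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Retrieve the coding for one comment; this matches the table shown
# for the comment on this page.
rec = codes["ytr_Ugz-rB4ShicZWQ0oe_F4AaABAg.9pEEiWYG9gO9pGpkbyTvvS"]
print(rec["responsibility"], rec["emotion"])  # none indifference
```

Keying on the `id` field this way also makes it easy to detect comments the model skipped or duplicated: the set of keys can be compared against the set of comment ids that were sent in the batch.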