Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
@donbusu Have you even read what I wrote? I'm not claiming they are sentient, I'm claiming we can't be certain they aren't, because we don't know how they work. Predicting the next word is not how they work, it's how they are trained. One algorithm gets a text input and creates an output, and the other algorithm tweaks the weights of the first algorithm in billion different random directions that makes the output infinitesimally better, then the process is repeated for unimaginable number of times, and as a result the first algorithm gets really good at many different things that are helpful to better predict next words in a sentence. Things like reasoning, understanding concepts, and being able to model humans and predict their behaviour, are very useful in this domain, so they end up being things that the model learns in order to be able to predict the next word better. We know that LLM models unexpectedly acquired these abilities when their size was scaled beyond a certain point, and as they kept scaling to have more and more parameters, new abilities started emerging. We don't know which ability will emerge next, until it does. We know less about how these models work than about how the brain works. They are literally giant inscrutable matrices of floating point numbers, consisting of trillions of parameters. We knew how early 2010 chatbots worked, because we coded them ourselves. Current LLM AIs are brute forced into existence by an algorithm using power of enormous GPU clusters, and we have no clue how they work, because we haven't made them, we made the thing that made them, and we understand how that thing works, but not how the final product works.
youtube · AI Moral Status · 2023-08-21T21:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytr_UgzxjGdwwf06E-SFgft4AaABAg.9tfnmbkWhqU9tgL9bDO4gu","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgzxjGdwwf06E-SFgft4AaABAg.9tfnmbkWhqU9thOSGT-4ea","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgzzOqTI80kO-ARKGEB4AaABAg.9tfkpSLj9cB9th0Ihkoqje","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytr_UgzC5pNScnLT4U-tLWh4AaABAg.9tfcr5Q1K6n9tgfYCGafxX","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgzO9SqepC0hZmDOBo14AaABAg.9tfc_nSuukv9tgPo5Wo8X9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgzO9SqepC0hZmDOBo14AaABAg.9tfc_nSuukv9tiIaJ2ERTJ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgzvhdmlyaprcjeJgo94AaABAg.9tfZ5jmHEv69tfgrynqswp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_UgzvhdmlyaprcjeJgo94AaABAg.9tfZ5jmHEv69tgc7HL18QN","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytr_UgztixwOLuoP78-87Hx4AaABAg.9tfAza775rq9thm2JXp9IG","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgyClEpBrJ9n4DnJu-t4AaABAg.9tf0Typ8wwq9tfG546GubD","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"} ]