Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@ki0w0-lordofwatersheep this is a massive oversimplification, but I'll try to explain. Firstly, I've never seen this issue personally, so this is in theory. AI is trained off a massive set (potentially billions) of images, with text describing things. This text is interpreted by the AI in the form of "tokens" (it doesn't actually know what the text means, that's an entirely different type of AI). The AI learns what each of these tokens represent, and if you bunch them together, it tries to calculate what exactly that would look like. A key piece of info is that the AI doesn't actually have the original images it was trained on. This is why an AI trained off of petabytes of data can fit in a single 10gb file. (For reference, assume the average image is 10mb, there's a thousand of those in a gigabyte, a thousand gigabytes in a terabyte, and if you take a thousand terabytes, you have a single petabyte.) So, theoretically speaking, if this is an issue, it's most likely because the dataset the AI was trained off of is stupidly small. Either that, or it could be that the AI is really badly configured, in such a way that it is allowed almost zero randomness.
youtube AI Responsibility 2023-08-01T17:2…
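The back-of-the-envelope sizes in the comment above can be checked in a few lines. A minimal sketch using decimal units (1 GB = 1000 MB, and so on); note that at the comment's assumed 10 MB per image, a gigabyte holds roughly 100 images rather than a thousand, though the broader point — the model file is orders of magnitude smaller than its training data — still holds:

```python
# Sanity-check the size arithmetic from the comment (decimal units:
# 1 GB = 1000 MB, 1 TB = 1000 GB, 1 PB = 1000 TB).
IMAGE_MB = 10          # the comment's assumed average image size
MB_PER_PB = 1000 ** 3  # megabytes in one petabyte

images_per_pb = MB_PER_PB // IMAGE_MB
print(f"{images_per_pb:,} images per petabyte")  # 100,000,000 images

# A 10 GB model file versus a 1 PB training set:
model_mb = 10 * 1000
ratio = MB_PER_PB / model_mb
print(f"dataset is {ratio:,.0f}x the model size")  # 100,000x
```

The hundred-thousand-fold gap is the comment's core argument: the weights cannot plausibly store the training images themselves.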
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgzOdZlwcQTYTIT-abx4AaABAg.9rDTGRRaUzX9rET1GghwEX","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwEiUuS2XwcrdQMyAJ4AaABAg.9rDKh4OaYYv9rt-bIFNk7m","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugw60dbyCmi-jBtXhyJ4AaABAg.9rCh_zYjVkd9rETL3GLcR_","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugxu_Tul03_CprbYNzh4AaABAg.9r1WVR8NYP59sslan_E0zC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxPuSqvNIXL5Rf3wsx4AaABAg.9r0u1s1tGEJ9t4bOBug_do","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytr_UgxPuSqvNIXL5Rf3wsx4AaABAg.9r0u1s1tGEJ9ugzW3PYfky","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxPuSqvNIXL5Rf3wsx4AaABAg.9r0u1s1tGEJ9vcBV0ZrTLm","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytr_UgydklyZZC_WnlCBNPB4AaABAg.9r0O0OGONyi9suC7jWSxuC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgydklyZZC_WnlCBNPB4AaABAg.9r0O0OGONyi9t21nMY6iWv","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugxd3QY59zx1zPC-vkR4AaABAg.9r-NqyBmikA9ruIug-wTdO","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
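For downstream analysis, a raw response in this shape can be parsed and spot-checked programmatically. A minimal sketch, using the first two records above verbatim; the expected key set is inferred from the response itself and is an assumption about the coding schema:

```python
import json
from collections import Counter

# First two records from the raw LLM response above, copied verbatim.
raw = ('[{"id":"ytr_UgzOdZlwcQTYTIT-abx4AaABAg.9rDTGRRaUzX9rET1GghwEX",'
       '"responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},'
       '{"id":"ytr_UgwEiUuS2XwcrdQMyAJ4AaABAg.9rDKh4OaYYv9rt-bIFNk7m",'
       '"responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}]')

records = json.loads(raw)

# Every record should carry an id plus the four coding dimensions
# shown in the result table (assumed schema).
expected_keys = {"id", "responsibility", "reasoning", "policy", "emotion"}
assert all(set(r) == expected_keys for r in records)

# Tally one dimension to spot-check the distribution.
emotions = Counter(r["emotion"] for r in records)
print(emotions)  # Counter({'indifference': 1, 'fear': 1})
```

Validating the key set before tallying catches truncated or malformed model output early, which matters when responses are batch-coded like this one.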