Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Good video, I just wanted to add a few things

In machine learning, there is this concept called overfitting. When you have a neural network you need to give it a lot of data, to make it 'learn' how to solve a problem you want it to solve. Issue is training itself doesn't guarantee that network will be able to do this, so you need to experiment a lot with network size and architecture. Now the thing is, when the network is small it can't do many things so it's 'forced' to learn things correctly, this is called generalization, meaning network can solve the problem it's supposed to in every context. When network is larger though, it can do more things and do them better, but it also tends to memorize the training data because that's how you maximize the fitness function (this is what overfitting is). So what happens is you make a large network and it's almost guaranteed that it memorizes a whole bunch of things instead of actually learn how to do them

Now pretty much all the famous neural networks like chatgpt, gemini, stable diffusion, midjourney, etc are very large. The smallest language models that compete with chatgpt like llama are around 7GB, and chatgpt is at least several times larger. Just for reference an average book is 375,000 characters long (this is 0.375 MB), this means that in something like smallest version of llama you could fit over 18,000 books uncompressed. And I had experiences where language models provided exact same programming code that I saw elsewhere while researching how to solve some problems

Even if this wasn't the case though, it would be bad for companies to just take other people's data without consent, but this makes it worse. AI companies will parade around all the things their machine learning programs can do when in reality it's all based on stolen data
youtube 2024-07-24T04:3…
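The comment's back-of-envelope size comparison can be checked with a short sketch. The 7 GB model size and 375,000-character book are the comment's own figures; treating each character as one byte and using decimal megabytes are assumptions made here:

```python
# Sketch verifying the comment's book-count arithmetic.
# Assumed figures (from the comment): a ~7 GB model file and an average
# book of 375,000 characters, taken as 1 byte per character.
model_size_mb = 7 * 1000               # 7 GB expressed in MB (decimal units)
book_size_mb = 375_000 / 1_000_000     # 375,000 bytes ≈ 0.375 MB per book
books = model_size_mb / book_size_mb
print(f"{books:,.0f} books")           # roughly 18,667, i.e. "over 18,000"
```

The result matches the comment's "over 18,000 books" claim under those assumptions.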
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyqKMz9oPhSFtECDfZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyIr5cYq15qU72VH3Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgytoFScWzwXjLDuJmd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxa67id9zox-5MgDo14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyV3DSnmo9ZkdXfJwF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz7rv6-OhNKurJ-a2F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx1XBomW9wrBMMoVHl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgzacHJA1Ct101s9sGF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyfSigXgasfGkL6jap4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgybydY-xSB8oCaVoTx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
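A raw response in this shape can be parsed and its dimensions tallied with a minimal sketch. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the record above; the two sample rows here are shortened stand-ins, not real comment IDs:

```python
import json
from collections import Counter

# Minimal sketch: parse a raw LLM coding response (a JSON array of
# per-comment objects) and count the values of each coded dimension.
# The sample IDs "ytc_a"/"ytc_b" are placeholders for illustration.
raw = '''[
  {"id":"ytc_a","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_b","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

rows = json.loads(raw)
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(dim, dict(Counter(r[dim] for r in rows)))
```

In practice the raw string would come from the model output stored with the record, and the counts give a quick sanity check that every object carries all four dimensions.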