Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
"Each mark has a story of its own,but ai just poops out what every they eat…" (ytc_UgwKMbb_u…)
"No. one. wants. AI. Nobody. Not one normal functioning human wants to live in th…" (ytc_UgzP0oA5v…)
"I posit this, RIGHTS as we think of them came into being to protect a person's b…" (ytc_UghA9z6zW…)
"iF AI really learns anything about Humanity it WILL want to harm certain individ…" (ytc_UgzSDMTgj…)
"I'M a Chinese. This AI is stupid, it is just a "FREEKILLER" Chinese education is…" (ytc_Ugza8IA-G…)
"So basicly these artist are using AI generated images as concept... They are ope…" (ytc_UgxsssMRB…)
"he went to twitter and ChatGPT for medical advice. i don't think he ever would…" (ytr_Ugz2-8PyM…)
"I play the piano and do singalongs at senior living communities. Yes, AI could r…" (ytc_UgzTB_VlN…)
Comment
Well, ngl, as someone who has developed an AI before (TensorFlow, RMSprop example), studied chemistry, and is a chem technician, I would say... yes, AI use basically is transformative enough.
Here is an everyday person's explanation. Let's say that I give you the word dog and the word cat. This is the simplest way I can explain it, btw, so buckle up.
Let's say that in binary data we can write it as 0s and 1s, like 000 means dog and 111 means cat. Got it so far?
Now 010 would mean "dag", 101 would mean "cot", etc.
Now the thing is, imagine all of these being in a set: 000, 001, 011, 111, 110, 100, 101, 010.
Now imagine that apart from this, they all have a distributed value. There are 8 values, so we start out at, let's say, 1/8 -> 12.5%.
Now on the first run you can basically get any answer, BUT you are satisfied with only one. So you decide to pick it.
Then the system adjusts that answer's "chance value", so now it's like a +1, let's say (depends on the logic used, etc.).
So instead you go to 1/9, BUT the good answer has 2 votes. So it's basically now 11.1% for 7 answers, BUT 22.2% for the one other.
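The vote-counting arithmetic above can be sketched in a few lines of Python. This is a toy illustration of the commenter's example only, not any real framework's training loop; the variable names and the `Fraction` bookkeeping are my own:

```python
from fractions import Fraction

# Eight possible 3-bit "answers", each starting with 1 vote,
# so each begins with probability 1/8 = 12.5%.
answers = ["000", "001", "010", "011", "100", "101", "110", "111"]
votes = {a: 1 for a in answers}

def probabilities(votes):
    """Turn raw vote counts into exact probabilities."""
    total = sum(votes.values())
    return {a: Fraction(v, total) for a, v in votes.items()}

assert probabilities(votes)["000"] == Fraction(1, 8)  # 12.5%

# The user is satisfied with "000" (dog), so the system adds +1 to it.
votes["000"] += 1

probs = probabilities(votes)
# Now "000" holds 2 of 9 votes (~22.2%) and each other answer 1 of 9 (~11.1%).
```

Repeating the `+1` step many times is what gradually bends the distribution toward the answers people pick.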
Now imagine adding to this data augmentation, complex math, heavier models, genetic AI models, agent-based AI models, the LIF neuron model, and so on.
Basically, if we were to punish AI learning, eventually someone would map a human brain, call it algorithmic, and basically say "it's illegal that you watched the movie; now its copyrighted content is in your brain".
The main problem is that it's probabilistic, so it's NOT always going to give the same thing back, etc. Temperature, in AI terms, means how much it can diverge.
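Temperature is typically implemented as a divisor on the model's raw scores (logits) before the softmax: a low temperature sharpens the distribution toward the top answer, a high one flattens it so the model diverges more. A minimal sketch, with logit values made up purely for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores into probabilities; a higher temperature
    flattens the distribution, so sampling diverges more often."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                       # e.g. scores for "dog", "cat", "cot"
cold = softmax_with_temperature(logits, 0.5)   # sharper: top answer dominates
hot = softmax_with_temperature(logits, 2.0)    # flatter: more random picks
```

With these numbers, the top answer's share shrinks as temperature rises, which is exactly the "how much it can diverge" knob the comment describes.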
So I think this lawsuit shouldn't pass; otherwise it would be pretty bad.
It's as if I could sue someone for rolling a die with words on it when it spells out something copyrighted by PURE chance/probability, unintentionally.
For example, in the above "user-friendly example", with enough people the values for "000" and "111" could increase, gain ties to them from previous nodes (layers), and basically, when asked "what is your favourite dog?", instead of leaning to "cot" etc., the system would lean to either "cat" or "dog", because those are the most usually liked answers.
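Continuing the toy example, the "with enough people" effect can be simulated: many users repeatedly reinforce the answers they like, and those answers end up holding almost all of the probability mass. Again a hypothetical sketch of the comment's example, not a real system:

```python
import random

random.seed(0)

# Eight 3-bit answers, each starting with 1 vote, as before.
answers = ["000", "001", "010", "011", "100", "101", "110", "111"]
votes = {a: 1 for a in answers}

# 1000 users each reinforce one of the two "liked" answers:
# "000" (dog) or "111" (cat).
for _ in range(1000):
    votes[random.choice(["000", "111"])] += 1

total = sum(votes.values())
top_share = (votes["000"] + votes["111"]) / total
# The two liked answers now dominate, so a sampler would lean to
# "dog" or "cat" rather than "cot" or "dag".
```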
Then on top of that, there are open-source models right now that can run on your PC without internet, and yes, you yourself can train them too. Punishing a publicly available model, etc. for something a private one isn't punished for would just grow the underground economy for "home-trained, custom-scraped info" models.
youtube
AI Responsibility
2026-04-12T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
  {"id":"ytc_UgxFfjQ_bk6scWXFq3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwCwAVg7FIOLaRPHPl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzvNC_bDgCeb1ZimLJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzDabCvKs6eGMyGQCJ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugyc7W2U37tynwMK2lR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugya6OcqBGv_LnpBq454AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxfiGQ05wfuv7fJKM54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzMtwZ9Axv4yJjqH8R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxXy6azk6yA-nTPIs54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwlGN-_V4WXYYx2-k14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]