Raw LLM Responses
Inspect the exact model output for any coded comment. You can look up a comment directly by its ID, or browse the random samples below.

Random samples
I'm 100% standing for super AI, it's our child, in evolution of the Mind, and I'… (ytc_UgwvASJ_V…)
AI will be the rise for autistic, self driven, in the same story or universe esc… (ytc_UgxLcBrwR…)
Get rid of AI. Period!! It's a toy for uneducated narcissists who could care le… (ytc_UgyiUW1V4…)
On the positive side, there are communities that have not only survived, but thr… (ytc_UgyKv6kUn…)
Ai has only made things objectively worse for most people and trillions in over … (ytc_UgyIrsXh9…)
Thank you for your comment! It's fascinating to see how Sophia continues to evol… (ytr_UgwtjwATe…)
Lowkey crazy how ai slop is improving, but the problem is artist commisions are … (ytc_Ugx6utm9Y…)
Being an AI 'artist' is like pitching your ideas to an actual artist and claimin… (ytc_UgwSi1ygz…)
Comment
@Trespasser68394 I happen to know the math used for the back prop. It is just good old calculus with partial derivatives, adjusting the weights of the neural net in order to minimize the error function between output and input. LLMs are just the good old neural nets from the 70s, albeit with new groups of neurons (transformers, etc.) inside, scaled up by a large number of elements that take a word and turn it into a numerical input, but back prop is still the same. Whether you have a simple MLP or a massive transformer, the goal is still to calculate errors. You still start at the end (the error) and work backward to the beginning. The back prop is a one-shot deal per input/output pair: once you show the input/output, the weights get adjusted. There is no looping or continuous action (the back prop doesn't need that), unless some additional things are going on that they invented, which is what I asked above and which those employed by AI companies would know the answer to.
youtube · AI Moral Status · 2026-03-02T19:2… · ♥ 3
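The comment above gives a serviceable informal description of backpropagation: compute an error at the output, apply the chain rule backward to get a partial derivative for each weight, and adjust the weights once per input/output pair. A minimal sketch of that idea for a tiny 1-1-1 network (pure Python; the network shape, learning rate, and target value are illustrative assumptions, not anything from the comment):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop_step(w1, w2, x, y, lr=0.1):
    """One backprop update for a tiny x -> hidden -> output network.
    Forward pass first, then gradients computed from the error backward."""
    # forward pass
    h = sigmoid(w1 * x)
    out = w2 * h
    err = out - y                    # error at the output (dE/d_out for E = 0.5*err^2)
    # backward pass: chain rule, starting at the error
    dw2 = err * h                    # dE/dw2
    dh = err * w2                    # error pushed back to the hidden unit
    dw1 = dh * h * (1.0 - h) * x     # dE/dw1 via the sigmoid derivative
    # one-shot weight adjustment for this input/output pair
    return w1 - lr * dw1, w2 - lr * dw2

w1, w2 = 0.5, -0.3
for _ in range(200):
    w1, w2 = backprop_step(w1, w2, x=1.0, y=0.8)
```

Repeating the single-example step drives the output toward the target; each call is still the one-shot update the comment describes, with no looping inside back prop itself.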
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
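A table like the one above can be generated directly from a parsed coding record. A minimal sketch (the `coding_table` helper and the fixed field order are assumptions for illustration, not part of the tool):

```python
def coding_table(coding: dict, coded_at: str) -> str:
    """Render one coding record as a markdown table like the one above."""
    rows = ["| Dimension | Value |", "|---|---|"]
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        rows.append(f"| {dim.capitalize()} | {coding[dim]} |")
    rows.append(f"| Coded at | {coded_at} |")
    return "\n".join(rows)

table = coding_table(
    {"responsibility": "none", "reasoning": "unclear",
     "policy": "none", "emotion": "indifference"},
    "2026-04-27T06:24:53.388235",
)
```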
Raw LLM Response
[
{"id":"ytr_UgywrwffJ7UVykrk7yN4AaABAg.ATrnMoEVCNMAU2fmSj4yiL","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytr_UgytV1pB9MINc2dSpMd4AaABAg.ATrcnWLGdy8ATrh3JzvqSD","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgytV1pB9MINc2dSpMd4AaABAg.ATrcnWLGdy8AU4tgmIWtPW","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxirK7zMYMdyUSLAzV4AaABAg.ATrbu5oGmuTATvm5xCXx90","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugy6u3kyBQ36uFtdJrt4AaABAg.ATr_NEw4itvATyLme0Wc2B","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgyiFYVU0bGYFXPyrgB4AaABAg.ATrYdG8eEdnAVtXEZVlk14","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytr_Ugw_4BOXYSEPssNONSt4AaABAg.ATrRRVuFqhaATrnVG9a8Yv","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgxnrBON8G5xjj0mjAd4AaABAg.ATrQWtkKELnATrzgThaOq9","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwrWbcNdt7nemWUMHd4AaABAg.ATrNfQtKpOYAUO48VCIvqD","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwrWbcNdt7nemWUMHd4AaABAg.ATrNfQtKpOYAUknhEH2Xig","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
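Raw responses like the one above are plain JSON arrays, so they can be validated before the codings are stored. A minimal sketch of such a check (the `parse_codings` helper is hypothetical, and the allowed value sets are inferred from the responses shown here, not from an actual codebook):

```python
import json

# Allowed values per dimension -- inferred from the sample responses
# above; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "virtue", "deontological", "consequentialist", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"resignation", "indifference", "outrage", "approval", "fear", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, dropping any
    record with a missing id or an out-of-codebook dimension value."""
    out = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        coding = {dim: rec.get(dim) for dim in ALLOWED}
        if all(coding[dim] in vals for dim, vals in ALLOWED.items()):
            out[cid] = coding
    return out

raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
codings = parse_codings(raw)
```

Dropping invalid records (rather than repairing them) keeps the stored codings restricted to the known label set, so a model that drifts off-codebook surfaces as missing IDs instead of silently corrupt dimensions.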