Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Sure. B is just billion (parameters).
That's just the number of learnable parameters. I'm no expert, mind you, but from what I've read and learned, it relates to the possible connections the model can make between "words" (tokens). The neural network uses those parameters to transform your input into a prediction of the most likely next word. Basically it calculates probabilities over tokens, and sampling from the most likely ones gives you some variance in the replies; if it were fully deterministic, you'd always get the same answer to the same input.
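The "calculate probabilities, then sample" step can be sketched in a few lines. This is a toy illustration with a made-up vocabulary and made-up probabilities, not output from any real model:

```python
import random

# Hypothetical next-token distribution a model might produce
# for the prompt "The sky is" (numbers invented for illustration).
next_token_probs = {"blue": 0.70, "clear": 0.15, "falling": 0.10, "green": 0.05}

def sample_next_token(probs, temperature=1.0):
    """Pick one token at random, weighted by its probability.

    temperature < 1 sharpens the distribution (closer to deterministic),
    temperature > 1 flattens it (more varied replies).
    """
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# Sampling can return any token, which is where the variance in
# replies comes from; a near-zero temperature almost always picks
# the most probable token ("blue" here).
print(sample_next_token(next_token_probs))
```

This is why the same prompt can produce different answers on different runs: the model's forward pass is deterministic, but the token choice at the end usually isn't.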
So basically the parameters help the model determine, given an input (a question), which word should follow so that the result makes sense. In theory, the more parameters it learns, the more connections it can make. This is why ChatGPT can give such good answers: it connects your input very well with an appropriate response, so to speak "understanding" the context, the implications, the nuances, etc. (Strictly speaking, the model has no idea what it's doing; it's just predicting text!)
The fewer parameters it has, the poorer the text prediction, in theory. I suppose many other factors matter too: for example, the Vicuna 13B model seems to perform better than other 13B models I've used, even though they have the same number of parameters.
And sadly, the parameter count determines the size of the model, so you're limited by the VRAM you have. There are models that run on CPU, and you can also split a model across devices, but in general I'm personally limited to 13B at present.
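The VRAM ceiling the comment mentions follows from simple arithmetic: weight memory is roughly parameter count times bytes per parameter. This back-of-envelope sketch is my own addition, not from the comment, and it ignores activations and the KV cache:

```python
def weight_memory_gb(n_params_billion, bytes_per_param=2):
    """Approximate GiB of memory needed just to hold the weights.

    bytes_per_param: 2 for fp16/bf16, 1 for 8-bit quantized,
    0.5 for 4-bit quantized.
    """
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

# A 13B model in fp16 needs about 24 GiB for weights alone,
# beyond any single consumer GPU of the era; 4-bit quantization
# brings it down to roughly 6 GiB.
print(round(weight_memory_gb(13), 1))       # fp16
print(round(weight_memory_gb(13, 0.5), 1))  # 4-bit quantized
```

This is why 13B sits right at the edge of what a single consumer GPU can hold, and why quantized or CPU-split setups are the usual workaround.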
reddit · AI Responsibility · posted 2023-04-26 (Unix timestamp 1682528509.0) · ♥ 23
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_jhspuqw", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jht26c9", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jhsqwc5", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jhsre0c", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jhuh106", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
```
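A per-comment dimension table like the one above can be derived from such a raw response by parsing the JSON and selecting the entry for the comment of interest. A minimal sketch, using only the first entry of the sample response (the helper name `coding_for` is mine, not part of any pipeline shown here):

```python
import json

# First entry of the sample raw LLM response above.
raw = ('[{"id": "rdc_jhspuqw", "responsibility": "none", "reasoning": "unclear",'
       ' "policy": "none", "emotion": "indifference"}]')

def coding_for(raw_response, comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for entry in json.loads(raw_response):
        if entry["id"] == comment_id:
            # Everything except the ID is a coded dimension.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

print(coding_for(raw, "rdc_jhspuqw"))
# {'responsibility': 'none', 'reasoning': 'unclear', 'policy': 'none', 'emotion': 'indifference'}
```

Returning `None` for an unknown ID (rather than raising) makes it easy to detect comments the model skipped in a batch response.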