Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The patient is entitled to know everything being done to them and why. Were not …" (ytc_UgyAcbwYW…)
- "AI Bro: I call this, Bold and Brash! Everyone: More like, Belongs in the Trash!…" (ytc_UgxpJXwFA…)
- "Haha, I'll choose AI over a Doctor, Lawyer or any other lying bastard, who has a…" (ytc_Ugw9oQQzT…)
- "I love your input on this topic! I share a similar vision... AI might replace 80…" (ytc_UgxYP7270…)
- "see the way i see it chat GPT could be great for serialzed show, things like Pow…" (ytc_UgzDXqCdA…)
- "I think it is as with anything vaguely controversial in science, that it is taug…" (rdc_chnaclz)
- "It's also very informative when you carry on that conversation with the AI to di…" (ytr_UgyIoP1be…)
- "I can't understand why there are so many alleged experts shilling for the AI Bil…" (ytr_UgwKelZC0…)
Comment
If you understand how the current large language models, like GPT, Llama, etc., work, it's really quite simple. When you ask a question, the words are 'tokenized' and this becomes the 'context'. The neural network then uses the context as input and simply tries to predict the next word (based on the huge amount of training data). Actually, the best ~10 predictions are returned and then one is chosen at random (this makes the responses sound less 'flat'). That word is added to the context, the next word is predicted again, and this loops until some number of words have been output (and there's some language syntax involved in knowing when to stop). The context is finite, so as it fills up, the oldest tokens are discarded...
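The generation loop the comment describes (tokenize, predict, sample from the top predictions, append, repeat) can be sketched as follows. This is a minimal illustration, not any real model's implementation: `score_next` is a hypothetical stand-in for a neural network's forward pass, and the names and defaults are assumptions.

```python
import random

def generate(prompt_tokens, score_next, max_tokens=50, top_k=10, stop_token="<eos>"):
    """Toy decoding loop: score candidate next tokens, keep the top_k,
    pick one at random (a crude stand-in for sampling from the model's
    probability distribution), append it, and repeat."""
    context = list(prompt_tokens)
    output = []
    for _ in range(max_tokens):
        # score_next(context) -> {token: score}; in a real model this is a
        # forward pass of the network over the (finite) context window.
        scores = score_next(context)
        top = sorted(scores, key=scores.get, reverse=True)[:top_k]
        token = random.choice(top)  # random pick among the best candidates
        if token == stop_token:     # syntax/stop handling, much simplified
            break
        output.append(token)
        context.append(token)       # a real model would also drop the oldest
                                    # tokens once the context window is full
    return output
```

Picking at random from the top few candidates instead of always taking the single best one is what the comment means by making the output less 'flat': the same prompt can yield different continuations.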
The fact that these models, like ChatGPT, can pass most college entrance exams surprised everyone, even the researchers. The current issue is that the training data includes essentially non-factual 'garbage' from social media, so these networks will occasionally output complete nonsense with full confidence.
What is happening now is that the players are training domain-specific large language models on factual data: math, physics, law, etc. The next round of these models will be very capable, and it's a horse race between Google, Microsoft (OpenAI), Stanford, and others that have serious talent and compute capability.
My complete skepticism about 'sentient' or 'conscious' AI is because the training data is bounded. These networks can do nothing more than mix and combine their training data to produce outputs. This means they can produce lots of 'new' text, audio, and images/video, but nothing that is not some combination of their training data. Prove me wrong. This doesn't mean it won't be extremely disruptive for a bunch of market segments: content creation, technical writing, legal expertise, etc.; medical diagnostics will likely be automated using these new models and will perform better than most humans.
I see AI as a tool. I use it in my work to generate software, solve math and physics problems, do my technical writing, etc. It's a real productivity booster. But like any great technology, it's a two-edged sword, and there will be a huge amount of fake information produced by people who will use it for things that do not help our societies...
Neural networks do well at generalizing, but when you ask them to extrapolate outside their training set, you often get garbage. These models have a huge amount of training information, but it's unlikely they will have the human equivalent of 'imagination' or 'consciousness', or be sentient.
It will be interesting to see what the domain-specific models can do in the next year or so. DeepMind has already solved two grand-challenge problems: the protein-folding problem and the magnetic-confinement control problem for nuclear fusion. But I doubt that the current AIs will invent new physics or mathematics. It takes very smart human intelligence to guide these models to success on complex problems.
One thing that's not discussed much in AI is what can be done when quantum computing is combined with AI. I think we'll see solutions to a number of unsolved problems in biology, chemistry, and other fields that will represent great breakthroughs useful to humans living on our planet.
- W. Kurt Dobson, CEO
Dobson Applied Technologies
Salt Lake City, UT
youtube
AI Moral Status
2023-08-20T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[{"id":"ytc_UgzlTTWrYwWDsjDvSyd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzC8-o0wj5RZxg5xnZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzxKE7OjmVBIv7IrNt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwndTu85BjhyOwAxnR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwG-WK3nVSVotbiVe54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
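The raw response above is a JSON array of per-comment codes. A downstream step might parse and sanity-check it along these lines; this is a sketch, not the actual pipeline, and the `ALLOWED` value sets are illustrative assumptions inferred from the responses shown on this page.

```python
import json

ALLOWED = {  # illustrative label sets, inferred from the sample responses
    "responsibility": {"none", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "unclear"},
}

def parse_codes(raw):
    """Parse a raw LLM coding response into {comment_id: {dimension: value}},
    falling back to 'unclear' for missing or unrecognized labels."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        coded[cid] = {dim: row.get(dim, "unclear") for dim in ALLOWED}
        for dim, allowed in ALLOWED.items():
            if coded[cid][dim] not in allowed:
                coded[cid][dim] = "unclear"  # unknown label -> 'unclear'
    return coded
```

Falling back to 'unclear' rather than raising keeps one malformed field from discarding the whole batch, which matches how the coding table above defaults to 'unclear'.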