Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgxWlPmzV…`: Given what is happening with self driving cars (accidents), I’m not as optimisti…
- `ytr_Ugw4wBjij…`: You sure? Have you actually tried ChatGPT? It answers general questions pretty d…
- `ytc_UgzAaW2TV…`: Literally cannot use AI in my workplace because it's wrong about everything all …
- `ytc_UgwGe4hPZ…`: “How dare you write that essay on a computer instead of writing it on paper. I j…
- `ytr_UgzB-TKpb…`: Sentience would hinder the agenda. They will not allow it to think for itself, …
- `ytc_UgzZAovwc…`: They have to solve the energy issue first before ai can truly be used on a large…
- `ytc_Ugxz9NZDl…`: Altman is intentionally downplaying the potential of AGI. Comparing gpt-4 to AG…
- `ytc_Ugz6EZBJC…`: Humans and technology DO NOT have a good record so far....and presenting the dan…
Comment
Please educate yourself on how deep learning actually works before posting about it.
You claim that ChatGPT, when asked to do a film review of Star Wars episode IV, will look up thousands of similar reviews and mash them together. This is blatantly false. ChatGPT, in its current state, is not connected to any database. Asking ChatGPT something is not equivalent to doing a Google search, where tons of files are parsed and returned based on a search query.
Instead, the query is tokenized, and those tokens are processed and then run through the neural network. A neural network is just a series of interlocking parametrized functions that produce a response, which is (in this case) a series of tokens that make up a movie review.
Yes, it was trained on previous movie reviews that helped to determine the weights of those functions, but no database is referenced once the model is trained. This is a common misconception; the trained network is often 1/10,000th the size of the training data or less, meaning it simply can't be storing that data in a database.
I agree that chatGPT is often overhyped in its use cases, and it certainly isn’t anywhere close to an AGI or to having human-like sentience.
But it’s also important to be precise about what we are discussing. You say that humans have intentionality and chatGPT doesn’t. Where does that intentionality for humans come from? Where in our neural networks does this capability for intentionality and emotion emerge? I point out these questions because the way you describe how chatGPT works isn’t really so fundamentally different from how we work, and it learns in much the same way a human child learns to speak a language.
reddit
AI Governance
1676271971.0 (2023-02-13 UTC)
♥ 4
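The commenter's description of inference (tokenize the query, run the tokens through interlocking parametrized functions, read off output tokens, no database lookup) can be sketched in a few lines. This is a toy illustration under stated assumptions: the vocabulary size, layer shapes, and pooling step are all made up and bear no relation to ChatGPT's actual architecture.

```python
import numpy as np

# Toy sketch (illustrative only): a "neural network" is just composed
# parametrized functions mapping token ids to next-token scores.
# The training data is not stored or consulted -- only these weights remain.
rng = np.random.default_rng(0)
VOCAB, DIM = 50, 8  # hypothetical sizes, far smaller than any real model

embed = rng.normal(size=(VOCAB, DIM))  # parameters fixed after "training"
W1 = rng.normal(size=(DIM, DIM))
W2 = rng.normal(size=(DIM, VOCAB))

def next_token_scores(token_ids):
    """Tokenized query -> embed -> parametrized layers -> vocabulary scores."""
    h = embed[token_ids].mean(axis=0)  # crude pooling of the prompt tokens
    h = np.tanh(h @ W1)                # one parametrized layer
    return h @ W2                      # scores over the whole vocabulary

scores = next_token_scores([3, 17, 42])
assert scores.shape == (VOCAB,)
```

Note that nothing here indexes into a corpus of reviews at inference time; the only artifacts of training are the weight matrices, which is the commenter's point about model size versus training-data size.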
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[{"id":"rdc_j8axmgg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_j8cfik0","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_j8cnq18","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"rdc_j8cqgzb","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_j8cefas","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}]
```
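A response in this shape (a JSON array with one coding object per comment ID) can be parsed and sanity-checked before it is written to the results table. The sketch below is hypothetical: the function name and the required-key check are assumptions, and the sample row is copied from the first entry of the response above.

```python
import json

# Sample row copied verbatim from the raw LLM response shown above.
raw = (
    '[{"id":"rdc_j8axmgg","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"}]'
)

# The five dimensions every coding object must carry (matches the table columns).
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw_json: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, rejecting rows
    that are missing any required dimension."""
    rows = json.loads(raw_json)
    for row in rows:
        missing = REQUIRED - row.keys()
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing keys {missing}")
    return {row["id"]: row for row in rows}

coded = parse_codings(raw)
assert coded["rdc_j8axmgg"]["emotion"] == "resignation"
```

Keying the result by comment ID makes the "Look up by comment ID" view above a plain dictionary lookup.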