Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Tested your theory that AI couldn't make anything new by opening ChatGPT for one…
ytc_UgzZHt9Dj…
And this is why it's not smart to make vital national defense policies into poli…
rdc_dkznc6b
If this mans intelligence is going to be matched by that of AI humanity has noth…
ytc_Ugweg6mYL…
Chatgpt is a type prediction language model. As much as we think AI is advanced …
ytr_UgyeWjqtb…
Wow, those clips showing the dystopian future that would be forced upon cities b…
ytr_Ugwu7Bamc…
In a way yes, however AI is only as “creative” as the input provided. Imagine if…
ytr_Ugyh5NxMi…
If you find these trends concerning and you want to make a difference, you can g…
ytc_Ugz3uAY5b…
@dirremoire I mean I’m just trying to have compassion. I’m not artist so I don’t…
ytr_Ugzt1X-ul…
Comment
@inmundo6927 I am not saying that language is not important, I am saying that there is a need for much more than that. You can call all those aspects "language" if you please, like calling motivation the "language of hormones", however there is no such "language of hormones" in a transformer architecture. As you say, if an AI lacks memory and logic, it does not work at all as an intelligence. All a transformer does is a lookup into a huge statistical database, taking the last 2000 words as the query and spitting out the most probable next word. Then it takes the new 2000 words (remember, we just added a word at the end) and repeats the process. When someone asks a question, those words are pinned to the end of the 2000 words and the database query is repeated again to generate the first word of the response. That is all it does. There is a lot of very clever stuff done in the generation of that statistical database, but once it is built, it is fixed and never changes again. However, remember, that database is created from a huge corpus of text from the Internet, text that was written by humans with true intelligence, emotion and reasoning. That is why it seems so smart. The statistical database is cleverly designed too; it doesn't just repeat verbatim sentences from the Internet (though that sometimes happens). Instead it calculates the probability for each word to follow the 2000 previous words that have been said, given everything that has been fed into it from the Internet. There is no memory, there is no reasoning (apart from being lucky and finding the answer in the database).
youtube
AI Moral Status
2022-08-19T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_UgzyUC9vDRwoGUF8PK94AaABAg.9elTPegi30J9euprDXMrKp","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzyUC9vDRwoGUF8PK94AaABAg.9elTPegi30J9eusz6CA5ux","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzyUC9vDRwoGUF8PK94AaABAg.9elTPegi30J9euzAlBcmzF","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxST4abrGwJbqD0KpV4AaABAg.9eRwXLFnbHn9eYDX0CVUGq","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgxST4abrGwJbqD0KpV4AaABAg.9eRwXLFnbHn9ebyiWsz5Am","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxST4abrGwJbqD0KpV4AaABAg.9eRwXLFnbHn9fMscFfXm2o","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgyKRQoBLt6iYuTASJV4AaABAg.9eRiwSZYYdx9eTU73rs871","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgwbwxlL2z8rLzmeS_V4AaABAg.9eR0Ak9_VIe9eR4hq1EPv1","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzIWI1KIzqTzQKayWV4AaABAg.9eQHRBDxYCV9eR2gb5nFPs","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxZdrKOZjkSgKUBbZB4AaABAg.9ePuIOTgsUA9eQ9uhXFtQN","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
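The raw response above is a JSON array with one object per coded comment, carrying the same four dimensions shown in the result table. A minimal sketch of how such a response might be parsed and sanity-checked before use — the field names come from the response above, while the example IDs and the helper name `parse_coding` are hypothetical:

```python
import json

# Stand-in for a raw LLM response: a JSON array of coded comments.
# The four coding dimensions match the result table above; the IDs
# here are made-up placeholders.
raw = """[
  {"id": "ytr_example_a", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_example_b", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Every record must carry the comment ID plus all four dimensions.
REQUIRED = ("id", "responsibility", "reasoning", "policy", "emotion")

def parse_coding(raw_text):
    """Parse a raw response and keep only well-formed records."""
    records = json.loads(raw_text)
    return [r for r in records if all(k in r for k in REQUIRED)]

codes = parse_coding(raw)
print(len(codes))  # 2
```

Filtering on required keys (rather than failing outright) lets a batch survive the occasional malformed object in the model's output; dropped records can then be re-queried by comment ID.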