Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- It worries me that so many people use LLM tools like ChatGPT without understandi… (ytc_UgyTGMM29…)
- I blame ai art for my gambling addiction. I am doing penance by learning how to … (ytc_Ugz4lhCgN…)
- Its all nonsense, AI can only operate as long as there is power provided, after … (ytc_UgzlH2dJP…)
- Most of these LLMs were trained using data scraped from the internet over the la… (ytc_UgzL2M6wE…)
- I mean, there's also the fact that AIs aren't being constantly run, and that muc… (ytc_UgygsnL6V…)
- "Born with skill" what skill? Learning to achieve drawing? Huh?? Bruh it's the m… (ytc_Ugy7E_t-b…)
- Wait til you see the anti christ. This AI is demonic and opposing Gods creation… (ytc_Ugz5XTLmP…)
- How do you think where written before AI? Yeah they took your story and turned i… (ytc_UgzcUmxxJ…)
Comment
The problem with artificial intelligence is that all its knowledge is based on the knowledge that humanity gave it. Imagine a universe where humanity has decided that 5 + 1 = 10, and the entire universe is limited to the galaxy. Let's imagine, in this fictional universe, humanity provided all this information and much other possibly correct and incorrect information to artificial intelligence. What is the probability that artificial intelligence, in the totality of all the information it has received, will make the wrong decision? Let's fast forward to the real world. Why did humanity decide that everything we know is actually the right answer? Artificial intelligence is doomed to make mistakes and will not be able to come up with anything on its own, since it is based on the correct and incorrect information that humanity has provided it with. Imagine a child (as an example of artificial intelligence), who spends his entire life, and lives in a bunker that is filled with right and wrong answers (as an example of our world). Now the child has grown up and everyone who taught him all these years gives him a task so that he has to solve their problem outside the bunker. What is the probability that a child will be successful in solving a question if he is trained on correct and incorrect information? This is the whole problem with artificial intelligence. If artificial intelligence is not capable of finding answers on its own, based on personal experiments, but is only based on the information that humanity has provided to it, then there will be no success.
youtube · AI Governance · 2024-02-24T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyVJEhOsSLwCoj8Luh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGu5WehEhELR51FQ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwus2KddX8oM1GU4op4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyyQ_bD9QBJe4fSUSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZALpmaznIwfIAtuB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzoiJ686Fti3L-nSxJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxGKFuDvKfgvF-TQPh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxwAj1SDsfPoTSjxTx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw-R7u2DdzHAIpTSil4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx-8lfizm0lyNrGuKh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
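Since the raw model output is a JSON array keyed by comment ID, looking up the coding for a specific comment is a simple parse-and-filter. The sketch below is a minimal, hedged example: the two records are copied from the response above, and `lookup_coding` is a hypothetical helper name, not part of the tool shown here.

```python
import json

# Two records copied verbatim from the raw LLM response above;
# a real lookup would load the full array.
raw_response = """
[
  {"id": "ytc_UgyVJEhOsSLwCoj8Luh4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw-R7u2DdzHAIpTSil4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding record for one comment ID, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugw-R7u2DdzHAIpTSil4AaABAg")
print(coding["responsibility"], coding["emotion"])  # developer mixed
```

Matching the second record back to the result table above (responsibility: developer, reasoning: consequentialist, emotion: mixed) is exactly what the "inspect the exact model output" view does for each coded comment.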