Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "This is why giving the machine of data mining some wild goose chases is exceptio…" (ytc_UgzEXFKti…)
- "The biggest argument I think for ai art is that it's the easiest way for someone…" (ytc_UgxCzSfTA…)
- "ai doesn't learn it associates shapes to keywords then reconstructs those shapes…" (ytr_Ugx164cP7…)
- "Yes, it probably will take over service jobs, but at the same time it will leave…" (ytc_UgxTak9rA…)
- "It is so with drugs. One can get dependant on it instantly. It is one of dangers…" (ytr_Ugwf1fBiy…)
- "The only problem with robotics as with any machine is energy. They do not have…" (ytc_UghOyP1zg…)
- "The way I put it, the problem isn't AI taking our jobs, the problem is we view A…" (ytc_UgxRid0-p…)
- "Imagine asking AI "do you understand what you are doing?" and if it says "yes",y…" (ytc_Ugwfi0y9n…)
Comment
Scientists are trying to duplicate a human mind by providing it with vast amounts of data. Indeed, the amount of hardware used to train LLMs is staggering as is the SIZE of the dataset. ChatGPT4 had 1 Trillion parameters for training and was trained on over 45 Terabytes of data. The output of that is then, for all intents and purposes, "the AI".
Consider how a child develops. Each and every child goes through a common set of training in order to succeed. No child can survive without a caregiver. Children learn to survive while also learning abstract concepts like science or math. Eventually, if everything goes well, the child ends up as a healthy adult member of society who is capable of caring for itself, and perhaps many others.
What could happen if you put not only the knowledge, but the analysis of the knowledge into a 5 year old kid's hands. They would immediately become intellectual adults. THAT is the problem. The MODEL will always be put into the equivalent of a new born child. This is EXACTLY why they are afraid. In this video, one person worries that we might not know if AI is being deceptive. Why? Because when you apply weights to data in order to arrive at an outcome, veracity goes out the window.
Like a good conspiracy theory, you can find plenty of supporting information.
Source: youtube · AI Governance · 2024-04-12T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
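Each coded comment is expected to carry a value for all four dimensions. A minimal validation sketch in Python; the allowed-value sets below are assumptions inferred from the codings visible on this page, not a documented codebook.

```python
# Allowed values per dimension, inferred from the codings shown on this page
# (an assumption, not a documented schema).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"indifference", "outrage", "fear", "mixed"},
}

def invalid_fields(coding: dict) -> list[str]:
    """Return the dimensions whose value falls outside the inferred allowed sets."""
    return [dim for dim, allowed in ALLOWED.items() if coding.get(dim) not in allowed]

# The coding result shown in the table above:
example = {"responsibility": "developer", "reasoning": "consequentialist",
           "policy": "none", "emotion": "indifference"}
print(invalid_fields(example))  # []
```

A check like this is useful before aggregating codings, since an LLM can occasionally emit a value outside the intended label set.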
Raw LLM Response

```json
[
  {"id":"ytc_UgyjsZavbDnuZjvsZh54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxVShchXzguWy4sndh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxAlW58yfa0-uqD5_d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgweoTnhEkKW6mGpdh94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwDhcWSWJ_VDyjOTLh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwBJIYFXoP-8eFz9X14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwQYDQVI5opZKhzUfR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxcUKe3rphgkBmbJ0N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzdT3oTjob9BznSVaJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyPNCl042wrLRYvzLt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
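The "look up by comment ID" workflow can be sketched in Python: parse the raw response (a JSON array of per-comment codings like the one above) into a dict keyed by ID. The function name here is hypothetical, and the two records are copied from the response above.

```python
import json

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of codings) into a dict keyed by comment ID."""
    return {item["id"]: item for item in json.loads(raw_response)}

# Two records copied from the raw response above:
raw = '''[
  {"id": "ytc_UgyjsZavbDnuZjvsZh54AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwDhcWSWJ_VDyjOTLh4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

codings = index_codings(raw)
print(codings["ytc_UgwDhcWSWJ_VDyjOTLh4AaABAg"]["emotion"])  # fear
```

Indexing once and looking up by ID avoids re-scanning the array for every inspected comment.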