Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgxsclTO-…: "Ai is a tool that can be used for good or evil but my dission is everyon…"
- ytr_Ugyoa2EFU…: "@movullina ...which is subjective, especially in the creative world. At the risk…"
- ytc_Ugz079Izi…: "I asked Google Gemini if it can pass a turning test. It told me it that it canno…"
- ytc_UgxAhnsb3…: "All you have to do is ask it the AI whether or not both Zionism and Nazism use t…"
- ytc_Ugzwu05i1…: "As a Tax Litigation lawyer , AI could/ would be a valuable helpmate ( re legal r…"
- ytc_UgxmntLem…: ":scientist smokes blunt: So this here is Pepperidge Farms, she's an AI and I mad…"
- ytc_Ugyjj9DLH…: "\"godfather of AI\" lol, if this AI recipe has a godfather, it's the search engine…"
- ytc_UgxcG21i2…: "BS. There is so much AI haters, that if this thing worked, every popular model w…"
Comment
The more we learn about psychological mechanisms and how different people experience different and similar things, the more clear it is how little we actually know about how our or any other animal's brain work. He and the ai developers tried to recreate how neurons work according to him and based it on a projection of a lot of human experiences. That is by definition, a simulation. Due to the data and the people it is trained on and by it will word with the projections, be it words, images or the way they coded in things it will have emotional data in it. As far as I know we dont CLEARLY understand how emotions work and what we understand that they have a LOT of factors. Even if we only look at senses, it does not have a smell or touch and definitely not the way we do, and those two definitely play a big role in how we feel and experience things, what the ai is using are an accumulation of our telling of these with us not even understanding these fully. We also supress emotions or unable to supress emotions and that has effects on how we feel later, what bias does ai use when it simulates emotions? A smaller human can overpower a bigger human based on their knowledge and even by pure will, luck sometimes, why would an ai robot need emotion to assess its situation getting into a fight with a bigger robot, it just needs to understand if the other ai is capable of defeating it based on its "knowledge" and other stats.
It is part of his lifes work so I understand if he is emotional towards ai itself. But from our current understanding, there is no reason or use in anthropomorphizing artifical inteligence. It is an intricate large program/simulation, not a person or animal. It does not have flaws or intentions because it came to that conclusion or it thought something, it has them because we put that data in and programmed it that way. It is actively hurting our brains tho, why would you need to think while searching up things on google or something if you trust it or not, if ai just gives you an answer without telling you where it is from, because we talk like that too usually, but when we talk you have the other person there you can get to know if you can trust their advice and which ones. We sometimes exaggerate or add information, opinion to make stories more interesting or we don't remember clearly something, ai does that too, while it has the exact sources and it does not have to, not because it can but because it is simulating us. And it will build in these biases later as its new data and if some people just take it as granted and write about it it will just poison our knowledge too. Ai is not clever, nor experiences things, it is just simulating a superhuman brain library, without having guidelines or rules or emotions like we do.
youtube | AI Governance | 2025-10-10T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgygPeauPMe1QwwLvyx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw6EcZiE2dAmcm3wwV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyeoTGJhDL4yAgLnyx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyPTSkG5zOY9lrARNV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx9EKDu09jwcSm_Wj14AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgytLPwycgMmiuT3xCF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxOOBHfw519mJMUeil4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyAzTc_3Gzw7zd5WGN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxNP6RGmmP-gAmhRRJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw4KlCqGo1xDJ93F1N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}]
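A raw response like the one above is only usable downstream if it parses as JSON and every record uses a known dimension value. A minimal validation sketch, assuming Python and a value set inferred only from the samples shown here (the `ALLOWED` sets and the helper names are illustrative, not the project's actual schema); note the model's output above ends the array with `)` instead of `]`, which the parser tolerates:

```python
import json
import re

# Allowed values per coding dimension, inferred from the samples above.
# Assumption: this is not the project's full codebook, only values observed here.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "user", "distributed", "unclear"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "none", "ban", "regulate", "industry_self"},
    "emotion": {"approval", "outrage", "fear", "indifference", "unclear"},
}

def parse_raw_response(raw: str) -> list:
    """Parse a raw LLM coding response, tolerating a stray ')' where ']' belongs."""
    cleaned = raw.strip()
    if cleaned.endswith(")"):  # models occasionally close the array with ')'
        cleaned = cleaned[:-1] + "]"
    return json.loads(cleaned)

def validate(records: list) -> list:
    """Return human-readable problems; an empty list means the batch is clean."""
    problems = []
    for i, rec in enumerate(records):
        # Comment IDs in this dump look like ytc_… (comments) or ytr_… (replies).
        if not re.fullmatch(r"yt[cr]_[A-Za-z0-9_-]+", rec.get("id", "")):
            problems.append(f"record {i}: bad or missing id {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append(f"record {i}: {dim}={rec.get(dim)!r} not allowed")
    return problems
```

Checking values against a closed set like this catches the common failure mode where the model invents a new label (e.g. `"government"` for responsibility) instead of picking from the codebook.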