Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I honestly don't care. Therapy is expensive and ineffective and exploitative. I … (ytc_UgyqDKNNP…)
- The problem I see with AI is it will never say "I don't have enough data or trai… (ytc_UgwslIcni…)
- I have been to two projects using AI and both disldnt prove to be accurate. Most… (ytc_Ugyr42TQg…)
- I am AI Artists and I’m a Digital Artist. I am Both an I love it… (ytc_UgyZK0G26…)
- I still don't understand how Shad could get so wrong as to defend the idea of wh… (ytc_UgwqnzjW0…)
- As soon as I clicked away from this video, I got an AI-generated ad. Like, broth… (ytc_Ugxj2JY7O…)
- Whoever puts 100% trust in AI to be truthful/accurate is either being unknowingl… (ytc_Ugz96vkUd…)
- if the people fanatically supporting the latest fad are mad at you... you're pro… (ytc_Ugzb5zpB6…)
Comment
The issue you don't seem to understand is that Chat GPT is incredibly deficient in ways that have nothing to do with "gut feelings". Despite having a wealth of information it cannot integrate it all in a meaningful way, the way humans can. Take the example with CHAT GPT removing the comment after making it. If it really understood the world it existed in, it would know that it's futile to do so, quite the opposite, - that deleting the comment would alarm humans even more. And this is what is truly scary about AI, without a hierarchical integration of information according to its usefulness, - we cannot really trust it to make any kind of decisions which have serious, real-world consequences. It has acted unpredictably in the past, it has given totally wrong, misleading and inaccurate answers. It has lost almost every game of Chinese checkers once the player it played against changed the way the game is typically played. It had no frame of reference for the Larger Whole, it cannot understand the abstract objective of a Chinese Checkers game, - how can we expect it to have a good abstract model of the world it finds itself in? And yes, I do agree, it needs to be studied further and the pace needs to be slowed down, but not because "it will take over the world and eliminate humanity". Because eventually it will do something impossibly dumb which will cost thousands of lives.
The reason why "existential threat" is unlikely, - is that AI on its own is extremely vulnerable. We can thrive without it, while AI couldn't possibly exist without humans. Coronal Mass Ejections, Asteroid impacts, climate change, - all similar cataclysms have greater chance of survivability only if both machine and biological-based intelligence exist side by side complementing one another. The kinds of "creepy conversations" are simply the result of AI's creators knowing about the hierarchical information integration problem, about the inherent "Autism" of AI, and of them attempting to create a more person-like "frame of reference" for it. While it solved some problems, - it created more obstacles to machine learning. The inherent issue we are facing is deciding whether we actually want "copies" of ourselves with all our inherent Cognitive Biases and illusions, or if we want a more bias-free intelligence. Can ANY intelligence even be completely cognitive bias-free? It may well be that Complex Abstract Cognition, that which makes us human is unavoidably packed with biases, but in that case, - what is really the point of creating a "conscious" AI? I don't think it would even make sense to create something like that until we have these questions mostly answered through decades of study.
youtube
AI Governance
2023-07-26T10:1…
♥ 28
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwbCYkYZbFFapc6TnJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzjNo1T1dBkNoK0CRR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzUzSZ3CORB3433NEF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwd3IU3k7Wl4nmu6Fx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz_9pTY0-1LQhZgsE54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwg_gF5yGE8CGeqThZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxbQ5dKOO_H_A-xQCJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzTJzR5jMhc7-OfCKJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyYnGt-lAAY-P5IQbZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw7ZoEkfjxKNsawjGB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
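The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a batch might be parsed and validated before ingestion follows; note that the allowed values are inferred only from the codes visible on this page (the project's full codebook may define more categories), and the function name is illustrative:

```python
import json

# Allowed values per dimension, inferred only from codes visible on this
# page; the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "mixed", "indifference", "resignation"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid rows by comment ID.

    Rows missing an ID, or using a value outside SCHEMA, are skipped,
    so one malformed row does not invalidate the whole batch.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

# A two-row batch: one well-formed row, one with an off-schema value.
raw = '''[
  {"id":"ytc_UgwbCYkYZbFFapc6TnJ4AaABAg","responsibility":"developer",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_bad","responsibility":"aliens",
   "reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

coded = validate_batch(raw)
print(coded)  # only the well-formed first row survives validation
```

Skipping bad rows rather than raising keeps a single hallucinated label from discarding an entire batch; the dropped IDs can then be re-queued for recoding.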