Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
- `ytc_Ugxp1qAgh…`: i hate the concept of "talent". Imagine a scenario, where you are competing aga…
- `ytc_UgzYuVn7w…`: This is literally proving the argument of AI Art being useful since you can use …
- `ytc_UgxDyofCh…`: Poignant video. IMHO the real issue is shared responsibility between a human dr…
- `ytc_UgyiT07O-…`: It may change soon, who knows, but Ai is the fitness equivalent of supplements a…
- `ytc_Ugw3AzLhA…`: Artist and AI shouldn't even be in the same sentence. Maybe if we start from the…
- `ytc_Ugz3_YDHh…`: OpenAI is burning $Billions with no path to profitability. So yes I predict 100…
- `rdc_jg75s2w`: For now, try some alternative models, Vicuna or OpenAssistant, ... What you are…
- `ytc_Ugx1mBjUn…`: I wouldn’t worry about ai too much given that Israel is bombing Iran and Ukrain…
Comment
Let's be clear: this is reactionary hype. There's no analytical thinking behind a Large Language Model. It just has a LOT of answers, some right, some wrong, to a particular question. These answers were developed by humans. ChatGPT (and other LLMs) cannot "think up something new"; they just take what we give them. They're like "Clever Hans", the horse that took its cues from its handler. Sure, it looks impressive, until you understand how it works. It's a more powerful search engine. The problem is it's also waaay overconfident in its answers.

That said, you don't want to create a SKYNET or Strangelove situation; it just makes no sense. There's always a kill switch. Re the "I hate humans": guess what's on the internet? The script from Terminator. When they say "Large" they mean "Massive"! And there are scripts for "Mr. Robot" and summaries of plenty of other doomsday scripts. Again, it's just Clever Hans, telling you what you want to hear. Eventually you get into the hallucinations, like the Sydney interaction. Remember, there are detective novels, harlequin romances, UFO literature, etc. out there.

The one thing LLMs do show you is that the knowledge base you feed in is critical. Garbage in, garbage out. BUT that's not AI; that's just a stimulus-response loop. The only AI-ish thing you see here is the neural network: feed in a bunch of inputs and, once you reach a particular point, you can depend on it to react in a particular way. Again, "intelligence" is not the right term here (although it's frequently used); it's just an elaborate switch. The drone story is misleading (search "Guardian air force drone killed operator"); it was simply a training exercise. If you tell an autonomous car to drive to a place ignoring the roads, you'll get similar results. The input training set is important. And you don't put low-level switchgear in charge of launching nuclear war.

You've got way more to worry about from incompetent humans than rogue AIs, and I'll take a well-trained switching system over an incompetent human any day. Real AI is still hundreds if not thousands of years away. Sorry, SKYNET fans: these are just clever horses. (For anyone interested, the book "The Adolescence of P-1" is still available on Amazon, way more plausible than the 'Echo' story, but still just a story; for a more benign view of AI, try the "Culture" series by Iain M. Banks.)
Platform: youtube · Topic: AI Governance · Posted: 2023-07-07T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz3mWuCH4-scCmsTPt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxUNuA0_3rPUXj4z9F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugzf-v1RLJeP1jQoFk94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyjwmiNVxCdSNyWPx94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwmwgfLO-EMaF3b-9R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzDKVjVZdMgnSlvkI14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy7Zp0J8-yWvCRcQZl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyRyTPK0k_F91BQO8l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugz4a4UPqD0Gq1BSi5x4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwuK5nA_7GEgBBOqct4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
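A raw response like the one above can be parsed and indexed to support the "look up by comment ID" view. A minimal Python sketch, assuming the response is always a JSON array whose objects carry an `id` plus the four coded dimensions shown in the table; the helper name `index_by_id` and the two-record sample are illustrative, not part of the tool:

```python
import json

# A shortened sample of the raw LLM response: a JSON array of coded comments.
# (Two records copied from the full response above, for illustration.)
RAW_RESPONSE = """
[
  {"id":"ytc_Ugz3mWuCH4-scCmsTPt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyRyTPK0k_F91BQO8l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]
"""

# The four coding dimensions from the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse the response and build a comment-ID -> coding mapping,
    rejecting records that are missing any dimension."""
    coded = {}
    for rec in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing {missing}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

coded = index_by_id(RAW_RESPONSE)
print(coded["ytc_UgyRyTPK0k_F91BQO8l4AaABAg"]["policy"])  # industry_self
```

Validating every record before indexing matters here because LLM output is not guaranteed to be well-formed: a single malformed object should fail loudly rather than silently produce an incomplete coding table.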