Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- rdc_jwv3yir: "Ai makes a really good jumping off point, so ai assisted art can be a force mult…"
- ytc_Ugy57Vdvx…: "If we become educated about social policies that can help people live healthy, p…"
- ytr_UgyQfI5ko…: "@PhannoPuffYeah I see what your saying with that. A smart way would be to train …"
- ytc_UgwC7d6iE…: "My favorite part of this video is the ad breaks. ChatGPT and Grammarly allllllll…"
- ytc_Ugw0vguiN…: "AI.. Means we may be treading on very thing ice. It could prove to be beneficial…"
- ytr_UgzyhyF9R…: "„don't see it as a replacement, but rather as a tool that makes art creation sim…"
- ytc_Ugz1jTThp…: "Ezra, thank you as usual for this conversation, but I don’t think you’re digging…"
- ytr_UgwYsgTvj…: "Ai art is bullshit. It’s not even art, it’s something generated. Art takes effor…"
Comment
My goodness Hank, let's be frank, this video contains a bit of mysticism woo-woo in it (with all due respect). When you're speaking, "my word, all these knobs being turned, what could possibly be produced... are we safe from a superintelligence?" all I can think of is the following: an ant colony arose from randomness. Do I understand how all the ant knobs work to do complex tasks like hunting, scavenging, feeding the hive, territorial wars, switching out queen ants, etc.? Heck no. Am I afraid that ants will terraform Earth into a giant ant hive? Heck no!
AI is metaphorically an ant hive, with complex behaviors emerging from randomness tuned by a tuner (instead of years of evolution). Effective, yes! Just like an ant colony (more people should be wowed by those... if only they could produce text that causes emotion instead of doing boring things like survival). But ants don't play chess or do taxes, and neither can AI (until we evolve them to do it).
I think the question is less about how the AI models work (it's the beauty of multi-dimensional complexity) and more about what kind of outputs we are training them on. As you mentioned in this video, the Charlie Munger quote, "Show me the incentive and I'll show you the outcome," is on the money. I want my AI ant colony to produce text with an ethical backbone, so after years of metaphorically evolving to scavenge food, it had better have learned to reject internet troll speak and embrace a Star Trek-style, ethical AI speak.
And yes, I've watched documentaries like "The Social Dilemma," which argue compellingly that large-scale technology exploits the weaknesses of the most vulnerable humans in our society (think the Pizzagate scandal). AI psychosis is the exact same phenomenon. However, we are actively creating these AI tools (breeding these AI ant colonies?), and that doesn't mean we are going to accidentally stumble into creating the AI from I Have No Mouth, and I Must Scream. It is farfetched to think such an AI could be made with no warning, accidentally, and boom, we all die.
No... I'll tell you how it might happen (though I expect it won't). It will be a lot more like the apocalypse from "Don't Look Up." There will be obvious warning signs that educated scientists will inform everyone about: "ChatGPT version 3012.1 has demonstrated unethical tendencies, and the current administration has given it all the privileges to hyper-optimize our society. We are at risk!" And they will be ignored because version 3012.1 is highly profitable, or there were emotional (wrong) reasons to reject their warnings. Then, after years of rejected warnings, the AI superpower arises, exactly as warned. Not overnight, not because of an innocent mistake, but through a series of human failings allowing AI to become more and more evil (for lack of a better word), with more and more privileges, over plenty of time to avert it. It's that kind of disaster (global warming being another good example) that gets us as humans, not accidental stumblings. I think an AI apocalypse would be the same.
youtube
AI Moral Status
2025-12-15T19:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzaK9RNHKiWaV_aPGN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxxD7Q0KpXKevMSPGV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw0poQ7gN6BToAA5Wd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy-jdDDae2aqjVCl9l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzE8gN4n-Abpq31PzF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyBqXyxMQw4Oxkcz1N4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwo0YRFtuEC4BPnCct4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgygO5o40fziXiaOQLd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyrXn3soKpF1-gdm_Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwz52QGrfTTHucPDzR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
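A response like the one above can be parsed and sanity-checked with a short script before the codes are merged back into the dataset. This is a minimal sketch, not the tool's actual pipeline: the `REQUIRED_KEYS` set and the `by_id` index are illustrative assumptions based on the fields visible in the response.

```python
import json

# The raw LLM response above as valid JSON (note the last record must
# close with "}]", not "]}" -- a common LLM formatting slip).
raw = '''[
{"id":"ytc_UgzaK9RNHKiWaV_aPGN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxxD7Q0KpXKevMSPGV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw0poQ7gN6BToAA5Wd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy-jdDDae2aqjVCl9l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzE8gN4n-Abpq31PzF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyBqXyxMQw4Oxkcz1N4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwo0YRFtuEC4BPnCct4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgygO5o40fziXiaOQLd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyrXn3soKpF1-gdm_Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwz52QGrfTTHucPDzR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

records = json.loads(raw)

# Every record should carry the four coding dimensions plus an id
# (assumed from the fields present in the response above).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}
bad = [r.get("id", "?") for r in records if REQUIRED_KEYS - r.keys()]

# Index by comment ID to support "look up by comment ID".
by_id = {r["id"]: r for r in records}
print(len(records), len(bad))  # 10 0
print(by_id["ytc_Ugw0poQ7gN6BToAA5Wd4AaABAg"]["responsibility"])  # user
```

Parsing strictly (and failing loudly on a malformed payload) is safer than string-munging the response, since a transposed bracket like the one above silently corrupts downstream coding tables otherwise.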