Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I don't understand why the jobs of the little man have to be replaced. Can't the…
ytc_UgyWwRVQ4…
😂😂😂 it's all fun and games for the wealthy and for bs AI until a massive EMP or …
ytc_Ugy6ng470…
There is no problem with AI art in the first place, the problem is the one who u…
ytc_Ugz-IcCkR…
If we were to live in a world of ai it may be slightly beneficial at first but a…
ytc_UgyAtMoXn…
I get where you're coming from with all the ChatGPT and AI talk on this subreddi…
rdc_jabops6
The villen in Matrix and Terminator films is the AI that becomes AGI and then d…
ytc_UgyZ78eVH…
One thing to remember, the people on the side of AI art/ AI will replace artists…
ytc_UgzJjfMgY…
My ai chats...don't touch them if your my friend....I have some really nasty stu…
ytc_UgySiu84z…
Comment
A few thoughts:
1. During the initial phases of a growth process, everything always looks exponential. And we always assume it is exponential. But then, ALWAYS, it turns out not to be exponential but logistic and flattens out. During the initial stages, you cannot predict when the turning point will be or what the final level will look like, but that is what ALWAYS happens because we live in a world of limitations, so true exponentials don't exist.
2. When has a technological development been possible for humans and humans said, "No, we're not going to do that" and then actually abandoned the attempt? Even with nuclear weapons, we developed them and used them and THEN said hey let's stop, and even then we still keep trying to develop them more.
3. You mentioned at the beginning about paying people to be bad at something so they can learn how to be good at it. How can we "solve" AI unless we are prepared to have bad AI first? I don't see a path to getting good AI without releasing the bad AI out of the lab so we can see how it interacts with the real world and then trying to fix it. I think if we took a closer look at history, we'd find lots of examples of Unintended Consequences. Like how many people died in auto accidents before we decided seatbelts would be a good idea and then actually got the political will to enforce that? Should we have, if we had foresight, said "Let's not develop cars until we can make sure they don't crash and if they do the people inside will be safe and if they aren't we can get help to them as quickly as possible"? Would such a thing have been possible, or would it have hindered the development of automobiles because the whole problem was too big to solve for one group of people (with one set of funding)?
youtube · AI Moral Status · 2025-10-30T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugwii2xL_wLw9X4m5sB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwe9A9OhO5R7E63gnF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxMXjeBo75O87r3vyV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwqrJRQK1baOhiKY994AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxCF-XMAByCkSJHexp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyekbg08B8sdfUGvkR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzaG_vHof0oO2dScVZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgweU0HcOoZtKj0W0094AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyu95NI4Me3QQ5E1cl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxJnj2av-p6Wwq3Owh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
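
For reference, a minimal sketch of how a raw response like the one above might be parsed into a per-comment lookup table before being stored as a coding result. The allowed category sets below are inferred only from the values visible on this page and may be incomplete relative to the actual codebook.

```python
import json

# Allowed values per dimension, inferred from the codes shown on this page;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects) into a
    dict keyed by comment ID, dropping entries with out-of-codebook values."""
    by_id = {}
    for entry in json.loads(raw):
        comment_id = entry.get("id")
        codes = {dim: entry.get(dim) for dim in ALLOWED}
        if comment_id and all(codes[d] in ALLOWED[d] for d in ALLOWED):
            by_id[comment_id] = codes
    return by_id

# Example using one entry from the response above.
raw = ('[{"id":"ytc_UgxMXjeBo75O87r3vyV4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"indifference"}]')
codings = parse_codings(raw)
print(codings["ytc_UgxMXjeBo75O87r3vyV4AaABAg"]["reasoning"])  # consequentialist
```

Validating against the category sets at parse time makes malformed or hallucinated labels visible immediately, rather than surfacing later as an "unclear" table cell.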