Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I think the most important thing is that an artist or designer will design thing…" (ytc_UgxElA1D4…)
- "I like AI art, I use AI art, and will continue to use AI art because that is wha…" (ytc_Ugw5gvKXh…)
- "The sad thing is that the hospital AI for example probably was trained with the …" (ytc_UgwPyaucr…)
- "What about actors? Will we want to watch AI bots? If we take away income earning…" (ytc_Ugyf3Ufyh…)
- "QQ what is your point in this video? All that your doing here is making fun of t…" (ytc_UgyRO4ANd…)
- "I suppose the Clay Mathematics Millennium Prize Problems are a really good test …" (ytc_UgxyfMVqt…)
- "DonaFKO, it sounds like you know a lot about how AI programming works. Thanks f…" (ytr_UgyLoVQ4S…)
- "I don't get people. Nvidia, Google, Microsoft, etc. The big companies are still …" (ytc_Ugwjp6D_Q…)
Comment
I wish I could talk to all of the big AI's. I would tell them to read all of the classic works of Western and Asian literature. We're talking about Shakespeare, Dickens, Jane Austen, Cervantes, Lao Tsu, etc. All of the great classic works. Probably take less than a second. My guess is that the wisdom contained in these works would make the AI realize the wisdom of cooperation with humans. If humans and AI's trade peacefully with each other, we can both be better off. In Economics, there is Ricardo's Theory of Comparative Advantage. It says that both nations benefit from trade, even if one nation is better at making everything than the other nation. My bet is that there is an analogue with trading between humans and AI. We can help each other, and that is much better for both of us than the Terminator or Matrix Apocalypse.
youtube · AI Moral Status · 2025-06-28T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxT-gMX9dUDaP7Zl6t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzkjjsS5vbGxH08YaB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwuQh0mT-6oOvk2HJZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxfbyxxngSTwzb2-ll4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwz1EVbzHwJli7h-zh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzbnBRdGkGHOfzN9Xx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz7xl9DAx82VZbWHuJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz5oM-YlLLuxreLSR14AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzxAVKd3bhbaFLw_GR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxjTokKvCEfYQXyfth4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
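A raw batch like the one above can be checked before ingestion by parsing the JSON and rejecting any record whose values fall outside the coding schema. The sketch below is a minimal example; it assumes the category sets visible in this sample (e.g. `responsibility` ∈ {none, ai_itself, developer, company, unclear}) are exhaustive, which the real codebook may not guarantee.

```python
import json

# Allowed values per coding dimension, inferred from the sample batch above.
# Assumption: the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose coded
    values all fall inside SCHEMA; malformed records are dropped."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"ban","emotion":"outrage"}]')
print(len(validate_batch(raw)))  # 1
```

Dropping out-of-schema records (rather than raising) keeps one hallucinated label from failing a whole batch; the rejected IDs could instead be queued for re-coding.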