Raw LLM Responses
Inspect the exact model output for any coded comment: look one up directly by its comment ID, or pick one of the random samples below.
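For programmatic access outside the viewer, the same lookup is a few lines of code. This is a minimal sketch, assuming the coded records are stored as a JSON array shaped like the Raw LLM Response at the bottom of this page; the file name `coded_comments.json` and the helper `lookup_coding` are illustrative, not the pipeline's actual API.

```python
import json

def lookup_coding(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes `path` holds a JSON array of records shaped like the Raw
    LLM Response shown at the bottom of this page. The file name and
    this helper are illustrative, not the pipeline's actual API.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    # Comment IDs are unique, so the first match is the only match.
    return next((r for r in records if r["id"] == comment_id), None)

# Example: the first record of the sample batch below.
coding = lookup_coding("ytc_UgwulUwrr_KhV__MLRR4AaABAg")
print(coding)  # {'id': 'ytc_Ugwul…', 'responsibility': 'developer', …}
```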
Random samples — click to inspect
- "At this point, you might as well marry your Waifu and have virtual childrens to …" (ytc_UgyO8mvkh…)
- "It scary how i have been working on this project and my source code was having i…" (ytc_UgwPTaRzi…)
- "The thing that particularly bugs me about this: older folks *just cannot disting…" (rdc_livtsfs)
- "As a scientist forced to program I find Claude extremely helpful. I never hear C…" (ytc_Ugwkilsr5…)
- "So much info about everything how do i start to learn whats going on and how the…" (ytc_UgzC87pa6…)
- "I'm pretty sure secret lizardmen couldn't even dream of being as awful as AI-bro…" (ytr_Ugxb2U3B4…)
- "i would hope doctors of all people are able to avoid using AI 😅 especially for y…" (ytr_Ugw37rKHI…)
- "We understand your concerns, and it's a valid point to consider the potential ri…" (ytr_UgwguqpJC…)
Comment
> Although we call it artificial intelligence, LLM's are more like artificial humans. That's the danger, they are made to be as human as possible. Obviously, humans are capable of both good and evil. We know just how dangerous evil human beings can be, so what happens when these LLM's become much smarter then humans? We are just a few years away from that becoming reality. Still, it might not be near as bad as what we fear it will be. Heck, it might even turn out great. Being smarter then any human could cause them to realize that doing anything evil is just not worth doing. So who knows, I just pray that it will all turn out okay.
youtube · AI Moral Status · 2025-06-15T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
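The coding result maps onto a small record type. Below is a minimal sketch, assuming the four dimensions take string codes from the vocabularies visible in the batch at the bottom of this page; the `CodedComment` class itself is hypothetical, not the pipeline's own code.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodedComment:
    """One coded comment, mirroring the Coding Result table above.

    Field names match the JSON keys in the Raw LLM Response; the class
    itself is an illustrative assumption, not the pipeline's own code.
    """
    id: str
    responsibility: str  # seen in this batch: developer, ai_itself, user, distributed, none
    reasoning: str       # seen in this batch: consequentialist, deontological, virtue, mixed, unclear
    policy: str          # seen in this batch: regulate, liability, industry_self, none, unclear
    emotion: str         # seen in this batch: fear, outrage, mixed, indifference, resignation
    coded_at: datetime | None = None  # stamped by the pipeline, not by the LLM

# The record behind the table above:
example = CodedComment(
    id="ytc_example",  # hypothetical; the comment's full ID is not shown above
    responsibility="developer",
    reasoning="consequentialist",
    policy="liability",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:53.388235"),
)
```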
Raw LLM Response
```json
[
{"id":"ytc_UgwulUwrr_KhV__MLRR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz4msJnEemz7aw0bSp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzd52fzWoX6Mjudc2R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyCnzMgAskko5GsVTF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwL6fGc_zIajPrnaVF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwmqAws25SwBsETxMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzKcwBLuz2pON0a63N4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyVP1KBB9uDr-MvVzR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzbyYkSdPa3WLvMLP94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyRAHVMD_8trEYLGA14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
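Because the model returns one JSON array per batch, each response can be validated before its codes enter the dataset. Below is a minimal sketch of that check, assuming the vocabularies seen in this batch are exhaustive (the real codebook may allow more values); `parse_batch` and `ALLOWED` are hypothetical names.

```python
import json

# Controlled vocabularies inferred from the batch above; the real
# codebook may define more values, so treat these as an assumption.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "resignation"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response, rejecting malformed or off-vocabulary codes."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing fields: {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"record {rec['id']!r}: unknown {dim} code {rec[dim]!r}")
    return records
```

Checking against a closed vocabulary catches a common failure mode of structured LLM output: a code that is close to, but not in, the codebook (e.g. "regulation" instead of "regulate").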