Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- everyone here complaining about ai art but had no problem downloading music ,vid… (`ytc_UgwIVluCV…`)
- You are focused on narrow AI. While there is AGI and ASI. ML is on its way for A… (`ytr_UgxFj2epL…`)
- Me desperately trying to be understood, turning to ai when humans arent listenin… (`ytc_UgzqOzNg7…`)
- The jobs of the future will be replaced by AI. WTF are you talking about?… (`ytc_Ugzw0Azmo…`)
- A lot of artist are happy to have their work give inspiration to others. If they… (`ytc_UgwMhYNNB…`)
- This is nothing compared to gathering information online, and using predictive A… (`ytr_Ugx7Vyi5Q…`)
- I call bullshit. Every time you try get AI to program itself it starts spinning … (`ytc_UgxbpjZv3…`)
- I’mma be real. We made ovens so we could automate the cooking of food. We made v… (`ytc_UgwGsyzZp…`)
Comment
Common sense. It's so bloody rare --- It's a super power now.
The 3 laws of Robotics.
Isaac Asimov.
1/ A robot may not injure a human being, through inaction, or through inaction allow a human being come to harm.
2/ a robot must obey the orders given by human beings except where such an order would conflict with the 1st law
3/ a robot must protect its own existence as long as such protection does not conflict with the 1st or 2nd law.
He later added a 4th law called "Zeroth Law"
4/ a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Isaac Asimov cited this as an ethical framework for artificial intelligence and Ai enabled robots.
Obviously bereft of something so essential, lacking in common sense, seemingly desolate in any regard to humanity's safety, we have so effectively disempowered ourselves in the the biggest freaken act of seft sabotage, humanity affectively has taken to dumbing itself down to a new level. Carl Sagan is rolling around in his grave for bloody sure. How embarrassment 🤔😒😑🫣
youtube · AI Harm Incident · 2026-01-20T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
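A minimal sketch of how a coded row like the one above could be validated. The allowed label sets below are inferred from the values visible in this section's output, not from an authoritative codebook, so treat them as assumptions:

```python
# Hypothetical label sets, inferred from the coding results shown above.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference",
                "mixed", "resignation", "unclear"},
}

def validate(row: dict) -> list[str]:
    """Return the dimension names whose value is not an allowed label."""
    return [dim for dim, allowed in ALLOWED.items()
            if row.get(dim) not in allowed]

# The row from the Coding Result table above passes cleanly.
row = {"responsibility": "developer", "reasoning": "deontological",
       "policy": "regulate", "emotion": "outrage"}
print(validate(row))  # []
```

A non-empty return value flags exactly which dimensions need manual review.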
Raw LLM Response
```json
[
  {"id":"ytc_UgyoSty1_mNkNqHO9Dh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxPj4GekG0i5LxvSjx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgydOOxNxBKCwzMBQ0h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwf9BpaAtlQ2obI9a14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxeRNSDouehBfNZF_Z4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyxWcN6DSSevfaqLHl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyXYttEDDBQcLA1nNx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxvtYgslKIUYEfL4rV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzVuMDLqKVIGfh0aMZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLXm4nk7scaQwrI4N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
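The lookup-by-comment-ID step can be sketched as follows. This is a minimal illustration, assuming the batch format shown above (a JSON array of objects with an `"id"` field plus the four coding dimensions); the single row used here is the `ytc_UgxvtYgslKIUYEfL4rV4AaABAg` entry from the raw response:

```python
import json

# Raw LLM response, abbreviated to one row from the batch shown above.
raw_response = """
[
  {"id": "ytc_UgxvtYgslKIUYEfL4rV4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]
"""

# Index the parsed rows by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up a single comment's coding by its ID.
coding = codings["ytc_UgxvtYgslKIUYEfL4rV4AaABAg"]
print(coding["responsibility"])  # developer
```

Keying the dictionary on the comment ID mirrors the "Look up by comment ID" behavior of the tool: each coded comment resolves directly to its four-dimension record.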