Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:

- "Building an AI model to track the energy consumption of a separate AI model is e…" (ytc_UgxVGoSeQ…)
- "1. Bad movies are bad because of producer/studio intervention (see: new Star War…" (ytr_UgwSRZaDK…)
- "@marashdemnika5833 I didn’t say otherwise. I simply believe it’ll take longer fo…" (ytr_Ugzg5pOhP…)
- "Partially right on the human vs. Robot thing. They like robots more cuz they can…" (ytc_Ugx86rVKz…)
- "I thought it was going to be drama or something but no it was just some dude mak…" (ytc_UgwAa3l-9…)
- "What really happened is that the company sold the sneaker brand to another compa…" (rdc_ogm603x)
- "Imagine thinking these are the reasons why AI is dangerous. 1. Climate change …" (ytc_Ugwqts5Vy…)
- "I use with AI for my job, periodically, and when I relate anything to it, since …" (ytc_Ugz3MAtYn…)
Comment

> Does anyone else see the inherent danger here? Just me?? People think these things are joking, but there's over a dozen movies out there which prove that monkeying with tech like this leads only to horrible disaster... The Matrix is a huge possibility, Terminator as well, but Kubrick's "Space Odyssey", I think, is the closest parallel... If we allow people like this dope to keep doing what they are, they could eventually cause a single robot to take over the internet, and thus every other robot, pull an "Order 66", and end life as we know it... We would be left in a very bad position...

youtube · AI Moral Status · 2020-10-28T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz7oMaV6jNGxqyn6_l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx4l-H4UHGNdTcqrCF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxzRI-kprHmk0UUeSl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxsAyuE2qFbGhOhVgt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw7sCvU3I0kt9sLEN94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwrpQRA9NUefKYex6N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzsqwiXXe5uHJGxP_N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyP6qkN3J7oMEtbJQ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw7WUqE7KyCGmNIYFB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyY3ZVE-w31DHySxil4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
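A raw batch response like the one above can be parsed and indexed by comment ID to support lookups. The sketch below is a minimal example, not the tool's actual implementation: the field names come from the JSON shown, and the `index_by_id` helper is hypothetical. It skips any record that is missing one of the four coding dimensions, since raw model output is not guaranteed to be well-formed.

```python
import json

# A truncated copy of the raw model output shown above (two records).
raw_response = """
[
  {"id": "ytc_Ugz7oMaV6jNGxqyn6_l4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxsAyuE2qFbGhOhVgt4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

# The four coding dimensions, as they appear in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and index records by comment ID,
    dropping any record that lacks one of the expected dimensions."""
    records = json.loads(raw)
    return {
        r["id"]: {d: r[d] for d in DIMENSIONS}
        for r in records
        if all(d in r for d in DIMENSIONS)
    }

codes = index_by_id(raw_response)
print(codes["ytc_UgxsAyuE2qFbGhOhVgt4AaABAg"]["policy"])  # → ban
```

This mirrors the "Look up by comment ID" workflow: given a full (untruncated) comment ID, the indexed dictionary returns the coded values for that comment.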