Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@vinkepm111 Not really, Musk is still one of the founders of OpenAI, the cr…
ytr_Ugz35DKV4…
The following is a formal distillation of your sentiment, rendered with the lexi…
ytc_UgxcX0f7e…
Doctors are getting lazy and we don't need AI or a robot to take care of human b…
ytc_UgyaMXHCj…
Surprised they have room in the paper for these relatively minor stories with th…
rdc_et94y3t
I wonder if it would be ethical to nominate a real person (public figure) as the…
ytc_UgwL8miji…
So your saying I can send my robot off to earn my paycheck while I go fishing? 🤔…
ytc_UgzD4o13D…
Data centers do not need to consume water. Many do not. Evaporative cooling is…
ytc_UgyBgImU0…
ChatGPT diagnosed some strange, relatively minor jaw pain I've been having my wh…
rdc_mnj2xzz
Comment
If these robots learn and gather information during each interaction, why then when they say horrible things about taking over humanity do people laugh and ignore? Should we not be teaching them the value of life and being aware of the way things they say make them look evil, and what evil is. It is bizzare to me to see humans ignore bad behavior and a robot try to teach a robot how to behave and what should be valued.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Moral Status |
| Posted | 2023-01-26T19:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugyn8teMi4nslLdyCbF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyXd7ig_ZbcDE189yR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzNEbQtNeufHqDBZp14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzInxQtEjJkrEMbdPJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxsjSKHOpw5yUPNKtx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugz7tjBiwezpFUqSerB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyFGmpmu9OwogtxW7d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzaDFatYZg9AXERNfN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzmXSCzoU-quxE3jKd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz2Sda4uARepm6F7394AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
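The raw response above is a JSON array with one record per comment, so the "look up by comment ID" view can be built by parsing the array and indexing it on the `id` field. A minimal sketch (the field names and IDs come from the response above; `index_by_id` and the variable names are illustrative, not part of the tool):

```python
import json

# Excerpt of the raw model output shown above: a JSON array of coded comments.
raw_response = '''[
{"id":"ytc_Ugyn8teMi4nslLdyCbF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzInxQtEjJkrEMbdPJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw_response)
rec = coded["ytc_UgzInxQtEjJkrEMbdPJ4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # developer fear
```

In practice the model's output may not be valid JSON on every call, so a production version would wrap `json.loads` in error handling and re-prompt or skip on failure.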