Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The only way to approach AI impact on society is through the profit that generat…
ytc_UgzRkXCJx…
If a "google" engineer is training an AI robot not to be biased its automaticall…
ytc_Ugx1VupKO…
Me knowing one day there will be all the smart vehicles self-driving cars but th…
ytc_UgyL5uVwy…
So he starts saying that US thinks Covid was created trough "Gain of Function Re…
ytc_UgzdMxIhO…
The world is gonna end in 2002 anyways. I mean, 2012. Err, 2014. Oh no, sorry, t…
ytc_Ugyk7kEow…
Have you heard of Agility Robotics? They are building the first humanoid robot f…
ytc_Ugy2iovMw…
Yes. Jobs will appear. But we won’t be the ones doing them. Why for? AI can do i…
ytc_UgwF5w4hs…
You were not talking about the law, but about ethics. Ethically, using your work…
ytc_UgxGZS7Or…
Comment
There's no reason why AI can't have feelings. This is because we humans also produce emotions as a result of mechanical physical phenomena in the brain. We are machines at the molecular and genetic level, and therefore we humans are examples of machines with feelings.
youtube
AI Moral Status
2023-05-02T21:1…
♥ 23
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzQY1iuRYpn1fTo0eV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwIkjWWaaEThEaUCzp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx5S4HjnzkfXUQaLJB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz54IlsrP8TuMonZ294AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxX8n-g2kwAAybIYep4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwb1MdBsyTStzB69TV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwrdVfHifUxYuEtUGp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzn8hc-QmfLr0f0kMx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyi-uBCLnQa3Hckx6d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwHNccXKOlTdxolVXF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
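A raw response like the one above is a JSON array with one record per coded comment. A minimal sketch of how such a batch could be parsed and validated (this is a hypothetical helper, not the tool's own code; the dimension names are taken from the response format shown above, and records missing any dimension are simply dropped):

```python
import json

# Coding dimensions present in each record of the raw LLM response.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments),
    keeping only records that carry every coding dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if isinstance(rec, dict) and EXPECTED_KEYS <= rec.keys()
    ]

# Two records copied from the sample response above.
raw = """[
  {"id":"ytc_UgzQY1iuRYpn1fTo0eV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzn8hc-QmfLr0f0kMx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

coded = parse_coded_batch(raw)
print(len(coded))          # 2
print(coded[1]["policy"])  # regulate
```

The set comparison `EXPECTED_KEYS <= rec.keys()` accepts records that contain extra fields, so the parser would tolerate additional dimensions being added to the codebook later.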