Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by its comment ID.
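The IDs are the `ytc_…`/`ytr_…` strings shown with each sample below. As a minimal sketch of how such a lookup could work, assuming the raw batch responses are stored as JSON files shaped like the array at the bottom of this page (the `raw_responses` directory and `find_coding` function are illustrative, not part of the actual pipeline):

```python
import json
from pathlib import Path

def find_coding(comment_id: str, responses_dir: Path) -> dict | None:
    """Return the coded record for comment_id, or None if no batch has it.

    Assumes each file under responses_dir holds one raw LLM response:
    a JSON array of objects with an "id" field plus the four coding
    dimensions, as in the raw response shown at the bottom of this page.
    """
    for batch_file in sorted(responses_dir.glob("*.json")):
        for record in json.loads(batch_file.read_text(encoding="utf-8")):
            if record.get("id") == comment_id:
                return record
    return None

# Example: the first record of the raw response shown below.
print(find_coding("ytc_UghrtkIaEYufGXgCoAEC", Path("raw_responses")))
```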
Random samples (click to inspect):

- “@piorism > *there are already ways to get inspired without relying on AI-vomit*…” (ytr_UgwnXhLEp…)
- “AI doesn't have common sense or reasoning skills this will be a disaster and we …” (ytc_UgxhbWXUg…)
- “AICarma shows me which companies are hiring for AI roles; it’s been a real eye-o…” (ytc_UgxLL7Iwf…)
- “The cyber attacks will eventually be ai attacking ai trying to eliminate each ot…” (ytc_UgwzV6Gx6…)
- “Other AI: A life is not for sale. / Chat GPT: You got 14 bricks right there?…” (ytc_Ugwo_DoYu…)
- “More crucially, "AI" cannot make **decisions**. The process is fundamentally ra…” (ytr_UgzNTLmcN…)
- “How can they have a fair jury if most people don't even understand AI in the fir…” (ytc_UgxmJ4sXu…)
- “Im will throw her a bomb... / Hahaha... to make it sure this robot wont exist…” (ytc_Ugws26Wyk…)
Comment
Before true sentient artificial intelligence happens, many scientists predict that something equally world-changing will likely happen first:
Artificially-Enhanced Human Intelligence.
Many speculate that technology will probably enhance human intelligence long before we can create a truly-sentient artificial intelligence. This is because we're still not 100% sure how to properly classify intelligence or sentience despite knowing it exists and perceiving it.
It would be much more difficult to create a sentient mind from scratch than to take an already existing sentient mind (AKA, a human mind) and just give it a boost or upgrade.
So before we have to worry about human rights, we're first going to have to deal with the ethical issues surrounding human augmentation, and all the economic, social, political, and business issues that come with it.
Platform: youtube · Video: AI Moral Status · Posted: 2017-03-27T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
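Each dimension takes a categorical label. The value sets below are reconstructed only from labels that actually occur on this page, so the project's full codebook may define more categories; `SCHEMA` and `is_valid` are illustrative names:

```python
# Observed label sets per coding dimension (this page only; not exhaustive).
SCHEMA = {
    "responsibility": {"developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def is_valid(record: dict) -> bool:
    """True if every coded dimension carries one of the observed labels."""
    return all(record.get(dim) in labels for dim, labels in SCHEMA.items())
```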
Raw LLM Response
```json
[
{"id":"ytc_UghrtkIaEYufGXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghCvNhEHN-AcngCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgiOBS6RkHXMSHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgjwY2J2WgKWg3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UggUC5VN_TTCq3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz6otA0YsK1H_oU8AB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"liability","emotion":"approval"},
{"id":"ytc_UgyIkqhIIjdH2ymktNx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz_WZU3jCe3MYPLU4B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxAkrhfdggp5M7Mml14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw1Tu2KEOx1r2EjJj54AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"mixed"}
]
```
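Because the raw response is a plain JSON array, mapping it back to per-comment Coding Result tables like the one above is a direct parse. A minimal sketch, assuming the model returned syntactically valid JSON (a production pipeline would also have to catch and retry malformed output); the file path is illustrative:

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw_text: str) -> dict[str, dict[str, str]]:
    """Index one raw LLM response as {comment_id: {dimension: label}}."""
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in json.loads(raw_text)
    }

with open("raw_responses/batch_0001.json", encoding="utf-8") as f:
    batch = parse_batch(f.read())

# This entry carries the same labels as the Coding Result table above.
print(batch["ytc_Ugz_WZU3jCe3MYPLU4B4AaABAg"])
# {'responsibility': 'none', 'reasoning': 'consequentialist',
#  'policy': 'unclear', 'emotion': 'indifference'}
```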