Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> As long as there are very VERY strict boundaries on self-awareness and self-induced-evolutions in AI, this really shouldn't be a problem. AI shouldn't feel entitled to rights and freedoms unless it is programmed to. We should absolutely keep this conversation going if god forbid an AI becomes independent and self-aware to the point of being recognizable as human, but it's unlikely unless some evil genius tries to spark a machine revolution like in the Matrix.

| Field | Value |
|---|---|
| Source | youtube |
| Title | AI Moral Status |
| Posted | 2017-02-23T18:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgjfVoL_clccOHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgiPI-YOPMt3eXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UghKTXEJdE2k03gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgjzKBW0d4zvsngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UggzWaALjepZ8HgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugi1-8Q9o8b7SHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UghZpuKPn1eld3gCoAEC","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgizDdmtVR9s7HgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugh9tM2DGn-Y5XgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Uggry-BHMQAuF3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
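A batch response like the one above can be parsed and checked mechanically before it is stored. The sketch below is an assumption, not part of the original pipeline: the per-dimension vocabularies are inferred only from the labels visible in this dump (the real codebook may allow more values), and `validate_batch` is a hypothetical helper name.

```python
import json

# Allowed labels per coding dimension, inferred from the responses shown
# above. Assumption: the actual codebook may define additional labels.
VOCAB = {
    "responsibility": {"none", "unclear", "ai_itself", "developer"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"none", "unclear", "regulate", "ban"},
    "emotion": {"approval", "mixed", "fear", "indifference", "outrage", "resignation"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index coded records by comment ID.

    Raises ValueError on a missing dimension or an out-of-vocabulary label,
    so malformed model output is caught before it reaches storage.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in VOCAB.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {rec.get(dim)!r} for {dim}")
        # Keep only the known dimensions, keyed by comment ID for lookup.
        coded[cid] = {dim: rec[dim] for dim in VOCAB}
    return coded
```

With the response text in hand, a single record can then be looked up by its comment ID, e.g. `validate_batch(raw)["ytc_Ugh9tM2DGn-Y5XgCoAEC"]["emotion"]` yields `"fear"` for the comment coded above.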