Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Lol, all those art-replies suck compared to original. AI is already better then …" (ytc_UgyVdQSQg…)
- "frr, ig is by what I feel looking at an ACTUAL art made by someone, I start to t…" (ytc_Ugyh0ZeMf…)
- "Had to turn this off the minute he said musk has no moral compass . The man has…" (ytc_UgyVc0ie5…)
- "As a person with functioning eyes, I saw on the video that the AI apps were only…" (ytr_UgwhvZO25…)
- "Ain't no way there will be an ai only hospital there needs to be a human supervi…" (ytr_UgxrrHZ5l…)
- "Reminds me of Elon Musk, Elon looks depressed talking about Ai, he knows whats i…" (ytc_UgzGUxWO0…)
- "I feel that AI is good for certain tasks, like low-mid level tech support maybe,…" (ytc_UgyLE2X5o…)
- "Yeah i had the same question while studying ai in class 10th that how ai actuall…" (ytc_Ugw27L-Qf…)
Comment
In that regard, wouldn't programming pain or negative feelings and such be, itself, a destruction of a right, if we're to give the robots rights? Why, if these robots are sentient, would they want to be able to feel negative emotion or pain? Humans have the simple answer of 'so we don't die' but robots can't die unless under severe circumstances, such as melting the software, even shattered it could be reversed by just sticking together and fixing the broken parts. This means these robots would have no need for any negative feelings or pain at all, as to program that would be making the robot vulnerable to negative effects that could be avoided.
Platform: youtube
Video: AI Moral Status
Posted: 2017-02-23T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugh2_714Rr7943gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgjLPKcFZRHiiXgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UginEqjRd5em13gCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UggMqTUOENgjRHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgibRYK2TCV8jHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugj7gYHfl-AHEXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UghQcXo2NeEIBXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgivL0GvDTnRGHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Uggjxv2nscYrjngCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugh30nYlNuJ7dHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
```
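A raw response like the one above can be parsed and indexed by comment ID for lookup. The sketch below is a minimal illustration, not part of the coding pipeline itself: the allowed values per dimension are inferred only from the codes visible on this page, and the real codebook may define additional categories.

```python
import json

# Trimmed to two rows of the raw LLM response shown above, for brevity.
raw = """[
{"id":"ytc_Ugh2_714Rr7943gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgjLPKcFZRHiiXgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

# Allowed values per dimension, inferred from the codes visible on this page
# (assumption: the actual codebook may include more categories).
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"liability", "ban", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def index_codes(text: str) -> dict:
    """Parse a raw LLM response and index schema-valid rows by comment ID."""
    by_id = {}
    for row in json.loads(text):
        # Skip rows with a missing or out-of-codebook value in any dimension.
        bad = [dim for dim in ALLOWED if row.get(dim) not in ALLOWED[dim]]
        if bad:
            print(f"skipping {row.get('id')}: invalid {bad}")
            continue
        by_id[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return by_id

codes = index_codes(raw)
print(codes["ytc_Ugh2_714Rr7943gCoAEC"]["emotion"])  # outrage
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: one parse, then constant-time dictionary lookups.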