Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
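Outside the UI, the same lookup can be reproduced against the stored responses. A minimal sketch, assuming the coded records are saved as one JSON array shaped like the Raw LLM Response at the bottom of this page; the file name `coded_comments.json` and the helper function are hypothetical:

```python
import json
from typing import Optional

def lookup_coding(comment_id: str, path: str = "coded_comments.json") -> Optional[dict]:
    """Return the coded record for one comment ID, or None if absent.

    Assumes `path` holds a JSON array of records like
    {"id": "ytc_...", "responsibility": "...", "reasoning": "...",
     "policy": "...", "emotion": "..."}.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    # Linear scan; index into a dict first if lookups are frequent.
    return next((r for r in records if r["id"] == comment_id), None)

# Example: fetch the coding for the comment displayed on this page.
print(lookup_coding("ytc_UgggfKyYxs8w4HgCoAEC"))
```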
Random samples — click to inspect
- "I've always known Hank green to be a very straight forward and logical individua…" (ytc_UgzAOJNgh…)
- "I just cancelled my ChatGPT (Weezy) yesterday. There were just too many incident…" (rdc_o8adei1)
- "Where do we draw the line? It's pretty easy. It's the same line needed in any ty…" (ytr_UgxZ7bHsO…)
- "That isn't true at all, in any way. I spend a LOT of time in China and the Chin…" (ytr_UgwEaZhkQ…)
- "so, say there's an AI robot that watches for thieves stealing from walmart and i…" (ytc_Ugwx_zlVh…)
- "Popcorn time. Been having loads of fun with Dall-E 2, guess it’s time to experim…" (ytc_UgxYBKKto…)
- "Completely different beasts. Waymo "AI" will work ONLY on pre-scanned HD mapped …" (ytr_Ugwl6b8Rr…)
- "I mean Ultron literally needed to scan the internet for a moment to decide that …" (ytc_UgxdLxoub…)
Comment
It's too early to have a serious discussion about this.
One flagrant error in this video was the assumption that it might become possible to program machines that feel pain in the near future. Neuroscience is only just beginning to explore the connection between feeling pain and physical brains. It is nowhere near explaining how to tell if something is feeling pain, much less building something that can feel pain.
The media has been getting carried away by some recent successes in AI like self-driving cars and a world champion go program. These have nothing to do with, nor provide any hint at, programming a machine to feel pain.
Platform: youtube
Video: AI Moral Status
Posted: 2017-02-23T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
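The dimensions in this table map onto a small record type. A sketch of that structure, where the class name and the example labels (read off the raw responses on this page) are assumptions; the full codebook may define more values:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodedComment:
    """One coded comment, mirroring the Coding Result table above."""
    id: str
    responsibility: str  # e.g. "none", "developer", "ai_itself"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "contractualist", "unclear"
    policy: str          # e.g. "none", "ban", "regulate", "liability", "unclear"
    emotion: str         # e.g. "resignation", "fear", "outrage", "approval", "indifference", "mixed"
    coded_at: datetime   # parsed from the ISO timestamp, e.g. 2026-04-27T06:26:44.938723
```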
Raw LLM Response (the full batch in which this comment was coded)
```json
[
{"id":"ytc_UggxBv6Bh68AOXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugi6g4FkM0SElXgCoAEC","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugh1j66C9k7XO3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugik2MV5JbWHtXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UghtsfO07MMnfHgCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgiTS2v4li_yF3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugj5BBXR8r_1EXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ughby7Ihz3l8n3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgggfKyYxs8w4HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ughgv7iY07dgTHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
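Since the model returns one JSON array per batch, recovering a single comment's coding means parsing the array, validating each record, and indexing by `id`. A sketch of that step; the allowed label sets are only the values observed in the batch above, not necessarily the full codebook:

```python
import json

# Label sets observed in the batch above; assumptions, not the full codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"resignation", "fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response and index its records by comment ID.

    Raises ValueError if a record is missing a dimension or uses an
    unexpected label, so malformed model output is caught before it
    reaches the coded dataset.
    """
    records = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        records[rec["id"]] = rec
    return records

# Usage: batch = parse_batch(raw_text)
#        batch["ytc_UgggfKyYxs8w4HgCoAEC"]["emotion"]  # -> "resignation"
```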