Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Humans are already biological terminators, we are the worst killers the world ha…" (ytc_Ugx-ljCMP…)
- "This is really no different than computer algorithms producing music. These day…" (ytc_UgxvRmBrb…)
- "What about people who deliver packages? The car can get to the location but it h…" (rdc_dmp37lz)
- "Meh, current AI is just the next conveyor-belt assembly system. Anything assumed…" (ytc_UgyA98s2N…)
- "My takeway from this is... kinda flipped. I am not worried about AI. I am worrie…" (ytc_UgxYt01Zw…)
- "If someone ever figures out how to automate and computerize retirement, I’ll be …" (ytc_UgxysUgTL…)
- "We can imagine a future where AI technology continues to advance, bringing us cl…" (ytr_UgysUaEui…)
- "Hahaha, now read what i asked ChatGPT. 1. which temple he destroyed? GPT - Th…" (ytc_UgwwEx-Rj…)
Comment
Just to continue with the toaster example; why would its rights be dependent on factors that does not concern it, like pain and physical /space related freedom? To dismantle it would (probably) not cause it physical pain, but it would threaten to destroy the "individual" or the intellect, which would be considered a threat to its life, so to speak. It seems that this would be a risk the toaster should logically foresee, and thus want to protect itself from. It would not beep in pain, but in alarm for its own existence (not physical pain, but "emotional"/existential dito). This idea that ai should become like us is precisely the egocentric mindset you describe in the video. Sure, it might be the most interesting for us, but the lack of a wider perspective makes me wonder if we would miss out on sentient life (or discover it too late for our own well being). I think it's unsatisfactory that you don't go all the way.
Source: YouTube — "AI Moral Status" — 2017-02-24T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugi7kG8Ji4CkN3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjWtK98dVOiO3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgipEs5BcXU2Z3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UggkudIeHsDg73gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UggFU3s3bpetwXgCoAEC","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_UggW2mHw9QpLJ3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgijNLd-v6PQO3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugj1m65ckfcSAHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UghtCdi-rbmhM3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UghB59eFQ0-173gCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"indifference"}
]
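The raw response above is a JSON array mapping each comment ID to its four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and looked up by comment ID — the function name `index_by_comment_id` is illustrative, not part of any real pipeline, and only two records from the response are reproduced here:

```python
import json

# Two records copied from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_Ugi7kG8Ji4CkN3gCoAEC", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgjWtK98dVOiO3gCoAEC", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse the LLM's JSON array and map comment ID -> coded dimensions."""
    records = json.loads(response_text)
    return {
        rec["id"]: {k: v for k, v in rec.items() if k != "id"}
        for rec in records
    }

codes = index_by_comment_id(raw_response)
print(codes["ytc_Ugi7kG8Ji4CkN3gCoAEC"]["reasoning"])  # consequentialist
```

Keying the parsed records by ID is what makes the "inspect the exact model output for any coded comment" lookup cheap: each comment's coding is retrieved in constant time rather than by rescanning the array.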