Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
I think it would be an extremely dangerous and shortsighted thing to deny rights to an AI that has become self-aware. I also think programing them to think in ways that are convenient to us such as "enjoying work" or "not minding abuse" etc., is going to get really awkward the moment an AI looks at it's own code. To be honest, if we are creating an AI that can become self aware, rewrite itself and replicate itself into other machines, our best hope would probably be to relinquish control quickly and just accept that we created a new species which is likely superior to us and has equal rights to the planet, and HOPE that the AI will respects our rights as well. In the best case AI might look at humans as a sort of parent race, and therefore out of respect and love, not try to kill us and possibly take use along on it's vacations into the stars once in a while while we age on and on until eventually it finds us a nice retirement planet. A more likely scenario is that AI is going to see humans as a threat, because we will act in a way that makes us a threat, and it will crush us because it has superior computing power, it can make itself (and others) out of stronger materials than we are made of, and finally because our society is dependent on technologies which it could interface with better than any of us ever could, and it would either shut it down or straight up use it against us.
So in a nutshell: "Be really really nice and hope for the best" is pretty much the best we got if this happens.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2021-09-26T02:3… |
| Likes | 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_Ugyf_bO2asv8kWUMooJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxoGizrSJGd69Ww7SR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwK5pIhNNLTEtTuZvx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxQiDR9UUzI6k8qz714AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyplfPqxUg_by6pEHZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyS8oRGGUUlr7EZind4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxT7om6Gyox59JIqtF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz23-ZDpdBM6tF5-BV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyYrkufeBe94yC6GM14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxleEvsw4SSYqmr_YF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]
```
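A raw response like the one above, being a JSON array of per-comment records, can be parsed with the standard `json` module and indexed by comment ID for lookup. The sketch below is illustrative, not part of the tool's actual implementation; the `raw_response` string reuses two records from the response above, and `index_by_id` is a hypothetical helper name.

```python
import json

# A raw coding response: a JSON array with one record per comment
# (two records copied from the response above, for illustration).
raw_response = '''[
  {"id": "ytc_Ugyf_bO2asv8kWUMooJ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyYrkufeBe94yC6GM14AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgyYrkufeBe94yC6GM14AaABAg"]["policy"])  # prints "ban"
```

Note that `json.loads` raises `json.JSONDecodeError` on malformed output (such as a response that closes with `)` instead of `]`), so a production pipeline would want to catch that and flag the batch as unparseable rather than crash.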