Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "gosh,... I wish I could have been a part of this A.I. exposition. I would totall…" (ytc_Ugy98Hg6i…)
- "The smartest Ai can run an autonomous city without humans and doesn’t like to li…" (ytc_Ugx74gvQa…)
- "It's not like Trump to be interested in islands where Andrew fucked up some teen…" (rdc_oi0ri50)
- "Police still using pseudosciences like lie detectors and facial recognition. Thi…" (ytc_UgwH--QgX…)
- "Hola hello I know I just learned about AI now this that is one of the reasons I …" (ytr_UgwNtfuIP…)
- "Yeah I think we will be able to fast track a lot of things but we will still hav…" (ytc_UgygY_iir…)
- "Walter writes AI is the only humanizer i’ve found that bypasses every detector, …" (ytc_UgzoEuVVL…)
- "Artificial intelligence takes an enormous amount of energy to perform the simple…" (ytc_UgyH9S0ui…)
Comment
These were my thoughts while watching the video:
In the case of self-improving AI, machines are programmed to do the best at whatever goal or task they were programmed with. If say, the toaster, had this type of programming and had the goal of making its owner toast, it would keep improving itself to find the best way to make the owner toast. So according to the machine's logic, why would it need to be conscious? Why would it care about having the same rights as humans do? You could argue that it could want certain rights to be able to operate better, but why do we have human rights in the first place? We, along with other lifeforms, actually do have a goal; to keep ourselves alive. Our instincts, the way our bodies are wired, and our morals are this way to fulfill this task. So wouldn't AI want to keep their "species" in existence too, in order to fulfill the task they were programmed with? But wait; if we both have this goal, does that mean we're different? AI wouldn't want to keep itself in existence solely for the purpose of being in existence. Or we could possibly be the same? What is the purpose of life?
Help I've confused myself
Seriously, I'd like to discuss this topic further
Source: youtube · AI Moral Status · 2017-03-17T00:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgiTebkfieqsNngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjPFNKGEfJJvXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ughlafxc3u-Z_3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Uggc1lpMfLEMgXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UghyKvMquT5eH3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgjuY7lkZrYUyHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ughe6jj7xQH_BngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ughx-o3mGLD-GXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjPAY1I3j0r43gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugjg1AWphI3dU3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
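Because the model returns one JSON array per batch, the "look up by comment ID" workflow amounts to parsing that array and indexing it by `id`. A minimal sketch in Python, using the field names from the response above (only two entries are reproduced here; the `lookup` helper is an illustrative name, not part of the tool itself):

```python
import json

# Two entries copied from the raw LLM response above; the rest of the
# batch follows the same shape.
raw = '''[
  {"id":"ytc_UgiTebkfieqsNngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ughe6jj7xQH_BngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

# Index the coded rows by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id):
    """Return the coded dimensions for a comment, or None if it was not coded."""
    return codings.get(comment_id)

row = lookup("ytc_Ughe6jj7xQH_BngCoAEC")
print(row["responsibility"], row["emotion"])  # ai_itself fear
```

The same index makes it easy to spot-check a coding against the comment text, as in the "Coding Result" table above, by joining on the comment ID.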