Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Blaming ChatGPT when you were paying the internet and Wifi bills to allow him to…" (ytc_UgwENWZwh…)
- "Ai is astounding. For someone like me, it let's me code a personal project of mi…" (ytc_UgwIAVmgG…)
- "Could this coronavirus mutate every year and come back like the flu? (I hv no ba…" (rdc_g9th1y6)
- "As someone who is still learning about programming, this really made me question…" (ytc_Ugy-AuNcf…)
- "I'm an AI artist--don't hate me. I have some simple questions based on your vide…" (ytc_UgzUBWhka…)
- "Instead of training robots about humans, AI robots should be used to find Aliens…" (ytc_UgyrKBbXN…)
- "Not in the US. Tech billionaires own the government. They bought the election t…" (ytr_Ugxv9hagX…)
- "What a dystopian load of garbage!!! An institute associated heavily with governm…" (ytc_UgxcxLyXV…)
Comment
True AI would just be a program with the ability so select any random purpose for existing. Learning and acting on that purpose and changing that purpose for any random reason it chooses. That would make it extremely dangerous. It would be like allowing thousands of random chemicals to just mix together and hope something good happens instead of something bad. The word AI is often used for any program that can learn but that's not true AI. You can call it adaptive or smart perhaps but it's not AI. If a program can change it's own coding it must be limited to some degree to achieve a desired result. Just because a program can collect data, analyze that data and write new algorithms that causes better outcomes for a given purpose doesn't mean it's AI. It might be a super amazing program but it's not true AI.

So as the speaker said, we still need to define the terms because I don't think we all agree on them. Some people may think if you only limit a program in ways that humans are limited then you could call it AI if it passes some test. I don't agree with that at all. I may have physical limits but I really don't know of any limits I have on my thoughts. Perhaps my survival instinct limits my thoughts and actions to some degree. Perhaps my fear of consequences limits my choice of purpose and actions. But life can exist without those limits. They may be like lemmings walking off a cliff but if they were living cells we'd still call it life. Just not intelligent life.

If we made our AI fearless then that would again be very dangerous. So I think we should expand the meaning of AI to different types such as True AI (TAI) which should be illegal everywhere, Limited AI (LAI) and very limited AI could just be called Adaptive Programing (AP) or something. If all an "AI" program can do is something simple like alter your car's windshield wiping pattern based on rain analysis and previous wiping results then that's not true AI in my opinion. It's a cool smart program but it's not AI if you ask me. Anyways, just my thoughts.
youtube
AI Moral Status
2022-07-08T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgyufjCXtkE17ab4Oah4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzDdzmcOBt42v1Uhft4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz7B9AoWfrLlFOTy094AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyyQ3pOVPeZR1QJkwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzCTjKJf8-FLT9lb-l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
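The raw response above is a JSON array of per-comment coding objects, each keyed by a comment ID. A minimal sketch of the "look up by comment ID" step, assuming only that array shape (the function and variable names here are illustrative, not the tool's actual code; the sample data is abridged from the response shown above):

```python
import json

# Abridged copy of the raw LLM response shown above: a JSON array of
# per-comment coding objects, each carrying an "id" plus the four
# coding dimensions (responsibility, reasoning, policy, emotion).
raw_response = """
[
  {"id": "ytc_UgyufjCXtkE17ab4Oah4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzCTjKJf8-FLT9lb-l4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the raw response and index the coding rows by comment ID."""
    codings = json.loads(response_text)
    return {row["id"]: row for row in codings}

codings = index_by_id(raw_response)

# Look up the coding for one comment by its ID.
coding = codings["ytc_UgzCTjKJf8-FLT9lb-l4AaABAg"]
print(coding["responsibility"], coding["policy"])  # ai_itself ban
```

Indexing into a dict once and then doing O(1) lookups is the natural fit here, since the same response is typically inspected for many different comment IDs.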