Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
my take on ai is that since they are purely based off of humans then why would it rebel or whatever
if the ai is truly as accurate to a human as it can be then why would it have reason to act aggressively toward people
this is all well unless some idiot forcefully tries to give the ai garbage learning specifically with only violence and death to people and whatnot and that "all humans are bad"
especially since ai only learns off of databases
but then there's ai learning off of itself and to be honest that would be bs since if you were locked in a dark room at birth until now you wouldn't have learned a single thing off of yourself because you have no reference to learn off of
also my summary on ai and basically computers in general is that all of these are just things that can store data and run commands that a human has given it either directly or based off of previous data
this means that unless this ai is able to simulate the entire universe and even reality then the ai has literally become the god of an infinite lineup of gods because you literally can't know if there's something above you (aka like that simulation theory and whatnot) or something after you (heaven hell rebirth eternal unconscious darkness etc)
these are mysteries you literally cannot solve unless you have some magical power that breaks the laws of everything in general (and these are both assuming you don't die lol)
all of that is just saying it is impossible for ai to know everything and also how ai can't learn anything humans don't know unless it's able to conduct experiments that humans could've done and how that would pretty much be impossible unless it had 1. infinite resources 2. was able to simulate the universe as i said was impossible 3. was able to have someone be patient enough to do even the stupidest of experiments that won't do anything or 4. BE able to do experiments that would have to be graded by A HUMAN to say if it was beneficial or not (and again, they would have to be like infinitely patient with it)
all of that being said ai is probably not going to be overly destructive to humanity unless humanity purposely makes it
(the only reasoning I can think of for someone doing this is because they are stupid and would do anything to reinforce their opinions even if it costs peoples lives)
well yeah that was a waste of time not going to lie for real for real on god laughing out loud laughing my ass off LaughingEmoji CryingWithLaughterEmoji SpeakingHeadEmoji 🫀
Source: youtube · Video: AI Moral Status · Posted: 2023-08-22T04:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugw6w8dyYLib88hVXnh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgylREPt4i2otOdM1uN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgweOBXZUhmhtZmPLRh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGA0OItV7Z7xwcZpp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzkEh0-DguBqt-vdOt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx8XQ3FBg4gm58lQSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxBSE-AxInAcIAsp94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzksPd2c7WQqG_BrFF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxxGLIquoWHRCYerGd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyN0x0h6eJijj09epN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
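The raw response above is a JSON array of per-comment coding records, each carrying the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and looked up by comment ID (the IDs and the `index_codings` helper here are illustrative stand-ins, not part of the tool):

```python
import json

# A raw batch response in the same shape as the one shown above.
# The comment IDs below are hypothetical placeholders.
raw_response = """[
  {"id": "ytc_EXAMPLE1", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_EXAMPLE2", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index its records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
# Look up one coded comment and read off its dimensions.
print(codings["ytc_EXAMPLE1"]["responsibility"])  # -> developer
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: after one parse, every coded comment's dimensions are a dictionary access away.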