Raw LLM Responses
Inspect the exact model output for any coded comment. Records can be looked up by comment ID; the random samples below illustrate the kind of input being coded.
- `ytc_UgxV2cqoj…`: "As a SF reader one of my favorits is Issac Asimov and his robots brains ( think …"
- `ytc_Ugz4Crykg…`: "We are so screwed. We can't even get people on board with fighting climate chang…"
- `ytc_Ugz4B3byg…`: "Feels like a scene out of a movie. I can already sense the chill going up the C…"
- `ytc_UgxI07ivG…`: "idk. Not worth blaming the technology when it's clearly the No Lifers. It's alwa…"
- `ytc_Ugzim4A08…`: "Well Sam, welcome to the new (so called) advanced era of humanity. I'm a film c…"
- `ytc_Ugw4i3-rx…`: "🫠 AI can build ad-hoc interfaces on the fly in coming future 😢 we need to learn…"
- `ytc_Ugz4cm7ou…`: "Robot because well they have this blue thingy it looks like silver and then some…"
- `ytc_UgyLaVrWn…`: "I would love to have a fully self-driving RV. It would be great to go to sleep i…"
Comment
Two basic issues concern me with A. I. and robotics advances.
--Nobody's really addressing that a LOT of the statistical reasoning that deep-learning and/or machine-learning types of A. I. use, whether it's based on neural networks or procedural expert systems or not, whether it's tethered to human guidance or not, a lot of that statistical reasoning starts from Bayesian math of the sort that is way more hair-trigger and abrupt than the human version of affairs. In plain English, when you see a human reason on a statistical or probabilistic basis, a common statement you get is "It takes three points to make a trend" or "Let's see if a person meets three conditions of the criterion". With many statistical reasoning engines and especially with Bayesian mathematics, that gets dropped to TWO points for a trend, TWO conditions for a criterion. For example: here's a list of symptoms of clinical depression.
https://www.uptodate.com/contents/image?imageKey=PSYCH%2F106958
Now, under (A) of that list, you'll notice it says "two to four items" on that list plus every criterion A-F I think. Ask yourself: how much more hair-trigger would that listing (taken from the DSM 5, or the fifth edition of the Diagnostic Statistical Manual) be, if you only had to meet TWO of the criterion under (A) there and only consider (A) on that criterion and one other element, A-F? Under a Bayesian, not human take, of that system, a LOT more people would meet the criterion for depression sight unseen, just by making the statistical side of it more jumpy. Substitute "depression" for nearly any other statistical profile and you see where I'm going, right? A. I. is always going to be way more hair-trigger than human, even with a human being "holding the leash."
--And. . . well, you have to understand, the first and most likely second generations of A. I. and robotics are GOING to be informed by human values and human behavior. Really, why did the Google image recognition program recently call Michelle Obama a gorilla? Because it was coded by white male programmers training the program on mainly "white person type" imagery. That's a blunt way to put it, but it's the literal truth as well. Garbage in leads to garbage out, and in this case, having massively homogeneous programmers and/or data-sets can cause A. I. to become _more biased_ than the human counterpart, implicitly. This is why you don't want autonomous drones or robots doing "police work", mainly because you can bet they will be trained by human police officers at some point--the implicit biases will be transferred.
Never mind the biggest bias of them all: the one implicit in having the _Military Industrial Complex_ taking the lead on robotics and Artificial Intelligence. Really, what human being with a single solitary lick of sense gives a robot a GUN? Oh right, a military type of human being. I would trust almost anyone else on the planet before trusting any military force with training robots. Why? Because robots with guns equals robots shooting guns. It's that simple. A. I. isn't smart enough to preserve itself yet. If you give it a "gun" or a "laser" or a weapon counterpart in a virtual setting, it will quickly learn to use said gun or weapon on a VERY aggressive basis, again, way more so than what you'd expect of human aggression. A. I. has no bias when it comes to tools or their use--if it's given a gun, it'll use the gun as aggressively as it would use anything else. A gun, a water-sprinkler for watering grass on a lawn, they're the SAME to an A. I.
A human police officer would shoot ONE child of color at a time, then make excuses all day, slandering and dehumanizing the child after his death.
A robotic, A. I. controlled "Officer" would obliterate a whole neighborhood of people of color and then get decommissioned like you were changing a bad light bulb. More people dead and less, WAY less consequences paid for it, way less justice to be had for it. Is this what you want? Is this your future? It isn't mine.
| Field | Value |
|---|---|
| Source | youtube |
| Posted | 2018-04-03T23:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwM5aZIxWW4j5iDuXx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxl09f6L7Rj-RKTPZF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzieqUhMRMtOB8uQh14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzlxnxQw99WAmfHehF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzZoRdIrjkSS-bAyZ54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyqukX4nqlg8PxxK0F4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzjQP_1zTltW_9IU5l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwH2zhdVVSb5TK2kxh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyHwfpnimaOHxKVZVF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwpUlJEuK97Bnz92U54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
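The raw response above is a JSON array with one coding object per comment, each carrying an `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed into a lookup table keyed by comment ID, so that an individual record's coding can be retrieved. The field names come from the response shown; the function name `index_codings` and the validation logic are illustrative assumptions, not the tool's actual code.

```python
import json

# Two rows copied from the batch response above, trimmed for brevity.
raw = """
[
  {"id": "ytc_UgwM5aZIxWW4j5iDuXx4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwpUlJEuK97Bnz92U54AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

# Keys every coding object in the response carries.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def index_codings(payload: str) -> dict:
    """Parse a batch coding response into {comment_id: coding_dict},
    skipping any row that is missing an expected key."""
    out = {}
    for row in json.loads(payload):
        if EXPECTED_KEYS <= row.keys():
            out[row["id"]] = {k: row[k] for k in EXPECTED_KEYS - {"id"}}
    return out


codings = index_codings(raw)
print(codings["ytc_UgwpUlJEuK97Bnz92U54AaABAg"]["emotion"])  # fear
```

Keying by `id` mirrors the "look up by comment ID" workflow of the inspector page; skipping malformed rows rather than raising keeps one bad LLM output from discarding the whole batch.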