# Raw LLM Responses

Inspect the exact model output for any coded comment, or look one up by its comment ID.

## Random samples
- "Ai will rend human minds in vegetative state. Crime rates will be much more wors…" (ytc_UgxMN2KTu…)
- "Next step, once everyone is reliant on AI, make it super expensive and lock it's…" (ytc_UgxAYO5tZ…)
- "XD it's cuz you not use the custom form XD and set the genre % around 100% XD Yo…" (ytc_Ugw4PLRMZ…)
- "What's going to happen is AI will take over. Then there will be no humans to car…" (ytc_Ugxw5RieN…)
- "There needs to be a better solution to life for humans. If AI and robots can do …" (ytc_Ugw01z_mx…)
- "you are not teaching, just using!!!!! AI needs teaching as a small child...think…" (ytc_UgwyjQasH…)
- "I want to hear the robot debate but the stupid person keep on talking and talkin…" (ytc_UgwJP4wKL…)
- "Think ai will be smart inuf to not build its own cage, to not piss in it's own …" (ytc_UgyQmyQYP…)
## Comment
This woman's life was unnecessarily lost. It's the Uber system at fault. Algorithms and apparently, sensors or lack thereof. I got to about 3:30 - you mean, this car does NOT have infra-red beams spitting out from several points already?!! I bought a cheap light from the hardware store for a tenner 25 years ago, it had a PIR and it took 15 years to break down. My instant reaction is - what a primitive system.
As a systems person, I will offer what insights I can, not that Uber would listen to me, or indeed should have heeded such obvious warnings, long ago. Let's bullet-point them in a moment, first an example of real-life driving: I failed my first motorcycle test by not performing an emergency stop sufficiently well.
I had been riding smaller capacity machines on the road for a while, and already had several real emergency stops under my belt, as various drivers sought to make me slow down and ride more safely by pulling out at junctions, and slowly meandering across to their chosen lane. No, really, that's what they did.
Ergo, I learned quite precisely how hard I could 'slam-on' and at what point the machine would be out of control for various reasons. Two main things can happen: First, the front or rear wheel skids; Second, the rear wheel rises off the ground. Do it badly enough, you could somersault.
So, I was used to braking in a stunt-driver fashion, to save my neck, and that of my pillion rider. Come the motorcycle test, I did my usual emergency stop - hence the fail, because the rear wheel rose up and the fact I stopped in about a single car length, was not of interest to the examiner. I had apparently 'lost control' of my vehicle.
Had I allowed myself to be conditioned into sedately halting the vehicle, I wouldn't have been there for the test, I would have been dead already. The examiner, stepped out from between two parked cars, about 6 vehicles up. I had stopped, by 5 vehicles away. He was still way, way, up ahead. Seemed silly to be so far away at the time, to me.
On the Uber faults, apart from sensors missing, here's my main 2c worth:
ROTTEN ALGORITHMS. Unable to distinguish between the value of a human life and the relative merits of suddenly swerving at or past, the point of overbalancing and/or skidding, to force a collision with oncoming vehicles, in preference to hitting the lady with the bicycle.
Everyone reading this, or viewing this channel, has probably been in a car, or been driving, when sudden evasive action is taken, quite subconsciously, to avoid a pair of small love-birds swooping in front of the grille, or a cat that chose the wrong moment to cross the highway.
It's actually hard NOT to do it? I have to steel myself to remain in the present lane, and only brake, should an event like this happen on a busy highway. If it's a human life, most people will be very close to, or actually will, steer into oncoming traffic, out of reflex, to avoid the moving living thing in their path.
OVERLY-SPECIFIED SAFETY MARGINS - the Uber system is literally, too safe. It errs on the side of caution to the nth degree, far too much. You could well say, how do I know this? Uber, autonomous driving in America, with the lawsuits possible, would never program their vehicle algorithms, to deliberately drive into an opposing flow of traffic.
And that is WRONG. Sometimes, stunt-driving is basically what evasive manoeuvres amount to. Sometimes, you just smash a whole front suspension unit off, to avoid an idiot who then drives off after recklessly joining traffic, leaving you looking stupid. A friend did this with a large SUV (Quattro) in England. He had no choice - collide with the moron, or drive directly into the concrete wall and 12 inch kerb (curb) at the side of the underpass.
That, is what the Uber system does not get.
My comments might be badly-received, I dunno. I know, that driving on or beyond the safe limits, is required in order to come back home sometimes, with no humans harmed. If autonomous vehicles have not got the 'cahones' to do this type of driving, they are not up to being part of our vehicle flow. Stay on the test track.
My last point - I bet they tested these systems, with 99.9% of mileage being uneventful (as a human would see it).
What they should be doing, is teaching the thing to stunt-drive around totally stupid and random events. Have a bunch of footballs or rabbits, run across the track. Simulate choices between living organic creatures being avoided, and inanimate objects. Keep doing it.
Here's my conclusion: I have taken more strenuous evasive action to avoid a moving SNAIL crossing the highway, than this Uber system took, to be aware of, and avoid this human being, in preference to massive vehicular damage and a large lawsuit.
I hope they get sued blind by a relative of the deceased. It's Uber's antiquated system at fault. They took someone else's primitive algorithms, and did not perform the kind of off-highway testing, that was necessary to make it right, because they were in a race to make a marketing claim that they are 'live' with autonomous driving.
Uber's fault, all the way.
IMHO of course.
Source: youtube · 2018-03-21T08:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
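Each coded record is one JSON object carrying an `id` plus the four dimensions in the table above. A minimal validation sketch in Python, assuming the allowed values per dimension are exactly those observed in this dump (the full codebook may permit more):

```python
# Dimension vocabularies as observed in this dump (an assumption,
# not necessarily the complete codebook).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "none", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"outrage", "mixed", "resignation", "indifference", "fear", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record; empty means it passes."""
    problems = []
    if not record.get("id", "").startswith("ytc_"):
        problems.append("id missing or not a ytc_ comment ID")
    for dim, allowed in ALLOWED.items():
        if record.get(dim) not in allowed:
            problems.append(f"{dim}={record.get(dim)!r} not in {sorted(allowed)}")
    return problems

# The first record from the raw response below passes cleanly.
record = {"id": "ytc_UgzQH8950xJpDidBy2R4AaABAg",
          "responsibility": "company", "reasoning": "consequentialist",
          "policy": "liability", "emotion": "outrage"}
assert validate(record) == []
```

Running this over every object in a raw response catches any value the model produced outside the vocabulary before it reaches the coding table.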
## Raw LLM Response
```json
[
{"id":"ytc_UgzQH8950xJpDidBy2R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzmGBjfbercVa6rHnN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxVFeRhE0pcVZi729d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzNouL6pwS3l3QANrx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxG5fncB0KAoexq6T14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz89rRW9rxcJSc_hjd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxa-vo4_D89zZL43YR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxFlEcgAs8-84gHfRN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwEF4BykDlDvx-hjdd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx_UJDCSX7kcwfjjhh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
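The raw response above is a JSON array of per-comment codes, which makes look-up by comment ID a one-line index. A minimal sketch, using the first two records from this dump:

```python
import json

# First two records copied verbatim from the raw LLM response.
raw = '''[
 {"id":"ytc_UgzQH8950xJpDidBy2R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzmGBjfbercVa6rHnN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]'''

records = json.loads(raw)
by_id = {r["id"]: r for r in records}  # index for O(1) look-up by comment ID

coded = by_id["ytc_UgzQH8950xJpDidBy2R4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # prints: company outrage
```

This is the shape of query the "look up by comment ID" feature implies: parse once, index by `id`, then fetch any comment's codes directly.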