

Techbros: self driving cars are inevitable!

Also techbros: prove you are human by performing a task that computers can’t do, like identifying traffic lights.

in reply to Sarah Brown

There will be AI that correctly identifies traffic lights and ignores them by choice, like some humans do. ^^
in reply to Sarah Brown

You just described the trepidation I've had about this forever! This is pure gold.
in reply to Robert Kingett, blind

@weirdwriter I already find myself, as someone with a reputation for knowing about computers, being told more personal and financial details than I would like, to help out friends who are blocked by the complexity of the process from, for example, making charitable donations or sponsoring someone for a good cause.
in reply to Sarah Brown

And always make your self-driving cars obey the speed limits. Easy to do… make them drive like old, old people!
in reply to Sarah Brown

Very funny. Truly!

With that said, I think the point is that self driving cars will be here someday, but they are not here yet.
Or maybe they are? Supposedly, AI is better at captchas than humans now.

in reply to Sarah Brown

if self driving cars are inevitable, shouldn't AI be able to identify traffic lights? seems counterproductive.
in reply to Sarah Brown

I saw in a news article musk had to intervene on a live video when his twitter car was going to go through a red light 😂
in reply to Ben Todd

@monkeyben yep, he was first in line at the intersection, the green arrow came on for opposing traffic to turn left and the car hit the gas
in reply to Sarah Brown

to be fair to Google re-captcha, those traffic lights contain special grainy noise patterns that throw off computer algorithms...
in reply to Sarah Brown

same people making my thumbs do a massive amount of work for this experience.
in reply to Sarah Brown

Why do you think the tech bros want us to identify traffic lights, bicycles, etc.? It's so they can better train their "AI" models to spot them, so the self-driving cars become a reality. We are all beta testers.
in reply to RobCornelius

@RobCornelius I’m fully aware. I’m also aware that they are STILL doing both, simultaneously. Only one can be true. Which is it?

(Hint, it’s “computers can’t identify traffic lights”. Self driving cars are, in fact, randomised murderbots)

in reply to Sarah Brown

@RobCornelius notice, BTW, it’s always American street furniture, a country well known for being a huge international outlier in terms of not using internationally standardised vernacular for street furniture.
in reply to Sarah Brown

@ajlanes Man, these aren't even that hard to follow, why isn't Canada a signatory for this either :I Probably just playing follow the leader with the US on that one, bah
in reply to Sarah Brown

Perhaps it is because the self-driving AIs already know what objects are unambiguously traffic lights, but still require human input to answer the philosophical question of whether the iconic depiction of a traffic light painted on a sign counts as a traffic light.
in reply to Bornach

@Bornach @RobCornelius Except that's nothing like the widely internationally ratified sign for a traffic light, as used in most of the world (completely the wrong shape for a start!)

https://en.wikipedia.org/wiki/Vienna_Convention_on_Road_Signs_and_Signals

in reply to Sarah Brown

@robcornelius

There's also the issue of context. Those are indeed physical traffic lights but the AI should realise why it shouldn't obey them
https://futurism.com/the-byte/tesla-autopilot-bamboozled-truck-traffic-lights

in reply to Sarah Brown

@robcornelius
In the context of the trolley problem, they literally must be.

(As are humans, BTW, generally speaking: when an accident happens, humans do not have time to make a well-reasoned decision; it is generally a more or less random one, if any.)

Taking any stand on the trolley problem, and related ones, immediately raises questions of liability. So random() is the "safe" out for moral cowards.

in reply to Andreas K

@robcornelius
But coming back to the more general problem, and taking the point of view of a budding data scientist.

It's not a question of whether algorithm-driven cars cause accidents. (The AI buzzword gives me a migraine.)

Humans DO cause accidents too.

So the question is: do computer-driven cars cause more or fewer accidents than humans?

It's hard to assess this at the moment, as there are only a tiny number of genuinely self-driving cars.

in reply to Andreas K

@robcornelius
And the involved entities are commercial, so they tend to throw the veil of trade secrets over most of their data.
in reply to Andreas K

@yacc143 @robcornelius There are roughly 70 companies researching driving automation here in the Bay Area that provide data to the state. Safety of these vehicles peaked around 3 years ago, with the best-performing cars having 5 times as many accidents per mile as the average driver. They appear to have reached the asymptote of the improvement curve.


in reply to Marty Fouts

@Marty Fouts @Andreas K @RobCornelius why, it’s almost like there’s a whole pile of bullshit surrounding the whole endeavour. So unlike the tech industry, that.
in reply to Sarah Brown

@yacc143 @robcornelius @MartyFouts I really advise being watchful for companies trying to solve the problem by making cities more self-driving-car friendly, i.e. by making streets even more of an insta-death zone in front of your door.
in reply to Marty Fouts

@MartyFouts
That raises the question: how come they got permission to test on public roads and endanger the public, if that's their best case?
@goatsarah @robcornelius
in reply to Andreas K

@yacc143 @robcornelius The politics at the state level are fascinating or would be if lives weren’t at stake. Google started using public roads without permission as Tesla still does. Cal DMV stepped in and designed a program requiring safety drivers in the test vehicles but Google’s Waymo spin-off got another state agency involved and so they and Cruise have licenses to run driverless taxi service in San Francisco. Or as someone else pointed out: money
in reply to Marty Fouts

@MartyFouts
Don't take it wrongly, but the German car makers literally spent years in R&D and ended up offering way less (co-pilot systems for limited use cases, e.g. highways/autobahns), literally citing this as the safe state of the art. They could offer more if they were willing to associate their brands with unsafe cars.

@goatsarah @robcornelius

in reply to Andreas K

@Andreas K @RobCornelius @Marty Fouts My car has a similar autopilot system (lane following and distance maintenance). It works very well, but you HAVE to be aware of its limitations. You CANNOT remove the human from the system
in reply to Marty Fouts

@MartyFouts @yacc143 @robcornelius This article jibes with my outsider impression (out of the car, not out of the area). Human drivers have gotten a lot worse since covid cleared the streets in 2020, from what I see.
https://arstechnica.com/cars/2023/09/are-self-driving-cars-already-safer-than-human-drivers/
in reply to Marty Fouts

@MartyFouts @yacc143 @robcornelius
What statistics? That human drivers have gotten a lot more reckless since three years ago or something else? Where can someone else see these stats?
in reply to Eli the Bearded

@elithebearded @yacc143 @robcornelius The human driver statistics are published by the Insurance Institute of America. The automation statistics are reported to the California DMV. They were published until a few years ago, but now you have to ask the DMV each year for the data. Human drivers have not gotten a lot worse, but automation has stopped getting better and was never as good as average drivers.
in reply to Marty Fouts

@MartyFouts @elithebearded @robcornelius

Funny, Swiss Re just published a study claiming self-driving cars are safer than human-driven cars, purely based on insurance data.

https://arxiv.org/pdf/2309.01206.pdf

in reply to Andreas K

@yacc143 @elithebearded @robcornelius There are not enough automated cars on the roads to have sufficient data for such a comparison unless they have a definition of “self driving” that is mainly based on driving assistance automation rather than driver replacement systems.
in reply to Marty Fouts

@Marty Fouts @Eli the Bearded @Andreas K @RobCornelius and if they’re looking at driving assistance systems, then I will note that they routinely try to kill you. It’s just that the driver interrupts them in the act (source: have one)
in reply to Sarah Brown

@elithebearded @yacc143 @robcornelius There are active and passive assistance systems. Passive systems like backup cameras and blind spot warning are safer than no assistance. I don’t know specifics of active systems but they seem to vary widely in quality. But certainly they can all cause problems especially lane following and automatic emergency brakes. I have had a Subaru attempt to counter my steering on ice and nearly crashed as a result. Traction control makes me cringe.
in reply to Marty Fouts

@Marty Fouts @Eli the Bearded @Andreas K @RobCornelius for me the benefit of active systems is that they are immensely valuable in keeping you fresh on a long journey by reducing cognitive load.

This more than compensates for the occasional blip where they try to kill you at 120kph and you have to intervene to stop that.

But they absolutely still do it. Not often, but dead is dead, right?

in reply to Marty Fouts

@yacc143 @elithebearded @robcornelius I’ve had a chance to read the article, which I should have done earlier. The flaw is that they are only looking at Waymo One 3rd party insurance data; but Waymo is self insured and so does not report all of their incidents this way. You have to look at data reported to the DMV for a more comprehensive analysis. Also, Waymo One only represents a fraction of Waymo’s data.
in reply to Eli the Bearded

@elithebearded @MartyFouts @robcornelius
COVID/2020 has made statistics and relative comparisons suspect. (Actually, that applies to subjective perception too.)

It was such an extreme outlier that 2021 might have some number at 75% of the 2019 value and still be perceived as a huge relative rise over 2020.

in reply to Andreas K

@yacc143 @elithebearded @robcornelius That may be but it is not relevant here. Covid data, as you say, represents an outlier for various reasons. The driving automation data on the other hand is produced under controlled circumstances and reflects the best that the cars can do. It is part of why Waymo has publicly stated that it is an intractable problem.
in reply to Andreas K

@yacc143 @MartyFouts @robcornelius My observations from walking around a lot are that speeding, ignoring traffic signals (lights and stop signs), terrible choices about u-turns, etc. got bad in 2020 _and have not improved_, which has me questioning comparisons between pre-covid and now. I am just one person walking around one city, so this is very much anecdote and not conclusive data, but I don't see self-driving cars doing those dangerous things. I see them block traffic, but that's about it.
in reply to Andreas K

@yacc143 @robcornelius On a really good day someone has been trained for the emergency, so what you get is a "trained" response rather than a "random" response.
Unknown parent

Sarah Brown

@'ingie I mean that they're all trained on American stuff.

5% of the world population.

So when they unleash the result on the other 95% of us, it's going to have no fucking idea what it's doing, because it doesn't look a damn thing like the training data.

in reply to Sarah Brown

@toplesstopics
M Knight Shyamalan twist:

We are all trapped in a simulation inside a self driving Ford Windstar 200 years in the future and the traffic captchas help them drive.

in reply to Diabetic Heihachi

@DavBot or the tech bros remote control the car to drive you to the hit man location of their choice, as happens in many not-so-sci-fi movies I could name 😩
in reply to Sarah Brown

They still ask about bikes too. It's been years. If they still can't tell the difference between a bike and, say, a lamppost, that's not good.
in reply to Sarah Brown

@Wgere when it comes to techbros, humans full stop are generally considered expendable depending on publicity levels.
in reply to Sarah Brown

If they were able to take a step back, they might feel the pain.
in reply to Sarah Brown

Just last week I heard on a podcast that captchas have become obsolete, since AI can now solve them better than humans.
in reply to Martin Senk

@MartinSenk
If you successfully solve the captcha, you are considered a bot and only get the SEO version of the website that is meant for indexing by search engines. To get the actual hidden amazing content, you have to choose wrong answers only!
in reply to Sarah Brown

The slowly dawning realisation that somewhere in San Francisco a "self driving" car is sat patiently waiting for you to complete a CAPTCHA :tiredcat2:
in reply to Sarah Brown

the irony is that ML models are now better at solving captchas than humans, making captchas entirely pointless.

https://arxiv.org/abs/2307.12108

in reply to Ariadne Conill 🐰:therian:

@ariadne Please let captchas die. Since my browser doesn't keep cookies I'm constantly solving them. I often refresh until I get the easiest one possible.
in reply to Ariadne Conill 🐰:therian:

@ariadne Such weaknesses were pointed out at least as far back as 2009.

To quote Jonathan Wilkins' https://web.archive.org/web/20110723025202/http://bitland.net/captcha.pdf

"For instance, with a 10,000 machine botnet (which would be considered relatively small these days), given broadband connections and multi-threaded attack code, even with only 10 threads per machine, a 0.01% success rate would yield 10 successes every second, which would provide the attacker with 864,000 new accounts per day if they were attacking a registration interface."

@goatsarah
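The arithmetic in that quote is easy to sanity-check. A minimal sketch, assuming (as the quote implies but does not state) one CAPTCHA attempt per thread per second:

```python
# Back-of-envelope check of the botnet numbers quoted above.
# Assumption (not in the quote): each thread makes one CAPTCHA attempt per second.
machines = 10_000
threads_per_machine = 10
attempts_per_second = machines * threads_per_machine   # 100,000 attempts/s

# A 0.01% success rate is one success per 10,000 attempts.
successes_per_second = attempts_per_second // 10_000
successes_per_day = successes_per_second * 86_400      # seconds in a day

print(successes_per_second)  # 10
print(successes_per_day)     # 864000
```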

in reply to Sarah Brown

Though there was a recent paper showing AI is faster and better than humans at solving CAPTCHAs, so I'm not sure what the point of them is anymore.
in reply to Sarah Brown

The worst part is the computer then marking it incorrect when the human says that a sign with an image of a traffic light is not actually a traffic light.

If that AI is going to be driving cars, I'll pass.

in reply to Sarah Brown

At intersections, automated vehicles could continue between crossing traffic without stopping, all vehicles being aware of the others and keeping the necessary distance. No lights needed.
in reply to Sarah Brown

I know this is a joke, but given that your responses suggest you really believe it, I should point out that the act of identifying the traffic lights (or bridges, or motorcycles, or boats, or whatever) is not in itself sufficient to identify you to the recaptcha as human; it's the way in which you select the items, the movement of the cursor, the time taken to read the page, etc.
in reply to Sarah Brown

Nice joke!! I actually learned recently why captcha/recaptcha got so simple: it's because it's not the result that matters 😂 It's the mouse path 🐁
in reply to Sarah Brown

Why do you think "techbros" are a homogenous amorphous mass with a single will? Do you even have a working definition of what a "techbro" is? This sounds like classical prejudice to me - invent a group, then ascribe all opinions by one member to the group as a whole.
in reply to Stephan Schulz

@Stephan Schulz I wrote the software that built the chip that powers your phone. Sit down.
in reply to Sarah Brown

Trying to pull seniority on someone you don't know is not really a power move. Nor does it advance the discussion.
in reply to Stephan Schulz

@Stephan Schulz "Nor does it advance the discussion", he said, nasally.

Nice fedora. Did your mum get it for you?

in reply to Sarah Brown

Whenever I encounter a captcha like that, I imagine that somewhere, at this very moment, there's a poor little self-driving car needing my assistance.
in reply to Sarah Brown

With enough people solving captchas constantly, we'll be able to use them to help cars solve ethical problems on the fly, like "should I swerve to avoid the small child and hit the old lady?"