You are viewing a single comment's thread from:

RE: Autonomous (Self-driving) Vehicles...WHY!? 😖😔

in #news • 7 years ago

it's great to have innovations, but they'll never replace the irreplaceable human brain.

That the brain is "irreplaceable" is an assumption — one that many leaders in fields like autonomous vehicles and artificial intelligence would vehemently dispute.

The biggest incentive for automation technology is to minimize the range and frequency of errors in any skill that humans and machines can both perform. We are achieving that, for the most part, across a broad range of these technologies, as can be, and has been, substantiated by sound statistical analysis that rules out chance and bias.

Whether autonomous vehicles currently fall within that category, I'm not sure, but I'm almost certain they will, if they haven't already, as we improve and fine-tune the technology to minimize the severity and frequency of "accidents".


Uhh...somebody was just killed by a malfunctioning AV. I'm not sure that initiatives in "minimizing the severity and frequency" of "accidents" -- and I'm confused why you put quotation marks around accidents -- would be any comfort to the grieving family. Clearly, this is an unacceptable error range, and AV technology has a LONG way to go before it can even come close to replacing the human brain.

I put quotes because not everyone agrees on what counts as an "accident", especially when it comes to rating the safety of AVs. It would probably be more accurate to say "error": any time the vehicle behaves in an unintended way (such as crossing a lane border without intending to change lanes) should be of concern, since these are obvious signs that the potential for fatal errors is there. You could argue for docking the same number of points any time an AV misbehaves, because whether a human or another car happens to be there when these instances occur comes down to chance.

As with anything in the objective world we all share, nothing will ever be perfect. There will never be a time when cars can be driven, or drive themselves, with a 0% chance of error or accident, and it's unrealistic to accept nothing short of a perfect track record. What is realistic is aiming for some multiple of the safety record humans have achieved over a large span of driving (random sampling with a large "n" and all that) -- say, 5X less likely to be in an accident.
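The "some multiple safer than humans" comparison boils down to normalizing crash counts by miles driven and taking a ratio. Here is a minimal sketch of that arithmetic; the human baseline rate and the fleet numbers are invented purely for illustration, not real AV statistics.

```python
# Sketch of the rate comparison described above.
# All figures are hypothetical, chosen only to illustrate the math.

def crashes_per_100m_miles(crashes: int, miles: float) -> float:
    """Crash rate normalized per 100 million vehicle miles."""
    return crashes / (miles / 100_000_000)

# Assumed human baseline: 1.1 crashes per 100M miles (illustrative figure).
human_rate = 1.1

# Hypothetical AV fleet log: 40 crashes over 20 billion miles.
av_rate = crashes_per_100m_miles(40, 20_000_000_000)

improvement = human_rate / av_rate
print(f"AV rate: {av_rate:.2f} per 100M miles; {improvement:.1f}X safer than baseline")
```

With these made-up numbers the fleet comes out at 0.20 crashes per 100M miles, about 5.5X better than the assumed baseline -- and the large mileage ("n") is what makes such a ratio statistically meaningful at all.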

Of course, that multiple would have to be very high before most people accept that the writing is on the wall and even start to consider handing over control to AI. We can only guess what that number might be, but my guess is it's no lower than 10X. I also expect the numbers will eventually get much, much better than that -- factors of ten higher (100X, 1000X, etc.) -- as this tech makes major advancements in the future.
