Showing posts with label self-driving.

Tuesday, September 12, 2017

Read the Damn Manual, All 700+ Pages

As a bureaucrat who started his career editing ASCS manuals, I'm a bit more friendly to the idea of reading manuals than the average bear.  The things we use in our lives often come with manuals, manuals I don't routinely read.  Yes, when the clothes dryer goes out or I'm doing something new with the microwave I may consult the manual, but I don't sit down to read them cover to cover.

The same rule applies for cars.  The manual's in the glove compartment, and I'll check it for problems.  But today I'm changing my rules.

The background: as I age my driving ability is declining.  I'm more easily distracted, more easily confused when driving in unfamiliar territory, and less quick to react.  I miss pedestrians and approaching cars at intersections.  And the future looks worse, not better.  Like most people I'd hate to give up my control and freedom by abandoning the car and switching to public transportation, even though the options in Reston are very good.

With safety options multiplying rapidly as we get closer to the self-driving car, what seems to make sense to me is switching to a short-term leased car.  That way I can get the advantage of the new features and still have the flexibility to upgrade to a newer car in a couple years, assuming I'm still competent as a driver when that day arrives.

So, I'm looking at a Prius with all the safety options.  But it's a big leap from 2006 to 2017, so I'm looking at the manual.  Indeed, for the first time I'm reading the Prius manual from the beginning.

But the damn thing is 700 pages.  (As a measure of the changes, I think the manual for my current car is about 200 pages.)  700 pages.

Thursday, August 24, 2017

Lesson for the Week

"Always remember: driverless cars don’t have to be perfect. They just have to be better than cars driven by humans. As anyone who drives is aware, that’s sort of a low bar these days."

From Kevin Drum

Sunday, March 05, 2017

CrowdSourcing the Self-Driving Car

The NYTimes had an article on the problems of creating the very detailed maps needed by self-driving cars, which led into a description of how crowdsourcing might solve the problem.

The idea is simple: have the equipment in each self-driving car update the imagery in the database that guides all self-driving cars.  To me it's a similar idea to my bottom-up car, or trainable car: the data from traversing a route at time A is available to be used to help traverse the same route at time B.
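
To make the idea concrete, here's a toy sketch in Python.  Every name in it (SharedMap, MapTile, Observation) is my invention for illustration, not anything the Times or the carmakers describe:

```python
# A toy sketch of crowdsourced map maintenance. All names here
# (SharedMap, MapTile, Observation) are invented, not any vendor's API.
from dataclasses import dataclass, field

@dataclass
class MapTile:
    tile_id: str
    features: dict = field(default_factory=dict)  # feature_id -> description
    version: int = 0

@dataclass
class Observation:
    tile_id: str
    feature_id: str
    description: str  # what this car's sensors actually saw

class SharedMap:
    """The database that guides every car; each car both reads and updates it."""
    def __init__(self):
        self.tiles: dict[str, MapTile] = {}

    def merge(self, obs: Observation) -> None:
        # A car traversing the route at time A updates the tile;
        # a car traversing it at time B benefits from the update.
        tile = self.tiles.setdefault(obs.tile_id, MapTile(obs.tile_id))
        if tile.features.get(obs.feature_id) != obs.description:
            tile.features[obs.feature_id] = obs.description
            tile.version += 1

shared = SharedMap()
shared.merge(Observation("reston-parkway-tile", "lane-3", "closed for paving"))
print(shared.tiles["reston-parkway-tile"].version)  # 1
```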

Wednesday, February 15, 2017

NBC Has the News Backwards

The headline on this piece is:  "Self-Driving Cars Will Create Organ Shortage — Can Science Meet Demand?"

That seems to me to be backwards--surely the most important thing about self-driving cars will be the lives they save, not the lives they might cost: fewer accidents mean fewer deaths, which means fewer organs available for transplant.


Wednesday, January 25, 2017

The Last Mile Versus the Last 1 Percent

The old saw (Pareto) says 80 percent of the cases can be handled with 20 percent of the effort.  An extrapolation: self-driving cars can handle 80 percent of the driving very easily, but it's the last 5 percent, especially the last 1 percent, that is difficult.  I find that rather like the old "last mile" problem in cable: easy enough to move data across the country in a flash, but getting it the last mile was difficult.

Nissan has an answer; whether it's workable remains to be seen.  They're using a telecenter to handle the unexpected problems (like an emergency road crew patching potholes or something).
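
Something like this handoff, I imagine.  A toy sketch, with invented names and situations, not Nissan's actual system:

```python
# Toy sketch of the telecenter fallback: the car handles what it knows,
# and escalates the unexpected (a road crew, an obstruction) to a remote
# human operator. Names and situations are invented for illustration.

KNOWN_SITUATIONS = {"clear road", "stopped traffic", "red light"}

def ask_telecenter(situation: str) -> str:
    # Stand-in for a call out to a remote operator's console.
    return f"telecenter operator routes car around: {situation}"

def drive_step(situation: str) -> str:
    if situation in KNOWN_SITUATIONS:
        return f"autonomy handles: {situation}"
    # The difficult last 1 percent: stop safely and ask a human.
    return ask_telecenter(situation)

for s in ["clear road", "road crew patching potholes"]:
    print(drive_step(s))
```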

Monday, January 09, 2017

Driverless Car Showdown--Waymo and Mobileye

Mobileye is taking the learning approach, as described here. I've blogged before about the advantages of this approach.  But Alphabet (Google) has spun off its driverless car enterprise into Waymo, which announced this week it would have Chrysler minivans outfitted with its technology on the road by the end of the month.  Waymo isn't building its own cars anymore; instead it's providing a package of sensors, computers, and software to be added onto existing cars.  As best I can tell, Waymo is still taking the top-down approach, presumably taking advantage of Google Maps data and expertise.

The competition between the two approaches will be interesting.

Saturday, July 23, 2016

The Trainable Car

Damn, sometimes I'm good!

A while back I blogged about the virtues of a bottom-up approach to an autonomous vehicle.

The other day I saw this piece in Technology Review about a company working on such a car.
Oxbotica’s software gradually acquires data about the routes along which a vehicle is driven and learns how to react by analyzing the way its human driver acts. “When you buy your autonomous car and drive off the (lot), it will know nothing,” says Ingmar Posner, an associate professor at Oxford and another of Oxbotica’s cofounders. “But at some point it will decide that it knows where it is, that its perception system has been trained by the way you’ve been driving, and it can then offer autonomy.”

The result is a vehicle that can gain a deep understanding of the routes it drives regularly. That, Posner says, means that the software isn’t simply trying to do a mediocre job wherever it’s placed—instead, it does an excellent job where it’s learned to drive.
The objection, of course, is that this works only for repetitive drives over the same route(s).  My answer is that I'll bet most driving fits the 80/20 rule: 80 percent of time spent driving is on a route you've driven many times before.  People are creatures of habit, mostly, and that means we can train our cars.

Friday, May 27, 2016

Autonomous Vehicle: Top Down or Bottom Up? Trainable Cars

I've posted several times on "self-driving" cars, also known as autonomous vehicles or driverless cars.  If I understand correctly, Google and perhaps some others are taking a top-down approach, which seems to involve extensive mapping of roads, signs, etc., feeding the database to the car, and letting the car do its work.  That seems a little reminiscent of some old efforts to teach computers language by inputting vocabulary, grammar rules, and the like.  Something similar also seems to have happened with robots.

It strikes me that a bottom-up approach might be more quickly usable, or call it a car with a memory. It's the same principle as teaching robots, learning by doing.

Assume a car with the ability to follow a route, avoiding other vehicles and humans, and with a memory, a trainable car.  Suppose I want my trainable car to take me to the grocery store and back.  I or another driver jumps in the car and drives it to the store, with the car storing the route and the environment of the route in its memory.  Perhaps we repeat the process several times, until the car is satisfied it knows the route.  Then I can get in the car, tell it to take me to the store, and it will do so (or tell me the conditions have changed so it can't).

You may ask: what use is that?  I need a car for more than going to the store.  Good point, but my guess is that most driving is done on repetitive routes: that 80 percent of driving is done on 20 percent of routes.  My own percentage is much higher than that.  So a trainable car could be rented for such repetitive routes (remember, once one trainable car learns the route, the data can be copied to all others).  So Zipcar could train a car to drive to my house, and I could train it to drive to the store, etc.
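
Here's a toy sketch of what I have in mind.  The names and the three-trips confidence rule are all invented for illustration, not anyone's actual system:

```python
# Sketch of a trainable car: record a route while a human drives it,
# offer autonomy only once the route has been seen often enough, and
# let one car's learned routes be copied to the rest of the fleet.

CONFIDENCE_THRESHOLD = 3  # traversals before the car trusts a route

class TrainableCar:
    def __init__(self):
        self.routes: dict[str, int] = {}  # route name -> times driven

    def record_drive(self, route: str) -> None:
        """Called while a human drives; the car watches and remembers."""
        self.routes[route] = self.routes.get(route, 0) + 1

    def can_self_drive(self, route: str) -> bool:
        return self.routes.get(route, 0) >= CONFIDENCE_THRESHOLD

    def copy_routes_from(self, other: "TrainableCar") -> None:
        """Once one car learns a route, the data can go to all others."""
        for route, count in other.routes.items():
            self.routes[route] = max(self.routes.get(route, 0), count)

mine = TrainableCar()
for _ in range(3):
    mine.record_drive("home -> grocery store")
print(mine.can_self_drive("home -> grocery store"))  # True

rental = TrainableCar()
rental.copy_routes_from(mine)
print(rental.can_self_drive("home -> grocery store"))  # True
```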

There are many people who because of age, inebriation, disability, poverty, etc. do not and cannot drive.  I saw a couple women outside the grocery store the other day, waiting with their groceries for a cab to pick them up, too poor to be able to afford owning a car.  For these people a trainable car would be valuable.

For drivers the trainable car would also work: the 80 percent of driving on routine routes, commuting to work and the like, could be handled by the car, allowing the "driver" to be on the cellphone and making the roads safer for everyone.

Lastly, and perhaps most important, data on roads and conditions flows up the organization: a trainable car can transmit updates to the manufacturer, which can then flow to the rest of the fleet.  I think that's important: in any structure, getting data going up is as important as getting it going down.



Saturday, April 23, 2016

Driverless Cars Revisited

Vox describes the detailed mapping Google and others will have to do to support their autonomous cars. And Brad Plumer describes five challenges: mapping; social interactions between the car and other people; bad weather; regulations; cybersecurity.

These innovations seem to be coming from different directions: cars which drive themselves on preregistered courses (there was a piece on an outfit in the Netherlands which produces upscale golf carts for such applications); cars with improvements, like today's safety features; and cars which are as independent of outside help as the old model of human-driven car (the Tesla model).  It's partly the old question: which is better, distributed intelligence or central guidance?  We shall see.

Tuesday, December 29, 2015

The Problem of Consciousness in Self-Driving Cars

Technology Review has an article, one of their best of 2015 and one which attracted a whole lot of comments, on why self-driving cars must be programmed to kill.  The starting point is the old philosophical dilemma: in a choice between killing one and killing many, which is the right choice?  Do you push the fat man onto the railroad tracks to derail a train bearing down on a stopped school bus, or whatever? Does a self-driving car go off the road and over the cliff to avoid killing people in the road, if it kills the driver?

It strikes me as a problem only for a self-driving car which is conscious.  What do I mean? A computer processes one bit of information at a time; it's sequential.  The philosophical dilemma is one of consciousness: because humans are conscious we know, or think we know, things simultaneously: both the fat man and the school bus and the possible different courses of action.

But how would a computer know those things?  Say it's driving a car which rounds a curve on a mountain road.  Maybe it knows there's no shoulder on the side, just guard rails, which it will try to avoid. At some point it starts to see something in the road. It starts braking immediately.  It doesn't take the time to distinguish between live people and dead rocks; it just does its best to stop, perhaps being willing to hit the guard rail a glancing blow.  Presumably its best is a hell of a lot better than a human's: its perception is sharper, its decision making quicker, its initial speed perhaps slower.  I suspect the end result will be better than either of the alternatives posed in the philosophy class.
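
If I were to sketch the loop I have in mind (everything here invented, and far simpler than real control software), it would look like this: react on detection, refine on classification, never pause for the moral dilemma.

```python
# Sketch of the argument: the car's loop reacts to "something in the road"
# immediately and never waits to decide fat man vs. school bus vs. rock.
# Names and maneuvers are invented for illustration.

def control_step(obstacle_detected: bool, classified_as: str | None) -> str:
    if obstacle_detected:
        # Brake first; classification, if it ever finishes, comes later
        # and only refines the maneuver (e.g., accept a glancing blow
        # on the guard rail rather than leaving the road).
        if classified_as is None:
            return "brake hard, steer along guard rail if needed"
        return f"brake hard, maneuver adjusted for: {classified_as}"
    return "continue at safe speed"

print(control_step(True, None))         # reacts before knowing what it is
print(control_step(True, "rockslide"))  # a refinement, not a moral choice
```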

The self-driving car is going to be optimized for its capacities, which don't include consciousness.


Saturday, October 24, 2015

The Importance of Knowing What You Don't Know

One of the few lessons I learned at work is the importance of knowing what you don't know.  I remember assuring the state specialist for Arkansas of an answer I wasn't really sure of.  Naturally I was wrong, and the answer turned up in an OIG report.

Seems to me the same issue is cropping up with self-driving cars, as witness this Technology Review article on problems with the new Tesla software/hardware.  Apparently Google is trying to handle all situations, but the problem drivers are having with the Tesla is not knowing when the system is approaching the limit of its capability, i.e., not knowing what the Tesla doesn't know or isn't sure of.
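
A sketch of the difference I mean: a system that reports its own confidence and warns the driver before it reaches its limits, rather than failing silently.  The thresholds and names are invented, not Tesla's or anyone's:

```python
# Sketch of "knowing what you don't know": the system reports its own
# confidence and warns the human *before* it reaches the limit of its
# capability, instead of failing silently. Thresholds are invented.

HANDOVER_THRESHOLD = 0.6   # below this, tell the human to take over
WARNING_THRESHOLD = 0.75   # below this, warn that limits are near

def assess(confidence: float) -> str:
    if confidence < HANDOVER_THRESHOLD:
        return "take over now: system is outside what it knows"
    if confidence < WARNING_THRESHOLD:
        return "warning: approaching the limit of the system's capability"
    return "system confident: autonomy engaged"

for c in (0.9, 0.7, 0.4):
    print(f"{c:.1f}: {assess(c)}")
```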

Friday, June 12, 2015

The Elderly and Self-Driving Cars

Vox has a piece on the problems of the elderly who must give up driving. As someone who's nearing that time more rapidly than I'd like, I like it all, especially as I've endorsed self-driving cars (see the label), though there's more to the piece than just that.

And here's a Technology Review discussion of such cars.

There is one problem I can see with such cars.  Since we know that a human is driving the other car we see on the road, we can assume the car will behave in certain ways. It's likely that early on self-driving cars won't.  An example: a cardboard box falls off a truck; from the way it falls and bounces a human will assume it's empty.  A self-driving car may have to assume it's full and to be avoided, possibly by an emergency stop, which the human driving the car behind won't anticipate.  But such problems aren't show-stoppers.
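
A toy sketch of the mismatch, with everything invented for illustration:

```python
# The cardboard-box problem: a human infers "empty" from how the box
# bounces; a cautious early self-driving car assumes the worst case and
# stops, surprising the human in the car behind. All values invented.

def human_reaction(box_bounces_lightly: bool) -> str:
    # Humans use priors: a light bounce means an empty box.
    return "drive through it" if box_bounces_lightly else "avoid it"

def early_av_reaction(box_bounces_lightly: bool) -> str:
    # Without those priors, assume it's full, however it bounces.
    return "emergency stop"

print(human_reaction(True))     # drive through it
print(early_av_reaction(True))  # emergency stop -- unanticipated behind
```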

Wednesday, May 14, 2014

Priorities for a Self-Driving Car

Sounds like Google has its priorities right--they demoed their self-driving cars again.
"Acknowledging that freeway driving is a positive step toward safer driving, Christopher Urmson, a former Carnegie Mellon University computer scientist who now heads the project, was clear in saying it would not have impact equivalent to a robot car that could safely move the elderly from one location to another."

No accidents in 700,000 miles of driving sounds good to me.  I used to think of myself as a good driver, slightly above average and somewhat more cautious than most, but I've had three accidents in my life. Haven't driven near 700,000 miles, maybe 250,000?

Tuesday, January 22, 2013

We Once Had Self-Driving Transport

This is inspired by a post at Freakonomics, which discussed trains.

In my case, I'm referring to the horse and buggy.  It's true horses don't require nearly the amount of close attention that cars do.  My mother would remember driving into Binghamton with a load of cabbage and potatoes, spending the day, and allowing the team to find their way home that night.

I'm enthusiastic about the idea of Google's (and others') self-driving cars, especially important given my declining abilities as I age, but I'm not ready to go back to horses.
