General Self Driving Car Blog 29 August 2019
Having an interest in picture recognition and artificial intelligence, I was somewhat amazed by a video I watched of a person driving a Tesla in self-drive mode into a line of cones. The cones forced the car from the left lane into the right lane, and were then meant to force the car back into the left lane.
What astonished me was that the driver said they were impressed when the car avoided the cones on the left of the car, but that they had to take over the steering because the car could not avoid the cones on its right side that were forcing it back into the left-hand lane.
What caught my attention, as a person who has been involved in picture recognition for most of his life, was that the car could obviously identify the cones on the left-hand side of the vehicle, but could not understand that the cones on the right-hand side were forcing it back into the left-hand lane (this being on UK roads).
So was this a vision issue? We must assume the answer is no, since the car could identify the cones on the left side and moved to avoid them. We must then ask whether this was an artificial intelligence issue. Again we would think not, because if the car could see and avoid the cones on the left-hand side, surely it can see and avoid the cones on the right-hand side.
Whatever the issue was, the car did not avoid the cones. It could be something like the artificial intelligence system not having learned about cones on the right side of the car, but that does not seem feasible. From an outsider's point of view, I am now starting to feel very concerned. If the car did not avoid the cones because it did not recognise them, that is bad; if it did not avoid them because of a problem in the intelligent system, that is also bad.
One of the biggest problems with any machine learning system using picture recognition, or any other sensor such as radar, is that if the vehicle does not see the object then it ignores it, and with cars the big problem is that the vehicle carries on at the same speed. If a person crosses a road and your vehicle is doing 30 miles per hour, even if you hit them your speed will have reduced, because you will instinctively brake. So hitting someone at 30 miles per hour, or with braking at 10 miles an hour, can be the difference between serious injury and a few bruises.
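That 30-down-to-10 figure can be sanity-checked with the standard constant-deceleration formula v² = v₀² − 2ad; the deceleration and distance values below are my own illustrative assumptions, not measured figures:

```python
import math

MPH_TO_MS = 0.44704  # miles per hour to metres per second

def impact_speed_mph(initial_mph, decel_ms2, braking_distance_m):
    """Speed at impact after braking over a given distance (v^2 = v0^2 - 2ad)."""
    v0 = initial_mph * MPH_TO_MS
    v_squared = v0 * v0 - 2.0 * decel_ms2 * braking_distance_m
    return math.sqrt(max(0.0, v_squared)) / MPH_TO_MS

# From 30 mph, hard braking at roughly 0.8 g (7.8 m/s^2),
# with about 10 m of road before the pedestrian:
print(round(impact_speed_mph(30, 7.8, 10.0), 1))  # about 10.9 mph
```

So even a single second or so of instinctive braking takes the impact speed from 30 mph down to roughly the 10 mph region, which is exactly why a system that simply does not react is so dangerous.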
But put simply, there is a major issue going on here that really needs to be addressed. What I really do not understand is that these cars use not only picture recognition but also radar. My open question is why you should altogether trust any system, especially systems that will always have fundamental flaws. Picture recognition is not 100% accurate, far from it. But my question is: why are companies like Tesla, or any other car company, not implementing some initial fixed rules?
The rules below are just an example.

Rule 1: If there is something less than 10 feet directly in front of my car, then brake to stop within 9 feet of that object. It should not matter what that object is; if it is bigger than a football, then brake and stop before hitting it. Any sensor that can judge distance can trigger this rule.

Rule 2: If the object in front is closing on your car faster than your car is going, then reduce your car's speed to a level that keeps a safe distance between you and that vehicle. Any sensor that can judge distance can trigger this rule.

Rule 3: If any object on the left-hand side of the car comes within 2 feet of the left side of the car, move more to the right and slow down. This rule should still apply if the artificial intelligence system's steering fails for whatever reason.

Rule 4: The same as rule 3, but for the right-hand side of the car.

Rule 5: If objects on both the left and right sides close to within 2 feet, stop the car.

Rule 6: If rules 1, 2, 3, 4 and 5 do not come into play, then use the artificial intelligence system to make all other, less important decisions, using neural networks and the vision system to provide speed and lane control, and spatial control when there are no lane lines. Any sensor mix that the artificial intelligence system can use should control the vehicle.
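A minimal sketch of how such a fixed-rule layer might sit above the AI system. All thresholds, field names and action names here are my own illustrative assumptions, not anything Tesla or any other manufacturer actually implements:

```python
from dataclasses import dataclass

@dataclass
class Sensors:
    """Hypothetical sensor readings: distances in feet, closing speed in mph."""
    front_distance_ft: float
    front_closing_mph: float   # positive means the object ahead is getting closer
    left_clearance_ft: float
    right_clearance_ft: float

def fixed_rules(s: Sensors) -> str:
    """Priority-ordered fixed safety rules; the AI only acts if none fire."""
    # Rule 1: object within 10 ft directly ahead, whatever it is -> stop.
    if s.front_distance_ft < 10.0:
        return "BRAKE_TO_STOP"
    # Rule 2: object ahead closing on us -> slow down to keep a safe gap.
    if s.front_closing_mph > 0.0:
        return "REDUCE_SPEED"
    # Rule 5: boxed in on both sides -> stop the car.
    # (Checked before rules 3 and 4 so that steering sideways is not
    # attempted when there is no room on either side.)
    if s.left_clearance_ft < 2.0 and s.right_clearance_ft < 2.0:
        return "STOP"
    # Rule 3: object within 2 ft on the left -> move right and slow down.
    if s.left_clearance_ft < 2.0:
        return "STEER_RIGHT_AND_SLOW"
    # Rule 4: same as rule 3 but for the right side -> move left and slow down.
    if s.right_clearance_ft < 2.0:
        return "STEER_LEFT_AND_SLOW"
    # Rule 6: no safety rule fired -> hand control to the AI system.
    return "AI_CONTROL"
```

For example, `fixed_rules(Sensors(8.0, 0.0, 6.0, 6.0))` would return `"BRAKE_TO_STOP"` regardless of what the object ahead is, which is the whole point: the rules fire on distance alone, with no recognition needed.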
Rules 1 to 5 are last-resort rules; the driver will still take control if they are unhappy with the road position and so on. This driver intervention can still be used to train the artificial intelligence system, and even when one of rules 1 to 5 comes into play, that event can be used to train the artificial intelligence system too.
The argument that the artificial intelligence system needs to identify everything before making a decision is a very bad one. If it can identify everything, that is very good, but if it cannot, it should just say "there is an object, I do not know what it is", and act on it anyway.