In 10 years, every Bosch product will either contain machine learning and artificial intelligence (AI), or AI will have been involved in its development. This statement was repeated over the course of the day at the inaugural Bosch AI CON18, which took place in Renningen, Germany.
The conference brought together some of the world’s smartest minds in AI from academia, industry players like Porsche and Graphcore, and of course the minds at Bosch. The presentations throughout the day provided a balance of ethical debate around how we move forward as an industry and research sessions that were equation heavy and targeted at academics.
The truth is that Bosch isn’t wrong; AI will touch everything. But it is a stretch to say that AI will be in everything: there are products where AI doesn’t make sense, or where we simply wouldn’t want it. Bosch is right, however, that the process behind the creation of everything has the potential to be touched by AI. It can be little things in the manufacturing process, for example. Quality assurance is an easy win for AI: today, many products are given a once-over with a camera to look for defects. Of course, Bosch has larger ambitions when it seeks to have AI touch everything it does.
To give a bit of a refresher, AI involves machines that can perform tasks characteristic of human intelligence. While this is rather general, it includes things like planning, understanding language, recognizing objects and sounds, learning, and problem solving.
At its core, machine learning is simply a way of achieving AI.
While it isn’t so easy to see how AI could make all products useful, it’s easy to see how AI could optimize their invention or manufacturing.
As I walked into AI CON18, I was greeted with many different examples of machine learning and AI.
Bosch showed a self-driving car running a demo of semantic segmentation. The car is capable of Level 2 autonomy with a single camera. Level 2 is by no means impressive on its own, but achieving it with a single camera is interesting: the system performs the same tasks on the road with 50 times fewer calculations, which reduces both the response time and the computing power needed in the car.
“Semantic segmentation” was a new buzzword for me. It’s worth unpacking before we move on because it is considered to be one of the key problems in computer vision.
Semantic segmentation achieves fine-grained inference by making dense predictions: the model infers a label for every pixel, so that each pixel is tagged with the class of its enclosing object or region.
If you look at the photo below, semantic segmentation has allowed the computer to determine that the buildings are all in red, the road in purple, the lane markings in yellow, and so on. Where the Bosch demo gets interesting is its ability to do more, safely and accurately, with less information.
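To make the idea concrete, here is a minimal sketch of the last step of any segmentation pipeline, not Bosch's implementation: a network outputs a score per class for every pixel, the per-pixel label is simply the highest-scoring class, and a color palette turns those labels into an overlay like the one in the photo. The class names and colors below are illustrative assumptions.

```python
import numpy as np

# A segmentation network produces a (num_classes, H, W) score map:
# one score per class for every pixel. Here we fake one for a tiny
# 2x3 image with 3 classes -- illustrative only.
num_classes, H, W = 3, 2, 3
rng = np.random.default_rng(0)
scores = rng.normal(size=(num_classes, H, W))

# The per-pixel label is the class with the highest score.
labels = scores.argmax(axis=0)          # shape (H, W), values in {0, 1, 2}

# A palette maps each class to a display color, as in the demo image.
palette = np.array([[128, 0, 128],      # class 0: purple (road)
                    [255, 0, 0],        # class 1: red (building)
                    [255, 255, 0]])     # class 2: yellow (lane marking)
overlay = palette[labels]               # shape (H, W, 3) RGB image

print(labels.shape, overlay.shape)
```

Real systems replace the fake score map with the output of a trained convolutional network, but the argmax-then-colorize step is the same.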
They’re demonstrating what is possible with existing camera technology when it is paired with their platform’s AI capabilities.
Carrying on through the show floor, there was an AI creating art in real time. SoundSee, a listening array, is heading to the International Space Station next year. A pair of Raspberry Pi-enabled cars drove around a track, showcasing a course aimed at teaching machine learning to people of all skill levels. And there was the Indego S+, an intelligent robot lawn mower.
Each of these products has machine learning and AI at its core, but they are also fundamentally different from each other in their functions and abilities.
Indego S+ shows off some real robotic smarts
Robot lawn mowers are an easy way to understand how AI can make robots smarter. “SmartMowing” analyzes temperature and rainfall to automatically calculate the best time for the next cut. Autonomous lawn care is made more efficient with “MultiArea”, which lets you mow different areas around your home, and “SpotMow”, which lets you target small, specific areas.
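Bosch hasn't published how SmartMowing weighs the weather, but a toy sketch shows the shape of the problem: score each forecast day on temperature (growth) and rainfall (a wet lawn is bad to mow), then pick the best day. The scoring rule and numbers here are invented for illustration only.

```python
from datetime import date, timedelta

def mow_score(temp_c: float, rain_mm: float) -> float:
    """Toy rule: grass grows fastest in mild weather, but mowing
    right after heavy rain is undesirable. Purely illustrative."""
    growth = max(0.0, 1.0 - abs(temp_c - 20.0) / 20.0)  # peaks near 20 C
    wetness_penalty = min(1.0, rain_mm / 10.0)          # soaked lawn -> skip
    return growth * (1.0 - wetness_penalty)

def next_cut(forecast):
    """Pick the best day from (date, temp_c, rain_mm) tuples."""
    return max(forecast, key=lambda d: mow_score(d[1], d[2]))[0]

today = date(2018, 11, 1)
conditions = [(8, 0), (19, 1), (21, 12), (22, 0)]   # (temp_c, rain_mm)
forecast = [(today + timedelta(days=i), t, r)
            for i, (t, r) in enumerate(conditions)]
print(next_cut(forecast))   # picks the mild, dry day
```

The real product presumably learns from local conditions over time rather than using a fixed hand-written rule, but the idea of turning weather data into a schedule is the same.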
The intelligence behind the camera technology has a completely different set of problems to tackle than that of an autonomous car. Yet it’s easy to understand how advances in the AI behind them could benefit anything with a camera that needs to move through our world.
AI CON18 tackled the question of how to move forward: not only the ethical questions that challenge AI, but also the practical question of how to bridge the gap between academia, startups, and a company like Bosch.
Michael Bolle, CTO & CDO at Bosch, held a session entitled Corporate Research Meets Academic Research, where he made the point that solving the challenges of AI won’t happen through companies alone, nor would we want it to. An open dialogue that involves the sharing of information is key to meaningful development.
The message of openness and transparency was consistent among Bosch’s speakers throughout the day. The day kicked off with a statement from Bosch CEO Volkmar Denner:
“AI should only act on the framework set by society. We believe the AI should be safe, reliable & transparent.”
Bosch plans to hold AI CON again next year; for more information, please visit Bosch-AI.com.
It is worth noting that at AI CON 2019, Bosch will present the Bosch AI Young Researcher Award, endowed with 50,000 euros, for the first time. Its aim is to promote young talent in this field.
Bosch sponsored my attendance at AI CON18 and this post about my experiences as an attendee. All ideas are still my own.