As businesses race to adopt artificial intelligence (AI), their ability to use it ethically, in ways that generate trust from customers, partners, and the public, will become a competitive differentiator. At Bosch Connected World 2020, Bosch announced its AI Code of Ethics.
Gaining the trust of customers and consumers is hard work. Ensuring that AI is grounded in an ethical framework tightly bound to core values is key to building that trust. Bosch has released a set of guidelines that are accessible and explainable. Notably, Bosch has been part of the EU's AI discussion from the start and participated in pilot programs to create AI guidelines.
At a glance, here are the guidelines of Bosch's AI Code of Ethics:
1. All Bosch AI products should reflect our “Invented for life” ethos, which combines a quest for innovation with a sense of social responsibility.
2. AI decisions that affect people should not be made without a human arbiter. Instead, AI should be a tool for people.
3. We want to develop safe, robust, and explainable AI products.
4. Trust is one of our company’s fundamental values. We want to develop trustworthy AI products.
5. When developing AI products, we observe legal requirements and orient ourselves to ethical principles.
Read the full guidelines here.
Why do companies need an AI Code of Ethics?
Three-fourths of consumers today say they won’t buy from unethical companies, while 86% say they’re more loyal to ethical companies, according to the 2019 Edelman Trust Barometer. In Salesforce’s recent Ethical Leadership and Business survey, 93% of consumers say companies have a responsibility to positively impact society. Businesses are being held more accountable than ever for what they do and how they behave.
Other reasons are less obvious but just as important. AI is forcing conversations about corporate trust and ethical use because it holds up a mirror to human behavior; it amplifies preconceptions and biases that can adversely influence business decisions.
We need these conversations because using AI without an ethical framework isn’t like making a single mistake. If training data is biased, mistakes will be compounded as the algorithms continue to “learn” from flawed data, and the potential for repeatable offenses is greater—such as automated decisions and predictions that could affect a person’s chance of getting a loan or could fail to diagnose illness or disease. Companies must consider the interplay of AI, trust, and culture. These factors affect each other and are critical to developing an ethical framework for AI.
Bosch opened its #BWC20 press conference by acknowledging that climate change is real and reminding us that its German operations already went CO2-neutral in 2019. Bosch is working to make the black box of AI ethics transparent and part of its corporate value system. With the EU announcing its guidelines later today, Bosch won't be the last company we hear of releasing an AI Code of Ethics.