Balancing the benefits of artificial intelligence to prevent future pitfalls
Ethics surrounding artificial intelligence and the use of Big Data were among the topics discussed at the GRC Summit in London this week.
Organizations looking to implement data-driven tools and leverage the benefits of artificial intelligence (AI) must first understand the risks that these technologies can pose.
That was the consensus of a panel of industry stakeholders, who cited transparency, standards, and explainability as key factors for businesses to consider when building AI products.
The panel, which took place in London on Monday (November 18), included Laura Turner of the UN’s World Food Programme, and Anna Felländer, co-founder of the AI Sustainability Center.
“The reason why ethics is exploding is because AI is different from other data-driven technologies; AI moves faster,” Felländer said.
“There’s no transparency and a lack of explainability [in] models.”
Machine learning (ML), in which algorithms trained on data automate tasks, and AI, in which machines learn to mimic human behavior, are increasingly seen in the marketplace amid confusion over their actual capabilities.
According to one survey reported by The Verge in March, 40% of European startups are misusing the term AI in their products, a practice that can lead to more funding from investors and a less effective experience for consumers.
In the security sector, where ML and AI have the potential to identify cyber-threats far faster and more accurately than human analysts can, a quarter of organizations told the Ponemon Institute that they had been using some form of the technologies in their defense solutions.
“Organizations [apply] this seductive technology [to] their business models and push down costs, nudging their customers to behaviors that could be unethical,” Felländer said.
Regardless of the amount of snake oil out there, the volume of data now available provides the algorithmic maturity needed to build genuine products and services, and it has equally made the ethics surrounding ML and AI a more pressing concern.
Governments worldwide, most notably in the European Union, have even called for regulation of AI so as to prevent potential societal harms such as bias in algorithmic decision-making, violations of user privacy, and the dangers posed by offensive cyber capabilities.
“When we enter AI you get no control of it,” Felländer said.
“It’s about having a goal to market readiness in your AI applications, so you don’t lead to the [possible] pitfalls, and making sure your values are sustained.”
Sir Nigel Shadbolt, chairman and co-founder of the Open Data Institute, who closed the first day of the conference, agreed that greater literacy and communication were needed, not only around ML and AI, but also around the wider data ecosystem.
“We’re seeing people starting to really worry about the idea of the balance, the interests that they have, the rights that they have, in this data, is somehow way out of whack,” he said.
“It’s not about owning the data, but having some agency on what’s being done with it.”