Women In AI: IBM’s Lisa Amini Takes On AI Security And Reasoning

The rapidly evolving field of Artificial Intelligence (AI) has a rich history dating back to Alan Turing's seminal 1950 paper, "Computing Machinery and Intelligence." That paper introduced what is now known as the Turing test: a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Decades later, the AI market holds many opportunities and unanswered questions.

AI promises to improve products, services and jobs. The combination of vast volumes of data and readily available, cost-effective computing has enabled the AI market to grow rapidly over the past decade. Researchers and data scientists have made tremendous progress designing models and gaining new insights with machine learning (ML) and the nascent but promising subset of ML built on deep neural networks. Yet AI also presents a myriad of challenges, including selecting the right use cases for AI. Increasingly, companies also struggle with how to trust a decision made by a machine.

Recently I had the opportunity to speak with Dr. Lisa Amini, the Director of IBM Research Cambridge, about how her team's research is working to address AI's challenges. IBM Research Cambridge is also home to the MIT-IBM Watson AI Lab and is part of the IBM AI Horizons Network. Amini's roots in AI research run deep. She was previously Director of Knowledge & Reasoning Research in the Cognitive Computing group at IBM's TJ Watson Research Center in New York, and she is an IBM Distinguished Engineer. As the founding Director of IBM Research Ireland, Dr. Amini was the first woman to serve as Lab Director for a global (i.e., non-US) IBM Research lab, and she is a role model for women in STEM fields.

Dr. Amini's Cambridge-based team works closely with MIT on 60 joint projects through the MIT-IBM Watson AI Lab, which is directed by IBM's David Cox. Many of the lab's research projects focus on resolving key challenges with AI in the areas of reasoning, security, ethics, and explainability. While we discussed all of these areas, one that is frequently overlooked is security. Over 88 percent of the business leaders interviewed in a Lopez Research study said that securing corporate and customer data was one of the top three initiatives for their organization. Companies are spending millions to secure today's hardware and software stack, but what about AI?

Security: The Next Frontier

Models are at the heart of AI, but most of the energy in the space has focused on creating models, not securing them. Advanced machine learning models, such as deep neural networks, are vulnerable to malicious attacks just like PCs and applications. Attacks on AI models include poisoning, evasion, backdoor and model extraction attacks. AI models, like other computer programs, can be embedded with backdoors. However, backdoors in AI models are harder to detect because a neural network model consists only of a set of learned parameters, with no human-readable source code to inspect.

Other attacks focus on crafting malicious inputs that appear legitimate at test time (evasion) or on adding malicious data during the training or retraining of models (poisoning). In a poisoning attack, the attacker injects carefully crafted samples to contaminate the training data in a way that eventually impairs the normal functioning of the AI system. In an evasion attack, the attacker subtly changes an input to an image recognition model in a way that is imperceptible to a human but produces a different, and incorrect, outcome from the model. Researchers have demonstrated how such small changes to the data (called perturbations) can convince a model that a panda is a different animal. Other examples show how a perturbation can fool a model into classifying a stop sign as a speed limit sign. While it's easy to illustrate how these attacks affect image recognition, they can affect any kind of data.
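
To make the perturbation idea concrete, here is a minimal Python sketch of the widely cited fast gradient sign method (FGSM), one common way such imperceptible evasion inputs are generated. The model, input tensor x, and label y below are placeholders assumed for illustration; this is a generic textbook example, not a technique Amini's team described in our conversation.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    # Copy the input and track gradients with respect to its pixels.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        # Nudge each pixel slightly in the direction that most increases the loss.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep values in the valid image range
    return x_adv.detach()

# Usage sketch: the prediction often flips even though the change is invisible.
# x_adv = fgsm_perturb(model, x, y)
# print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))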

In model extraction, the adversary may have access only to a publicly facing piece of a model, such as an API, which they can query repeatedly to extract information about how the model works. Once attackers understand the model, they can do many things, such as train a substitute model, steal data from the model, or use the insight to launch poisoning attacks.
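
As a rough illustration of how extraction through a public API might proceed, here is a hedged Python sketch. The endpoint URL, its JSON response shape, and the feature dimensions are all hypothetical, invented only to show the general pattern of querying a black-box model and fitting a local substitute.

import numpy as np
import requests
from sklearn.linear_model import LogisticRegression

API_URL = "https://example.com/v1/predict"  # hypothetical public scoring endpoint

def query_victim(features):
    # Ask the victim model for its predicted label on one feature vector.
    resp = requests.post(API_URL, json={"features": features.tolist()})
    return resp.json()["label"]

# 1. Probe the victim with synthetic inputs and record its answers.
rng = np.random.default_rng(0)
X_probe = rng.uniform(-1.0, 1.0, size=(500, 10))       # 500 random 10-feature queries
y_probe = np.array([query_victim(x) for x in X_probe])

# 2. Fit a local substitute model that mimics the victim's decision boundary.
substitute = LogisticRegression(max_iter=1000).fit(X_probe, y_probe)

# 3. The substitute can now be inspected offline or used to craft inputs
#    (for example, adversarial examples) that often transfer back to the victim.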

All of these types of attacks speak to the need for a more robust way of assessing the potential vulnerability of an AI model. With such a potentially daunting challenge ahead of us, how do we even get started? Amini and the MIT-IBM team are working on a critical research project in the space called "Certified Robustness Evaluation of Neural Networks and of the Underlying Datasets". The goal of the project is to provide a certified robustness evaluation of models. It's designed to be an attack-agnostic robustness evaluation – applicable to both existing and unseen attacks. The research paper, presented at the ICLR 2018 Conference, proposes a new metric to evaluate the robustness of neural networks to adversarial attacks. This metric comes with theoretical guarantees and can be efficiently computed on large-scale neural networks. 

Models are tested when they are first created, but many aren't tested for security vulnerabilities, and security testing becomes more difficult as models become more complex. It's just as important to test both a model's results and its security posture each time more data is added and the model is retrained. A robustness score could dramatically improve a data scientist's ability to strengthen an AI model's security posture. Hopefully, approaches such as the scoring proposed in the ICLR paper can help companies mitigate certain security risks.
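
The certified metric in the ICLR paper rests on a more sophisticated analysis than can be reproduced here, but as a simpler, non-certified illustration of what a robustness score might look like inside a testing pipeline, the following Python sketch estimates the smallest random noise level that flips a model's prediction. The model and input names are placeholders, and this is emphatically not the paper's method.

import torch

def empirical_robustness(model, x, noise_levels=(0.01, 0.02, 0.05, 0.1, 0.2), trials=20):
    # Return the smallest tested noise level at which a random perturbation
    # changes the model's prediction, or None if the prediction never changes.
    model.eval()
    with torch.no_grad():
        baseline = model(x).argmax(dim=1)
        for eps in noise_levels:
            for _ in range(trials):
                noisy = (x + eps * (2 * torch.rand_like(x) - 1)).clamp(0.0, 1.0)
                if (model(noisy).argmax(dim=1) != baseline).any():
                    return eps
    return None

# A smaller score suggests a more fragile model, so the score can be tracked
# across retraining cycles alongside accuracy metrics.
# score = empirical_robustness(model, x_batch)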

Dr. Amini shared insights on numerous other IBM projects, including neuro-symbolic question answering. Reading today's articles, you'd think that AI is capable of replicating a majority of human tasks. It's clear from my discussion with Amini that while AI can perform many advanced tasks, it's still lacking in some foundational areas. For example, a machine can't answer certain questions that are very easy for humans, such as "How many blocks are to the right of the three-level tower?" or "Are there more trees in the image than animals?" Answering these questions requires jointly parsing vision and language.

When we see such amazing examples of image recognition and classification, it's hard to imagine that a machine can't easily answer these types of questions. It's equally shocking to imagine a hacker changing the outcome of a model by injecting additional data. The discussion of current research provided insight into how much work is yet to be done. It's great to see researchers tackling these key challenges.
