Protect AI lands a $13.5M investment to harden AI projects from attack

Seeking to bring greater security to AI systems, Protect AI today raised $13.5 million in a seed-funding round co-led by Acrew Capital and Boldstart Ventures with participation from Knollwood Capital, Pelion Ventures and Aviso Ventures. Ian Swanson, the co-founder and CEO, said that the capital will be put toward product development and customer outreach as Protect AI emerges from stealth.

Protect AI claims to be one of the few security companies focused entirely on developing tools to defend AI systems and machine learning models from exploits. Its product suite aims to help developers identify and fix security vulnerabilities, including ones that could expose sensitive data, at various stages of the machine learning life cycle, Swanson explains.

“As machine learning models usage grows exponentially in production use cases, we see AI builders needing products and solutions to make AI systems more secure, while recognizing the unique needs and threats surrounding machine learning code,” Swanson told TechCrunch in an email interview. “We have researched and uncovered unique exploits and provide tools to reduce risk inherent in [machine learning] pipelines.”

Swanson co-launched Protect AI with Daryan Dehghanpisheh and Badar Ahmed roughly a year ago. Swanson and Dehghanpisheh previously worked together on the AI and machine learning side of Amazon Web Services (AWS), where Swanson led the worldwide AI customer solutions team and Dehghanpisheh was the global leader for machine learning solution architects. Ahmed became acquainted with Swanson while working at Swanson’s previous startup, DataScience.com, which Oracle acquired in 2018. Ahmed and Swanson also worked together at Oracle, where Swanson was VP of AI and machine learning.

Protect AI’s first product, NB Defense, is designed to work within Jupyter Notebook, a digital notebook tool popular among data scientists in the AI community. (A 2018 GitHub analysis found more than 2.5 million public Jupyter notebooks at the time of the report’s publication, a number that has almost certainly climbed since then.) NB Defense scans the Jupyter notebooks used in AI projects for security risks and suggests remediations; those notebooks typically contain all the code, libraries and frameworks needed to train, run and test an AI system.

What sort of problematic elements might an AI project notebook contain? Swanson suggests internal-use authentication tokens and other credentials, for one. NB Defense also looks for personally identifiable information (e.g., names and phone numbers) and open source code with a “nonpermissive” license that might prohibit it from being used in a commercial system.
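For a sense of what that kind of check looks like in practice, here is a minimal Python sketch that reads a notebook’s JSON and flags strings resembling access tokens or phone numbers. The regex patterns and the notebook.ipynb filename are illustrative assumptions for this sketch, not Protect AI’s rules; a production scanner like NB Defense presumably covers far more cases, including the license checks mentioned above.

```python
import json
import re

# Illustrative patterns only -- these regexes are assumptions for the sketch,
# not NB Defense's actual rule set.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Bearer token": re.compile(r"Bearer\s+[A-Za-z0-9\-_\.]{20,}"),
    "Phone number (PII)": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_notebook(path: str) -> list[tuple[int, str]]:
    """Return (cell_index, finding) pairs for suspicious strings in code cells."""
    with open(path, encoding="utf-8") as f:
        nb = json.load(f)  # .ipynb files are plain JSON
    findings = []
    for i, cell in enumerate(nb.get("cells", [])):
        if cell.get("cell_type") != "code":
            continue
        source = "".join(cell.get("source", []))
        for label, pattern in PATTERNS.items():
            if pattern.search(source):
                findings.append((i, label))
    return findings

if __name__ == "__main__":
    # Hypothetical notebook path, used only to show how the scan is invoked.
    for cell_idx, label in scan_notebook("notebook.ipynb"):
        print(f"cell {cell_idx}: possible {label}")
```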

To be sure, Jupyter notebooks are typically used as scratchpads rather than production environments, and most are locked away from prying eyes; according to an analysis by Dark Reading, fewer than 1% of the roughly 10,000 Jupyter Notebook instances on the public web are configured for open access. But the exploits aren’t just theoretical. Last December, security firm Lightspin uncovered a method that could allow an attacker to run arbitrary code in a victim’s notebook across accounts on AWS SageMaker, Amazon’s fully managed machine learning service.

Other research firms, including Aqua Security, have found that improperly secured Jupyter notebooks are vulnerable to Python-based ransomware and cryptocurrency-mining attacks. And in a 2020 Microsoft survey of businesses using AI, the majority said they didn’t have the right tools in place to secure their machine learning models.

It might be premature to sound the alarm. There’s no evidence that attacks on AI systems are happening at scale, despite a Gartner report predicting an increase in AI cyberattacks through the end of this year. But Swanson makes the case that prevention is key.

“[Many] existing security code scanning solutions are not compatible with Jupyter notebooks. These vulnerabilities, and many more, are due to a lack of focus and innovation from current cybersecurity solution providers, and is the largest differentiation for Protect AI: Real threats and vulnerabilities that exist in AI systems, today,” Swanson said.

Beyond Jupyter notebooks, Protect AI will work with common AI development tools, including Amazon SageMaker, Azure ML and Google Vertex AI Workbench, Swanson says. NB Defense is available for free to start, with paid options to be introduced in the future.

“Machine learning is … complex and the pipelines delivering machine learning at scale create and multiply cybersecurity blind spots that evade current cybersecurity offerings, preventing important risks from being adequately understood and mitigated. Additionally, emerging compliance and regulatory frameworks continue to advance the need to harden AI systems’ data sources, models, and software supply chain to meet increased governance, risk management and compliance requirements,” Swanson continued. “Protect AI’s unique capabilities and deep expertise in the machine learning lifecycle for enterprises and AI at scale helps enterprises of all sizes meet today’s and tomorrow’s unique, emerging and increasing requirements for a safer, more secure AI-powered digital experience.”

That’s promising a lot. But Protect AI has the advantage of entering a market with relatively few direct competitors. Perhaps the closest is Resistant AI, which is developing AI systems to protect algorithms from automated attacks.

Protect AI, which is pre-revenue, isn’t revealing how many customers it has today. But Swanson claims that the company has secured “enterprises in the Fortune 500” across verticals, including finance, healthcare and life sciences, as well as energy, gaming, digital businesses and fintech.

“As we grow our customers, build partners and value chain participants we will use our funding to add additional team members in software development, engineering, security and go-to-market roles throughout 2023,” Swanson said, adding that Protect AI’s headcount stands at 15. “We have several years of cash runway available to continue to advance this field.”
