Harvard professor launches startup to help companies protect their AI systems from hackers
It is one thing to climb the academic ladder at one of the world’s most prestigious universities and gain full professorship in just seven years, but it is another thing altogether to take on the world’s hackers and prevent them from messing with companies’ AI systems. However, this is exactly what computer science professor Yaron Singer has set out to do.
Singer has spent the bulk of his academic career focusing on adversarial machine learning, the study of how artificial intelligence models can be fooled with misleading data. Considering that companies around the world have to contend with hackers deploying all kinds of tools and techniques to confuse and corrupt their AI systems, it is fair to say that Singer is well placed to solve this kind of problem.
Along with a former Ph.D. advisee and two former students, Singer launched Robust Intelligence, a startup that has raised $14 million in seed funding in a bid to make AI smart enough to avoid being fooled by hackers. According to him, the company’s platform has been trained to detect more than 100 types of adversarial attacks.
Singer’s motivation for creating a platform like this dates back to his years at Google as a postdoctoral researcher, after completing his Ph.D. in computer science at the University of California, Berkeley. Having seen and experimented with a whole host of machine learning models during his time at Google, Singer realized how easy it was to make AI go awry with bad data.
“Once you start seeing these vulnerabilities, it gets really, really scary, especially if we think about how much we want to use artificial intelligence to automate our decisions,” he said.
And ultimately, it is these vulnerabilities that AI hackers and fraudsters exploit. For instance, a check for $400 can be manipulated by tweaking a few pixels here and there to cause an AI model to read it as a check for $700. Large-scale financial fraud is conducted in a similar way, with spoofed voices and faces used to fool voice- and facial-recognition systems. A deepfake posing as a company CEO can even convince an employee to transfer a huge sum of money into the hacker’s bank account.
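The article doesn’t describe how such pixel-level attacks actually work, but the core idea can be shown with a toy sketch. Everything below is hypothetical for illustration: a made-up linear “digit reader” whose weights are a simple ramp, attacked in the style of a fast-gradient-sign perturbation. Real check-scanning models and real attacks are far more complex, but the mechanism is the same: many tiny, individually imperceptible pixel changes add up to a flipped prediction.

```python
import numpy as np

# Hypothetical toy model: a linear classifier over a flattened 64-pixel
# image. It reads the digit as "4" when the score is negative and "7"
# when positive. The weights are made up purely for illustration.
w = np.linspace(-1.0, 1.0, 64)

def predict(img):
    return "7" if img @ w > 0 else "4"

# An input the model confidently reads as "4" (score is negative).
x = -0.05 * w
print(predict(x))       # "4"

# FGSM-style perturbation: nudge every pixel a tiny amount (at most 0.1)
# in the direction that pushes the score upward.
eps = 0.1
x_adv = x + eps * np.sign(w)
print(predict(x_adv))   # "7" -- small per-pixel changes flip the reading
```

The point of the sketch is that no single pixel changes by more than `eps`, yet the model’s output flips entirely, which is exactly the kind of fragility the article describes.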
These are the kinds of problems Singer wants to solve. His startup has already launched two products: an AI firewall and a tester for the strength of an AI model. The firewall uses special algorithms to scan for contaminated data, while the tester (known as Rime) performs extensive stress tests of a customer’s AI model by inputting basic mistakes and following them up with adversarial attacks to see how well the model holds up.
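Robust Intelligence has not published Rime’s internals, so the sketch below only illustrates the general stress-testing idea described above: run a model on clean inputs, apply a battery of perturbations ranging from basic mistakes to more targeted ones, and measure how often the prediction changes. The model, inputs, and perturbations are all invented for this example.

```python
import numpy as np

def stress_test(model, inputs, perturbations):
    """Return the fraction of (input, perturbation) pairs where the
    model's prediction changes -- a rough fragility score."""
    flips, total = 0, 0
    for x in inputs:
        baseline = model(x)
        for name, perturb in perturbations:
            if model(perturb(x)) != baseline:
                flips += 1
            total += 1
    return flips / total

# Made-up example model: thresholds the mean pixel value of an image.
model = lambda x: int(x.mean() > 0.5)

# "Basic mistakes" first (dropped pixels, sensor noise), then a more
# targeted brightness shift -- all deterministic and purely illustrative.
perturbations = [
    ("dropped pixels", lambda x: np.where(np.arange(x.size) % 7 == 0, 0.0, x)),
    ("salt noise",     lambda x: np.where(np.arange(x.size) % 5 == 0, 1.0, x)),
    ("brightness shift", lambda x: np.clip(x + 0.2, 0.0, 1.0)),
]

inputs = [np.full(16, 0.45), np.full(16, 0.9)]
print(stress_test(model, inputs, perturbations))  # roughly 1 in 3 flips
```

A fragility score like this gives a customer a single number to track: the borderline input (mean 0.45) flips under mild perturbations, while the confident one (mean 0.9) does not, mirroring how a stress tester can expose which inputs a model handles robustly.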
With 15 employees, Robust Intelligence is currently working with ten customers, including a leading financial institution. For Singer, making AI systems even more secure is yet another challenge in his life that he looks forward to.
“For me, I’ve climbed the mountain of tenure at Harvard, but now I think we’ve found an even higher mountain, and that mountain is securing artificial intelligence,” he said.