Microsoft partners with nonprofits to build accessible AI systems for disabled people
Artificial intelligence holds the power to revolutionize our lives, but significant gaps in the field still need to be addressed. For Microsoft, one troubling gap is the lack of data on disabled people with which to train disability-friendly AI systems.
In partnership with several nonprofit organizations, Microsoft is working to build AI-based tools that reflect the realities and daily needs of disabled people. The vast majority of datasets for common applications like facial recognition and gaze tracking are sourced from non-disabled people. This introduces bias into the resulting AI systems, which become difficult for blind and physically impaired individuals to use because they are not optimized for their specific needs.
Take a typical facial recognition system as an example. A non-disabled person like me can use it with little to no difficulty, but is it really optimized for people wearing head straps, or for those hooked up to a ventilator? How do people in wheelchairs interact with it? Their angle of vision is certainly different from mine, and since the system hasn't accounted for these factors, it will not work as accurately for such individuals as it does for me.
Microsoft has termed this lack of inclusive data a “data desert,” and it has taken up the task of partnering with various organizations to make the datasets behind AI systems more inclusive.
The first of these partnerships is a collaboration with Team Gleason, a nonprofit that raises awareness of amyotrophic lateral sclerosis (ALS). Individuals with ALS often have limited head movement, which means that typical gaze tracking systems, for instance, will not perform well for them.
Another collaboration, with City University of London, covers the expansion and public release of the Object Recognition for Blind Image Training project. Once again, the idea is to build computer vision datasets that cater to the specific needs of visually impaired people.
Inclusivity isn’t an unachievable mystery. We can make it possible simply by changing the way we think about compiling our datasets and, ultimately, building our AI systems.
“This is stuff the ALS community wanted years ago,” said Team Gleason’s Brian Casey. “This is technology that exists — it’s sitting on a shelf. Let’s put it to use. When we talk about it, people will do more, and that’s something the community needs as a whole.”