Microsoft Shows How AI Can Be Biased and Misleading


During the Microsoft Ignite The Tour event in Mumbai, Microsoft organized a few mini learning sessions covering its different platforms and how they can be used to improve software and businesses. Some of these sessions were about AI. In one of them, Microsoft demonstrated how AI can be biased and sometimes even misleading.

So how can AI be biased anyway?

To understand this, you first need to understand how Machine Learning (an essential part of AI) generally works. First, specific types of data sets are analyzed using complex algorithms. These data sets can contain text, images, voice samples, and so on. The results are used to train a software model so it learns to recognize the kinds of text or objects it was trained on. When the model is then shown new input, it is essentially predicting a label based on the patterns it has learned.
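As a rough illustration of that train-then-predict loop, here is a minimal sketch in Python using scikit-learn. This is not code from the Microsoft session; the features, data, and labels are invented purely for illustration.

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical labeled dataset: each row describes a toy with simple
# made-up features, and each label says what kind of toy it is.
features = [
    [12.0, 1, 0],   # e.g. [size_cm, has_wheels, is_plush]
    [30.0, 0, 1],
    [8.0,  1, 0],
    [25.0, 0, 1],
]
labels = ["car", "teddy bear", "car", "teddy bear"]

# Split the data, train a model on one part, then ask it to predict
# labels for examples it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)
model = DecisionTreeClassifier().fit(X_train, y_train)
print(model.predict(X_test))  # the model predicts a label for new input
```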

This whole process is set up by the developer of the Machine Learning program. But the subconscious biases of human beings can drive them to create ML models that share the same biases as their creators. The example Microsoft demonstrated during the event was about AI being used to select toys for boys and girls.


In this case, a set of toy images was used to train the model so it could differentiate which toys are for boys and which are for girls. Then a batch of mixed images was uploaded for the model to recognize. Given how it was trained, the model predictably classified things like toy cars and balls as toys for boys, while it classified things like dolls and teddy bears as toys for girls. Microsoft also pointed out that whenever the color pink appears, models like these tend to associate the item with girls.
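To make that failure mode concrete, here is a small sketch (my own, not Microsoft's demo code) of how a model trained on labels that encode a human assumption, in this case "pink means for girls", reproduces that assumption on new inputs. All data here is fabricated for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical training set: the RGB value of each toy's dominant
# color, labeled by an annotator who consistently tagged pink toys
# as "girl" and blue toys as "boy".
colors = [
    [255, 105, 180],  # pink doll        -> "girl"
    [255, 182, 193],  # pink teddy bear  -> "girl"
    [30,  144, 255],  # blue toy car     -> "boy"
    [0,   0,   255],  # blue ball        -> "boy"
]
labels = ["girl", "girl", "boy", "boy"]

model = LogisticRegression(max_iter=1000).fit(colors, labels)

# A pink toy car: the annotator's pink -> girl assumption dominates,
# even though nothing about the toy itself is gendered.
print(model.predict([[255, 110, 180]]))  # -> ['girl']
```

The model never learned anything about toys; it learned the annotator's assumption about color, which is exactly the kind of bias the demo exposed.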

Assumptions like these don't always turn out to be true. But because the model was trained on partially biased data, its results come out biased too. Sometimes these models can even be misleading and influence users to make wrong decisions.

How can this issue be eliminated?

As subconscious bias isn't something you can always control, Microsoft has built a set of tools to assist you, which it calls Microsoft Cognitive Services. You can use these services to create Machine Learning models that are less biased, or sometimes not biased at all.

If you're a developer, you should also try to build your model with fewer factors that could make it biased or misleading. If you take care of these things, your model will be more accurate and fair when making decisions. If you need assistance, you can use Microsoft's Cognitive Services for that as well. Microsoft also has something called "Labs", where you can get your hands on similar unreleased projects, so you can check that out if you want to do some testing.
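For reference, here is a minimal sketch of calling the Cognitive Services Computer Vision tagging endpoint from Python. The endpoint region, API version, key, and image URL below are placeholders, and Microsoft's documentation should be checked for the current API surface; inspecting the tags and confidence scores a service returns is one practical way to audit what a model actually keys on.

```python
import requests

# Placeholder values: substitute your own Azure resource endpoint and key.
ENDPOINT = "https://YOUR_REGION.api.cognitive.microsoft.com"
KEY = "YOUR_SUBSCRIPTION_KEY"

def tag_image(image_url: str) -> list:
    """Ask the Computer Vision service to tag an image by URL."""
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Tags"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json().get("tags", [])

# Print each tag the service assigns, with its confidence score.
for tag in tag_image("https://example.com/toy.jpg"):
    print(tag["name"], tag["confidence"])
```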
