
This algorithm by Indian researchers can remove caste, race, sex bias from AI. This is how

Prajanma Das

At a time when India is reeling under a host of social evils, with discrimination on the basis of caste, creed, gender and religion the most pertinent of them all, an Indian researcher has developed a new algorithm that will help make artificial intelligence (AI) less biased when processing data.

But how can machines be biased? Dr Deepak Padmanabhan, a researcher at the School of Electronics, Electrical Engineering and Computer Science and the Institute of Electronics, Communications and Information Technology at Queen's University Belfast, is the brain behind FairKM, which tackles discrimination problems within clustering algorithms. He says that AI algorithms learn from human behaviour, and if the human behaviour is biased, the AI is automatically biased too. "If everyone associates the colour blue with boys and pink with girls, then an AI that is tasked to select toys for boys will select only blue toys, because in this era of machine learning the AI reads from the data available to it to perform better," explained Dr Padmanabhan.
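The point is easiest to see with a toy example. The sketch below is a hypothetical illustration, not the researchers' code: it builds a "recommender" that simply picks the toy colour most often paired with a gender in its training data, and because the (made-up) data encodes the blue-for-boys convention, so does the model.

```python
from collections import Counter

# Hypothetical historical data: each record is (child_gender, toy_colour_chosen).
# The colour split reflects human bias, not any property of the toys themselves.
history = ([("boy", "blue")] * 95 + [("boy", "pink")] * 5 +
           [("girl", "pink")] * 90 + [("girl", "blue")] * 10)

def recommend_colour(gender):
    """Pick the colour most often paired with this gender in the data."""
    colours = Counter(colour for g, colour in history if g == gender)
    return colours.most_common(1)[0][0]

print(recommend_colour("boy"))   # "blue"  -- the model simply mirrors the bias in the data
print(recommend_colour("girl"))  # "pink"
```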



So what does FairKM do?
To put it simply, it helps shortlist or recruit candidates without bias. 'Fair clustering' techniques were already prevalent in developed countries, but they could incorporate only one parameter; Dr Padmanabhan's FairKM can take multiple parameters into consideration. "AI techniques for data processing, known as clustering algorithms, are often criticised as being biased in terms of 'sensitive attributes' such as race, gender, age, religion and country of origin. It is important that AI techniques be fair while aiding shortlisting decisions, to ensure that they are not discriminatory on such attributes," he said.



In devising their clustering algorithm, Dr Padmanabhan and his team — researchers Savitha Abraham and Sowmya Sundaram from IIT Madras — went with the fairness philosophy called 'representational parity'. "This says that the proportion of specific groups in the dataset should reflect in the 'chosen subset' (any cluster, in our case). In case of gender, it comes down to ensuring the same gender ratio (as in the dataset) within every grouping or cluster. This is somewhat easy to conceptualise and operationalise in the case of single attributes such as gender, where there are possibly only two or three groups. Our unique contribution in this research is that of providing the algorithmic machinery to ensure 'representational parity' over a large set of sensitive attributes which may include some or all of gender, ethnicity, nationality, religion and even age and relationship status in many settings. It is indeed necessary to ensure fairness over a plurality of such 'sensitive' attributes in a number of settings so AI methods are better aligned to democratic values," explained Dr Padmanabhan, an alumnus of IIT Madras himself.
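To make the idea concrete, the sketch below is an illustrative audit rather than FairKM itself, using made-up column names and data: it measures how far a given clustering strays from representational parity across several sensitive attributes at once. A fair clustering algorithm in this sense would aim to keep such deviations small while still grouping similar records together.

```python
import pandas as pd

# Hypothetical applicant data with cluster labels produced by some clustering
# algorithm; the sensitive attributes and their values are invented.
df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "M", "F", "M", "F"],
    "religion": ["A", "A", "B", "B", "A", "B", "A", "B"],
    "cluster":  [0,   0,   0,   1,   1,   1,   1,   0],
})

def parity_deviation(data, sensitive_attrs, cluster_col="cluster"):
    """For each sensitive attribute, compare every group's share within each
    cluster to its share in the whole dataset, and return the worst absolute
    gap. A value of 0 means every cluster mirrors the dataset's proportions."""
    worst = 0.0
    for attr in sensitive_attrs:
        overall = data[attr].value_counts(normalize=True)
        for _, cluster in data.groupby(cluster_col):
            within = cluster[attr].value_counts(normalize=True)
            for group, share in overall.items():
                gap = abs(within.get(group, 0.0) - share)
                worst = max(worst, gap)
    return worst

# Audit the clustering over two sensitive attributes at once.
print(parity_deviation(df, ["gender", "religion"]))
```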



The Indian connection
FairKM can be of more use in the third world and in developing countries, said the young researcher. "Instances of discrimination, and what AI can do to make 'opportunity' more inclusive, inspired our project. Imagine recruitment for an organisation like Indian Railways, where hundreds of lakhs of applications come in; if the AI sorting them is biased by class, creed, gender and religion, then the recruitment will never be effective. Since a country like India has so many parameters, FairKM is the solution," he added. Fairness in AI techniques is of particular significance in developing countries such as India, agreed Savitha. "These countries experience drastic social and economic disparities, and these are reflected in the data. Employing AI techniques directly on raw data results in biased insights, which influence public policy and could amplify existing disparities. The uptake of fairer AI methods is critical in such scenarios, especially in the public sector."



One challenge the researchers said they had sidestepped in this research is the question of fairness in data collection itself. "How do we know whether there is sampling bias, such that the dataset itself has a different gender ratio than the population it is meant to represent? That may be regarded as more of a data collection challenge than an algorithm design challenge, but we are considering ways of addressing it through technological solutions," added the researcher, who has been teaching at Queen's University for more than four years and is also an adjunct fellow of IIT Madras. "FairKM can be applied across a number of data scenarios where AI is being used to aid decision making, such as proactive policing for crime prevention and the detection of suspicious activities. This, we believe, marks a significant step forward towards building fair machine learning algorithms that can deal with the demands of our modern democratic society," he added.

The research, which was conducted at Queen’s University’s Computer Science building, will be presented in Copenhagen in April 2020 at the EDBT 2020 conference, which is renowned for data science research.
