Bolukbasi et al., 2016[20] explain that this problem can be attributed to the blind adoption of “word embedding” techniques. Word embedding enables the mapping of the affinity or relationship between different words, where a public resource like Google News serves as the training dataset. The researchers illustrate how this could influence the search results for a person looking for a computer science researcher in a particular university, because the words “computer science” are more commonly associated with men -- “between two pages that differ only in the names Mary and John, the word embedding would influence the search engine to rank John’s web page higher than Mary” (Bolukbasi et al., 2016[20]). Similar findings of gender bias have also been made in the case of visual recognition tasks such as the captioning of images (Zhou et al., 2018[21]) and the display of image search results based on occupations (Kay, 2015[22]).
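As a rough illustration of the kind of association Bolukbasi et al. describe, the short Python sketch below loads publicly available Google News word vectors and scores a few occupation words against a simple he–she direction. This is a minimal sketch under stated assumptions, not the authors’ actual method (they construct a gender subspace more carefully); the word lists and the he–she direction are choices made here only for demonstration.

```python
# Illustrative sketch only: probing gender associations in pretrained word
# embeddings, in the spirit of Bolukbasi et al. (2016). The word lists and the
# crude "gender direction" below are assumptions chosen for demonstration.
import gensim.downloader as api
import numpy as np

model = api.load("word2vec-google-news-300")  # pretrained Google News vectors

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A crude "gender direction": the difference between the vectors for "he" and "she".
gender_direction = model["he"] - model["she"]

for word in ["programmer", "scientist", "nurse", "homemaker", "engineer"]:
    score = cosine(model[word], gender_direction)
    # Positive scores lean towards "he", negative scores towards "she".
    print(f"{word:12s} {score:+.3f}")
```

Scores near zero would suggest little gender association in the vectors; consistently signed scores for occupation words are the kind of regularity that a downstream ranking system can silently inherit.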
These examples demonstrate that AI applications can often end up strengthening and reinforcing society’s existing biases. For instance, Zhou et al., 2018[21] found that where training images for the activity of cooking contained 33% more females, the trained model for captioning images amplified the disparity to 68%.
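One way to read the 33% and 68% figures is as the gap between the share of images involving women and the share involving men. The sketch below works through that arithmetic with hypothetical counts chosen only to reproduce the reported gaps; the exact disparity metric used by Zhou et al. may differ.

```python
# Illustrative arithmetic only: the counts are hypothetical and chosen to
# reproduce the 33% / 68% gaps mentioned in the text, not data from the paper.

def disparity(n_women: int, n_men: int) -> float:
    """Gap between the share of women and the share of men."""
    total = n_women + n_men
    return (n_women - n_men) / total

# Hypothetical training set: roughly 2 out of 3 "cooking" images show women.
train_gap = disparity(n_women=665, n_men=335)      # ~0.33

# Hypothetical model output: captions name a woman in ~84% of cooking images.
predicted_gap = disparity(n_women=840, n_men=160)  # ~0.68

print(f"training disparity {train_gap:.2f}, "
      f"predicted disparity {predicted_gap:.2f}, "
      f"amplification {predicted_gap - train_gap:.2f}")
```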
Such amplification of bias seems to run contrary to Donna Haraway’s vision of a cyborg universe, where technology would offer a tool to break away from the dualities of human-machine and male-female identities (Haraway, 1991[23]). This is an inspiring idea, and one that we still have an opportunity to realize. Concepts of equity, fairness and non-discrimination have been well entrenched in the human rights discourse for the past several decades. Yet conscious and unconscious human biases often prevent these values from translating into actual outcomes. How, then, can we re-envision AI research in ways that could move us closer to this ideal?
4. RE-ENVISIONING AI FROM A GENDERED PERSPECTIVE

Improving the representation of women in AI research, both as researchers and as beneficiaries of the research, is seen as a first step towards a gendered re-envisioning of AI. This has led to initiatives such as specialized programmes for women, funding support, mentorship initiatives, increased intake in educational institutions and the promotion of equal opportunities in the job market. However, even if such initiatives were to succeed, it is questionable whether merely increasing the number of women can bring the desired level of diversity to AI knowledge-making.

In her work on objectivity and diversity, Sandra Harding notes that although increasing the physical presence of excluded groups is an important first step, the real issue goes beyond that of participation: it involves questioning whose agendas should be pursued by science (Harding, 2015[24]). A research agenda that is primarily funded through private resources will logically rely on market mechanisms to decide on the kinds of problems that need to be solved and their optimum solutions. In the long run, this could very well lead to the development of breakthrough technologies, the benefits of which may ultimately trickle down to the marginalized sections of society. However, there is a distinction between retrofitting newer objectives into available technologies and a ground-up approach of identifying specific problems and developing solutions for them.

The latter approach would require more meaningful engagement by businesses, governments and the public in identifying AI research agendas and supplying resources to pursue them. These resources could take the form of financial support, ethical frameworks, and open data resources that can feed into the design of AI solutions. For instance, the development of AI applications that are useful for addressing the health concerns of rural women in a developing country like India may not be an obvious interest area for many AI researchers. This may stem both from the lack of funding for sustained research in such areas and from the lack of access to the data that is necessary for enabling this research. Similarly, the ways in which algorithmic credit will play out in the Indian setting may be very different from what happens in other parts of the world. Agenda setting for future AI research must therefore be rooted in the social and cultural backdrop and the institutional context of each society.

Having said that, there is also a case for evolving a robust set of ethical standards for AI research, and the tools for translating those principles into tangible outcomes. Questions of bias and ethics have already found a place in many national AI strategies. For instance, the United Kingdom has noted that although it cannot match countries like the United States and China in terms of AI spending, it intends to play a greater role in AI’s ethical development (House of Lords, 2018[25]). In India, a discussion paper issued by the Government think tank NITI Aayog (NITI Aayog, 2018[26]) as well as an AI Task Force set up by the Indian Government have spoken about the need for ethical standards, including the auditing of AI to check that it is not contaminated by human biases (AI Task Force, 2018[27]). Both these documents are, however, conspicuously silent on the gender dimensions of AI education and research in the country. Most large technology companies also have internal ethics policies to govern their research initiatives. Moving from these siloed structures to a collectively designed set of global minimum standards for AI development should be the next goal. These principles can then be applied based on each region’s own context.

The above proposal comes with the worry that, absent strict enforcement, producers would tend to interpret any ethical guidelines in a flexible manner. This could result in the under-production of “fairness” in the system. The opacity of AI algorithms and the possibility of diverse interpretations of what constitutes fairness in any given situation only