2024
A 45-minute talk on AI, AI ethics, Indigenous peoples, and Indigenous rights, given at the London Mathematical Society School at LSE on 18 and 19 June 2024.
2020
is also Kanaka Maoli, born and raised on the island of Hawaiʻi as part of the first generation to attend Hawaiian-language immersion schools. Dr. Parker Jones, like Dr. Arista and Lewis, is also interested in how kanaka culture can be expressed using computational methods. The connections to Hawaiʻi of three of the founders of the Indigenous Protocol and AI discussion suggested it as an appropriate place to anchor the conversation, and also made it possible to organize the workshops on relatively short notice.

Cultural Grounding

Hawaiian genealogical chants make manifest "the inextricable connection between island home and successive generations of island people."[18] This connection is wide and deep, tying Kānaka Maoli into a web of relationships that extend outward to the non-human denizens of the islands, and backward in time to our ancestors. We felt that Kanaka Maoli knowledge frameworks provided a conducive background against which to think about our relationship to technology in general, and to AI specifically: Hawaiian custom and practice make clear that humans are inextricably tied to the earth and one another. Kanaka maoli ontologies that privilege multiplicity over singularity supply useful and appropriate models, aesthetics, and ethics through which imagining, creating, and developing beneficial relationships among humans and AI is made pono (correct, harmonious, balanced, beneficial).[19]

[18] Arista, N. (2019). The Kingdom and the Republic: Sovereign Hawaiʻi and the Early United States. Philadelphia: University of Pennsylvania Press, p. 17.

Following the workshop description is "AI: A New (R)Evolution or the New Colonizer for Indigenous Peoples?", an essay by linguist and te reo Māori specialist Dr. Hēmi Whaanga (Ngāti Kahungunu, Ngāi Tahu, Ngāti Mamoe, Waitaha). Dr. Whaanga warns of the potential for AI systems and related technologies to be used against Indigenous peoples as an extension of colonial practices of exploitation, extraction, and control, particularly those that displace a people's understanding of themselves with a worldview that favors the colonizer. He discusses issues of data sovereignty in a technological landscape populated by AI systems existentially dependent on ingesting vast amounts of data on human activity, thereby putting Indigenous traditional knowledge and customary practices at risk of global-scale

Following Benesiinaabandan's contribution is media studies scholar Ashley Cordes' (Coquille) essay, "Gifts of Dentalium and Fire: Entwining Trust and Care with AI." The overarching aim of Cordes' text is to argue that Indigenous people should seriously consider using blockchain combined with AI to help manage their communities' business, making the case that such technologies can increase Indigenous sovereignty and self-determination vis-à-vis the hegemon. A member of the Coquille tribe, she uses that community's notions of 'trust and care' to ground her vision of how the technologies should be properly designed and to map out how they might be implemented. Cordes also explores what it means to take seriously the admonishment to consider AI as non-human kin, including thinking about what an AI's needs might be.

Lewis contributes "Quartet," composed of a poem sequence and a short description illustrating what epistemological diversity within AI design might look like. The texts imagine a future in which young Kānaka Maoli are raised alongside three AIs, each built according to a different conceptual framework. One AI takes inspiration from Kanaka notions of land, responsibility, and family; another from the Blackfoot language's basis in flow rather than objects; and the third from suppositions about how the octopus's nervous system is organized to accommodate the semi-autonomy of its arms. The three AIs and the human work collaboratively to make decisions in support of Kanaka flourishing that take the environment, human and non-human relations, and past-present-future into consideration.
AI and Society, 2023
As AI technologies are increasingly deployed in work, welfare, healthcare, and other domains, there is a growing realization not only of their power but of their problems. AI has the capacity to reinforce historical injustice, to amplify labor precarity, and to cement forms of racial and gendered inequality. An alternative set of values, paradigms, and priorities is urgently needed. How might we design and evaluate AI from an Indigenous perspective? This article draws upon the five Tests developed by Māori scholar Sir Hirini Moko Mead. This framework, informed by Māori knowledge and concepts, provides a method for assessing contentious issues and developing a Māori position. This paper takes up these tests, considers how each test might be applied to data-driven systems, and provides a number of concrete examples. This intervention challenges the priorities that currently underpin contemporary AI technologies but also offers a rubric for designing and evaluating AI according to an Indigenous knowledge system.
Open Journal of Philosophy, 2021
The current "narrow" or "weak" form of artificial intelligence is, by itself, fundamentally a data analysis tool that does nothing more or less than its programming instructs it to do. It has no values or goals of its own; it simply follows the values and pursues the goals provided to it by its programmers. Artificial wisdom has the potential to make artificial intelligence a better tool, and eventually perhaps more than a tool, but at least for now artificial wisdom must also be programmed and therefore similarly reflects only the wisdom of its programmers. Artificial intelligence, with its reductionistic ontology of data and its contrived epistemology of algorithms, is the quintessential product of the Western scientific worldview, and the development and application of artificial intelligence, as well as discussions of artificial wisdom, still largely reflect that one, narrow worldview. Artificial wisdom would greatly benefit from incorporating elements of non-Western worldviews, particularly the metaphysically inclusive Indigenous worldview. For example, the Navajo concept of hozho involves the normative values and goals of harmony, balance, interrelatedness, and connectedness. Hozho and other Indigenous concepts are potentially paradigm-shifting additions to artificial wisdom and could greatly enhance the usefulness of, and overall benefit from, applications of artificial intelligence.
Ethical Space: International Journal of Communication Ethics
Bias in artificial intelligence (AI) technology occurs when there have been prejudiced assumptions applied, whether unconsciously or consciously, throughout the development of an algorithm and the curation of the data. Current AI tools, especially natural language processing tools, largely have not been developed by Indigenous people with an Indigenous perspective. As a result, the output of that AI is often biased and can continue to perpetuate colonising logic. This paper explores the role of Indigenous leadership in creating AI technology in natural language processing. It discusses the use of a decolonising framework in shaping contemporary ethical practice in this landscape. Central to this framework is valuing the domain expertise of Indigenous knowledge experts in partnership with AI practitioners (Sambasivan and Veeraraghavan 2022), as well as making decisions informed by the historical context and elevating Indigenous philosophies. As an example of an Indigenous-led programme of work, this paper introduces the Papa Reo project, a multilingual language platform grounded in Indigenous knowledge and ways of thinking. The Papa Reo project is aiding the revitalisation of the Māori language in Aotearoa New Zealand through the creation of digital tools. It is unique in the AI space because the ethical practice is guided by Indigenous philosophy, led by an Indigenous organisation and it is actively working to Indigenise natural language processing AI. A key feature highlighted in this paper is the value that Māori language specialists have throughout the entire development pipeline of technology creation. From data curation to model analysis, Papa Reo is creating space for Māori in a predominantly Western communication landscape.
Māori Voices in the Artificial Intelligence (AI) Landscape of Aotearoa New Zealand; An interim research report, 2024
This preliminary kaupapa Māori research analyses Māori representation in New Zealand's commercial, industry, and academic AI landscapes, and examines what voice and representation Māori have in this new and influential growth area.
2023
Until the beginning of the twentieth century, history, as a core concept of the political project of modernity, was highly concerned with the future. The many crimes, genocides, and wars perpetrated in the name of historical progress eventually caused unavoidable fractures in the way Western philosophies of history have understood change over time, leading to a depoliticization of the future and a greater emphasis on matters of the present. However, the main claim of the "Historical Futures" project is that the future has not completely disappeared from the focus of historical thinking, and some modalities of the future that have been brought to the attention of historical thought relate to a more-than-human reality. This article aims to confront the prospects of a technological singularity through the eyes of peoples who already live in a world of more-than-human agency. The aim of this confrontation is to create not just an alternative way to think about the future but a stance from which we can explore ways to inhabit, and therefore repoliticize, historical futures. This article contains a comparative study designed to challenge our technologized imaginations of the future and, at the same time, to infuse the theoretical experiment with contingent historical experiences. Could we consider artificial intelligence a new historical subject? What about an agent in a "more-than-human" history? To what extent can we read this new condition through ancient Amerindian notions of time? Traditionally, the relationship between Western anthropocentrism and Amerindian anthropomorphism has been framed in terms of an opposition. We intend to prefigure a less hierarchical and more horizontal relation between systems of thought, one devoid of a fixed center or parameter of reference.
Granting the same degree of intellectual dignity to the works of Google engineers and the views of Amazonian shamans, we nevertheless foster an intercultural dialogue (between these two "traditions of reasoning") about a future in which history can become more-than-human. We introduce potential history as the framework not only to conceptualize Amerindian experiences of time but also to start building an intercultural dialogue that is designed to discuss AI as a historical subject.
Treaty of Waitangi/Te Tiriti and Māori Ethics Guidelines for: AI, Algorithms, Data and IOT., 2020
The idea for this handbook arose in late 2017, with the working title Handbook of Ethics of AI in Context. By the time solicitations went out to potential contributors in the summer of 2018, its title had been streamlined to Handbook of Ethics of AI. Its essentially contextual approach, however, remained unchanged: it is a broadly conceived and framed interdisciplinary and international collection, designed to capture and shape much-needed reflection on normative frameworks for the production, application, and use of artificial intelligence in diverse spheres of individual, commercial, social, and public life.
AI and the Humanities, 2024
Artificial intelligence is a feminist issue, and technologies often have colonial implications. Indeed, technologies as disruptive agents are inherently queer. This course examines the long history of technologies leading up to the public release of ChatGPT. We will chart Western societies' apprehension of, and faith in, technologies of masculinist representation practices, as evidenced by science fiction, philosophical writing, and film culture. Students learn in a hands-on environment and conduct individual research projects. From generative AI as assistive technology to long-standing humanistic questions of agency, identity, and mind and body, critical theory provides essential tools for participating in current cultural discourses. Through the lens of social justice, this course equips students with critical AI literacy as well as fluency in posthumanism, feminism, trans/queer studies, and critical race theory.
Description

This course is designed to make the scholar familiar with the application of normative ethics, metaethics, and practical ethics to the field of AI and to AI technologies. It now has sections on software qualities, black box algorithms, epistemic opacity, and the paradox of opacity.

Topics covered:
• AI, information transmission, information processing, and privacy
  o Big data and privacy
  o Big data and human identity
  o Gender and cultural bias
• Black boxes
  o Big data, recurrent neural nets, black boxes, and social construction
• Ethics of information and ethics of AI
  o Ethical issues for different strengths/grades of AI and AI algorithms
  o Medium to strong AI: the moral relevance and effects of its ontological differences
• Black box algorithms and epistemic opacity
  o Deep learning neural network basics: neuron nodes, backpropagation, gradient descent, and recursive or recurrent neural networks
  o Software qualities: documentability, readability, interpretability, and transparency
• Normative ethics proposals: advantages and disadvantages
  o Rule consequentialism
  o Deontological approaches
  o Care ethics
  o Virtue ethics
  o Problems with implementation
  o Problems with uptake and enforcement
• Software qualities and normative ethics
  o Interpretability, transparency, and normative ethics
  o Interpretability, transparency, and policy making
  o Extensibility, usability, and communicability
• Ethics of AI on the Web and in Web-based applications
• AI technology and social hierarchy
• The relationship between AI and the posthuman
• AI and transhumanism (neo-cybernetic enhancement)
  o AI and extended cognition
  o Embedded AI
• Strong AIs as potential epistemic and moral agents
  o Models, representations, and introspection
  o Interventions and counterfactuals
  o Emulating plasticity: synaptic plasticity versus intrinsic plasticity
  o Imagination
• AI algorithms and existential threat
  o The paradox of opacity (Long, B. American Philosophical Association Blog, May 2020)
Data science journal, 2024
AI and Epistemic Injustice - Chair's Response, 2018
Course syllabus, 2024
Journal of Science and Technology of the Arts
Journal of the American Association for the Advancement of Curriculum Studies (JAAACS), 2024
2022 ACM Conference on Fairness, Accountability, and Transparency
ASCILITE Publications, 2023
BJHS Themes, 2023
The Democratization of Artificial Intelligence
CIFILE Journal of International Law, 2020