Artificial Intelligence & Machine Learning
What is AI and ML?
Artificial Intelligence (AI) is a field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition. Put another way, AI is a catch-all term used to describe new types of computer software that can mimic human intelligence. There is no single, precise, universal definition of AI.
Machine learning (ML) is a subset of AI. Essentially, machine learning is one of the ways computers “learn.” ML is an approach to AI that relies on algorithms trained on large datasets to develop their own rules. This is an alternative to traditional computer programs, in which rules have to be hand-coded. Machine learning extracts patterns from data and uses those patterns to sort data into different sets. ML has been described as “the science of getting computers to act without being explicitly programmed.” Two short videos provide simple explanations of AI and ML: What Is Artificial Intelligence? | AI Explained and What is machine learning?
Other subsets of AI include speech processing, natural language processing (NLP), robotics, cybernetics, vision, expert systems, planning systems and evolutionary computation. (See Artificial Intelligence – A Modern Approach.)
Many different types of technology fields comprise AI. When referring to AI, one can be referring to any or several of these technologies or fields, and applications that use AI, like Siri or Alexa, utilize multiple technologies. For example, if you say, “Siri, show me a picture of a banana,” Siri uses “natural language processing” to understand what you are asking, and then uses “vision” to find a banana and show it to you. How Siri understood your question, and how Siri knows something is a banana, is determined by the algorithms and training used to develop Siri. In this example, Siri would be drawing from “question answering” and “image recognition.”
Most of these technologies and fields are very technical and relate more to computer science than political science. It is important to know that AI can refer to a broad set of technologies and applications. Machine learning is a tool used to create AI systems.
As noted above, AI doesn’t have a universal definition. There are many myths surrounding AI, ranging from the notion that it will take over the world by enslaving humans to the idea that it will cure cancer. This primer is intended to provide a basic understanding of artificial intelligence and machine learning, as well as to outline some of the benefits and risks posed by AI. It is hoped that this primer will enable you to have a conversation about how best to regulate AI so that its potential can be harnessed to improve democracy and governance.
Algorithm: An algorithm is defined as “a finite series of well-defined instructions that can be implemented by a computer to solve a specific set of computable problems.” Algorithms are unambiguous, step-by-step procedures. A simple example of an algorithm is a recipe; another is a procedure to find the largest number in a set of randomly ordered numbers. An algorithm may either be created by a programmer or generated automatically. In the latter case, it is generated using data via ML.
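To make the idea of a step-by-step procedure concrete, here is a minimal sketch (in Python, purely as an illustration) of the second example above: finding the largest number in a set of randomly ordered numbers.

```python
def largest(numbers):
    """Return the largest value in a list of randomly ordered numbers."""
    largest_so_far = numbers[0]       # start with the first number
    for n in numbers[1:]:             # compare every remaining number
        if n > largest_so_far:
            largest_so_far = n        # keep the biggest value seen so far
    return largest_so_far

print(largest([7, 42, 3, 19]))        # prints 42
```

Every step is unambiguous, which is what distinguishes an algorithm from a vague instruction.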
Algorithmic decision-making/Algorithmic decision system (ADS): A system in which an algorithm makes decisions on its own or supports humans in doing so. ADSs usually function by data mining, regardless of whether they rely on machine learning. Examples of fully automated ADSs are the electronic passport-control checkpoints at airports and an online decision made by a bank to award a customer an unsecured loan based on the person’s credit history and data profile with the bank. An example of a semi-automated ADS is the set of driver-assistance features in a car that control its braking, throttle, steering, speed and direction.
Big Data: Extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions. Data is classified as Big Data based on its volume, velocity, variety, veracity and value. This short explainer video provides an introduction to big data and the concept of the 5Vs.
Class label: The label applied after the ML system has classified its inputs; for example, whether a given email message is spam or not spam.
Data mining: “The practice of examining large pre-existing databases in order to generate new information.” Data mining is also defined as “knowledge discovery from data.”
Deep model: Also called a “deep neural network,” a type of neural network containing multiple hidden layers.
Label: A label is what the ML system is predicting.
Model: The representation of what a machine learning system has learned from the training data.
Neural network: A biological neural network (BNN) is a system in the brain that enables a creature to sense stimuli and respond to them. An artificial neural network (ANN) is a computing system inspired by its biological counterpart in the human brain. In other words, an ANN is “an attempt to simulate the network of neurons that make up a human brain so that the computer will be able to learn things and make decisions in a humanlike manner.” Large-scale ANNs drive several applications of AI.
Profiling: Profiling involves automated data processing to develop profiles that can be used to make decisions about people.
Robot: Robots are programmable, artificially intelligent automated devices. Fully autonomous robots, e.g., self-driving vehicles, are capable of operating and making decisions without human control. AI enables robots to sense changes in their environments and adapt their responses/behaviors accordingly in order to perform complex tasks without human intervention. (See the Report of COMEST on Robotics Ethics, 2017.)
Scoring: “Scoring is also called prediction, and is the process of generating values based on a trained machine-learning model, given some new input data. The values or scores that are created can represent predictions of future values, but they might also represent a likely category or outcome.” When used vis-a-vis people, scoring is a statistical prediction that determines whether an individual fits a category or outcome. A credit score, for example, is a number drawn from statistical analysis that represents the creditworthiness of an individual.
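As an illustration of scoring, the hypothetical sketch below trains a model on a handful of invented past borrowers and then generates a “creditworthiness score” for a new applicant. The features, data and library choice (scikit-learn) are assumptions made for demonstration only, not a description of any real credit-scoring system.

```python
# Hypothetical sketch of "scoring": a model trained on past borrowers assigns
# a new applicant a probability of repayment. All data here are invented.
from sklearn.linear_model import LogisticRegression

# Each row: [income in $1,000s, years of credit history, missed payments]
past_borrowers = [[40, 2, 3], [85, 10, 0], [30, 1, 4], [60, 7, 1]]
repaid = [0, 1, 0, 1]  # 1 = repaid on time, 0 = did not

model = LogisticRegression().fit(past_borrowers, repaid)

new_applicant = [[55, 5, 1]]
score = model.predict_proba(new_applicant)[0][1]  # probability of repayment
print(f"Creditworthiness score: {score:.2f}")
```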
Supervised learning: A training approach in which ML systems learn from labeled examples how to combine inputs to produce predictions on never-before-seen data.
Unsupervised learning: Refers to training a model to find patterns in a dataset, typically an unlabeled dataset.
Training: The process of determining the ideal parameters comprising a model.
How do artificial intelligence and machine learning work?
Artificial Intelligence
Artificial Intelligence is a cross-disciplinary approach that combines computer science, linguistics, psychology, philosophy, biology, neuroscience, statistics, mathematics, logic and economics toward “understanding, modeling, and replicating intelligence and cognitive processes by invoking various computational, mathematical, logical, mechanical, and even biological principles and devices.”
AI applications exist in every domain, industry, and across different aspects of everyday life. Because AI is so broad, it is useful to think of AI as made up of three categories:
- Narrow AI or Artificial Narrow Intelligence (ANI) is an expert system for a specific task, like image recognition, playing Go, or answering a question posed to Alexa or Siri.
- Strong AI or Artificial General Intelligence (AGI) is an AI that matches human intelligence.
- Artificial Superintelligence (ASI) is an AI that exceeds human capabilities.
Modern AI techniques are developing quickly, and AI applications are already pervasive. However, these applications only exist presently in the “Narrow AI” field. Narrow AI, also known as weak AI, is AI designed to perform a specific, singular task, for example, voice-enabled virtual assistants such as Siri and Cortana, web search engines, and facial-recognition systems.
Artificial General Intelligence and Artificial Superintelligence have not yet been achieved and likely will not be for years or decades to come.
Machine Learning is an application of Artificial Intelligence. Although we often find the two terms used interchangeably, machine learning is a process by which an AI application is developed. The machine-learning process involves an algorithm that makes observations based on data, identifies patterns and correlations in the data, and uses the pattern/correlation to make predictions about something. Most of the AI in use today is driven by machine learning.
Just as it is useful to break up AI into three categories, machine learning can also be thought of as three different techniques: Supervised Learning, Unsupervised Learning, and Deep Learning.
Supervised learning efficiently categorizes data according to pre-existing definitions embodied in a labeled data set. One starts with a data set containing training examples with associated labels. Take the example of a simple spam-filtering system that is being trained using spam as well as non-spam emails. The “input” in this case is all the emails the system processes. After humans have marked certain emails as spam, the system sorts spam emails into a separate folder. The “output” is the categorization of email. The system finds a correlation between the label “spam” and the characteristics of the email message, such as the text in the subject line, phrases in the body of the message, or the email address or IP address of the sender. Using the correlation, it tries to predict the correct label (spam/not spam) to apply to all the future emails it gets.
“Spam” and “not spam” in this instance are called “class labels”. The correlation that the system has found is called a “model” or “predictive model.” The model may be thought of as an algorithm the ML system has generated automatically by using data. The labelled messages from which the system learns are called “training data.” The “target variable” is the feature the system is searching for or wants to know more about — in this case, it is the “spaminess” of email. The “correct answer,” so to speak, in the endeavor to categorize email is called the “desired outcome” or “outcome of interest.” This type of learning paradigm is called “supervised learning.”
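A minimal sketch of this supervised-learning workflow, assuming scikit-learn and a tiny invented set of labeled training emails, might look like the following; a real spam filter would use far larger training data and many more features.

```python
# Simplified sketch of the spam-filter example above. The class labels
# ("spam"/"not spam") and the tiny training set are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

training_emails = [
    "Win a free prize now",           # spam
    "Cheap loans, act today",         # spam
    "Agenda for tomorrow's meeting",  # not spam
    "Lunch on Friday?",               # not spam
]
labels = ["spam", "spam", "not spam", "not spam"]  # class labels marked by humans

vectorizer = CountVectorizer()                     # turn text into word counts
features = vectorizer.fit_transform(training_emails)

model = MultinomialNB().fit(features, labels)      # the learned "predictive model"

new_email = vectorizer.transform(["Free prize inside"])
print(model.predict(new_email)[0])                 # likely prints "spam"
```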
Unsupervised learning involves neural networks learning to find a relationship or pattern without access to datasets of input-output pairs that have already been labeled. They do so by organizing and grouping the data on their own, finding recurring patterns, and detecting deviations from the usual pattern. These systems tend to be less predictable than those trained on labeled datasets and tend to be deployed in environments that may change frequently and/or are unstructured or partially structured. Examples include the following (a short illustrative sketch follows the list):
- an optical character-recognition system that can “read” handwritten text even if it has never encountered the handwriting before.
- the recommended products a user sees on online shopping websites. These recommendations may be determined by associating the user with a large number of variables such as their browsing history, items they purchased previously, their ratings of those items, items they saved to a wish list, the user’s location, the devices they use, their brand preference and the prices of their previous purchases.
- detection of fraudulent monetary transactions based on, say, their timing and locations, for instance when two consecutive transactions on the same credit card occur within a short span of time in two different cities.
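The sketch below illustrates the unsupervised idea using the fraud example: an anomaly-detection model is shown only unlabeled transactions and flags those that deviate from the usual pattern. The data, features and library (scikit-learn) are invented assumptions for illustration.

```python
# Minimal sketch of unsupervised anomaly detection, loosely following the
# fraud example above. The transaction data are invented for illustration.
from sklearn.ensemble import IsolationForest

# Each row: [amount in $, minutes since the previous transaction]
transactions = [
    [25, 720], [40, 950], [18, 600], [32, 800], [27, 700],
    [950, 3],  # a large purchase made minutes after the previous one
]

# The model sees no labels; it learns what "usual" looks like on its own.
detector = IsolationForest(contamination=0.2, random_state=0).fit(transactions)
print(detector.predict(transactions))  # -1 marks likely anomalies, 1 marks normal
```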
A combination of supervised and unsupervised learning (called “semi-supervised learning”) is used when a relatively small labeled dataset is available and can be used to train the neural network to act upon a larger, unlabeled dataset. An example of semi-supervised learning is software that creates deepfakes – photos, videos and audio files that look and sound real to humans but are not.
Deep learning makes use of large-scale artificial neural networks (ANNs) called deep neural networks to create AI that can detect financial fraud, conduct medical image analysis, translate large amounts of text without human intervention, and automate the moderation of content on social networking websites. These neural networks learn to perform tasks by utilizing numerous layers of mathematical processes to find patterns or relationships among different data points in the datasets. A key attribute of deep learning is that these ANNs can peruse, examine and sort huge amounts of data, which enables them, theoretically, to find new solutions to existing problems.
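As a toy illustration of a deep model with multiple hidden layers, the hedged sketch below trains a small neural network on a built-in dataset of handwritten digits; the layer sizes and settings are illustrative choices, not recommendations for production systems.

```python
# Toy sketch of a "deep model": a neural network with several hidden layers
# trained on scikit-learn's built-in handwritten-digit dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# Three hidden layers of mathematical processing between input and output
model = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500,
                      random_state=0).fit(X_train, y_train)

print(f"Accuracy on unseen digits: {model.score(X_test, y_test):.2f}")
```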
Although there are other types of machine learning, these three – Supervised Learning, Unsupervised Learning and Deep Learning – represent the basic techniques used to create and train AI systems.
Artificial intelligence doesn’t come from nowhere; it comes from data received and derived from its developers and from you and me.
And humans have biases. When an AI system learns from humans, it may inherit their individual and societal biases. In cases where it does not learn directly from humans, the “predictive model” as described above may be biased because of the presence of biases in the selection and sampling of data that train the AI system, the “class labels” identified by humans, the way class labels are “marked” and any errors that may have occurred while identifying them, the choice of the “target variable,” “desired outcome” (as opposed to an undesired outcome), “reward”, “regret” and so on. Bias may also occur because of the design of the system; its developers, designers, investors or makers may have ended up baking their own biases into it.
There are three types of biases in computing systems:
- Pre-existing bias has its roots in social institutions, practices, and attitudes.
- Technical bias arises from technical constraints or considerations.
- Emergent bias arises in a context of use.
Bias may affect, for example, the political advertisements one sees on the internet, the content pushed to the top of social media feeds, the amount of an insurance premium one needs to pay, whether one is screened out of a recruitment process, or whether one is allowed through border-control checks in another country.
Bias in a computing system is a systematic and repeatable error. Because ML deals with large amounts of data, even a small error rate gets compounded or magnified and greatly affects the outcomes from the system. A decision made by an ML system, especially one that processes vast datasets, is often a statistical prediction. Hence, its accuracy is related to the size of the dataset. Larger training datasets are likely to yield decisions that are more accurate and lower the possibility of errors.
Bias in AI/ML systems may create new inequalities, exacerbate existing ones, reproduce existing biases and discriminatory treatment and practices, and hide discrimination. See this explainer related to AI bias.
How are AI and ML relevant in civic space and for democracy?

The widespread proliferation, rapid deployment, scale, complexity and impact of AI on society is a topic of great interest and concern for governments, civil society, NGOs, human-rights bodies, businesses and the general public alike. AI systems may require varying degrees of human interaction or none at all. AI/ML, when applied in the design, operation and delivery of services, offers the potential to provide new services and to improve the speed, targeting, precision, efficiency, consistency, quality or performance of existing ones. It may provide new insights by making apparent previously undiscovered linkages, relationships and patterns, and may offer new solutions. By analyzing large amounts of data, ML systems save time, money and effort. Some examples of the application of AI/ML in different domains include using AI/ML algorithms and past data in wildlife conservation to predict poacher attacks and discovering new species of viruses.

The predictive abilities of AI and the application of AI and ML in categorizing, organizing, clustering and searching information have brought about improvements in many fields and domains, including healthcare, transportation, governance, education, energy, security and safety, crime prevention, policing, law enforcement, urban management and the judicial system. For example, ML may be used to track the progress and effectiveness of government and philanthropic programs. City administrations, including those of smart cities, use ML to analyze data accumulated over time about energy consumption, traffic congestion, pollution levels and waste in order to monitor and manage these issues and to identify patterns in their generation, consumption and handling.

AI is also used in climate monitoring, weather forecasting, prediction of disasters and hazards, and planning of infrastructure development. In healthcare, AI systems aid healthcare professionals in medical diagnosis, robot-assisted surgery, easier detection of diseases, prediction of disease outbreaks, tracing the source(s) of disease spread, and so on. Law enforcement and security agencies deploy AI/ML-based surveillance systems, face recognition systems, drones, and predictive policing for the safety and security of citizens. On the other side of the coin, many of these applications raise questions about individual autonomy, personhood, privacy, security, mass surveillance, reinforcement of social inequality and negative impacts on democracy (see the Risks section).

The full impact of the deployment of AI systems on the individual, society and democracy is not known or knowable, which creates many legal, social, regulatory, technical and ethical conundrums. The topic of harmful bias in artificial intelligence and its intersection with human rights and civil rights has been a matter of concern for governments and activists. The EU General Data Protection Regulation (GDPR) has provisions on automated decision-making, including profiling. The European Commission released a white paper on AI in February 2020 as a precursor to potential legislation governing the use of AI in the EU, while another EU body has released recommendations on the human-rights impacts of algorithmic systems. Similarly, Germany, France, Japan and India have drafted AI strategies for policy and legislation. Physicist Stephen Hawking once said, “…success in creating AI could be the biggest event in the history of our civilization. But it could also be the last, unless we learn how to avoid the risks.”
Opportunities
Artificial intelligence and machine learning can have positive impacts when used to further democracy, human rights and good governance. Read below to learn how to think more effectively and safely about artificial intelligence and machine learning in your work.
Detect and overcome bias
Humans come with individual and cognitive biases and prejudices and may not always act or think rationally. By removing humans from the decision-making process, AI systems potentially eliminate the impact of human bias and irrational decisions, provided the systems are not biased themselves and that they are intelligible, transparent and auditable. AI systems that aid traceability and transparency can be used to avoid, detect or trace human bias (some of which may be discriminatory) as well as non-human bias, such as bias originating from technical limitations. Much research has shown how automated filtering of job applications reproduces multiple biases; however, research has also shown that AI can be used to combat unconscious recruiter bias in hiring. For processes like hiring, where many hidden human biases go undetected, responsibly designed algorithms can act as a double check for humans and bring those hidden biases into view, and in some cases even nudge people toward less-biased outcomes, for example by masking candidates’ names and other bias-triggering features on a resume.
Automated systems based on AI can be used to detect attacks, such as credit card fraud or a cyberattack on public infrastructure. As online fraud becomes more advanced, companies, governments, and individuals need to be able to identify fraud ever more quickly, ideally before it occurs. It is like a game of cat and mouse: computers are being used to create increasingly complex, unusual patterns to avoid detection, and human understanding of these patterns is limited. Defenders therefore need equally agile approaches that can adapt and iterate in real time, and machine learning can provide this.
Enormous quantities of content are uploaded every second to the social web (videos on YouTube and TikTok, photos and posts to Instagram and Facebook, etc.). There is simply too much for human reviewers to examine themselves. Filtering tools like algorithms and machine-learning techniques are used by many social media platforms to screen every post for illegal or harmful content (like child sexual abuse material, copyright violations, or spam). Indeed, artificial intelligence is at work in your email inbox, automatically filtering unwanted marketing content away from your main inbox. Recently, the arrival of deepfakes and other computer-generated content has required similarly advanced approaches to identify it. Deepfakes take their name from the deep-learning artificial-intelligence technology used to make them. Fact-checkers and other actors working to defuse the dangerous, misleading power of these false videos are developing their own artificial intelligence to identify them.
Search engines run on algorithmic ranking systems. Of course, search engines are not without serious biases and flaws, but they allow us to locate information from the infinite stretches of the internet. Search engines on the web (like Google and Bing) or within platforms (like searches within Wikipedia or within The New York Times) can enhance their algorithmic ranking systems by using machine learning to favor certain kinds of results that may be beneficial to society or of higher quality. For example, Google has an initiative to highlight “original reporting.”
Machine learning has allowed for truly incredible advances in translation. For example, DeepL is a small machine-translation company whose translations have surpassed even those of the biggest tech companies. Other companies have also created translation algorithms that allow people across the world to translate texts into their preferred languages, or to communicate in languages beyond those they know well, which has advanced the fundamental right of access to information, as well as the right to freedom of expression and the right to be heard.
Risks
The use of emerging technologies can also create risks in civil society programming. Read below on how to discern the possible dangers associated with artificial intelligence and machine learning in DRG work, as well as how to mitigate unintended – and intended – consequences.
Discrimination against marginalized groups
There are several ways in which AI may make decisions that can lead to discrimination, including how the “target variable” and the “class labels” are defined; during the process of labeling the training data; when collecting the training data; during the feature selection; and when proxies are identified. It is also possible to intentionally set up an AI system to be discriminatory towards one or more groups. This video explains how commercially available facial recognition systems trained on racially biased datasets discriminate against people with dark skin, women and gender-diverse people.
The accuracy of AI systems is based on how ML processes Big Data, which in turn depends on the size of the dataset. The larger the dataset, the more accurate the system’s decisions are likely to be. However, women, Black people and people of color (PoC), disabled people, indigenous people, LGBTQ+ people, and other minorities are less likely to be represented in a dataset because of structural discrimination, group size, or attitudes that prevent their full participation in society. Bias in training data reflects and systematizes existing discrimination. Because an AI system is often a black box, it is hard to conclusively prove or demonstrate that it has made a discriminatory decision, and why it makes certain decisions about some individuals or groups of people. Hence, it is difficult to assess whether certain people were discriminated against on the basis of their race, sex, marginalized status or other protected characteristics. For instance, AI systems used in predictive policing, crime prevention, law enforcement and the criminal justice system are, in a sense, tools for risk assessment. Using historical data and complex algorithms, they generate predictive scores that are meant to indicate the probability of the occurrence of crime, the probable location and time, and the people who are likely to be involved. When relying on biased data or biased decision-making structures, these systems may end up reinforcing stereotypes about underprivileged, marginalized or minority groups.
A study by the Royal Statistical Society notes that “…predictive policing of drug crimes results in increasingly disproportionate policing of historically over‐policed communities… and, in the extreme, additional police contact will create additional opportunities for police violence in over‐policed areas. When the costs of policing are disproportionate to the level of crime, this amounts to discriminatory policy.” Likewise, when mobile applications for safe urban navigation, software for credit-scoring, banking, housing, insurance, healthcare, and selection of employees and university students rely on biased data and decisions, they reinforce social inequality and negative and harmful stereotypes.
The risks associated with AI systems are exacerbated when AI systems make decisions or predictions involving minorities such as refugees or “life and death” matters such as medical care. A 2018 report by The University of Toronto and Citizen Lab notes, “Many [asylum seekers and immigrants] come from war-torn countries seeking protection from violence and persecution. The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues, among others. These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.” For medical and healthcare uses, the stakes are especially high because an incorrect decision made by the AI system could potentially put lives at risk or drastically alter the quality of life or wellbeing of the people affected by it.
Malicious hackers and criminal organizations may use ML systems to identify vulnerabilities in and target public infrastructure or privately owned systems, such as IoT devices and self-driving cars.
If malicious entities target AI systems deployed in public infrastructure, such as smart cities, smart grids, and nuclear installations, as well as healthcare facilities and banking systems, among others, those systems “will be harder to protect, since these attacks are likely to become more automated and more complex and the risk of cascading failures will be harder to predict. A smart adversary may either attempt to discover and exploit existing weaknesses in the algorithms or create one that they will later exploit.” Exploitation may happen, for example, through a poisoning attack, which interferes with the training data if machine learning is used. Attackers may also “use ML algorithms to automatically identify vulnerabilities and optimize attacks by studying and learning in real time about the systems they target.”
The deployment of AI systems without adequate safeguards and redress mechanisms may pose many risks to privacy and data protection (see also Data Protection). Businesses and governments collect immense amounts of personal data in order to train the algorithms of AI systems that render services or carry out specific tasks and activities. Criminals, rogue states/governments/government bodies, and people with malicious intent often try to target these data for various reasons, ranging from monetary fraud to commercial gain to political motives. For instance, health data captured from smartphone applications and internet-enabled wearable devices, if leaked, can be misused by credit agencies, insurance companies, data brokers, cybercriminals, etc. The breach or abuse of non-personal data, such as anonymized data, simulations, synthetic data, or generalized rules or procedures, may also affect human rights.
AI systems used for surveillance, policing, criminal sentencing, legal purposes, etc. become a new avenue for abuse of power by the state to control citizens and political dissidents. The fear of profiling, scoring, discrimination and pervasive digital surveillance may have a chilling effect on citizens’ ability or willingness to exercise their rights or express themselves. Most people will modify their behavior in order to obtain the benefits of a good score and to avoid the disadvantages that come with having a bad score.
Opacity may be interpreted as either a lack of transparency or a lack of intelligibility. Algorithms, software code, “behind-the-scenes” processing and the decision-making process itself may not be intelligible to those who are not experts or specialized professionals. In legal and judicial matters, for instance, the decisions made by an AI system do not come with explanations, unlike those of judges, who are required to state the reasons on which their legal orders or judgments are based, and whose decisions are generally a matter of public record.
Automation systems, including AI/ML systems, are increasingly being used to replace human labor in various domains and industries, eliminating a large number of jobs and causing structural unemployment (known as technological unemployment). With the introduction of AI/ML systems, some types of jobs will be lost, others will be transformed, and new jobs will appear. The new jobs are likely to require specific or specialized skills suited to working with AI/ML systems.
Profiling and scoring in AI raise apprehensions that people are being dehumanized and reduced to a profile or score. Automated decision-making systems may affect people’s wellbeing, physical integrity and quality of life, the information they find or are targeted with, and the services and products they can or cannot access, among other things. This affects what constitutes an individual’s consent (or lack thereof), the way consent is formed, communicated and understood, and the context in which it is valid. “[T]he dilution of the free basis of our individual consent – either through outright information distortion or even just the absence of transparency – imperils the very foundations of how we express our human rights and hold others accountable for their open (or even latent) deprivation.” – Human Rights in the Era of Automation and Artificial Intelligence
Questions
If you are trying to understand the implications of artificial intelligence and machine learning in your work environment, or are considering using aspects of these technologies as part of your DRG programming, ask yourself these questions:
- Is artificial intelligence or machine learning an appropriate, necessary, and proportionate tool to use for this project and with this community?
- Who is designing and overseeing the technology? Can they explain to you what is happening at different steps of the process?
- What data are being used to design and train the technology? How could these data lead to biased or flawed functioning of the technology?
- What reason do you have to trust the technology’s decisions? Do you understand why you are getting a certain result, or might there be a mistake somewhere? Is anything not explainable?
- Are you confident the technology will work as claimed when it is used with your community and on your project, as opposed to in a lab setting (or a theoretical setting)? What elements of your situation might cause problems or change the functioning of the technology?
- Who is analyzing and implementing the AI/ML technology? Do these people understand the technology, and are they attuned to its potential flaws and dangers? Are these people likely to make any biased decisions, either by misinterpreting the technology or for other reasons?
- What measures do you have in place to identify and address potentially harmful biases in the technology?
- What regulatory safeguards and redress mechanisms do you have in place for people who claim that the technology has been unfair to them or has abused them in any way?
- Is there a way that your AI/ML technology could perpetuate or increase social inequalities, even if the benefits of using AI and ML outweigh these risks? What will you do to minimize these problems and stay alert to them?
- Are you certain that the technology complies with relevant regulations and legal standards, including the GDPR?
- Is there a way that this technology may not discriminate against people by itself, but may lead to discrimination or other rights violations, for instance when it is deployed in different contexts or shared with untrained actors? What can you do to prevent this?
Case Studies
“Preventing echo chambers: depolarising the conversation on social media”
“RNW Media’s digital teams… have pioneered moderation strategies to support inclusive digital communities and have significantly boosted the participation of women in specific country settings. Through Social Listening methodologies, RNW Media analyses online conversations on multiple platforms across the digital landscape to identify digital influencers and map topics and sentiments in the national arena. Using Natural Language Processing techniques (such as sentiment and topic detection models), RNW Media can mine text and analyze this data to unravel deep insights into how online dialogue is developing over time. This helps to establish the social impact of the online moderation strategies while at the same time collecting evidence that can be used to advocate for young people’s needs.”
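The quoted description does not include RNW Media’s actual code; purely as an illustration of what a simple sentiment-detection model involves, the hypothetical sketch below classifies short posts as positive or negative using invented training examples and scikit-learn.

```python
# Generic, highly simplified illustration of sentiment detection; this is not
# RNW Media's pipeline. Training posts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = [
    "I love this initiative, it gives young people a voice",
    "Great discussion, very respectful and informative",
    "This debate is full of insults and hate",
    "What a toxic, hostile thread",
]
sentiment = ["positive", "positive", "negative", "negative"]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(posts), sentiment)

new_post = vectorizer.transform(["Thanks for hosting such a welcoming conversation"])
print(model.predict(new_post)[0])  # likely prints "positive"
```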
In 2014, the International Center for Tropical Agriculture, the Government of Colombia, and Colombia’s National Federation of Rice Growers, using weather and crop data collected over the prior decade, predicted climate change and resultant crop loss for farmers in different regions of the country. The prediction “helped 170 farmers in Córdoba avoid direct economic losses of an estimated $3.6 million and potentially improve productivity of rice by 1 to 3 tons per hectare. To achieve this, different data sources were analyzed in a complementary fashion to provide a more complete profile of climate change… Additionally, analytical algorithms were adopted and modified from other disciplines, such as biology and neuroscience, and were used to run statistical models and compare with weather records.”
Doberman.io developed an iOS app that employs machine learning and speech recognition to automatically analyze speech in a meeting room. The app determines the amount of time each person has spoken and tries to identify the sex of each speaker, using a visualization of the contribution of each speaker almost in real time with the relative percentages of time during which males and females have spoken. “When the meeting starts, the app uses the mic to record what’s being said and will continuously show you the equality of that meeting. When the meeting has ended and the recording stops, you’ll get a full report of the meeting.”
Food security: Detecting diseases in crops using image analysis (2016)
“Using a public dataset of 54,306 images of diseased and healthy plant leaves collected under controlled conditions, we train a deep convolutional neural network to identify 14 crop species and 26 diseases (or absence thereof). The trained model achieves an accuracy of 99.35% on a held-out test set, demonstrating the feasibility of this approach.”
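The study’s actual architecture and training pipeline are more elaborate; the sketch below is only a compact, hypothetical illustration of what a deep convolutional neural network for leaf-image classification can look like, using the Keras API with placeholder image sizes and class counts.

```python
# Compact sketch of a convolutional neural network for leaf-image
# classification, in the spirit of the study quoted above (not its actual
# architecture). Image size, layer sizes, and class count are placeholders.
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 38  # placeholder: one label per crop-disease combination

model = keras.Sequential([
    layers.Input(shape=(64, 64, 3)),          # small RGB leaf images
    layers.Conv2D(16, 3, activation="relu"),  # detect local visual patterns
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),  # one score per class
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would then call model.fit(images, labels, epochs=...) on a labeled
# dataset of leaf photographs.
```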
Can an ML model potentially predict the closure of civic spaces more effectively than traditional approaches? The USAID-funded INSPIRES project is testing the proposition that machine learning can help identify early flags that civic space may shift and generate opportunities to evaluate the success of interventions that strive to build civil society resilience to potential shocks.
References
Find below the works cited in this resource.
- Angwin, Julia et al. (2016). Machine Bias. ProPublica.
- Borgesius, Frederik Zuiderveen. (2018). Discrimination, artificial intelligence, and algorithmic decision-making. Council of Europe.
- Bostrom, Nick. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
- Council of Europe. (Adopted on 8 April 2020). Recommendation of the Committee of Ministers to Member states on the human rights impacts of algorithmic systems.
- Dada, Emmanuel et al. (2019). Machine learning for email spam filtering: review, approaches and open research problems. Heliyon 5(6).
- De Winter, Daniëlle, Lammers, Ellen & Mark Noort. (2019). 33 Showcases: Digitalisation and Development. Dutch Ministry of Foreign Affairs.
- Desierto, Diane. (2020). Human Rights in the Era of Automation and Artificial Intelligence. EJIL:Talk! Blog of the European Journal of International Law.
- European Commission. (2020). White Paper on Artificial Intelligence – A European Approach to Excellence and Trust.
- Castelluccia, Claude & Daniel Le Métayer. (2019). Understanding algorithmic decision-making: Opportunities and challenges. European Parliamentary Research Service.
- Fang, Fei et al. (2016). Deploying PAWS: Field Optimization of the Protection Assistant for Wildlife Security. Proceedings of the Twenty-Eighth AAAI Conference on Innovative Applications (IAAI-16).
- Feldstein, Steven. (2019). How artificial intelligence systems could threaten democracy. The Conversation.
- Frankish, Keith & William M. Ramsey, eds. (2014). The Cambridge Handbook of Artificial Intelligence. Cambridge University Press.
- Friedman, Batya & Helen Nissenbaum. (1996). Bias in Computer Systems. ACM Transactions on Information Systems 14(3).
- Fruci, Chris. (2018). The rise of technological unemployment. The Burn-In.
- German Federal Government. (2018). Key Points for a Federal Government Strategy on Artificial Intelligence.
- Kassner, Michael. (2013). Search engine bias: What search results are telling you (and what they’re not). TechRepublic.
- Knight, Will. (2019). Artificial intelligence is watching us and judging us. Wired.
- Kumar, Arnab et al. (2018). National Strategy for Artificial Intelligence: #AIforAll. NITI Aayog.
- Lum, Kristian & William Isaac. (2016). To predict and serve?. Significance 13(5). Royal Statistical Society.
- Maini, Vishal. (2017). Machine learning for humans. Medium.
- Maxmen, Amy. (2018). Machine learning spots treasure trove of elusive viruses. Nature.
- Miller, Meg. (2017). This app uses AI to track mansplaining in your meetings. Fast Company.
- Mitchell, Tom. (1997). Machine Learning. McGraw Hill.
- Mohanty, Sharada P., Hughes, David P. & Marcel Salathé. (2016). Using Deep Learning for Image-Based Plant Disease Detection. Frontiers in Plant Science.
- Moisejevs, Ilja. (2019). Poisoning attacks on Machine Learning. Towards Data Science.
- Molnar, Petra & Lex Gill. (2018). Bots at the Gate. University of Toronto and Citizen Lab.
- Polli, Frida. (2019). Using AI to Eliminate Bias from Hiring. Harvard Business Review.
- Ridgeway, Andy. (2019). Deepfakes: the fight against this dangerous use of AI. BBC Science Focus Magazine.
- Russell, Stuart J. & Peter Norvig. (1995). Artificial Intelligence: A Modern Approach. Prentice Hall.
- Smith, Floyd. (2019). Case Study: Fraud Detection “On the Swipe” For a Major US Bank. MemSQL Blog.
- Stanford University. (2016). Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel.
- UN Conference on Trade and Development (UNCTAD). (2017). The Role of Science, Technology, and Innovation in Ensuring Food Security by 2030.
- World Commission on the Ethics of Scientific Knowledge and Technology (COMEST). (2017). Report of COMEST on Robotics Ethics. UNESCO and COMEST.
Additional Resources
- A brief history of AI: provides a timeline of AI development.
- Access Now. (2018). Human Rights in the Age of Artificial Intelligence.
- AI Myths: website exploring misconceptions around AI.
- Anderson, Michael & Susan Leigh Anderson. (2011). A prima facie duty approach to machine ethics: machine learning of features of ethical dilemmas, prima facie duties, and decision principles through a dialogue with ethicists. In: Anderson, Michael & Susan Leigh Anderson, eds. Machine Ethics. Cambridge University Press, pp. 476-492.
- Awwad, Yazeed et al. (2020). Exploring Fairness in Machine Learning for International Development. USAID, MIT D-Lab and MIT CITE.
- Caulfield, Brian. (2019). Five things you always wanted to know about AI, but weren’t afraid to ask. NVIDIA.
- Commotion Wireless. (2013). Warning Labels Development Part 1 and Part 2.
- Comninos, Alex et al. (2019). Artificial Intelligence for Sustainable Human Development. Global Information Society Watch.
- Council of Europe Human Rights Channel. How to protect ourselves from the dangers of artificial intelligence.
- Molnar, Petra. (2020). The human rights impacts of migration control technologies. European Digital Rights.
- Elish, Madeleine Clare & Danah Boyd. (2017). Situating methods in the magic of big data and artificial intelligence.
- Eubanks, Virginia. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Macmillan.
- Fairness in Machine Learning and Health: website and resources on the 2019 conference.
- Feldstein, Steven. (2019). The Global Expansion of AI Surveillance. Carnegie Endowment for International Peace.
- Hager, Gregory D. (2017). Artificial Intelligence for Social Good. Association for the Advancement of Artificial Intelligence (AAAI) and Computing Community Consortium.
- Henley, Jon & Robert Booth. (2020). Welfare surveillance system violates human rights, Dutch court rules. The Guardian.
- Johnson, Khari. (2019). How AI can strengthen and defend democracy. VentureBeat.
- Latonero, Mark. (2018). Governing Artificial Intelligence: Upholding Human Rights & Dignity. Data & Society.
- Manyika, James, Silberg, Jake & Brittany Presten. (2019). What do we do about the biases in AI? Harvard Business Review.
- Mitchell, Melanie. (2019). Artificial Intelligence: A Guide for Thinking Humans. Macmillan.
- Müller, Vincent C. (2020). Ethics of Artificial Intelligence and Robotics. Stanford Encyclopedia of Philosophy.
- Muro, Mark, Maxim, Robert & Jacob Whiton. (2019). Automation and artificial intelligence: How machines are affecting people and places. Brookings Institution.
- O’Neil, Cathy. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishers.
- Paul, Amy, Jolley, Craig & Aubra Anthony. (2018). Reflecting the Past, Shaping the Future: Making AI Work for International Development. USAID.
- Polyakova, Alina & Chris Meserole. (2019). Exporting Digital Authoritarianism. Brookings Institution.
- Resnick, Brian. (2019). Alexandria Ocasio-Cortez says AI can be biased. She’s right. Vox.
- Sharma, Sid. (2019). What is conversational AI?. NVIDIA.
- Tambe, Milind & Eric Rice, eds. (2018). Artificial Intelligence and Social Work. Cambridge University Press.
- Technology and Human Rights: Series of articles on this relationship from OpenGlobalRights.
- Upturn and Omidyar Network. (2018). Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods.