Machine Learning
The Basic Idea
Like it or not, our daily lives are filled with technology. Like most people, you probably began your morning by checking your phone or turning on a Spotify playlist to jumpstart your day. Before leaving the house, maybe you checked Google Maps to see whether your commute would be busy. During your lunch break, you may have seen an advertisement for a suspiciously perfect shirt on social media. And after heading home for the day, did you turn on your new favorite Netflix show to relax?
Sound familiar? These technological habits are pervasive in most of our daily lives. Yet none of these small technological wonders would be possible without one powerful tool: machine learning.
Machine learning is the process by which a computer learns from past experience. It generally works as follows: data is fed into an algorithm, which produces a result. If the result is correct, it is kept as an example for further learning; if it is wrong, the machine receives feedback and adjusts, learning from its mistakes. The more times it repeats this cycle, the better it gets at solving the problem at hand.
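To make this guess-check-adjust loop concrete, here is a minimal sketch in Python. The toy task (deciding whether a number counts as "large") and the simple threshold-nudging update rule are illustrative assumptions rather than how any real production system works, but they capture the cycle described above.

```python
# Toy data: each example is (input value, correct answer).
# The answer is 1 if the value is 5 or greater, otherwise 0.
examples = [(1, 0), (3, 0), (4, 0), (6, 1), (8, 1), (9, 1)]

threshold = 0.0  # the machine's current "rule", improved through feedback

for round_number in range(20):          # repeat the practice session
    mistakes = 0
    for value, correct_answer in examples:
        guess = 1 if value >= threshold else 0
        if guess != correct_answer:     # feedback: the guess was wrong
            mistakes += 1
            # nudge the rule toward the correct answer
            threshold += 0.5 if correct_answer == 0 else -0.5
    if mistakes == 0:                   # no errors left: learning is done
        break

print(f"Learned threshold: {threshold}")  # settles between 4 and 6
```

Each pass over the examples is one round of "practice": wrong guesses trigger feedback, the rule is adjusted, and the number of mistakes shrinks until the machine solves the task.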
To make this process easier to understand, we can compare it to studying for a math test. If you solve a practice problem correctly, you have likely followed the right steps. If you get it wrong, your teacher provides feedback and shows you where you went astray. If you apply that feedback and keep practicing, your errors will shrink and you will be more likely to get a good grade. Importantly, the improvement does not come from practicing the exact same question repeatedly, but from learning an algorithm (a set of steps) that can be applied across many different and difficult problems. While studying can create a math whiz, machine learning can make a computer highly accurate, effective, and quick at solving problems.
Theory, meet practice
TDL is an applied research consultancy. In our work, we leverage the insights of diverse fields—from psychology and economics to machine learning and behavioral data science—to sculpt targeted solutions to nuanced problems.
Key Terms
Neural Networks: Artificial models made of interconnected nodes, designed to mimic how neurons in the brain interact with each other.
Supervised Learning: Learning by association between inputs and labels. In essence, the machine is taught with clear, concrete examples of inputs paired with their correct answers.
Unsupervised Learning: The process of providing a machine with an unlabelled dataset and having it find structure or draw inferences on its own (a short sketch contrasting the two approaches follows below).
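The difference between the last two terms is easiest to see in code. Here is a minimal sketch using scikit-learn; the dataset (the classic iris flowers) and the particular models (logistic regression and k-means) are arbitrary choices made purely for illustration.

```python
# Assumes scikit-learn is installed (pip install scikit-learn).
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)  # X: flower measurements, y: species labels

# Supervised learning: the model sees the inputs AND the correct labels.
classifier = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised predictions:", classifier.predict(X[:3]))

# Unsupervised learning: the model sees only the inputs and must find
# structure (here, three clusters) on its own, with no labels provided.
clusterer = KMeans(n_clusters=3, n_init=10, random_state=0)
print("Unsupervised clusters: ", clusterer.fit_predict(X)[:3])
```

The cluster numbers assigned by the unsupervised model are arbitrary; it is up to us to interpret what each group means, which is exactly what separates it from the supervised case, where the correct answers are given up front.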
History
Interestingly, early work on machine learning did not come from mathematics or computer science, but from psychology. In 1949, Canadian psychologist Donald Hebb proposed that when we learn something new, neurons in our brains connect and build up a neural network. The more often the new information or skill is repeated, the stronger the connections between those neurons become. This theory provided the foundation for modern machine learning.
Ten years later, a computer scientist named Arthur Samuel coined the term “machine learning.” He did so after building an intelligent system that could play checkers, engineered to learn through an intricate scoring system: each time a move was made, the program would assess the probability of winning based on the position of the pieces. The more games the program played, the better it became at making these predictions.
Within the following ten years, the developments of Hebb and Samuel were applied to image recognition. The Perceptron, built in 1957, was one of the inventions that resulted from this. As the first software program designed to recognize objects, the Perceptron was a promising start on the road towards machine learning. However, it was only semi-functional: it could learn to recognize simple objects but struggled with the details of more complex ones, such as faces.
Despite this rocky start, quite a few effective algorithms were discovered shortly after the Perceptron, rapidly transforming our ability to recognize objects. At the same time, researchers began to layer neural networks on top of each other, which laid the groundwork for deep learning. They also developed backpropagation, the process of adjusting a network’s weights by passing its errors backwards through the layers so the model can adapt to new situations.
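What backpropagation does is easiest to see in miniature. Below is a rough sketch, using numpy, of a tiny two-layer network learning the classic XOR problem; the layer sizes, learning rate, and number of iterations are assumptions chosen for illustration, not values from any historical system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # hidden -> output layer
lr = 1.0                                       # learning rate (assumed)

for _ in range(10_000):
    # Forward pass: the network makes its current guesses.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: the prediction error is propagated back through the layers.
    output_delta = (output - y) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)

    # Every weight is nudged in the direction that reduces the error.
    W2 -= lr * hidden.T @ output_delta
    b2 -= lr * output_delta.sum(axis=0)
    W1 -= lr * X.T @ hidden_delta
    b1 -= lr * hidden_delta.sum(axis=0)

print(output.round(2).ravel())  # guesses should approach [0, 1, 1, 0]
```

The backward pass is the key step: the error at the output is sent back layer by layer, so every connection learns how much it contributed to the mistake.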
Until the 1970s, artificial intelligence was largely synonymous with machine learning. Like machine learning, artificial intelligence’s history can be traced back to Hebb, Samuel, and other machine learning pioneers, and both fields originally relied heavily on neural networks and complex mathematical logic to build their models. In the 1970s, however, the two began to diverge: machine learning concentrated on algorithms, while artificial intelligence moved away from neural networks toward logic- and knowledge-based approaches. The two fields became distinct, but they remain closely linked to this day.
After this split, machine learning took a new direction. Focusing more on probability and statistics, the industry prioritized solving practical business problems while honing effective neural network models. When the internet took off in the 1990s, the vast amount of widely available data and online services made machine learning applications explode in popularity.
Consequences
Machine learning has come to transform nearly every aspect of our digital lives. We have replaced our old, low-tech appliances with smart fridges and automated assistants, and self-driving cars are on their way. As time progresses, this move towards “smarter” technology will only quicken, and soon there may be few industries left untouched by machine learning. These innovations aren’t limited to personal conveniences: healthcare, environmental protection, elder care, security, and public policy are all being shaped by machine learning’s powerful applications. To compete and thrive in modern society, grappling with the implications of machine learning technologies is a must.
The most important aspect of machine learning is the ability to scale change quickly. As our world becomes more engulfed in data, the ability to accurately sort through that data will be key to developing insights into emerging fields and better understanding the problems we face. Machine learning, as it stands, is our best tool for doing so. Capturing its power is essential for any business, government, or institution attempting to project data-based insights into the future. Ideally, this ability will help us forecast risk and make faster breakthroughs that propel us forward.
Machine learning has also transformed how we understand ourselves. By simulating the human learning process in machines, we have gained significant insights into how our brains operate. For example, recent developments in machine learning have deepened our understanding of how to diagnose and potentially treat dyslexia. Dyslexia, a learning disorder that affects individuals’ ability to read and write, has puzzled scientists for quite some time. Currently, researchers do not have a single test that accurately diagnoses dyslexia, and existing methods amount to inferring a diagnosis from a range of different factors.
Children with dyslexia struggle to associate specific letters with their sounds, leading to confusion in spelling and reading. Given that dyslexia essentially boils down to the brain failing to learn these associations, machine learning, which is designed to learn through association, can provide an effective model of the dyslexic brain. Furthermore, machine learning’s classification methods, particularly unsupervised learning, may allow us to accurately diagnose dyslexia early on. In this way, machine learning and neural networks may provide a clear model for how to tackle diagnosing learning disabilities in the future.2
Controversies
As stated above, machine learning is the process of teaching machines how to learn like humans. Unfortunately, humans aren’t perfect. We are riddled with biases and prejudices, and because we teach machines to think like us, we often program those human biases into them. Machine learning algorithms have repeatedly come under scrutiny for being discriminatory, non-inclusive, or inaccurate towards racial and gender minorities.
Examples of bias in machine learning are abundant, even at the heights of the industry. In 2014, Amazon sought to automate its hiring process, building an AI system that used machine learning to review job applicants’ resumes. After a year of testing, Amazon had to scrap the whole system, as it had internalized the patriarchal preferences present in society and was discriminating against women: resumes that included female names or associations with the word “women” were automatically penalized.
In 2016, Microsoft used machine learning to build a chatbot that sourced its data from Twitter in order to learn how to communicate more effectively. In less than a day, the online ecosystem turned the AI into a bigot, consistently lobbing insults at marginalized groups and spouting fascist bile.
In 2019, Facebook had its own machine learning scandal. At the time, its advertising platform, Facebook Ads, allowed intentional targeting based on gender, race, and religion. In the job market, researchers found that the system was showing women ads for traditionally feminine jobs, like secretarial work or nursing, while showing minority men ads for jobs like janitorial work or taxi driving.
Clearly, machine learning and artificial intelligence have a bias problem. But why is this the case? First, machine learning runs on large swaths of data, which are often skewed towards white and male demographics. For example, the benchmark dataset for facial recognition skews 70% male and 80% white. When models trained on data like this are applied to the general population, facial recognition technologies inevitably perform worse for everyone else. Often, these algorithms aren’t designed with malice, but in ignorance of diversity and the experiences of others. Combating these issues will require more equitable datasets, greater diversity in the tech community, and effective debiasing strategies.
Related TDL Content
Machine Learning And Personalized Interventions: David Halpern
Interested in how machine learning ties into behavioral science more concretely? In this podcast, The World Bank’s Jakob Rusinek interviews David Halpern, CEO of the Behavioural Insights Team, about how machine learning, artificial intelligence, and personalization are shaping the future of behavioral science.
Hebbian Learning
Interested in understanding more about how we learn? If you want to learn more about neural networks, the history of machine learning, or simply how our neurons work, then this piece on Hebbian Learning is for you.
Sources
- Foote, K. D. (2019, March 13). A brief history of machine learning. DATAVERSITY. https://www.dataversity.net/a-brief-history-of-machine-learning/
- McFadden, C. (2020, December 2). Machine learning might be the future of dyslexia diagnosis. Interesting Engineering. https://interestingengineering.com/machine-learning-might-be-the-future-of-dyslexia-diagnosis.
- Dilmegani, C. (2021, August 9). Bias in AI: What it is, types & examples of bias & tools to fix it. AIMultiple. https://research.aimultiple.com/ai-bias/.
- Nouri, S. (2021, February 3). Council post: The role of bias in artificial intelligence. Forbes. https://www.forbes.com/sites/forbestechcouncil/2021/02/04/the-role-of-bias-in-artificial-intelligence/?sh=1751699e579