A Primer on Machine Learning and Artificial Intelligence

Introduction 

Artificial intelligence is a broad field that includes machine learning and deep learning. These terms are often used interchangeably, as though they all describe the same thing. However, while the terms are related, specific characteristics differentiate them: deep learning is a subfield of machine learning, which is in turn a subfield of artificial intelligence. Artificial intelligence involves developing computers capable of mimicking human cognitive functions and carrying out specific tasks. Machine learning uses algorithms to recognize patterns and trends in previous data, and then applies that information to real-world problems. A central goal of artificial intelligence is to allow computers to work independently, without humans needing to instruct and interact with them at every step. Applications of artificial intelligence and machine learning span essentially every industry; it is already widely used in manufacturing, banking, and healthcare. In this blog post, we will go deeper into the definitions of artificial intelligence and machine learning, and their practical applications.

What is Artificial Intelligence?

There are many different ways to define artificial intelligence, and the definition has changed drastically over the years. Alan Turing, often referred to as the father of modern computer science, created a test known as the Turing Test in an attempt to answer the question “can machines think?” In this test, a human has to differentiate between a computer’s response to a question and another human’s response to the same question (IBM). Furthermore, in “Artificial Intelligence: A Modern Approach,” Stuart Russell and Peter Norvig discuss a human approach versus a rational approach to artificial intelligence. They describe four different goals to pursue when designing artificial intelligence: systems that think like humans, systems that act like humans, systems that think rationally, and systems that act rationally. Each goal has its own advantages and disadvantages, and all of these approaches are used today. An overall definition of artificial intelligence that fits these different goals is that artificial intelligence allows machines to learn from previous experiences and information and perform human-like tasks (SAS).

Along with the general definition described above, artificial intelligence can also be divided into weak and strong artificial intelligence. Weak artificial intelligence, also known as narrow artificial intelligence, is programmed and trained for a single task. Narrow artificial intelligence cannot mimic a human as a whole, only certain aspects, and it has very specific applications. For example, narrow artificial intelligence powers Amazon Alexa, Google Home, personalized advertisements on social media, recommended songs on Spotify, and many other everyday tools.

Strong artificial intelligence, also known as artificial general intelligence, focuses on creating a machine that can perform any cognitive task that a human can; in other words, a machine that can mimic a human. Three capabilities are considered critical to building an artificial general intelligence. The first is the ability to generalize knowledge, applying what was learned in one area to a different issue or task. The second is the ability to make predictions based on prior knowledge and experiences, and the third is the ability to adapt to changes (Forbes). Notably, artificial general intelligence raises many ethical arguments, and some argue that a truly “strong” artificial intelligence is impossible to build.

Overall, artificial intelligence can add intelligence to preexisting technologies. It can perform tasks reliably, with far less error than a human and at far greater speed, and it can adapt through progressive learning. In the future, artificial intelligence may have an even greater impact on our everyday lives.

Real-Life Use Cases for Artificial Intelligence

Daily Tech Use

Depending on how much tech you interface with, you may be thinking: “Artificial Intelligence isn’t used for anything I do or use. Why would I need to know where AI is used?” To answer quickly: artificial intelligence is already embedded in many of the tools and daily tasks that most people (possibly even you!) rely on.

Whether you’re trying to find something via Google, deciding what to watch on Netflix, or discovering niche music genres on Spotify, all of these sites use AI-driven algorithms to deduce what you’re probably interested in seeing (University of York n.d.). For example, if you’re a STEM major who happens to search for the phrase “R Programming” often enough, Google will eventually pick up that you are most likely not looking for the history of how the letter R came to exist. Likewise, if you’re a linguistics major looking into how the modern letter R came to exist, you will most likely not get search results related to the R programming language. Of course, this isn’t the only situation where two people will get radically different search results. In fact, Google’s algorithmic presentation of information based on what you typically look for has a name: “filter bubbles.” The term was coined over a decade ago by political activist Eli Pariser, who demonstrated the phenomenon in a 2011 TED Talk with two different people searching for “Egypt” around the same time. While that conversation was predominantly about how filter bubbles impact politics and activism, it should be noted that filter bubbles would not exist without the artificial intelligence behind them. That said, being aware of how AI algorithms can influence what you see is an important aspect of civic engagement. This may become even more pertinent as newer chatbots present further issues, such as giving false information when asked certain questions. In short, the implementation of AI matters to everyone.

For a less ominous use of modern AI, there are also applications in handwriting recognition software. With written English, a touch-screen interface combined with AI, image processing, and computer vision can convert handwriting into text-compatible notes. This can be extremely useful for transferring text data from one computer to another. While you could take a photo of your notes for someone else to look at, an image has limited use after the fact: you cannot search for a keyword in text that was only saved as a picture. A computer that can convert handwriting to typed text also allows someone to use a search engine without typing. This use of AI even extends beyond the English language. Handwriting recognition research has been carried out for several languages, including non-Western languages such as simplified Chinese, Arabic, and Thai. As a consequence, handwriting recognition AI can bypass the need to type (a skill that is separate from writing and is even less common). Converting handwritten text to computer text formats also feeds into translation AIs; while tools such as Google Translate may not be the most reliable, they can serve in a pinch in situations such as a hospital ER.
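To make the pattern-recognition step behind handwriting recognition concrete, here is a minimal sketch using scikit-learn’s small built-in digits dataset (tiny 8x8 scans of handwritten numbers) in place of real note scans. The model choice and code are illustrative assumptions only, not how any particular commercial product works.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()                      # 1,797 tiny 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = KNeighborsClassifier()              # classify a new scan by its closest known examples
model.fit(X_train, y_train)
print(model.score(X_test, y_test))          # accuracy on held-out digits, typically around 0.98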

AI in Economics and Finance

Economics and finance also embrace technology to carry out their work. For example, technology is particularly relevant to detecting credit card and insurance fraud. There are well-established ways to use mathematics and statistics to determine whether someone’s financial accounts have been compromised. The conundrum of modern finance and economics, however, is that transactions happen far faster than humans can currently keep up with. An AI algorithm can calculate the probability that a financial transaction was fraudulent far faster than a human could. Therefore, as long as the humans behind the algorithm have given their AI sound formulas to work with, that processing speed is of great assistance in preventing modern-day fraud.
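As a rough illustration (not a description of any bank’s actual system), the sketch below trains a simple classifier on a handful of invented transactions and then scores a new one; the features, labels, and numbers are made up purely to show how a probability of fraud could be computed.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one transaction: [amount in dollars, hour of day, foreign merchant (0/1)]
X = np.array([[20, 14, 0],
              [5000, 3, 1],
              [35, 19, 0],
              [4200, 2, 1],
              [60, 12, 0],
              [3900, 4, 1]])
y = np.array([0, 1, 0, 1, 0, 1])            # 1 = flagged as fraud in the historical records

clf = LogisticRegression(max_iter=1000).fit(X, y)

new_txn = np.array([[4500, 3, 1]])          # an incoming transaction to score
print(clf.predict_proba(new_txn)[0, 1])     # estimated probability that it is fraudulent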

Likewise, AI is already a cornerstone of the modern foreign exchange market (also known as FOREX). While the concept of foreign exchange has existed since antiquity, there are additional considerations in contemporary times. Specifically, modern currencies are traded in significantly larger amounts and at faster speeds than ever before. In fact, modern FOREX is so large and so fast that a human being cannot efficiently or consistently make profits without AI tools, predominantly because the majority of FOREX transactions are now carried out by AI bots instead of humans. A study commissioned by JPMorgan in 2020 determined that about 60% of all FOREX transactions were made by AI rather than by humans! This is not to say that human involvement in FOREX is non-existent. Rather, the human role of a FOREX trader is no longer physically placing trades, but examining formulas and writing better and better code for a FOREX AI bot to operate with. Essentially, AI frees up time for human financiers to make analytical decisions instead of physically waiting on or placing trades, if they are so inclined. It should be noted that these applications of AI are still new and often come with the risk of sudden price shifts wiping out short-term profits.

AI in Healthcare

Artificial Intelligence also has applications in healthcare. It might be odd to think about how AI would impact something as physical as your own body, but there are already several cases where it can be used. 

For example, AI can be used to detect lethal drug interactions and to design vaccines from scratch. For the former, researchers at Pennsylvania State University used AI to study which prescription drug combinations could cause liver damage. For the latter, in 2019 researchers at Flinders University in Australia developed the first flu vaccine that was completely designed by artificial intelligence; previously developed vaccines had been only partially designed by AI, paving the way for this first 100% AI-designed vaccine. Furthermore, AI is used in physical machines developed for medical purposes, namely robot-assisted surgery. While most robotic surgical systems are not 100% AI-driven, the very first instance of a surgical robot performing surgery by itself was back in 2006 (United Press International 2006)! This isn’t commonplace at the moment, but robot-assisted surgery with human intervention is. Hence, it is worth considering whether medical science should completely automate surgery, or use AI surgical robots as collaborative machines.

What is Machine Learning?

Machine learning is a subset of AI specializing in taking data and improving the accuracy of predictions made from that data. For example, if the temperature increased by one degree Fahrenheit every day, a machine learning algorithm could use that data to predict that the temperature would keep increasing by one degree per day. This is arguably the simplest form of machine learning, called linear regression (as there is a linear relationship between the number of days and the temperature). However, machine learning encompasses many different ideas and models, including applications such as weather forecasting.
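As a minimal sketch of that temperature example, assuming made-up daily readings that rise by exactly one degree, a linear regression model can learn the trend from the first five days and extend it one day forward:

import numpy as np
from sklearn.linear_model import LinearRegression

days = np.array([[1], [2], [3], [4], [5]])         # day number (the input feature)
temps = np.array([60.0, 61.0, 62.0, 63.0, 64.0])   # degrees Fahrenheit, rising 1 degree per day

model = LinearRegression().fit(days, temps)        # learn the linear trend from the data
print(model.predict(np.array([[6]])))              # predicts ~65.0 for day 6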

Machine learning is used in many ways throughout our everyday lives, such as for Spotify and YouTube recommendations, stock market predictions, and advertisements. With more data becoming readily available every day, the potential applications of ML will only continue to increase. Creative destruction, in economics, is the idea that new and better technology may eliminate some jobs in the short run, but that in the long run productivity will rise, new jobs will be created, and living standards will improve. With AI potentially taking over some jobs, such as customer service roles, and some of those jobs being replaced by jobs coding AI tools, creative destruction is taking place and will only continue to do so. Therefore, with ML underpinning so much of the Internet today, it is essential to understand what it does.

Machine learning generally works in two ways: supervised and unsupervised learning. With supervised learning, a computer is trained on labeled data and can then use that data to make new predictions. For example, if we wanted to train a computer to recognize a picture of an apple, we would first need to input a large number of pictures that contain apples and pictures that do not, labeled appropriately. The computer would then take this data, build a model from it, and predict whether a new picture contains an apple. Unsupervised learning is generally used to cluster or group segments of data. For example, Spotify could use this type of ML algorithm to group listeners into certain categories. One potential grouping could be hip-hop and rap listeners, enabling Spotify to suggest hip-hop artists to rap listeners and vice versa.

Figure 1: Supervised vs. Unsupervised Learning (Yan et al. 2018)
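Below is a small illustrative sketch of the two approaches using scikit-learn’s bundled iris dataset as stand-in data (rather than apple photos or Spotify listeners); the specific models chosen here are assumptions for demonstration, not the only options.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)                  # measurements (X) and species labels (y)

# Supervised: the model is trained on labeled examples, then predicts labels for new data.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:3]))                          # predicted labels for three samples

# Unsupervised: the model sees only X and groups similar samples, with no labels involved,
# much like clustering listeners by taste without knowing their favorite genres.
km = KMeans(n_clusters=3, n_init=10, random_state=0)
print(km.fit_predict(X)[:3])                       # cluster IDs (arbitrary numbering)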

One way a computer builds such a model is through iterative training, in which it learns to predict the future from past data by repeatedly adjusting itself. Going back to the apple example, the computer could start out making essentially random guesses about which pictures contain apples. The model’s guesses are then checked against the labeled data, and if they are off, the model adjusts. Each pass through the dataset (each time the model goes through every picture and guesses which ones have apples) is called an epoch. After tens or hundreds of epochs, the model gets better and better; ideally, a good model can eventually identify pictures containing apples with close to 100% accuracy. (A related idea, reinforcement learning, trains a model through trial and error using rewards rather than labeled examples.)
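A rough sketch of that training loop, using synthetic data in place of apple pictures and a simple scikit-learn classifier updated one epoch at a time, might look like this:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# Synthetic two-class data stands in for "apple" vs. "not apple" pictures.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

model = SGDClassifier(random_state=0)
for epoch in range(1, 11):                          # each full pass over the data is one epoch
    model.partial_fit(X, y, classes=np.array([0, 1]))
    print(f"epoch {epoch}: training accuracy {model.score(X, y):.2f}")
# Accuracy generally improves as the epochs accumulate, ideally approaching 100%.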

Use Cases for ML: Sports Analytics

One example of machine learning in the real world is the rushing yards over expectation (RYOE) metric in the NFL (National Football League). To calculate RYOE, developers first estimate the expected rushing yards on a play given a few factors, such as the speed of nearby defenders and the number of blockers in the area. Then, given the actual rushing yards gained, RYOE is calculated as (actual yards) – (expected yards). Using new data and machine learning models built around this metric, teams can better determine whether rushing yards are the product of the running backs themselves or of offensive linemen and schemes. This also allows for quantitative comparisons of the value of passing plays versus running plays, and subsequently of where teams should invest personnel resources. Thus, with new data and machine learning models applied to that data, we can make a cohesive argument to finally answer the question: do running backs really matter?
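For illustration, a hedged sketch of the RYOE arithmetic might look like the following, where the play-level features and yardage numbers are invented and a simple linear model stands in for the far more sophisticated expected-yards models used in practice.

import numpy as np
from sklearn.linear_model import LinearRegression

# Each row is one rushing play: [closest defender speed (yds/s), blockers in the box]
plays = np.array([[6.0, 5], [4.5, 7], [5.2, 6], [7.1, 4], [5.8, 6]])
actual_yards = np.array([3.0, 8.0, 5.0, 1.0, 4.0])

expected_model = LinearRegression().fit(plays, actual_yards)   # stand-in expected-yards model
expected_yards = expected_model.predict(plays)

ryoe = actual_yards - expected_yards        # RYOE = (actual yards) - (expected yards)
print(np.round(ryoe, 2))                    # positive values mean the runner beat expectation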

Another use of machine learning is in sports betting. By analyzing historical data on player ratings, injury history, and various other metrics, betting companies and bettors can train a machine learning model. By plugging in the current values of those metrics, the model can predict, for example, who will win a game and by how many points. Betting companies use such predictions to set betting lines for games, and if a bettor’s own model disagrees with those lines, the bettor may believe their model is better and use it to bet on the game.

Furthermore, machine learning can be used to analyze game-time decisions in sports such as baseball and basketball. By looking at player performance in the past and seeing how they perform compared to other players in specific situations, such as in the rain or sun, teams can utilize machine learning to predict how players will perform in the future. Given this data, they can put their players in the best possible position to succeed.

Conclusion

In essence, Artificial Intelligence and Machine Learning are deeply interrelated concepts, especially given that Machine Learning is itself a subset of the broader AI field. Both broader AI and more specific Machine Learning techniques have applications ranging from entertainment such as sports and music, to daily living tasks such as handwriting recognition and home assistant devices, to critical infrastructure such as finance and medicine. This leads one to ask where artificial intelligence is not yet implemented. While that can be hard to say when tech experts in academia and the private sector cannot come to a consensus, one thing is certain: AI and Machine Learning carry at least some importance in everyone’s life in one way or another, whether directly or indirectly.

This also leads to further questions, such as whether the importance of these technologies is overstated or understated, since the exact magnitude of the impact artificial intelligence and machine learning will have on society is still unknown. With the introduction of machine learning chatbots such as ChatGPT, it can be challenging to ascertain how useful they will be in the long run. While ChatGPT can answer prompts ranging from “Where was Abraham Lincoln killed?” to “Code a website for me,” it still fails at some simple logical questions from time to time. Although the tool has been trained on an astounding three billion words, it is far from perfect at this time. However, as time goes on, ChatGPT and similar tools will be trained on even more data, computers will become even faster, and the applications and accuracy will only increase, leaving us to wonder whether future applications will be indistinguishable from humans. As with our earlier example of robotic surgeons, only time will tell whether AI- and ML-powered chatbots will require extensive assistance from humans or become capable of operating autonomously. While we cannot answer this question now, nor do we encourage a specific stance on artificial intelligence and machine learning, we can say that it is a topic to keep an eye on.

Works Cited

For a list of references, please use this link: http://bit.ly/3GBKGof

This blog post was written by William-Elijah Clark (Senior STEM Data Fellow), Sahil Chugani (STEM Data Fellow) and Reagan Bourne (STEM Data Fellow) from FSU Libraries.
