Game Theory, Cryptography, and Artificial Intelligence: A Comprehensive Guide

Navein Suresh & Keshav Shah

9 min read · Jun 26, 2021


Machine learning and all of its incredible capabilities stand on their own, but what if there were a way to connect how we secure personal data with machine learning architectures? In this article we explore the intersection of cryptography and game theory and how both can be applied in the world of machine learning. We first introduce game theory and cryptography, then give a summary of machine learning and how it is used for prediction and labeling of complex data.

Brief Overview of Game Theory

Let’s begin by understanding what game theory is. Game theory is a branch of mathematics centered on the study of strategic interactions during a “game”. We make a couple of key assumptions about this “game”:

  • There is a certain structure and established set of rules
  • Individuals participating in the game make rational decisions (pursue the best possible strategy)

For the most part, at least for our purposes, games can be categorized as either sequential or simultaneous. In a simultaneous game, players make their decisions at the same time, and the results are represented using a pay-off matrix. In a sequential game, by contrast, players take turns making decisions, and the possible results are portrayed using a decision tree. There is a vast variety of executable strategies depending on the specific problem; prominent examples include pure and mixed strategy Nash equilibria, dominant strategies, and more.

A dominant strategy in game theory is one that leads to a better outcome for you regardless of what the other players choose. Not every game necessarily has a dominant strategy.

Nash equilibrium is another particular area of interest in game theory. A Nash equilibrium for a particular game is a set of strategies such that no player has any incentive to unilaterally switch his/her strategy. John Nash proved that every finite game (a finite number of players, each with a finite number of strategies) has at least one Nash equilibrium. To find one in a pay-off matrix, each outcome must be checked to ensure that every individual is satisfied with his/her choice given the others’ choices. Nash equilibria come in two forms, pure strategy and mixed strategy, and at least one of the two must exist in any finite game.

Nash Equilibrium situation demonstrated using a table

In a pure strategy Nash equilibrium, each individual chooses a particular strategy with 100% certainty (definitely), and has no regrets after the other players make their decisions. In a mixed strategy Nash equilibrium, an individual instead plays a probability distribution over specific courses of action, chosen so that the situation is stable and no other player can exploit the choice with certainty.
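The pay-off matrix check described above can be sketched in a few lines of code. This is a toy example using the classic Prisoner’s Dilemma pay-offs (the strategy names and numbers are the standard textbook ones, not from this article): an outcome is a pure strategy Nash equilibrium when neither player can improve by deviating alone.

```python
# Find pure-strategy Nash equilibria of a 2x2 game by checking, for every
# outcome, that neither player can gain by unilaterally deviating.
# Pay-offs are the classic Prisoner's Dilemma: (player 1, player 2).
payoffs = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}
strategies = ["cooperate", "defect"]

def is_nash(s1, s2):
    u1, u2 = payoffs[(s1, s2)]
    # Player 1 deviates while player 2's choice stays fixed, and vice versa.
    best1 = all(payoffs[(a, s2)][0] <= u1 for a in strategies)
    best2 = all(payoffs[(s1, b)][1] <= u2 for b in strategies)
    return best1 and best2

equilibria = [(s1, s2) for s1 in strategies for s2 in strategies
              if is_nash(s1, s2)]
print(equilibria)  # [('defect', 'defect')]
```

Note that mutual cooperation is better for both players, yet mutual defection is the only outcome from which neither player would deviate alone, which is exactly the “no regrets” condition above.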

One important term introduced in game theory is the “zero-sum” game. A zero-sum game is defined so that whenever an individual wins, the remaining individuals (whether that be one more person or a hundred more) lose; the net change in total pay-off is zero. Nash equilibrium strategies are one way to solve “zero-sum” games. Real-life examples of “zero-sum” games include chess and rock-paper-scissors. When one individual tries to maximize his or her reward, the other player’s reward is necessarily minimized.

The minimax theorem is closely linked with zero-sum games. The idea behind minimax is to maximize the utility/gain you are guaranteed even when faced with the worst possible response from your opponent. The formal mathematical definition is stated below:

Minimax Formula

When applied to zero-sum games, following a minimax strategy is equivalent to establishing a Nash equilibrium (creating a stable situation in which no player regrets his or her choice).
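The “maximize your worst case” idea can be shown concretely. Here is a minimal sketch on a made-up 3x3 pay-off matrix for the row player in a zero-sum game: for each row we assume the column player answers as harmfully as possible, then pick the row whose worst case is best.

```python
# Maximin for the row player in a finite zero-sum game: entries are the
# row player's pay-offs; the column player receives the negatives.
matrix = [
    [ 3, -1,  2],
    [ 1,  0,  1],
    [-2,  4, -3],
]

def maximin(m):
    # Worst case of each row (the column player minimizes), then the
    # best such worst case (the row player maximizes).
    worst_cases = [min(row) for row in m]
    value = max(worst_cases)
    return worst_cases.index(value), value

row, value = maximin(matrix)
print(row, value)  # row 1 guarantees the row player at least 0
```

Row 1 looks unimpressive (its best pay-off is only 1), but it is the only row that can never be punished below 0, which is exactly the security that minimax buys.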

Introduction to Cryptography

We can now move on to cryptography. First, let us understand: what is cryptography in the broad sense? Well, cryptography is simply a method of protecting data and information by using specific codes so that the information can only be accessed by the person it is meant for.


Ciphertext indistinguishability is used in the background of many real-world encryption systems to prevent cyberattacks. The idea behind these systems is that an external observer cannot tell two ciphertexts apart, and therefore cannot learn anything about their contents, even if he or she knows which plaintext messages they might correspond to. Now let us discuss what this truly means in the broad sense of encryption.


Encryption itself is a subsection of cryptography that involves scrambling data so that information can be shared safely: only authorized personnel can decrypt the data and uncover the message behind it. Encryption is done with something called an encryption key: a mathematical secret shared between the sender and receiver of the data that is unusable by external sources, so that messages can be scrambled in a way only the intended party can reverse.

Encryption keys cannot be cracked unless a brute-force (guessing) method manages to find the key by trial and error. With modern key sizes this is astronomically unlikely: a 128-bit key, for example, has 2^128 possible values, rendering it practically impossible for brute-force methods to guess correctly.

Types of Encryption

Symmetric Encryption: Only one key is used for both encryption and decryption. This obviously depends on both parties having access to the key, and both parties must maintain enough security that no third party can obtain the encryption key and take hold of the data transferred between the two intended parties.
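A minimal sketch of the symmetric idea is a one-time pad built from XOR (a teaching toy, not production cryptography): both parties hold the same random key, and the very same key both scrambles and unscrambles the message.

```python
import os

# Toy symmetric scheme: XOR one-time pad. Anyone without the shared key
# sees only random-looking bytes.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = os.urandom(len(message))          # the shared secret

ciphertext = xor_bytes(message, key)    # sender encrypts...
recovered = xor_bytes(ciphertext, key)  # ...receiver decrypts with the SAME key
assert recovered == message
```

The weakness the article points out is visible here: the entire scheme collapses if a third party ever obtains `key`, which is why key distribution is the hard part of symmetric encryption.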

Asymmetric Encryption: This type of encryption involves distinct keys for the sender and the receiver in order for data to be both encrypted and decrypted. In other words, two different keys are used in the process. This type of encryption is also known as public key encryption, as it uses public/private key pairings: data encrypted with a public key can only be decrypted with the corresponding private key, and, conversely, data signed with a private key can be verified with the public key.
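The public/private split can be demonstrated with textbook RSA on tiny primes (these specific numbers are the standard classroom example, chosen only for illustration; real RSA uses enormous primes): anyone can encrypt with the public pair (n, e), but only the holder of the private exponent d can decrypt.

```python
# Textbook RSA with toy numbers.
p, q = 61, 53
n = p * q                   # 3233, published as part of the public key
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent: modular inverse of e mod phi

m = 65                      # the message, encoded as a number below n
c = pow(m, e, n)            # anyone encrypts with the public key
assert pow(c, d, n) == m    # only the private key holder recovers m
```

Notice the asymmetry: computing `c` from `m` needs only public information, while undoing it requires `d`, which in turn requires knowing the factorization of `n`.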

Zero Knowledge Encryption

Zero knowledge techniques use mathematics to verify attributes of data or knowledge without revealing the underlying data or sources used for the verification. This can be heavily applied in the real world: a payment system, for instance, can verify that a balance is sufficient to complete a transaction without accessing any other information connected to one’s credit card. A zero knowledge proof is a probabilistic assessment rather than a certainty: it analyzes small pieces of unlinked data, and each successful check makes cheating less and less probable.
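One round of a zero-knowledge-style proof can be sketched with the Schnorr identification protocol on toy numbers (all parameters here are tiny and purely illustrative): the prover convinces the verifier she knows a secret x with y = g^x mod p, without ever sending x.

```python
import random

# Schnorr identification with toy parameters.
p, g, order = 23, 5, 22        # g = 5 has multiplicative order 22 mod 23
x = 7                          # the prover's secret
y = pow(g, x, p)               # the public value everyone can see

# One round of the protocol:
r = random.randrange(order)
t = pow(g, r, p)               # prover commits to a random r
c = random.randrange(order)    # verifier issues a random challenge
s = (r + c * x) % order        # prover's response hides x behind r

# Verifier's check: g^s must equal t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The response `s` alone reveals nothing about `x` because `r` is random, yet the check only passes if the prover really knows `x`; repeating the round drives a cheater’s success probability toward zero, matching the probabilistic nature described above.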

A.I. In a Nutshell

Artificial intelligence can be defined as the notion of giving computers characteristics of humans that cannot be replicated with ordinary computer science techniques. These include, but are not limited to, logical reasoning, creativity, decision-making, language, and social skills. These attributes are given to computers using machine learning.

Machine Learning

Machine learning is a pipeline in which data (both inputs and outputs) is combined with a specific algorithm in order to draw inferences from the data, which can take the form of predictions (supervised ML), patterns (unsupervised ML), or decisions (reinforcement learning, RL).

Supervised Learning: Given data that is labeled, the goal is to predict the labels of unlabeled data. In other words, it is machine learning with a known outcome, where the best algorithm to reach that outcome is being determined. (Regression)

Unsupervised Learning: Given a data set, find structures and patterns. The outcome or end goal is not known, and algorithms are created to summarize and group data. (Clustering)

Reinforcement Learning: Here the agent learns from its environment. Through trials, it explores the full range of possibilities to determine the ideal behavior that maximizes reward and minimizes risk. The agent first observes the input state, then:

  1. Decision making function is used by an agent to perform an action.
  2. Agent receives reward or reinforcement/feedback from the environment.
  3. The action and state information regarding the reward is saved.
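The three steps above can be sketched as a minimal Q-learning loop. The environment here is a hypothetical five-cell corridor of our own invention (the reward, the learning rate, and every other parameter value are illustrative, not prescriptive): the agent earns a reward only by reaching the rightmost cell.

```python
import random

# Minimal Q-learning: observe state, act, receive reward, save the update.
n_states, actions = 5, (-1, +1)          # move left or right in a corridor
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(2000):
    s = 0
    while s != n_states - 1:
        # 1. The decision-making function chooses an action (epsilon-greedy).
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), n_states - 1)
        # 2. The environment returns a reward / reinforcement signal.
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # 3. The value of this (state, action) pair is updated and saved.
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, moving right is valued above moving left everywhere.
assert Q[(0, +1)] > Q[(0, -1)]
```

The saved table `Q` is exactly the “action and state information regarding the reward” from step 3; the agent never sees the corridor’s rules, only the feedback it collects.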

Building on the types of ML, we get to features, labels, and models. Features are simply the input x variables in our data, labels are the output or y variables, and a model is simply the relationship formed between x and y. This leads to the difference between classification and regression, two basic types of relationships that can be made between x and y.

Classification vs Regression

Regression is used for supervised machine learning models where the data is pre-labeled with numeric values; predicting future house prices based on a house’s age, for instance, is a proper application of regression. Classification, on the other hand, puts data into a category based on probability. A common approach, logistic regression, maps a model’s score onto a sigmoid curve in order to separate two or more classes. Classification simply outputs a categorical result, an example being whether an email is spam. Obviously, every email on the internet cannot be hand-labeled as spam or not, so emails can be classified using some of the words used within each email, and the machine learning model then produces a probability of a specific email being spam.
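The two ideas can be contrasted on the same tiny, made-up data set (the numbers, the “house age” interpretation, and the threshold are all invented for illustration): a closed-form least-squares line gives a continuous regression output, and squashing a score through a sigmoid turns it into a classification probability.

```python
import math

# Made-up data: x could be house age in decades, y a price in some unit.
xs = [1, 2, 3, 4, 5]
ys = [10, 8, 7, 5, 4]

# Closed-form simple linear regression (least squares).
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):                    # regression: a continuous output
    return intercept + slope * x

def classify(x, threshold=7):      # classification: a probability in (0, 1)
    z = predict(x) - threshold     # hypothetical "price above 7?" score
    return 1 / (1 + math.exp(-z))  # sigmoid squashing, as in logistic regression
```

`predict(3)` returns a number on the fitted line, while `classify(3)` answers a yes/no question with a probability, which is precisely the regression-versus-classification split described above.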


These basics of the machine learning pipeline show how data is used not only as input and output, but also, in a broad sense, to make judgments based on the patterns that surround it. This allows computers to make inferences at a much higher level than a simple computer program returning a pre-programmed output, hence the name artificial intelligence is justified.

How Do These Fields Intersect?

GANs (Generative Adversarial Networks), reinforcement learning, multi-agent AI systems, and more all have vast applications of game theory. In fact, many of the online games played today are heavily influenced by both areas of study. Consider regulating traffic with AI-powered self-driving cars. When these cars are modeled using game theory tools, the situation becomes manageable. Without such a representation, each car makes a decision that is not necessarily beneficial for the overall traffic congestion, causing potential problems. By employing tools such as Nash equilibrium, we can find a solution in which each player (in this case, each car) has a pathway out of the traffic jam that it would not choose to deviate from. In doing so, the optimal algorithm can be employed, eventually dispersing the traffic.
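The traffic intuition can be sketched as a toy two-route congestion game (the routes, the delay function, and the car count are all hypothetical): cars repeatedly switch to whichever route is currently faster, and the process settles exactly where no single car gains by switching, a Nash equilibrium.

```python
# Best-response dynamics in a toy congestion game: 10 cars, two routes,
# delay on a route equal to the number of cars using it.
n_cars = 10
on_route_a = n_cars              # start with everyone jammed on route A

def delay(cars_on_route):        # delay grows with congestion
    return cars_on_route

for _ in range(100):
    a, b = on_route_a, n_cars - on_route_a
    if delay(a) > delay(b) + 1:      # a car on A would be better off on B
        on_route_a -= 1
    elif delay(b) > delay(a) + 1:    # a car on B would be better off on A
        on_route_a += 1
    else:                            # no single car can improve: equilibrium
        break

print(on_route_a)  # 5: the traffic splits evenly at equilibrium
```

The `+ 1` in each condition accounts for the switching car adding itself to the other route, so a switch happens only when it is a strict improvement; that is the “no car would change its decision” condition from the paragraph above.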


Fields that traditionally were considered highly abstract and isolated are now being brought together to solve modern-day problems. What’s even more inspiring and amazing is that there is active research going on in all these areas, bringing us closer and closer to making our lives better and safer.

“The only thing greater than the power of the mind is the courage of the heart”

— John Nash

John Nash, American Mathematician and Nobel Prize Recipient