Nathan's Notes
The Explore vs. Exploit Dilemma
Multi-armed bandits and a prolonged analogy
Oct 30, 2024

I often analogize real-world problems to their ML counterparts. One of them is the exploration-exploitation problem, but it’s often been met with minimal recognition. I wrote this blog as something I can refer to the next time I get a “what do you mean?”
Introduction: The Multi-Armed Bandit Problem
Suppose we have a series of decisions to make, each with the potential to yield a reward. In our multi-armed bandit problem, we aim to develop a strategy to maximize this reward over time. We envision each “arm” as a slot machine, each one hiding a different reward distribution. Our task is to identify which arm to pull at each step in time to accumulate the most reward.
If we consider $S_0$ as our starting state—knowing nothing about the reward distributions—and $S^*$ as the ideal state—where we have complete knowledge of the best arm—then we can define a function that carries us from ignorance to optimal selection. In this framework, we can imagine a vector field guiding us from exploring new arms to exploiting the most rewarding ones.
Our state of knowledge at time $t$ can be denoted $Q_t$, representing our expected reward for each arm, updated after each trial. We can define the expected reward flow as:

$$F_t(a) = \epsilon(t)\, U_t(a) + \bigl(1 - \epsilon(t)\bigr)\, Q_t(a)$$

where $\epsilon(t)$ is an exploration parameter that decreases over time, $Q_t(a)$ is the action-value function for each arm (capturing the expected reward given our current understanding), and $U_t(a)$ is a measure of the uncertainty or unexplored potential for each arm.

In early steps, $t \approx 0$, our strategy should be primarily exploratory: $\epsilon(t) \approx 1$ and thus $F_t(a) \approx U_t(a)$. On the contrary, as $t/T$ approaches 1, $\epsilon(t)$ should approach 0, directing $F_t$ toward the maximum expected value action, or exploitative policy. To represent this transition, we can set $\epsilon(t) = e^{-\lambda t}$, where $\lambda$ is a decay constant that controls how quickly we shift from exploration to exploitation.
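As a rough sketch of this flow (the arm values, uncertainties, and decay constant below are made-up numbers, and $U_t$ is treated as given rather than estimated):

```python
import numpy as np

def reward_flow(Q, U, t, lam=0.05):
    """Blend uncertainty (exploration) and estimated value (exploitation),
    weighted by a decaying epsilon(t) = exp(-lambda * t)."""
    eps = np.exp(-lam * t)
    return eps * U + (1.0 - eps) * Q

Q = np.array([0.2, 0.5, 0.4])   # current value estimates per arm
U = np.array([0.9, 0.1, 0.6])   # uncertainty / unexplored potential per arm
print(reward_flow(Q, U, t=1))    # early: scores track U (explore)
print(reward_flow(Q, U, t=100))  # late: scores track Q (exploit)
```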
The derivative of this function defines a policy gradient, guiding our choice of action. Instead of learning the policy directly, we train a forward dynamics model to predict rewards given our current knowledge, aiming to maximize:

$$J = \mathbb{E}\left[\sum_{t=0}^{T} \gamma^t r_t\right]$$

where $\gamma$ is the discount factor controlling the impact of future rewards, and $r_t$ is the observed reward at each time step $t$. This function captures our goal to maximize cumulative rewards while balancing exploration and exploitation over time.
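As a quick numeric check of the discounting (the rewards and $\gamma$ here are arbitrary):

```python
gamma = 0.9
rewards = [1.0, 0.0, 2.0]   # observed r_t over three pulls
G = sum(gamma**t * r for t, r in enumerate(rewards))
print(G)                    # 1.0 + 0.9*0.0 + 0.81*2.0 = 2.62
```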
The model’s predictions thus guide us to iteratively pull arms that will yield the highest cumulative rewards, refining our understanding of each arm’s distribution and honing in on optimal actions.
The Forward Dynamics Model
In the multi-armed bandit problem, a forward dynamics model $f_\theta$ is an auxiliary model that predicts the expected reward for each arm based on past actions and observed rewards. By approximating the environment’s response to each action, $f_\theta$ helps us make informed choices, directing our strategy toward higher rewards.
1. Defining the Forward Dynamics Model
The forward dynamics model can be structured as a parameterized function $f_\theta$ with weights $\theta$, taking in a feature vector $x_a$ (representing each arm’s current state) and outputting a predicted reward $\hat{r}_a = f_\theta(x_a)$. The model aims to approximate the action-value function $Q(a)$, which maps each action (arm) to an expected reward. In essence, $f_\theta$ estimates:

$$f_\theta(x_a) \approx \mathbb{E}[r \mid a]$$

where $r$ is the actual reward received from pulling the arm represented by $x_a$. Training involves refining this approximation over multiple trials, improving its ability to generalize from limited samples.
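A minimal sketch of such an $f_\theta$, assuming one-hot arm features and a single linear layer (which in this setting reduces to a learned per-arm mean):

```python
import numpy as np

class ForwardDynamicsModel:
    """f_theta(x_a) -> predicted reward, with x_a a one-hot arm encoding."""

    def __init__(self, n_arms, lr=0.1):
        self.theta = np.zeros(n_arms)   # weights theta, one per arm
        self.lr = lr

    def predict(self, x):
        return float(self.theta @ x)    # predicted reward = theta . x_a

    def update(self, x, r):
        # one SGD step on the squared error (f_theta(x_a) - r)^2
        err = self.predict(x) - r
        self.theta -= self.lr * err * x
```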
2. Training Objective
To train $f_\theta$, we minimize the error between predicted rewards $\hat{r}_i$ and observed rewards $r_i$. The model is typically trained by minimizing the mean squared error (MSE) over all arms and trials:

$$\mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \bigl(f_\theta(x_{a_i}) - r_i\bigr)^2$$

where $N$ is the number of training samples, each associated with an action $a_i$ (one of the arms) and an observed reward $r_i$. Minimizing this loss function encourages $f_\theta$ to make accurate reward predictions, allowing the model to distinguish more promising arms from less rewarding ones.
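A sketch of that objective in action, with synthetic data, a linear $f_\theta$, and plain batch gradient descent (all choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, n_samples = 3, 500
true_means = np.array([0.2, 0.5, 0.4])

# synthetic log of (one-hot action, observed reward) pairs
actions = rng.integers(0, n_arms, size=n_samples)
X = np.eye(n_arms)[actions]
r = rng.normal(true_means[actions], 0.1)

theta = np.zeros(n_arms)
for _ in range(200):
    pred = X @ theta
    grad = (2.0 / n_samples) * X.T @ (pred - r)   # gradient of the MSE
    theta -= 0.5 * grad

print(theta)   # approaches true_means as the MSE shrinks
```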
3. Data Collection Strategy
Training the forward dynamics model requires collecting data on observed rewards from pulling each arm. However, the exploration-exploitation dilemma influences how we gather this data:
Exploration Phase: During exploration, the model pulls different arms at random to collect a diverse dataset of reward outcomes. This phase ensures that $f_\theta$ samples from a wide range of potential rewards, even from suboptimal arms, capturing enough data to model the environment’s variability.
Exploitation Phase: As the model shifts toward exploitation, it begins to pull the arms predicted to yield higher rewards based on $f_\theta$’s predictions. During this phase, $f_\theta$ refines its understanding of the best arms and focuses on modeling the nuances in expected rewards for these higher-value actions. Both phases are sketched together below.
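The exponential $\epsilon(t)$ schedule and the incremental-mean estimates below are assumptions of mine, and `pull` stands in for whatever bandit environment is available:

```python
import numpy as np

def collect_data(pull, n_arms, T, lam=0.05, seed=0):
    """Gather (arm, reward) pairs, drifting from exploration to exploitation."""
    rng = np.random.default_rng(seed)
    Q = np.zeros(n_arms)       # running mean reward per arm
    counts = np.zeros(n_arms)
    log = []
    for t in range(T):
        if rng.random() < np.exp(-lam * t):   # exploration phase
            arm = int(rng.integers(n_arms))
        else:                                 # exploitation phase
            arm = int(np.argmax(Q))
        reward = pull(arm)
        counts[arm] += 1
        Q[arm] += (reward - Q[arm]) / counts[arm]   # incremental mean update
        log.append((arm, reward))
    return log, Q

# e.g. a toy environment: pull = lambda a: np.random.normal([0.2, 0.5, 0.4][a], 0.1)
```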
4. Incorporating Reward Predictions into Policy Gradients
Once trained, $f_\theta$’s reward predictions are used to inform the policy and guide our decision-making. The policy gradient objective, which maximizes cumulative rewards, can be modified to include predicted rewards:

$$J(\theta) = \mathbb{E}\left[\sum_{t=0}^{T} \gamma^t f_\theta(x_{a_t})\right]$$

where $\gamma$ is the discount factor for future rewards, $x_{a_t}$ is the action-state at time $t$, and $f_\theta(x_{a_t})$ provides the reward prediction. This objective encourages the model to pull arms with higher predicted rewards, continually adjusting as $f_\theta$ is updated with new data.
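One simple way to turn those predictions into an action distribution is a softmax over predicted rewards; the temperature and the softmax form itself are my additions, not something fixed by the objective above:

```python
import numpy as np

def policy_from_predictions(pred_rewards, temperature=0.5, rng=None):
    """Sample an arm in proportion to exp(predicted reward / temperature)."""
    if rng is None:
        rng = np.random.default_rng()
    logits = np.asarray(pred_rewards) / temperature
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    arm = int(rng.choice(len(probs), p=probs))
    return arm, probs

arm, probs = policy_from_predictions([0.2, 0.5, 0.4])
print(arm, probs)   # higher-predicted arms are pulled more often
```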
Ultimately, the forward dynamics model enables a structured approach to decision-making by predicting rewards for each arm, balancing exploration and exploitation based on estimated reward distributions. With ongoing training, $f_\theta$ adapts to new information, ensuring the model’s predictions remain accurate as we refine our understanding of each arm. This iterative learning process helps maximize cumulative rewards by aligning actions with $f_\theta$’s growing knowledge of the environment.
The Exploration-Exploitation Dilemma
The exploration-exploitation dilemma is central to multi-armed bandit problems: we need to explore enough to understand each arm’s potential while also exploiting known information to maximize immediate rewards. The parameter $\lambda$, which governs the rate at which we shift from exploration to exploitation, plays a key role in this balance.
Choosing $\lambda$ is a careful process. If $\lambda$ is too large, $\epsilon(t)$ decreases rapidly, leading the model to quickly favor exploitation. In this case, we may prematurely commit to arms that appear optimal based on limited early data, potentially missing out on higher-reward options that remain unexplored. On the other hand, if $\lambda$ is too small, $\epsilon(t)$ remains high, and we may spend too much time exploring, failing to capitalize on accumulated knowledge to maximize rewards.
To choose an optimal $\lambda$, consider the following factors:
Expected Variance Across Arms: If the rewards from each arm vary significantly, then a smaller $\lambda$ is generally favorable because it allows more exploration. This ensures that our policy gathers enough information to accurately assess the best arm. On the contrary, if we expect rewards to be fairly consistent across arms, we may prefer a larger $\lambda$, quickly shifting to exploitation since exploration is less likely to yield drastically different results.
Risk Tolerance: If our strategy prioritizes high-risk, high-reward outcomes, a smaller $\lambda$ allows more exploration, potentially discovering arms with rare but substantial rewards. Conversely, if the strategy is risk-averse, a larger $\lambda$ may be preferable, reducing the time spent on uncertain options and focusing on the most reliable rewards found early on.
Adaptive Adjustments: In complex environments, an adaptive approach to $\lambda$ can be beneficial. Here, $\lambda$ might dynamically adjust based on observed reward distributions or changes in the variance of $r_t$. For instance, if new arms produce high variance in $r_t$, $\lambda$ could decrease temporarily to encourage exploration; once variance stabilizes, $\lambda$ can increase to favor exploitation. A rough version of this rule is sketched below.
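The window of recent rewards, the variance thresholds, the step size, and the bounds in this sketch are all arbitrary placeholders:

```python
import numpy as np

def adapt_lambda(lam, recent_rewards, low_var=0.01, high_var=0.05,
                 step=0.005, lam_min=0.01, lam_max=0.2):
    """Nudge lambda down when recent reward variance is high (explore more),
    and up once it stabilizes (shift toward exploitation sooner)."""
    var = np.var(recent_rewards)
    if var > high_var:
        lam = max(lam_min, lam - step)
    elif var < low_var:
        lam = min(lam_max, lam + step)
    return lam
```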
There are also added considerations, such as the total number of trials $T$ and the decay function used to set $\epsilon(t)$. Depending on how critical initial exploration is, or how smoothly one wants to transition between exploration and exploitation, we may choose a different decay function, such as exponential or reciprocal. If $T$ is large, we can afford to spend more time exploring, and thus a smaller $\lambda$ is suitable. However, if $T$ is limited, we should increase $\lambda$ to favor quicker convergence toward exploitation.
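For intuition, here is how an exponential and a reciprocal schedule compare over a horizon $T$ (the constants are placeholders):

```python
import numpy as np

T = 1000
t = np.arange(T)
eps_exponential = np.exp(-0.01 * t)       # fast, smooth shift to exploitation
eps_reciprocal = 1.0 / (1.0 + 0.01 * t)   # heavier tail: keeps exploring longer

# fraction of the horizon spent mostly exploring (epsilon > 0.5)
print((eps_exponential > 0.5).mean())   # ~0.07
print((eps_reciprocal > 0.5).mean())    # 0.10
```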
Overall, the goal is to maximize cumulative reward over time by balancing exploration and exploitation. The parameter $\lambda$ determines this balance by controlling the rate of decay in the exploration factor $\epsilon(t)$. Experimenting with different values of $\lambda$ and decay functions is often necessary to find a balance that aligns with the specific requirements of the problem.
By fine-tuning $\lambda$ and $\epsilon(t)$, we guide the model toward the most rewarding actions while avoiding premature convergence on suboptimal options.
The Prolonged Analogy
I find that many people I know go through the exploration-exploitation dilemma. No one wants to be Esther Greenwood, but neither does anyone want to dive completely into some repetitive FANG lifestyle, living somebody else’s dream before they’ve even discovered their own. How do you choose when to explore and when to exploit?
Surely, we should have an adaptive $\lambda$ that depends on our surroundings. How does your $\lambda$ change as all the people around you found B2B SaaS startups (no shade, simply not my kind of intellectual stimulation), go into finance, and marry college lovers? Can I bet on this arm and go all in, or should I hedge my bets and let my forward dynamics model better model the environmental variability?
Uncertainty can be modeled as

$$U(x_a) = \frac{1}{M} \sum_{m=1}^{M} \bigl(f_{\theta_m}(x_a) - \bar{f}(x_a)\bigr)^2, \qquad \bar{f}(x_a) = \frac{1}{M} \sum_{m=1}^{M} f_{\theta_m}(x_a)$$

where $M$ is the number of ensemble models and $f_{\theta_m}(x_a)$ is the reward prediction from forward dynamics model $m$. We unfortunately do not have the benefit of an ensemble, but we can utilize people who are close to us (and hopefully similar in thought process) to provide a proxy for uncertainty.
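In code, that disagreement is just the variance of the members’ predictions; the toy ensemble of perturbed linear models below is purely illustrative:

```python
import numpy as np

def ensemble_uncertainty(models, x):
    """U(x): variance of reward predictions across M forward dynamics models."""
    preds = np.array([m(x) for m in models])
    return float(preds.var())

# toy ensemble: M = 5 slightly different linear predictors
rng = np.random.default_rng(0)
models = [lambda x, w=rng.normal(0.5, 0.1, 3): float(w @ x) for _ in range(5)]
x_arm = np.array([1.0, 0.0, 0.0])   # one-hot features for arm 0
print(ensemble_uncertainty(models, x_arm))
```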
I have done a lot of exploring all throughout high school, studying many different scientific subjects for various olympiad competitions, working in various research groups, and having a general exposure to what the final exploitative policy would look like for many possible arms, whether SWE, quant, or academia. Additionally, I feel that exploration is a part of my reward function. An entropy parameter.
So, putting everything together, my dream would be industry research or founding a successful research-heavy startup. Not only does that path come with a sizeable salary and the prestige to move into any other field I am interested in, but it also continually grows my personal capital through exploration. I also have less uncertainty than most, given that most people I know believe I can do industry research or tackle difficult deeptech problems within a startup. Above all, I am building towards a relatively high risk tolerance — I am okay if a startup fails (it won’t). Regardless, I have a pretty long-range $T$, and life offers a relatively high expected variance across its arms. So I won’t pigeonhole myself into one future just yet — life offers a lot to learn, and to some extent, I suppose I am exploiting its possibilities to explore.
I was going to make up an extremely parameterized decay function for my $\epsilon(t)$, but the truth is, it’s probably something like
Thanks for reading!