
Training Artificial Intelligence to Play The Beer Distribution Game

A brief introduction to agent-based modeling and reinforcement learning, applied to the Beer Distribution Game
  • Oliver Grasl, Sunday, May 31, 2020
The infamous Beer Distribution Game was originally developed in the late 1950s by Jay Forrester at MIT to introduce the concepts of dynamic systems; in this case, the dynamic system is a supply chain that delivers beer from a brewery to the end consumer. What makes the game so intriguing is that the structure of the supply chain and the rules of the game are simple, yet the resulting behavior is quite complex.
The Beer Distribution Game is well documented: we have collected all our beer distribution game resources in our case study Play The Beer Distribution Game.
Agent-based modeling is an approach to building simulation models of real-world systems. In agent-based modeling, real-world entities are represented by agents, which exhibit behavior in reaction to internal or external events. We will use agents to represent the players in the beer distribution game.
Once we have the agents, we can investigate different ordering strategies and try to find one that enables the agents to play the beer distribution game and optimize game performance.
We have set up a GitHub repository for the Beer Distribution Game that contains all the simulation models and some Jupyter Notebooks that explain how to use them. In particular, there is a notebook An Agent-based Approach to Modeling The Beer Distribution Game that walks through the various stages of the simulation.
But using the agents we can do something even more interesting: instead of giving the agents a set of predefined behaviors, we can train the agents to play the beer distribution game using machine learning techniques, in particular a technique known as Reinforcement Learning.
Reinforcement Learning is a machine learning technique that focuses on learning from interaction. The machine is not told which actions to take but must discover which actions have the best effect by trying them.

A Brief Introduction to The Beer Distribution Game

To understand the challenge of managing the beer distribution supply chain, let us take a more detailed look at its structure: the supply chain leads from the brewery, via a distributor and wholesaler, to the retailer, who sells beer to his customers, the end consumers.
The objective of the game is to ensure that the consumer's demand for beer can be met directly or at least with as small a delay as possible while keeping each player's inventory as small as possible.
The sketch below illustrates the supply chain and the flow of information along with it:
Training Artificial Intelligence to Play the Beer Distribution Game Graph 1 EN
Initially, customer demand for beer is stable at 100 units per week and the entire supply chain is in a steady state. Each member of the supply chain has an inventory of 400 units.
The rules of the game are simple – in every round each player performs the following four steps:
  • Check deliveries. Check how many units of beer are being delivered to him from his supplier in the supply chain.
  • Check incoming orders. Check how many units of beer his client in the supply chain has ordered.
  • Deliver beer. Deliver as much beer as he can to satisfy the customers' demand (Note: in our online implementation of the game, this step is performed for you automatically).
  • Place outgoing order. The difficult step is to decide how many units of beer the player needs from his supplier to keep his inventory stocked up and to ensure he has enough beer to meet future demands.
If you have never played the game or want to understand the game dynamics in more detail, make sure you read the companion post Understanding The Beer Distribution Game.
Now that we know the structure of the game, let us learn about agent-based modeling and see how we can use agents to play the game.

A Brief Introduction to Agent-based Modeling

The basic concept behind agent-based models is quite simple:
  • An environment is populated with a set of agents.
  • Agents and the environment each have a set of (numerical) properties and each agent must always be in a defined internal state (e.g. active, idle, defect, …).
  • Agents can perform actions and interact with each other and with the environment by sending signals – the agents react to these events by updating their properties and/or changing their state.
The events an agent sends or receives do not have to be deterministic – for instance, they can depend on a probability distribution based on the current state of the agent, its environment and the action it decides to take.
Training Artificial Intelligence to Play the Beer Game Graph 2
Agent-based simulations run in timesteps – in each timestep, all agents first receive the events that have been sent to them, then they act and send new events. Typically, these events are received in the next timestep, though in some cases "instantaneous" events are useful.
Thus to create an agent-based simulation, we need to:
  • Identify the relevant agents
  • Define the agents' properties
  • Define an action method, which describes what the agent does in each time-step, e.g. perform internal tasks and send events to other agents
  • Define handlers for each kind of event you want your agent to react to
  • For each agent, implement an initializer that sets the agent's initial state
Defining the model is even easier:
  • Define the environment properties and update them when necessary
  • Tell the model which kinds of agents there are
Then, to configure the simulation, all we need to do is to set the initial values of the properties and instantiate the initial agents.
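To make these mechanics concrete, here is a minimal, framework-agnostic sketch of such a simulation scheduler in Python. This is not the BPTK-Py framework we use in the actual simulation – the Agent and Model classes, their method names and the mailbox mechanism are purely illustrative:

from collections import defaultdict

class Agent:
    """Minimal agent: holds properties and a state, reacts to events, acts each timestep."""
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.state = "active"

    def handle_event(self, event):
        # update properties and/or change state in reaction to an event
        pass

    def act(self, model, time):
        # perform internal tasks and send events to other agents via model.send(...)
        pass

class Model:
    """Minimal environment: owns the agents and delivers events once per timestep."""
    def __init__(self):
        self.agents = []
        self._mailbox = defaultdict(list)   # events queued for delivery in the next timestep

    def send(self, receiver_id, event):
        self._mailbox[receiver_id].append(event)

    def run(self, timesteps):
        for time in range(timesteps):
            # 1. deliver the events that were sent in the previous timestep
            mailbox, self._mailbox = self._mailbox, defaultdict(list)
            for agent in self.agents:
                for event in mailbox[agent.agent_id]:
                    agent.handle_event(event)
            # 2. let every agent act (and queue new events for the next timestep)
            for agent in self.agents:
                agent.act(self, time)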
Let us model the beer distribution game using agents. Each player in the game is modeled by an agent, so we have five agents: the consumer and the agents that are part of the supply chain, i.e. the retailer, the wholesaler, the distributor and the brewery.
The supply chain agents are essentially all the same – they have an internal state that captures:
  • Inventory, i.e. how much beer the agent has in stock.
  • Outstanding Orders, i.e. how much beer the agent has ordered from his supplier but has not received yet.
  • Backorder, i.e. how much beer the agent's customer has ordered but which the agent has not delivered yet.
In each time step, the agent has to react to the following events:
  • Delivery. The beer delivered from the agent's supplier.
  • Order. The beer ordered by the agent's customer.
At the end of each time step, after the agent has received deliveries and orders, the agent has to make an order decision, which is then passed on to its supplier as an order.
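To illustrate, a stripped-down supply chain agent might look as follows. This is only a sketch of the state and events described above, not the actual model from our repository; the class name, the method names and the naive pass-through ordering policy are assumptions made for illustration:

class SupplyChainAgent:
    """Sketch of a beer-game player: retailer, wholesaler, distributor or brewery."""

    def __init__(self, name, inventory=400):
        self.name = name
        self.inventory = inventory        # beer currently in stock
        self.outstanding_orders = 0       # ordered from the supplier, not yet received
        self.backorder = 0                # ordered by the customer, not yet delivered
        self.incoming_order = 0

    def handle_delivery(self, amount):
        # beer delivered by the supplier arrives in our inventory
        self.inventory += amount
        self.outstanding_orders -= amount

    def handle_order(self, amount):
        # the customer's order joins the backorder until it can be delivered
        self.incoming_order = amount
        self.backorder += amount

    def deliver(self):
        # ship as much of the backorder as the inventory allows
        delivery = min(self.inventory, self.backorder)
        self.inventory -= delivery
        self.backorder -= delivery
        return delivery                   # passed on to the customer as a delivery

    def place_order(self):
        # the interesting part: the ordering policy (here: naive pass-through)
        order = self.incoming_order
        self.outstanding_orders += order
        return order                      # passed on to the supplier as an order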
Training Artificial Intelligence to Play the Beer Game Graph 3
Experience shows that in a typical game, the number of orders placed by the players is much higher than necessary: The graph below shows what happens when the consumer changes his order just once, at the beginning of the game, from 100 units of beer to 400 units and then leaves it at the new setting. This leads to a huge "whiplash" that ripples through the supply chain, causing the brewery to produce over 30,000 units of beer during peak times!
Training Artificial Intelligence to Play the Beer Game Graph 4
With this order behavior, the supply chain costs are way off target:
Training Artificial Intelligence to Play the Beer Game Graph 5
It is actually quite easy to define ordering policies that will lead to improved behavior (again it is best to read Understanding The Beer Distribution Game for details).
One simple strategy is to manage the order balance, i.e. the difference between incoming orders (i.e. those coming from the customer) and outgoing orders (i.e. those going to the supplier). Put simply, we would expect the balance to be a constant equal to the desired inventory; a sensible ordering strategy then would be:
Int(Max(Incoming Order + (2 * Incoming Order + Target Inventory - Order Balance) / Inventory Adjustment Time, 0))
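Translated into Python, such a policy might look like the sketch below; the default values for Target Inventory and Inventory Adjustment Time are illustrative assumptions, not the exact constants used in our simulation model:

def order_balance_policy(incoming_order, order_balance,
                         target_inventory=400, inventory_adjustment_time=2):
    """Order enough to cover the incoming order plus a correction towards the target order balance."""
    adjustment = (2 * incoming_order + target_inventory - order_balance) / inventory_adjustment_time
    return int(max(incoming_order + adjustment, 0))

# example call with illustrative values
print(order_balance_policy(incoming_order=100, order_balance=400))   # prints 200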
The graphs below show how orders, inventory and costs develop under this strategy.
Training Artificial Intelligence to Play the Beer Game Graph 6
This policy is nice and simple and it is almost good enough to achieve the target of keeping the supply chain costs below $30,000.
Training Artificial Intelligence to Play the Beer Game Graph 7
The objective now is to use machine learning to find an order policy that keeps the actual supply chain costs close to the target supply chain cost. In the following, we will do so using a reinforcement learning approach.

A Brief Introduction to Reinforcement Learning

Reinforcement Learning is a machine learning technique that focuses on learning from interaction: agents interact with their environment according to their "action policy" and receive rewards for their actions. The objective of reinforcement learning algorithms is to learn the right action policies, i.e. to find policies that maximize the rewards the agents receive for their actions.
There are different approaches to reinforcement learning and we will use a simple one known as Q-Learning in this notebook.
In Q-Learning, each agent has a "Q-Table", which stores the expected total reward the agent will get for performing a particular action when in a particular state. Agents start with an "empty" Q-Table and fill it by learning through trial and error: at each step, an agent chooses the action that it expects to lead to the highest total reward.
Training Artificial Intelligence to Play the Beer Game Graph 8
The following paragraphs dive a little deeper into the mathematics behind reinforcement learning. This is quite illustrative because it shows how to deal with uncertainty and with (extrinsic) reward systems – two capabilities that are very important in any enterprise. Feel free to skip to the next section if you are not interested in mathematical formalism.
The following exposition of reinforcement learning closely follows that in the seminal book Reinforcement Learning by Richard S. Sutton and Andrew G. Barto.
In order to make reinforcement learning computable, we need to find an appropriate formalism. One such formalism is to model agent-environment interactions as Markov Decision Processes (MDPs):
At each time step $t$ the agent receives some representation of the environment's state, $S_t \in \mathcal{S}$, and on that basis selects an action, $A_t \in \mathcal{A}$. One time step later, (in part) as a consequence of its action, the agent receives a reward, $R_t \in \mathcal{R} \subset \mathbb{R}$.
Training Artificial Intelligence to Play the Beer Game Graph 9
The interaction then gives rise to a trajectory that looks like this: $S_0, A_0, R_1, S_1, A_1, R_2, S_2, A_2, R_3, \dots$
In a finite MDP, the sets of states, actions and rewards ($\mathcal{S}$, $\mathcal{A}$ and $\mathcal{R}$) have a finite number of elements, and the random variables $R_t$ and $S_t$ have well-defined probability distributions that depend only on the preceding state and action:
$$p\left(s',r \mid s,a\right) = \Pr\{S_t=s', R_t=r \mid S_{t-1}=s, A_{t-1}=a\}$$
for all $s', s \in \mathcal{S}$, $r \in \mathcal{R}$, and $a \in \mathcal{A}(s)$. The function $p$ defines the dynamics of the MDP; it gives a probability distribution for each choice of $s$ and $a$:
$$\sum_{s' \in \mathcal{S}} \sum_{r \in \mathcal{R}} p\left(s',r \mid s,a\right) = 1, \quad \forall s \in \mathcal{S}, a \in \mathcal{A}(s)$$
Using $p$, we can calculate anything we need to know about an agent's behavior, e.g. the state-transition probabilities:
$$p\left(s' \mid s,a\right) = \Pr\{S_t=s' \mid S_{t-1}=s, A_{t-1}=a\} = \sum_{r \in \mathcal{R}} p\left(s',r \mid s,a\right)$$
In particular, we can calculate the expected reward if we are in state $s$ and choose action $a$:
$$r\left(s,a\right) = \mathbb{E}\lbrack R_t \mid S_{t-1}=s, A_{t-1}=a \rbrack = \sum_{r \in \mathcal{R}} r \sum_{s' \in \mathcal{S}} p\left(s',r \mid s,a\right)$$
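To make the formalism tangible, the dynamics $p$ can be written down directly as a table. The following sketch defines a tiny, made-up two-state MDP, checks that $p$ defines a proper probability distribution for every state-action pair, and computes $r(s,a)$ as in the formula above; all states, actions and numbers are invented for illustration:

# dynamics of a tiny, made-up MDP: p[(s, a)] maps (next_state, reward) -> probability
p = {
    ("low", "order"):  {("high", -1.0): 0.8, ("low", -2.0): 0.2},
    ("low", "wait"):   {("low", -2.0): 1.0},
    ("high", "order"): {("high", -1.5): 1.0},
    ("high", "wait"):  {("high", 0.0): 0.6, ("low", -1.0): 0.4},
}

# every (s, a) must define a proper probability distribution over (s', r)
assert all(abs(sum(dist.values()) - 1.0) < 1e-9 for dist in p.values())

def expected_reward(s, a):
    """r(s, a) = sum over (s', r) of r * p(s', r | s, a)."""
    return sum(prob * r for (_, r), prob in p[(s, a)].items())

print(expected_reward("low", "order"))   # 0.8 * -1.0 + 0.2 * -2.0 = -1.2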

Goals And Rewards

In order to get our agents to learn, we not only need to define the reward an agent gets for performing an action, we also need to tell the agent to maximize the sum of rewards over its lifetime, i.e. the lifetime reward.
Formally, the objective of reinforcement learning is to maximize the expected lifetime reward, which is defined as:
$$G_t \doteq R_{t+1}+\gamma R_{t+2}+\gamma^2 R_{t+3} +\dots = \sum_{k=0}^\infty \gamma^k R_{t+k+1}$$
where $0 \leq \gamma \leq 1$ is the discount rate (which is needed to ensure $G_t$ converges in infinite MDPs).
Note that $G_t$ can also be defined recursively:
$$G_t = R_{t+1}+\gamma G_{t+1}$$
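A quick numerical sanity check of this recursion, using a short, made-up reward sequence:

gamma = 0.9
rewards = [1.0, 0.0, 2.0, 5.0]   # R_1, R_2, R_3, R_4 of a short, made-up episode

# direct definition: G_0 = sum_k gamma^k * R_{k+1}
g0_direct = sum(gamma ** k * r for k, r in enumerate(rewards))

# recursive definition: G_t = R_{t+1} + gamma * G_{t+1}, computed backwards
g = 0.0
for r in reversed(rewards):
    g = r + gamma * g

print(g0_direct, g)   # both are approximately 6.265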

Action Policies

Now that we know we want to optimize the lifetime reward, we need to help our agent to learn an optimal policy. A policy is a mapping from states to the probability of selecting a possible action.
If the agent is following policy $\pi$ at time $t$, then $\pi(a \mid s)$ is the probability that $A_t=a$ if $S_t=s$.
Note that $\pi$ is distinct from the probability distribution $p$ defined above – the distribution $p$ gives us information about the environment: it tells us the reward an agent will get for performing certain actions and which state the agent will transition to. The distribution $\pi$ gives us information about the agent itself: it tells us the probability that the agent will choose a particular action in a particular situation.
This is illustrated in the diagram below.
Training Artificial Intelligence to Play the Beer Game Graph 10
Given $\pi$ and $p$ we can calculate the probability that an agent will transition into a particular state $s'$ given that it is in state $s$:
$$p_\pi (s' \mid s) = \sum_a \pi(a \mid s) \sum_{r} p\left(s',r \mid s,a\right)$$
and the expected reward when transitioning from state $s$ at time $t$:
$$r_\pi (s) = \mathbb{E}_{\pi}\lbrack R_{t+1} \mid S_t=s \rbrack = \sum_{r} r \sum_a \pi(a \mid s) \sum_{s'} p\left(s',r \mid s,a\right)$$

Value Functions

Using the action policy $\pi$ and the environment dynamics $p$, we can now calculate the value we expect to achieve by following the policy: the value function of a state $s$ under a policy $\pi$, denoted $v_\pi (s)$, is the expected return when starting in $s$ and following $\pi$ thereafter.
For our MDPs, we can define $v_\pi$ by:
$$v_\pi (s) \doteq \mathbb{E}_{\pi}\lbrack G_t \mid S_t=s \rbrack = \mathbb{E}_{\pi}\left\lbrack \sum_{k=0}^\infty \gamma^k R_{t+k+1} \mid S_t=s \right\rbrack, \quad \forall s \in \mathcal{S}$$
We can then define the value of taking action $a$ in state $s$ under a policy $\pi$ as follows:
$$q_\pi (s,a) \doteq \mathbb{E}_{\pi}\lbrack G_t \mid S_t=s, A_t=a \rbrack = \mathbb{E}_{\pi}\left\lbrack \sum_{k=0}^\infty \gamma^k R_{t+k+1} \mid S_t=s, A_t=a \right\rbrack, \quad \forall s \in \mathcal{S}, a \in \mathcal{A}$$
$q_\pi$ is called the action value function for policy $\pi$. It represents the expected value of our policy if we start in state $s$, take action $a$ and follow the policy $\pi$ thereafter. The important point here is that $q_\pi$ captures not only the immediate reward we get for our next action, but also the expected reward the agent will get thereafter.

Bellman Equation

Given the recursive nature of the lifetime reward, we can answer the question of how the value of an agent's current state is related to the value of its future state if it follows a policy and takes an action now: after all, once we have transitioned from $s$ to $s'$, nothing changes from a conceptual perspective.
$$\begin{aligned} v_\pi (s) & \doteq \mathbb{E}_\pi \lbrack G_t \mid S_t=s \rbrack \\ & = \mathbb{E}_{\pi}\lbrack R_{t+1}+\gamma G_{t+1} \mid S_t=s \rbrack \\ & = \mathbb{E}_{\pi}\lbrack R_{t+1} \mid S_t=s \rbrack +\gamma \mathbb{E}_{\pi}\lbrack G_{t+1} \mid S_t=s \rbrack \end{aligned}$$
The first summand here is simply the reward we expect from taking an action and moving from $s$ to $s'$; more explicitly:
$$r_\pi (s) = \mathbb{E}_{\pi}\lbrack R_{t+1} \mid S_t=s \rbrack = \sum_{r} r \sum_a \pi(a \mid s) \sum_{s'} p\left(s',r \mid s,a\right)$$
The second summand is slightly more complicated – it defines the expected lifetime reward from time $t+1$, starting at time $t$ in state $s$. To calculate this we need the value function of each potential future state $s'$, weighted by the probability of actually getting to that state:
$$\mathbb{E}_{\pi}\lbrack G_{t+1} \mid S_t=s \rbrack = \sum_{s'} v_\pi (s') \, p_\pi (s' \mid s) = \sum_{s'} v_\pi (s') \sum_a \pi(a \mid s) \sum_{r} p\left(s',r \mid s,a\right)$$
The second part of the equation is visualized in the diagram below.
Training Artificial Intelligence to Play the Beer Game Graph 11
Putting these terms together and rearranging the sums, we end up with:
$$v_\pi (s) = \sum_a \pi(a \mid s)\sum_{s'} \sum_{r} p\left(s',r \mid s,a\right)\lbrack r+\gamma v_\pi (s')\rbrack$$
This is known as the Bellman equation for vπv_\pi. It expresses a relationship between the value of a state and the values of successor states: The value of the start state must equal the (discounted) value of the expected next state, plus the reward expected in getting to that state.
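The Bellman equation immediately suggests a simple algorithm: treat it as an update rule and sweep over all states until the values stop changing (this is known as iterative policy evaluation). Below is a minimal sketch on the same kind of made-up two-state MDP as before, evaluated under an equiprobable policy; again, all numbers are invented for illustration:

# made-up dynamics p[(s, a)]: (next_state, reward) -> probability
p = {
    ("low", "order"):  {("high", -1.0): 0.8, ("low", -2.0): 0.2},
    ("low", "wait"):   {("low", -2.0): 1.0},
    ("high", "order"): {("high", -1.5): 1.0},
    ("high", "wait"):  {("high", 0.0): 0.6, ("low", -1.0): 0.4},
}
states, actions, gamma = ["low", "high"], ["order", "wait"], 0.9
pi = {s: {a: 0.5 for a in actions} for s in states}   # equiprobable policy pi(a | s)

# iterative policy evaluation:
# v(s) <- sum_a pi(a|s) sum_{s',r} p(s',r|s,a) * [r + gamma * v(s')]
v = {s: 0.0 for s in states}
for _ in range(1000):
    v = {
        s: sum(
            pi[s][a] * sum(prob * (r + gamma * v[s2]) for (s2, r), prob in p[(s, a)].items())
            for a in actions
        )
        for s in states
    }

print(v)   # the (approximate) state values under the equiprobable policy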

Optimal Policies And The Optimal Value Function

Now that we know about policies and value functions, we can start looking for the best possible, i.e. optimal, value function – the one that achieves the highest reward over the long run.
Using value functions, we can define an ordering of policies: $\pi \geq \pi'$ if and only if $v_\pi (s) \geq v_{\pi'} (s)$ for all $s \in \mathcal{S}$.
There will always be at least one policy that is better than or equal to all other policies – this is an optimal policy. Let us denote it by $\pi_*$; it has the optimal state-value function $v_*$, defined as:
$$v_* (s) \doteq \max\limits_\pi v_\pi (s)$$
Optimal policies also share the same optimal action-value function $q_*$, defined as:
$$\begin{aligned} q_* (s,a) & \doteq \max\limits_\pi q_\pi (s,a) \\ & = \mathbb{E}\lbrack R_{t+1}+\gamma v_* (S_{t+1}) \mid S_t=s, A_t=a \rbrack \end{aligned}$$
Because vv_* is the value function for a policy, it must satisfy the Bellman equation. However, because it is the optimal value function, we can actually write this in a special form without reference to any specific policy. This form is called the Bellman optimality equation:
$$\begin{aligned} v_* (s) & = \max\limits_a q_{\pi_*} (s,a) \\ & = \max\limits_a \mathbb{E}_{\pi_*}\lbrack R_{t+1}+\gamma G_{t+1} \mid S_t=s, A_t=a \rbrack \\ & = \max\limits_a \mathbb{E}_{\pi_*}\lbrack R_{t+1}+\gamma v_* (S_{t+1}) \mid S_t=s, A_t=a \rbrack \\ &= \max\limits_a \sum_{s', r} p\left(s',r \mid s,a\right)\lbrack r+\gamma v_* (s')\rbrack \end{aligned}$$
Intuitively, the Bellman optimality equation expresses the fact that the value of a state under an optimal policy must equal the expected return for the best action from that state.
The Bellman optimality equation for $q_*$ is:
$$\begin{aligned} q_* (s,a) & = \mathbb{E}\lbrack R_{t+1}+\gamma \max\limits_{a'} q_* (S_{t+1},a') \mid S_t=s, A_t=a \rbrack \\ &= \sum_{s', r} p\left(s',r \mid s,a\right)\lbrack r+\gamma \max\limits_{a'} q_* (s',a')\rbrack \end{aligned}$$

Solving The Bellman Equation: Temporal Difference Learning / Q-Learning

Reinforcement learning then essentially boils down to solving the Bellman optimality equation. For finite MDPs, a solution always exists, because the Bellman optimality equation defines a system of equations, one for each state: if there are $n$ states, you end up with $n$ equations in $n$ unknowns.
Once you have $v_*$, it is quite easy to derive the optimal policy: for each state $s$, there will be at least one action for which the maximum is obtained in the Bellman optimality equation. Any policy that assigns nonzero probability only to these actions is an optimal policy.
If you have $q_*$, choosing an optimal policy is even easier – you simply have to find an action that maximizes $q_* (s,a)$:
$$\begin{aligned} \pi_* (a \mid s) & > 0, \quad a \in \operatorname{argmax}_a q_*(s,a) \\ \pi_* (a \mid s) & = 0, \quad a \notin \operatorname{argmax}_a q_*(s,a) \end{aligned}$$
Because the state space can be extremely large, it is rarely practical to find an exact solution for $v_*$ or $q_*$. Fortunately, a number of approaches exist for finding approximate solutions.
One way of estimating $v_\pi$ is to use a Monte Carlo approach: we start with an initial approximation $V$ of $v_\pi$ (i.e. of the expected total reward following each state) and then run through our simulation a large number of times – each time we run the simulation, we wander through a different trajectory defined by the policy $\pi$ and the environment dynamics $p$.
During each run, we cache the actual value of $G_t$ (i.e. the actual total reward following that state) for each state and time step. At the end of the run we update $V(S_t)$ using the following formula, where $\alpha$ is a constant that defines how quickly we move towards the new value:
$$V(S_t) \gets V(S_t) + \alpha \lbrack G_t - V(S_t) \rbrack$$
The Monte Carlo approach works fine, but it does have one issue: you need to wait until the end of each simulation to actually calculate GtG_t, which means this process is very slow.
Fortunately, there is a much faster approach, which allows us to update VV after each time step. This approach works thanks to the recursive nature of the Bellman equation.
Remembering that $G_t = R_{t+1}+\gamma G_{t+1}$, we can use the following formula to update $V$ after every time step:
$$V(S_t) \gets V(S_t) + \alpha \lbrack R_{t+1}+\gamma V(S_{t+1}) - V(S_t) \rbrack$$
This approach to learning is referred to as Temporal Difference Learning because it compares values of VV that are separated by a (short) temporal difference.
In this notebook, we focus on a variant of Temporal Difference Learning known as Q-Learning, because it finds a solution for $q_*$ instead of $v_*$:
$$Q(S_t,A_t) \gets Q(S_t,A_t) + \alpha \lbrack R_{t+1}+\gamma \max\limits_a Q(S_{t+1},a) - Q(S_t,A_t) \rbrack$$
Given this, we can define the basic Q-Learning algorithm as follows:
Start with an initial (random) $Q(s,a)$, ensuring that $Q(\text{terminal state}, \cdot) = 0$
Run the simulation a large number of times (episodes), with the following loop for each episode:
  1. Initialize S
  2. Loop for each step of the episode:
    1. Choose $A$ from $S$ using an $\epsilon$-greedy policy based on $Q$
    2. Take action $A$, observe $R$ and $S'$
    3. $Q(S_t,A_t) \gets Q(S_t,A_t) + \alpha \lbrack R_{t+1}+\gamma \max\limits_a Q(S_{t+1},a) - Q(S_t,A_t) \rbrack$
    4. $S \gets S'$
  3. Until $S$ is terminal or the simulation reaches a predefined end
We use an $\epsilon$-greedy policy in step 2.1 to ensure that our algorithm does not get locked into a local optimum: a small $\epsilon > 0$ defines the probability of choosing a random action instead of the one prescribed by the current policy.
The step size $\alpha \in (0, 1\rbrack$ defines how quickly we move from our current value of $Q$ to a new value, and $\gamma$ is the discounting factor.
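Before applying this to the beer game, it helps to see the bare algorithm on a trivial example. The sketch below runs tabular Q-Learning on a made-up "walk to the goal" task; the environment, reward values and hyperparameters are illustrative assumptions, but the update rule and the $\epsilon$-greedy selection are exactly the steps listed above:

import random
from collections import defaultdict

# a trivial made-up environment: walk right from position 0 to the goal at position 5
def step(state, action):                  # action is -1 (left) or +1 (right)
    next_state = min(max(state + action, 0), 5)
    reward = 10.0 if next_state == 5 else -1.0
    return next_state, reward, next_state == 5   # (S', R, terminal?)

alpha, gamma, epsilon, actions = 0.2, 1.0, 0.1, [-1, +1]
Q = defaultdict(float)                     # sparse Q-table, default value 0

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.uniform(0, 1) < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: Q(S,A) <- Q(S,A) + alpha * [R + gamma * max_a Q(S',a) - Q(S,A)]
        best_next = 0.0 if done else max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print([max(actions, key=lambda a: Q[(s, a)]) for s in range(5)])   # learned greedy actions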

Applying Q-Learning to The Beer Distribution Game

Now let us apply the theory above to our beer distribution game agents. "Learning" here consists of finding the right values for the function $Q(S,A)$, which tells the agent which action $A$ to take when it is in state $S$.
We set $\epsilon=0.1$ and $\alpha=0.2$. Because the beer distribution game runs over a finite horizon of 24 rounds, we can set $\gamma=1$.
We store $Q$ in a Q-Table, which is essentially a matrix indexed by the state of the agent and the possible actions. For each entry, we store the value $Q(S,A)$, which is our approximation of the optimal value $q_*(s,a)$, i.e. the value we expect to achieve if we are in state $s$ and choose action $a$.
Because the number of states and actions is potentially infinite (an agent can order any amount of beer), we use a sparse Q-Table, i.e. a table that starts empty and only stores the values that differ from a default value. Our implementation of the Q-Table, sparseQTable, stores values in a dictionary whose key is defined by the tuple $(s,a)$. For the beer distribution game, we set the default value to 0, i.e. we expect zero value from an action we have never tried before.
Besides adding and reading values, the Q-Table provides two key methods (sketched below):
  • best_action(state): returns the best action for a given state, i.e. the action that maximizes $Q(S,A)$ for a given state $S$.
  • max_action_value(state): returns the best value that can be achieved for a given state, i.e. $Q(S, \text{best\_action}(S))$.
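A dictionary-backed sketch of such a sparse Q-Table is shown below; this illustrates the behavior just described but is not the exact sparseQTable class from our repository, and the fixed set of candidate order amounts is an assumption made to keep the sketch simple:

class SparseQTable:
    """Q-table that only stores values which differ from the default (here 0.0)."""

    def __init__(self, default_value=0.0, actions=range(0, 1000, 50)):
        self.values = {}                   # key: (state, action) tuple
        self.default_value = default_value
        self.actions = list(actions)       # candidate order amounts considered per state

    def read_value(self, state, action):
        return self.values.get((state, action), self.default_value)

    def add_value(self, state, action, value):
        self.values[(state, action)] = value

    def best_action(self, state):
        # the action that maximizes Q(state, action)
        return max(self.actions, key=lambda a: self.read_value(state, a))

    def max_action_value(self, state):
        # Q(state, best_action(state))
        return self.read_value(state, self.best_action(state))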
Each supply chain agent has its own Q-table and we capture the agent's state via a single value, namely the order balance introduced above.
The implementation of the actual training algorithm is remarkably simple, which is why we have included the complete code below.
Each agent remembers its last_state and its last_action, which is equal to the amount of beer the agent ordered.
# read the last action value from the q table
last_action_value = self.q_table.read_value(
    self.last_state,
    self.last_action
)
# define the current state, which is the last order balance
# modified by the incoming order
current_state = self.agent.order_balance - self.agent.incoming_order
# work out the maximum action value achievable in the current state
max_next_action_value = self.q_table.max_action_value((current_state,))
# work out the new action value based on the q-learning algorithm introduced above
# - this is necessary because the reward is calculated at the end of the step,
# i.e. the update can only be done in the next timestep.
new_action_value = (
    last_action_value +
    self.agent.model.alpha * (
        self.agent.reward +
        self.agent.model.gamma * max_next_action_value -
        last_action_value
    )
)
# update the action value for the last state and the last action
self.q_table.add_value(self.last_state, self.last_action, new_action_value)
# now decide on which action to take, either by looking up the best action
# in the q-table or by choosing a random amount (epsilon-greedy approach)
if random.uniform(0, 1) < self.agent.model.epsilon:
    # once in a while we choose a random amount to ensure
    # we are not locked into a local maximum
    order_amount = random.randint(0, round(1.5 * self.agent.incoming_order))
else:
    # look up the best action for this state in the q-table
    order_amount = self.q_table.best_action((current_state,))
# now remember the state and action taken,
# as a starting point for the next round
self.last_state = (current_state,)
self.last_action = order_amount
As you can see, this is simply a concrete implementation of the abstract Q-Learning algorithm defined above.
Next, we need to think about the reward we want to give the agents for their performance. We have to be quite careful here because we know what a good order strategy should look like and we want to avoid coding the strategy directly into the rewards.
In principle, it is quite simple to define rewards: the objective of the game is to keep both the total supply chain cost and each player's individual supply chain cost within certain targets. In order to do this in an efficient manner, it makes sense to define some milestones along the way and to stop a game as soon as the agents miss their target.
Training Artificial Intelligence to Play the Beer Game Graph 12
Clearly, then, we need to give the agents a high reward if they achieve these targets – the rewards are distributed at the end of each round in the end_round method of the simulation model.
The only issue here is that this is a reward that comes right at the end of the game and things can go very wrong long before we get to the end.
What guidance can we give the agents along the way?
The first thing we can do is to penalize the agents as soon as they miss the game target (which can happen very early on in the game). In this case, we penalize the agents and stop the game for that episode to avoid filling the Q-table with unnecessary information. We also penalize the agents if they do not order at least as much as the incoming order.
if agent.order_balance < 0 or agent.order_balance > 1400:
    self.game_over = True
    # run one more round to pick up the rewards, then stop
    self.game_over_round = time + 1
    reward = -10000
elif agent.outgoing_order < agent.incoming_order:
    reward = -10000
Unfortunately, this still leaves us with a very large number of possible paths, especially given that we are training not only one agent but a whole supply line. Thus, to make the learning process a little quicker and to keep our Q-table small, we define some milestones along the way:
if time == 3:
    if 750 >= agent.order_balance >= 600:
        reward += 2000
if time == 5:
    if 900 >= agent.order_balance >= 700:
        reward += 2000
if time == 10:
    if 1150 >= agent.order_balance >= 1000:
        reward += 2000
if time == 15:
    if 1250 >= agent.order_balance >= 1100:
        reward += 2000
if time == 20:
    if 1250 >= agent.order_balance >= 1150:
        reward += 2000
Now let us see how this works out in practice. The following code runs the beer distribution game over ten episodes and returns the total supply chain reward ... in the long run, we would like the reward to become less and less negative.
bptk.train_simulations(
    episodes=10,
    scenario_managers=["smBeergameQlOB"],
    scenarios=["train_agents"],
    agents=["controlling"],
    agent_states=["active"],
    agent_properties=["supply_chain_reward"],
    agent_property_types=["total"],
    return_df=False,
    progress_bar=True
)
Training Artificial Intelligence to Play the Beer Game Graph 13
Using the trained model we receive the following results:
Training Artificial Intelligence to Play the Beer Game Graph 14
We can see that the agents have not learned much after ten training episodes ... the retailer is ordering far too much, producing very high costs.
Training Artificial Intelligence to Play the Beer Game Graph 15
Training the agents actually takes quite a few episodes – it takes around 50,000 episodes to produce satisfactory results (which is still small if you consider the number of potential paths, which is essentially infinite because the agents could order any amount of beer). Because it takes 2-3 hours to run that many episodes (on a MacBook Pro with 16 GB of RAM), we've provided pretrained Q-tables after 12,500, 25,000, 37,500 and 50,000 episodes for you to experiment with (see the data folder in our GitHub repository).
Now we can take a look at the results - it looks quite good already after 37,500 training episodes, but the retailer's costs are still a little high.
Training Artificial Intelligence to Play the Beer Game Graph 16
Thus, the total supply chain costs are still a little too high:
Training Artificial Intelligence to Play the Beer Game Graph 17
So let's see what happens after 50,000 episodes:
Training Artificial Intelligence to Play the Beer Game Graph 18
After 50,000 rounds of training, the total costs are within the target and so are the individual costs:
Training Artificial Intelligence to Play the Beer Game Graph 19
It is interesting to compare the results of our rule-based ordering policy to the Q-Learning approach – the policy learned using Q-Learning actually performs better. However, while the 'Order Balance' policy is robust and works whatever the order behavior of an agent in the supply line, the Q-Learning algorithm is trained towards a particular order behavior.
Training Artificial Intelligence to Play the Beer Game Graph 20

Summary

This post introduced the concepts of agent-based simulation and reinforcement learning and applied them to a simple supply chain model of the Beer Distribution Game.
It illustrates how simulations can be used to create a computational model of reality and how reinforcement learning can then be used to find good decision-making policies - in our case, the policy found through reinforcement learning actually performs better than the rule-based policy.
The main issue in reinforcement learning is finding the right rewards, because in the end the agents are trained towards those rewards – for our beer distribution game simulation, this means that our agents are now good at playing the beer distribution game for the given behavior of the consumer. But if the consumer changes his behavior, the agents will not know how to deal with it. This is actually quite similar to real-life situations, where it is quite difficult to set sensible extrinsic rewards.
In our case, the rule-based ordering policy is so simple that an extrinsic reward system would most likely just end up encoding that policy as a set of extrinsic rewards.
You can access the underlying code containing the simulation, the pre-trained models and Jupyter notebooks showing how to use them from our GitHub repository.