ShipItParrot
December 27, 2022
·
19 minute read
Oh parrots!
If you are an experienced parrot who has dealt with dynamic programming problems like the infamous climbing stairs problem
and you want to strengthen your dynamic programming fundamentals, this article is for you!
We will also use two lower-level concepts! It will help you a lot if you already have a good grasp of these! If not, don't worry!
- Depth-first search
- Memoization with two-dimensional dictionaries/lists
In this article, our goal is to share an interesting medium-difficulty Leetcode interview question about dynamic programming: the Stone Game problem.
First, let's walk through the Stone Game problem and understand it! We'll pick it apart with illustrations and examples!
Once we parrots understand the game, we'll start with the simplest approach: the naive, recursive depth-first approach. We'll show how easy it is to implement, but also how it becomes super inefficient as the input gets longer!
Then we introduce the idea of dynamic programming: the art of breaking down a problem into smaller sub-problems and efficiently reusing the solutions of these smaller problems to compute the solution to the overall problem.
We will examine two types of dynamic programming solutions!
The first is the top-down approach. It is the simpler of the two and extends the naive recursive depth-first approach by reusing the solutions of smaller sub-problems.
The second is the bottom-up, non-recursive approach! We'll build an iterative solution from scratch and show how it's roughly twice as fast as the top-down approach!
As a bonus, we also show you how we can solve this question in O(1) time with a single return statement.
In my parrot experience, this was an overwhelming Leetcode question that was very difficult to solve at first! I stumbled a lot, so I hope this article is useful to you!
With that, let's investigate the problem!
Parrots! Please take the time to read and understand the question!
In my parrot experience, the question's explanation was cryptic (to be fair, most Leetcode questions are like that), and I found it easier to understand the problem after going through one of the provided test cases.
Let's go through the explanation of the first test case!
Input: piles = [5,3,4,5]
Output: true
Explanation:
Initial row: [5,3,4,5]
Alice starts first and can only take the first 5 or the last 5.
Say Alice takes the first 5,
Row: [3, 4, 5]
If Bob takes the 3,
Row: [4, 5]
Alice takes 5 to win with 10 points.
If Bob takes the last 5,
Row: [3, 4]
Alice takes 4 to win with 9 points.
This example shows that taking the first 5 was a winning move for Alice, so we return true.
Your intelligent parrots would ask:
"The question said that Alice and Bob play ideally. Does that mean they can only see the two stacks at each end of the row and pick the biggest one?
And that's a great question! From my experience with parrots, I assumed they could only see the two stacks, which turned out not to be the case!
Both Alice and Bob can see all the stacks in the row and preemptively select them so the other player doesn't have access to the largest stack later!
That's a lot of words! You parrots would throw seeds at me!
Let's get our wings dirty with a second example.
Input: piles = [3,7,2,1]
Output: true
Alice goes first.
Alice can choose the first 3, leaving the row [7,2,1],
or the last 1, leaving the row [3,7,2].
If Alice chooses the 3, she instantly gains more stones on this turn than if she took the 1.
However, choosing the 3 would hand the 7 to Bob in the next round, giving Bob an advantage.
In this case, the ideal move is to choose the 1. This leaves Bob with the row [3,7,2],
forcing him to choose between the 3 and the 2. Either of Bob's choices will eventually hand the 7 to Alice, and Bob has no better option than to pick the 3 as his ideal move.
This leaves the row as [7,2],
letting Alice take the 7 and leaving the 2 for Bob.
In this ideal play:
Alice would have two piles, the 7 and the 1, giving her 8 stones.
Bob would have two piles, the 3 and the 2, giving him 5 stones.
Alice wins with this ideal play. The decisive winning move was choosing the last 1 over the first 3 at the very beginning.
This second example shows that the ideal move for Alice and Bob is not necessarily to take the larger of the two end piles.
Your intelligent parrots would ask:
"What!? Depth-first search? How is this problem expressed as an array of numbers as a graphics problem?"
This problem can be expressed as a graphical problem! We can express each tile game scenario as a node on a scenario diagram.
Let's take the second test case as an example!
Input: piles = [3,7,2,1]
Output: true
Each scenario state would have the following variables:
- The piles remaining in the row
- The number of stones Alice has
- The number of stones Bob has
The first scenario has the following variables:
piles = [3,7,2,1]
alice_stones: int = 0
bob_stones: int = 0
At this point we have two options:
In the first option, Alice picks the first 3, which gives us the following scenario (or node in the graph):
piles = [7,2,1]
alice_stones: int = 3
bob_stones: int = 0
In the second option, Alice picks the last 1, which gives us the node:
piles = [3,7,2]
alice_stones: int = 1
bob_stones: int = 0
And just like that, we have expressed this problem as a graph! If we keep playing, each of these two scenarios will in turn branch into two scenarios of its own, as sketched below.
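To make the branching concrete, here is a tiny sketch of how one scenario expands into its two child scenarios. The names Scenario and expand are just for illustration and are not part of the final solution.

from typing import List, Tuple

# A scenario: (remaining piles, Alice's stones, Bob's stones)
Scenario = Tuple[List[int], int, int]

def expand(scenario: Scenario, alice_turn: bool) -> List[Scenario]:
    """Return the two child scenarios: the current player takes either
    the leftmost or the rightmost pile of the row."""
    piles, alice, bob = scenario
    children = []
    for take_index in (0, len(piles) - 1):
        taken = piles[take_index]
        rest = piles[:take_index] + piles[take_index + 1:]
        if alice_turn:
            children.append((rest, alice + taken, bob))
        else:
            children.append((rest, alice, bob + taken))
    return children

# Expanding the root scenario from the example above:
print(expand(([3, 7, 2, 1], 0, 0), alice_turn=True))
# [([7, 2, 1], 3, 0), ([3, 7, 2], 1, 0)]

These are exactly the two child nodes we just listed.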
Your intelligent parrots would ask:
“This graph is really confusing! How do we know which move is ideal for Alice and Bob?”
That's a great question!
An optimal move for a player is one where, looking ahead, the player ends up with more stones than the other player. Let's call this the stone advantage.
Unfortunately, this means we have to look into the future and visit later scenarios to quantify it!
In short, if we look at the scenario graph, we can find an ideal play.
Let's try to quantify the advantage of Alice's move of choosing the last 1!
[3,7,2,1]
Alice would choose the 1
[3,7,2]
Bob would choose the 3
[7,2]
Alice chooses the 7, giving her 8 stones
[2]
Bob chooses the 2, giving him 5 stones
This means Alice's ideal first move is to choose the last 1, which gives Alice a 3-stone advantage over Bob!
When we examine this graph further, we find a second ideal play!
[3,7,2,1]
Alice would choose the 1 (the same choice as in the first ideal play)
[3,7,2]
Bob would choose the 2 (a different choice)
[3,7]
Alice chooses the 7, giving her 8 stones (the same choice)
[3]
Bob chooses the 3, giving him 5 stones (a different choice)
Now that we parrots have successfully expressed this problem as a scenario graph, we can apply the simplest graph traversal algorithm: depth-first search!
Our goal is to find the end scenario reached when Alice and Bob both play optimally throughout.
Here are the conditions for this scenario!
- Every move leading to this scenario must be optimal.
- An optimal move for a player is one where, looking ahead, the player ends up with more stones than the other player (the stone advantage again).
- Unfortunately, looking ahead means we need to visit all the later scenarios to quantify the real advantage of each move.
Let's use the simplest implementation of depth-first search: recursion!
from typing import List


class Solution:
    def stoneGame(self, piles: List[int]) -> bool:
        """
        The ideal move is the one where, looking ahead,
        it gives the player the greatest stone advantage over the other player.
        """
        def dfs(left: int = 0, right: int = len(piles) - 1, alice_stones: int = 0, bob_stones: int = 0) -> tuple[int, int]:
            """
            Traverses the scenario graph. Returns the stones Alice and Bob have after both play optimally.
            In each call we have:
            - A left pointer to the leftmost unpicked pile in the row
            - A right pointer to the rightmost unpicked pile in the row
            In each call, we visit the scenarios where Alice/Bob picks a pile of stones.
            """
            if left > right:
                """
                If the left index has crossed the right index, the row is now empty.
                Return the alice_stones and bob_stones accumulated so far.
                """
                return alice_stones, bob_stones
            # Since Alice goes first and the initial number of piles is guaranteed to be even,
            # it is Alice's turn whenever the remaining row length (between left and right) is even
            alice_turn: bool = (left + right + 1) % 2 == 0
            if alice_turn:
                # Visit the scenario where Alice picks the left pile.
                # Compute the stones Alice and Bob end with if both play optimally after Alice picks the left pile
                left_alice_stones, left_bob_stones = dfs(left + 1, right, alice_stones + piles[left], bob_stones)
                left_alice_advantage = left_alice_stones - left_bob_stones
                # Visit the scenario where Alice picks the right pile.
                # Compute the stones Alice and Bob end with if both play optimally after Alice picks the right pile
                right_alice_stones, right_bob_stones = dfs(left, right - 1, alice_stones + piles[right], bob_stones)
                right_alice_advantage = right_alice_stones - right_bob_stones
                if left_alice_advantage > right_alice_advantage:
                    # Picking the left pile gives Alice the better advantage, so Alice picks the left pile
                    return left_alice_stones, left_bob_stones
                else:
                    # Otherwise Alice picks the right pile
                    return right_alice_stones, right_bob_stones
            else:
                # Visit the scenario where Bob picks the left pile.
                # Compute the stones Alice and Bob end with if both play optimally after Bob picks the left pile
                left_alice_stones, left_bob_stones = dfs(left + 1, right, alice_stones, bob_stones + piles[left])
                left_bob_advantage = left_bob_stones - left_alice_stones
                # Visit the scenario where Bob picks the right pile.
                # Compute the stones Alice and Bob end with if both play optimally after Bob picks the right pile
                right_alice_stones, right_bob_stones = dfs(left, right - 1, alice_stones, bob_stones + piles[right])
                right_bob_advantage = right_bob_stones - right_alice_stones
                if left_bob_advantage > right_bob_advantage:
                    # Picking the left pile gives Bob the better advantage, so Bob picks the left pile
                    return left_alice_stones, left_bob_stones
                else:
                    # Otherwise Bob picks the right pile
                    return right_alice_stones, right_bob_stones

        # If Alice's optimal stones are more than Bob's, return True
        best_alice_stones, best_bob_stones = dfs()
        print(f"best_alice_stones: {best_alice_stones}, best_bob_stones: {best_bob_stones}")
        return best_alice_stones > best_bob_stones
Now your smart parrots will ask:
"How do we quickly test whether this approach works?"
Great question! Let's run it against the same test case, [3, 7, 2, 1]!
We should reach the scenario where Alice wins with 8 stones and Bob loses with 5 stones.
# If Alice's optimal stones are more than Bob's, return True
best_alice_stones, best_bob_stones = dfs()
print(f"best_alice_stones: {best_alice_stones}, best_bob_stones: {best_bob_stones}")
return best_alice_stones > best_bob_stones
And we did it!
However, when we submit this solution, we get a timeout.
Let's take a step back, parrots!
Unfortunately, if N is the size of piles, our time complexity here is O(2^N).
The time required to run this algorithm increases exponentially with N! This is too slow!
Your intelligent parrots will ask:
"Why is our time complexity here O(2^N)?"
Parrots, take a look at the scenario graph above.
We can see that the number of scenarios we need to visit with depth-first search roughly doubles each time we add a new pile to piles!
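To put the same reasoning into a worked recurrence: each dfs call on a row of length k makes two recursive calls on rows of length k - 1, so the number of calls satisfies roughly T(N) = 2·T(N - 1) + 1, which unrolls to about 2^N calls in total.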
Now your smart parrots will ask:
"What about our spatial complexity here?"
The space complexity here is the maximum number of dfs calls (the maximum recursion depth) held on our call stack at once.
We can see that for piles of size 2, we have a maximum depth of 3 dfs calls using memory for their variables on the call stack.
For piles of size 3, we have a maximum depth of 4 dfs calls on the call stack.
And for piles of size 4, we have a maximum depth of 5 dfs calls on the call stack.
We can see that the maximum depth, and hence the memory we use on the call stack, scales linearly with the size of piles. If N is the size of piles, we can say that our space complexity is O(N).
Time Complexity: O(2^N)
Space Complexity: O(N)
Some of you smart parrots will ask:
"Do you really have to explore all possibilities?"
Big question!
Not necessarily! This question is a motivation for our next approach!
On the face of it, it seems we have to run through all the scenarios to make sure we end up with the optimal play for Alice and Bob, since the greedy strategy of grabbing the biggest pile isn't necessarily the optimal strategy.
So we bit the bullet and drew the full scenario graph! Fortunately, we spot a pattern in it!
We have many scenarios sharing the same remaining piles! They're essentially the same scenario, just reached with a different number of stones already taken by Alice and Bob!
In other words, each of these scenarios shares the same piles but carries different stone counts for Alice and Bob!
Now you clever parrots will ask:
"How do we express this scenario graph so that all these scenarios with the same color have the same state? In this way we can make these scenarios identical and avoid visiting them again!
Now that's a smart idea!
Alternatively, in each scenario...
Instead of tagging each row of piles with the number of stones Alice and Bob currently have,
we can have each state of the row give the additional number of stones Alice and Bob can still gain from it, as long as Alice and Bob play optimally.
We will calculate this starting from the leaves of the graph!
Let's redraw our scenario graph!
From my parrot experience, redrawing this graph was overwhelming, so let me share some examples!
For the empty row scenario [] (blue shaded background), there are no piles left to hand out, so the extra stone counts for Alice and Bob are both 0.
For a row of 1 pile, [2] (red shaded background), Bob picks next because the row has an odd length. Bob takes the last 2. This scenario gives Alice 0 extra stones and Bob 2.
For a row of 2 piles, [7, 2] (yellow shaded background), the row has an even length, so Alice picks next. Alice's best move is to pick the 7, and Bob takes the last 2. This gives Alice 7 extra stones and Bob 2.
For a row of 3 piles, [7, 2, 1], the row has an odd length, so Bob picks next. Bob's best move is to pick the 7, Alice's best move is to pick the 2, and Bob takes the last 1. That's 2 extra stones for Alice and 8 for Bob.
For the second row of 3 piles, [3, 7, 2], Bob's best move is to pick the 3, Alice's best move is to pick the 7, and Bob takes the remaining 2. That's 7 extra stones for Alice and 5 for Bob.
For the row of 4 piles, [3, 7, 2, 1], Alice's best move is to pick the 1, Bob's best move is to pick the 3, Alice's best move is then to pick the 7, and Bob takes the last 2. That gives 8 extra stones for Alice and 5 for Bob.
We now have duplicate nodes in the graph with exactly the same remaining piles and the same extra stones for Alice and Bob!
This means we can avoid computing the same scenario again!
For example, once we have visited the scenario with piles [3], we can avoid visiting it again!
We traverse the graph depth-first, visiting the left child before the right child.
To make it clearer, the nodes we've already visited are highlighted!
This strategy of saving and reusing the calculated stone counts for Alice and Bob for each smaller row of piles has a fancy name: dynamic programming!
Let's take a detour! What is dynamic programming?
Dynamic programming is a method of solving complex problems by breaking them down into smaller, simpler sub-problems and storing and reusing the solutions to those sub-problems.
Just like us here!
We've broken the problem of finding the number of stones for Alice and Bob for a larger row of piles into smaller sub-problems: finding the number of stones for them in a smaller row!
We then store those solutions in a data structure like a dictionary and reuse them to avoid recomputation!
In my parrot experience, we usually store our DP solutions in an array or a dictionary!
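As a tiny, generic illustration of memoization with a dictionary (using Fibonacci purely as an example, not part of the stone game itself), the pattern looks like this:

# A minimal memoization sketch: store each sub-problem's answer in a dict
# so it is computed at most once.
memo: dict[int, int] = {}

def fib(n: int) -> int:
    if n <= 1:
        return n
    if n in memo:  # reuse a previously computed sub-problem
        return memo[n]
    memo[n] = fib(n - 1) + fib(n - 2)  # store the solution for reuse
    return memo[n]

print(fib(10))  # 55

Our stone game memo will do the same thing, just keyed by a (left, right) pair of indices instead of a single number.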
Now let's try to do this together!
from typing import List


class Solution:
    def stoneGame(self, piles: List[int]) -> bool:
        """
        dp stores the pre-computed number of stones for Alice and Bob, assuming Alice and Bob play optimally.
        The key is a tuple of indices: the left pointer and the right pointer into piles.
        Given (0, 2) as the key, for piles = [3, 7, 2, 1],
        dp[(0, 2)] represents the number of stones for Alice and Bob for [3, 7, 2].
        Since it is Bob's turn to move when the remaining row has an odd length,
        Bob's ideal move is to take 3, Alice's ideal move is to take 7, and Bob takes the last 2.
        Alice gets 7 stones, Bob gets 5 stones:
        dp[(0, 2)] = (7, 5)
        """
        dp: dict[tuple[int, int], tuple[int, int]] = {}

        def dfs(left: int = 0, right: int = len(piles) - 1) -> tuple[int, int]:
            """
            Traverses the scenario graph.
            In each call we have:
            - A left pointer to the leftmost unpicked pile in the row
            - A right pointer to the rightmost unpicked pile in the row
            In each call, we visit the scenarios where Alice/Bob picks a pile of stones.
            """
            if left == right:
                """
                If the row only has 1 pile left, say [7], Bob picks next.
                Bob takes the last pile, leaving 0 for Alice.
                """
                dp[(left, right)] = (0, piles[left])
                return dp[(left, right)]
            if (left, right) in dp:
                """
                If we already have the solution, reuse it.
                """
                return dp[(left, right)]
            # Since Alice goes first and the initial number of piles is guaranteed to be even,
            # it is Alice's turn whenever the remaining row length (between left and right) is even
            alice_turn: bool = (left + right + 1) % 2 == 0
            if alice_turn:
                # Visit the scenario where Alice picks the left pile. Calculate Alice's advantage over Bob
                left_alice_stones, left_bob_stones = dfs(left + 1, right)
                left_alice_advantage = (piles[left] + left_alice_stones) - left_bob_stones
                # Visit the scenario where Alice picks the right pile. Calculate Alice's advantage over Bob
                right_alice_stones, right_bob_stones = dfs(left, right - 1)
                right_alice_advantage = (piles[right] + right_alice_stones) - right_bob_stones
                if left_alice_advantage > right_alice_advantage:
                    # Picking the left pile gives Alice the better advantage, so Alice picks the left pile
                    dp[(left, right)] = (piles[left] + left_alice_stones, left_bob_stones)
                else:
                    # Otherwise Alice picks the right pile
                    dp[(left, right)] = (piles[right] + right_alice_stones, right_bob_stones)
            else:
                # Visit the scenario where Bob picks the left pile. Calculate Bob's advantage over Alice
                left_alice_stones, left_bob_stones = dfs(left + 1, right)
                left_bob_advantage = (piles[left] + left_bob_stones) - left_alice_stones
                # Visit the scenario where Bob picks the right pile. Calculate Bob's advantage over Alice
                right_alice_stones, right_bob_stones = dfs(left, right - 1)
                right_bob_advantage = (piles[right] + right_bob_stones) - right_alice_stones
                if left_bob_advantage > right_bob_advantage:
                    # Picking the left pile gives Bob the better advantage, so Bob picks the left pile
                    dp[(left, right)] = (left_alice_stones, piles[left] + left_bob_stones)
                else:
                    # Otherwise Bob picks the right pile
                    dp[(left, right)] = (right_alice_stones, piles[right] + right_bob_stones)
            # Return the computed stones for Alice and Bob for this row
            return dp[(left, right)]

        # If Alice's optimal stones are more than Bob's, return True
        best_alice_stones, best_bob_stones = dfs()
        return best_alice_stones > best_bob_stones
That looks great! This time our solution was accepted!
For piles [2, 1], we reduced the number of scenarios visited from 5 to 4.
For piles [7, 2, 1], we reduced the number of scenarios from 11 to 6.
For piles [3, 7, 2, 1], we reduced the number of scenarios from 23 to 10.
We can see that the number of scenarios no longer doubles each time we add a new pile to the row!
However, the number of scenarios still grows roughly quadratically with N.
More specifically, if N is the size of piles, we have a time complexity of O(N²)! There are only N(N+1)/2 distinct (left, right) pairs with left ≤ right, and each one is computed at most once.
Now you clever parrots will ask:
"What about its spatial complexity?"
Looking at our scenario graph, we still have a maximum depth of 5 dfs calls on our call stack for piles [3, 7, 2, 1]! The call stack alone already uses O(N) space!
However, since we also need to store the pre-computed results for every pair of start and end indices, we end up with on the order of N² such pairs, so the cache alone uses O(N²) space!
In short, the space complexity of this top-down dynamic programming approach is O(N²).
Time Complexity: O(N²)
Space Complexity: O(N²)
Now you smart parrots will say:
"Hey! In this top-down approach, we run the scenarios in the graph from top to bottom just to get the solutions to the sub-problems! So we have to reassign the solutions to the subproblems!
We visit the non-leaf nodes twice!
How about we compute the solutions to the subproblems below and work from there? That would save about half of our visits!”
And you are right!
Let's try to calculate the solutions for the smaller stacks first before calculating the solutions for the larger stacks!
We start by computing the solutions for rows of size 1, followed by rows of size 2, then rows of size 3, until we reach rows of size N, where N is the size of the input piles.
from typing import List


class Solution:
    def stoneGame(self, piles: List[int]) -> bool:
        """
        dp stores the pre-computed number of stones for Alice and Bob, assuming Alice and Bob play optimally.
        dp[left][right] corresponds to the row of piles between the left and right indices.
        Given left = 0 and right = 2, for piles = [3, 7, 2, 1],
        dp[0][2] represents the number of stones for Alice and Bob for [3, 7, 2].
        Since it is Bob's turn to move when the remaining row has an odd length,
        Bob's ideal move is to take 3, Alice's ideal move is to take 7, and Bob takes the last 2.
        Alice gets 7 stones, Bob gets 5 stones:
        dp[0][2] = (7, 5)
        """
        num_piles: int = len(piles)
        dp: list[list[tuple[int, int]]] = [[(0, 0)] * num_piles for _ in range(num_piles)]
        # Fill the dp table from the bottom up
        for index in range(num_piles):
            """
            If the row only has 1 pile left, say [7], Bob picks next.
            Bob takes the last pile, leaving 0 for Alice.
            """
            dp[index][index] = (0, piles[index])
        # Fill in rows of size 2, then rows of size 3, up to the full row size
        # Looks like an O(N**3) solution but is actually O(N**2)
        for row_size in range(2, num_piles + 1):
            left = 0
            right = left + row_size - 1
            while right < num_piles:
                print(f"left: {left}, right: {right}")
                alice_turn: bool = (left + right + 1) % 2 == 0
                if alice_turn:
                    # Consider the scenario where Alice picks the left pile. Calculate Alice's advantage over Bob
                    left_alice_stones, left_bob_stones = dp[left + 1][right]
                    left_alice_advantage = (piles[left] + left_alice_stones) - left_bob_stones
                    # Consider the scenario where Alice picks the right pile. Calculate Alice's advantage over Bob
                    right_alice_stones, right_bob_stones = dp[left][right - 1]
                    right_alice_advantage = (piles[right] + right_alice_stones) - right_bob_stones
                    if left_alice_advantage > right_alice_advantage:
                        # Picking the left pile gives Alice the better advantage, so Alice picks the left pile
                        dp[left][right] = (piles[left] + left_alice_stones, left_bob_stones)
                    else:
                        # Otherwise Alice picks the right pile
                        dp[left][right] = (piles[right] + right_alice_stones, right_bob_stones)
                else:
                    # Consider the scenario where Bob picks the left pile. Calculate Bob's advantage over Alice
                    left_alice_stones, left_bob_stones = dp[left + 1][right]
                    left_bob_advantage = (piles[left] + left_bob_stones) - left_alice_stones
                    # Consider the scenario where Bob picks the right pile. Calculate Bob's advantage over Alice
                    right_alice_stones, right_bob_stones = dp[left][right - 1]
                    right_bob_advantage = (piles[right] + right_bob_stones) - right_alice_stones
                    if left_bob_advantage > right_bob_advantage:
                        # Picking the left pile gives Bob the better advantage, so Bob picks the left pile
                        dp[left][right] = (left_alice_stones, piles[left] + left_bob_stones)
                    else:
                        # Otherwise Bob picks the right pile
                        dp[left][right] = (right_alice_stones, piles[right] + right_bob_stones)
                left += 1
                right += 1
        best_alice_stones, best_bob_stones = dp[0][num_piles - 1]
        print(f"best_alice_stones: {best_alice_stones}, best_bob_stones: {best_bob_stones}")
        # If Alice's optimal stones are more than Bob's, Alice wins the game
        return best_alice_stones > best_bob_stones
The time complexity and space complexity are the same as for the top-down dynamic programming solution!
Time Complexity: O(N²)
Space Complexity: O(N²)
Now you clever parrots will ask:
"Why D: We even tried the bottom-up dynamic programming approach and it still performs poorly in terms of runtime and memory!"
Turns out we can return True as the answer!
Since Alice starts first and Alice plays optimally, Alice will always win in the end as long as there is a scenario where Alice can win.
From the scenario diagram we show that Alice will always have a scenario in which she also wins. So it sounds true!
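As a quick sanity check of this parity argument (just a sketch, not the submitted solution), we can compare the even-indexed and odd-indexed piles of the first test case:

piles = [5, 3, 4, 5]

# Alice can commit to taking every even-indexed pile or every odd-indexed pile.
even_indexed = sum(piles[0::2])  # 5 + 4 = 9
odd_indexed = sum(piles[1::2])   # 3 + 5 = 8

# The total number of stones is odd, so the two groups can never tie.
print(even_indexed, odd_indexed)  # 9 8 -> Alice commits to the even-indexed piles and wins

Whichever group Alice commits to, she can always reach it: after she takes an end pile from her group, both ends of the remaining row belong to the other group, so whatever Bob takes, a pile from her group is exposed again.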
from typing import List


class Solution:
    def stoneGame(self, piles: List[int]) -> bool:
        return True
Time Complexity: O(1)
Space Complexity: O(1)
How boring! We could have saved a lot of time by returning True from the start, right?
Fortunately, our efforts to get our wings dirty were not in vain!
In technical interviews, this question can be modified in many ways so that this shortcut no longer works.
For example, we might be asked to return the exact number of stones that Alice and Bob would have in the optimal scenario.
Even if our interviewers did that, we'd be fine! We parrots have done the hard work and dissected this problem from many angles. The memoized search we built earlier only needs a small tweak, as sketched below.
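Here is a minimal sketch of how that variant could look, reusing the same top-down memoized idea. The function name stone_game_counts and the use of functools.lru_cache are my own choices for illustration, not part of the original question.

from functools import lru_cache
from typing import List, Tuple

def stone_game_counts(piles: List[int]) -> Tuple[int, int]:
    """Return the exact (alice_stones, bob_stones) when both players play optimally."""
    @lru_cache(maxsize=None)
    def dfs(left: int, right: int) -> Tuple[int, int]:
        # Returns (stones for the player to move, stones for the other player)
        # for the sub-row piles[left..right].
        if left > right:
            return 0, 0
        # If the mover takes the left pile, the other player moves next on piles[left+1..right].
        next_mover_gain, next_other_gain = dfs(left + 1, right)
        take_left = piles[left] + next_other_gain  # mover's total if taking the left pile
        take_left_other = next_mover_gain          # opponent's total in that case
        # Same reasoning if the mover takes the right pile.
        next_mover_gain, next_other_gain = dfs(left, right - 1)
        take_right = piles[right] + next_other_gain
        take_right_other = next_mover_gain
        if take_left >= take_right:
            return take_left, take_left_other
        return take_right, take_right_other

    # Alice moves first, so the first returned value is hers.
    return dfs(0, len(piles) - 1)

print(stone_game_counts([3, 7, 2, 1]))  # (8, 5)

This matches the 8 versus 5 split we worked out by hand earlier.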
And that's all, parrots! I found this question really challenging and interesting, and I hope this article was helpful to you!
Until next time!