✨Tips and tricks
Here are some tips and tricks for creating your bot.

📏 The Manhattan Distance

The Manhattan Distance is a useful way to calculate distances on a grid-based map such as this one.
Source: https://iq.opengenus.org/euclidean-vs-manhattan-vs-chebyshev-distance/
Here's our implementation of the Manhattan Distance, which you are free to use in your code:
```python
# returns the manhattan distance between two tiles, calculated as:
# |x1 - x2| + |y1 - y2|
def manhattan_distance(self, start, end):
    distance = abs(start[0] - end[0]) + abs(start[1] - end[1])
    return distance
```
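As a quick sketch of how you might use it, here's a hypothetical helper that picks the closest tile out of a list of candidate tiles (e.g. pickups), using the Manhattan distance as the comparison key. The `closest_tile` name and the example coordinates are our own, not part of the game API:

```python
# the same distance function, written standalone for this example
def manhattan_distance(start, end):
    return abs(start[0] - end[0]) + abs(start[1] - end[1])

# hypothetical helper: return the candidate tile nearest to my_location
def closest_tile(my_location, candidates):
    # min() with a distance key gives us the nearest candidate
    return min(candidates, key=lambda tile: manhattan_distance(my_location, tile))

# e.g. choosing the nearest of three tiles from (0, 0)
print(closest_tile((0, 0), [(5, 5), (2, 1), (4, 0)]))  # -> (2, 1)
```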

🗺️ Printing the full game map

In case you're looking to visualise the entire game map in some form (e.g. as an array), here's an example implementation:
```python
# make sure you've imported numpy at the start of your script (import numpy as np)

# print the game map as an array using numpy
def print_map(self, game_state):
    # note the y-index will be flipped,
    # since numpy arrays index from the top left as (0,0),
    # whereas our map follows a cartesian coordinate system with the bottom left as (0,0)
    cols = game_state.size[0]
    rows = game_state.size[1]

    game_map = np.zeros((rows, cols)).astype(str)

    for x in range(cols):
        print(f"x: {x}")
        for y in range(rows):
            entity = game_state.entity_at((x, y))
            if entity is not None:
                game_map[y][x] = entity
            else:
                game_map[y][x] = 9  # using 9 here as 'free' since 0 = Player 1

    return game_map

# thank you: @ifitaintbroke for providing a fixed version
```
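Since the returned array indexes from the top left, one option is to flip it vertically before printing so it reads in the map's cartesian orientation (bottom left as (0,0)). A minimal sketch, using a toy 2x3 array standing in for `print_map`'s output (the entity strings here are illustrative only):

```python
import numpy as np

# toy 2x3 map as print_map might return it:
# row index 0 corresponds to y = 0 (the bottom row of the real map)
game_map = np.array([["0", "9", "9"],
                     ["9", "9", "b"]])

# flipud reverses the row order, so the bottom map row prints last,
# matching how the map looks on screen
flipped = np.flipud(game_map)
print(flipped)
```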

🎲 Checking whether game_state has been updated (Dealing with asynchrony)

Due to a limitation in the way Agents interact with the game environment, there may be a lag in the update of the game_state your Agent receives on each 'tick'. For example, when an Agent produces an action, the effects of this action may not become observable in the next 'tick', but will be updated in the following 'tick' after that.
As a workaround, you can pre-plan and store your Agent's action, e.g.:
self.planned_actions.append('p')
Then have your Agent choose the next planned action (e.g. using action = self.planned_actions.pop(0), which takes the oldest planned action first) after checking whether the game state has updated.
One method is to check whether the current game_state.tick_number is greater than the previous tick_number by a certain delay threshold (e.g. one or two ticks) before sending your next move.
Another method is using game_state.entity_at(my_location) to check whether your action has been executed (e.g. if game_state.entity_at(my_location) == 'p', then you know your Agent has successfully placed its bomb).
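The tick-based check can be sketched as follows. This is a minimal illustration, not the game's API: `AgentSketch`, `last_action_tick`, and `DELAY_THRESHOLD` are names we've made up here, and `tick_number` stands in for `game_state.tick_number`:

```python
DELAY_THRESHOLD = 1  # require the game to advance more than one tick between actions

class AgentSketch:
    def __init__(self):
        self.planned_actions = []   # actions queued up in advance
        self.last_action_tick = -1  # tick on which we last sent an action

    def next_move(self, tick_number):
        # only act once the game state has advanced past our delay threshold
        if tick_number - self.last_action_tick > DELAY_THRESHOLD and self.planned_actions:
            self.last_action_tick = tick_number
            return self.planned_actions.pop(0)  # oldest planned action first
        return None  # otherwise, wait for the state to catch up
```

With this sketch, an action sent on tick 1 suppresses further actions until tick 3, giving the lagged `game_state` time to reflect the previous move.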

🕹️ Storing and planning moves

Since your Agent is a class object, instead of returning one action at a time, you can also store and pre-plan a list of moves in one go. As the game executes, you can then tell it to choose from your pre-planned set of moves instead of having to process a new move.
For example, to store a move: self.planned_actions.append('b')
Then to use your planned action:
```python
if self.planned_actions:
    # if we have actions stored, we'll execute these first
    # (pop(0) takes the oldest planned action; pop() would take the newest)
    action = self.planned_actions.pop(0)
else:
    # do stuff here to plan your next action
    pass
```
This can be useful for navigating to specific objects across the game map, or as a workaround to any game_state update syncing issues (see the note on asynchrony above).
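As an illustration, you might queue a whole "place bomb, then retreat" sequence in one go and consume it one action per tick. The action codes and the `plan_bomb_and_retreat` helper below are hypothetical examples, not part of the game API:

```python
planned_actions = []

# hypothetical helper: queue a bomb placement followed by two retreat moves
def plan_bomb_and_retreat():
    # 'p' = place bomb, 'l' = move left, 'u' = move up (illustrative codes)
    planned_actions.extend(['p', 'l', 'u'])

plan_bomb_and_retreat()

# on each later tick, consume the oldest planned action first
action = planned_actions.pop(0)  # -> 'p'
```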

📜 Additional Readings

Monte Carlo Tree Search: A Python implementation
Introduction to Monte Carlo Tree Search - Jeff Bradberry
Reinforcement Learning: An Introduction, Sutton & Barto (2017): http://incompleteideas.net/book/bookdraft2017nov5.pdf
Our own Tabular Q-Learning Tutorial (using OpenAI Gym's Taxi): Tutorial: An Introduction to Reinforcement Learning Using...