Hey, I've been working with LOCM for the past couple of weeks, and I'm now at the point where I'd like to use my trained agents in the environment. I have subclassed the Agent abstract class, defined the necessary methods, and created the environment as follows:
import gym

# my_draft_agent and opponent are Agent instances defined elsewhere
env = gym.make(
    'LOCM-battle-v0', version='1.5',
    deck_building_agents=[my_draft_agent, opponent],
    battle_agent=opponent,
    reward_functions=['win-loss'],
    reward_weights=[1.0]
)
Now the problem is that during the initialisation, my_draft_agent is used for the draft phase, but it receives a state of type gym_locm.engine.game_state.State, which is unsuitable for the neural network used within the agent. Is there any way to obtain a numerical representation of the state, such as the one returned by env.step?
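Concretely, what I'd like to write inside the act method of the class above is something like this (encode_state is hypothetical, it is exactly the helper I'm looking for, and state.available_actions is my guess at how to map the network's output back to an action):

    def act(self, state):
        # hypothetical: produce the same flat feature vector that
        # env.step()/env.reset() return as observations
        obs = encode_state(state)
        scores = self.model.predict(obs[None, :])
        # pick the legal action the network scores highest
        return state.available_actions[int(np.argmax(scores))]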
From what I gathered looking at the source code, State does not have any method that returns a numerical representation of the state. The only place where I found such a method is in the LOCMEnv class, which, I believe, I unfortunately cannot access from the agent during the draft phase.
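The only fallback I can think of is hand-rolling the encoding from the State object myself, roughly along these lines (the current_player.hand access and the card attribute names are guesses from a quick read of the engine code, so this may well be off):

def encode_draft_state(state):
    # assumption: during the draft, the current player's hand holds
    # the candidate cards, each with the usual LOCM card attributes
    features = []
    for card in state.current_player.hand:
        features.extend([card.cost, card.attack, card.defense,
                         card.player_hp, card.enemy_hp, card.card_draw])
    return np.array(features, dtype=np.float32)

But that duplicates encoding logic the library already has, and it would be easy to get it subtly wrong. Is there any other way? Thanks!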