Comments (8)

Kyushik commented on August 17, 2024

Feel free to ask questions anytime! :) Hmm... putting it in like that won't hurt training. When you use multiple agents, you do sometimes have to initialize the environment with reset every episode, just as you wrote it. With a single agent like now, though, writing it that way means the environment gets reset once because of Done and then reset once more right away because of that code, so the environment could end up being initialized twice! It would be worth checking that part.
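
For reference, on the Python side the single-agent loop can look roughly like the sketch below. This is only a minimal illustration, assuming the legacy BrainInfo-style mlagents.envs API from this generation of ML-Agents; the build name, episode count, and the random-action stand-in are placeholders, not code from this repo.

from mlagents.envs import UnityEnvironment   # legacy BrainInfo-style API (assumption)
import numpy as np

env = UnityEnvironment(file_name="GridWorld3D")   # placeholder build name
default_brain = env.brain_names[0]

# Reset once before training. After that, Done() on the C# side already respawns
# the single agent, so calling env.reset() again at the start of every episode
# would initialize the environment a second time.
env_info = env.reset(train_mode=True)[default_brain]

for episode in range(1000):                       # placeholder episode count
    done = False
    while not done:
        state = env_info.vector_observations[0]
        action = np.random.randint(6)             # stand-in for the DQN's epsilon-greedy choice
        env_info = env.step(action)[default_brain]
        reward = env_info.rewards[0]
        done = env_info.local_done[0]             # True right after Done() in AgentAction

env.close()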

woongdam commented on August 17, 2024

Hmm... okay! Thank you. I have one more question. When I train the 3D GridWorld, once the epsilon value gets down to around 0.01, the agent starts moving toward an obstacle or a wall and then just stops taking any action.

As you suggested before, I made it so the agent can't actually move when it heads into a wall or an obstacle, but it still doesn't work well... What could be the problem?

That's why, at one point, I even tried giving a negative reward for moving into a wall or an obstacle, like this:

if (!maskActions)
{
    //SetReward(-3f);
}
...
else
{
    //SetReward(-3f);
}

Below is the code.

public override void AgentAction(float[] vectorAction, string textAction)
   {
       int maxPosition = academy.gridSize;
       int minPosition = -1;
       var positionX = (int)transform.position.x;
       var positionY = (int)transform.position.y;
       var positionZ = (int)transform.position.z;
      

       int action = Mathf.FloorToInt(vectorAction[0]);

       Vector3 targetPos = transform.position;
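       // Each case below tries to move one grid cell; if the target cell would be
       // outside the grid, the move is cancelled and maskActions is set to false for this step.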

       switch (action)
       {
           //case NoAction:
           //    // do nothing
           //    break;
           case Forward:
               targetPos = transform.position + new Vector3(0f, 0, 1f);
               if (targetPos.z == maxPosition)
               {
                   targetPos = transform.position;
                   maskActions = false;
               }

               break;
           case Backward:
               targetPos = transform.position + new Vector3(0f, 0, -1f);
               if (targetPos.z == minPosition)
               {
                   targetPos = transform.position;
                   maskActions = false;
               }
               break;
           case Right:
               targetPos = transform.position + new Vector3(1f, 0, 0);
               if (targetPos.x == maxPosition)
               {
                   targetPos = transform.position;
                   maskActions = false;
               }
               break;
           case Left:
               targetPos = transform.position + new Vector3(-1f, 0, 0f);
               if (targetPos.x == minPosition)
               {
                   targetPos = transform.position;
                   maskActions = false;
               }
               break;
           case Up:

               targetPos = transform.position + new Vector3(0f, 1f, 0f);
               if (targetPos.y == maxPosition)
               {
                   targetPos = transform.position;
                   maskActions = false;
               }
               break;
           case Down:
               targetPos = transform.position + new Vector3(0f, -1f, 0f);
               if (targetPos.y == minPosition)
               {
                   targetPos = transform.position;
                   maskActions = false;
               }
               break;

           default:
               throw new ArgumentException("Invalid action value");
       }

       AddReward(-1);
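       // maskActions == false here means the chosen move would have left the grid this step;
       // the extra penalty for that case is currently commented out.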
       if (!maskActions)
       {
               //SetReward(-3f);
       }
       else
       {
           preDist = (goal.transform.position - gameObject.transform.position).magnitude;
           Vector3 prevec = gameObject.transform.position;
           
           Collider[] blockTest = Physics.OverlapBox(targetPos, new Vector3(0.3f, 0.3f, 0.3f));
           if (blockTest.Where(col => col.gameObject.CompareTag("obstacle")).ToArray().Length == 0)
           {

               if (blockTest.Where(col => col.gameObject.CompareTag("goal")).ToArray().Length == 1)
               {
                   Done();
                   SetReward(100f);

               }

               else
               {
                   transform.position = targetPos;
                  
                   curDist = (goal.transform.position - gameObject.transform.position).magnitude;
                   if (curDist < preDist)
                   {
                        var reward = Math.Abs(curDist - preDist);
                       AddReward(reward + 1);
                   }
                   else
                   {
                       var reward = Math.Abs(curDist - preDist);
                        AddReward(-reward - 1);
                   }

                   if (isA)
                   {

                       Vector3 lastvec = gameObject.transform.position;
                       if ((lastvec.x != pprevec.x && lastvec.y == pprevec.y && lastvec.z == pprevec.z) || (lastvec.x == pprevec.x && lastvec.y != pprevec.y && lastvec.z == pprevec.z) || (lastvec.x == pprevec.x && lastvec.y == pprevec.y && lastvec.z != pprevec.z))
                       {
                           AddReward(1f);
                       }
                       else if (pprevec == lastvec)
                       {
                           AddReward(-1f);
                       }
                       else
                       {
                           AddReward(-1f);
                       }


                   }
                   isTwise = true;
                   if (isTwise)
                   {
                       pprevec = prevec;
                   }


                   isA = true;
               }
           }
           else
           {
                //SetReward(-3f);
           }
          
       }
        if (GetReward() == 0)
       {

       }
       double qwq = GetReward();
       Debug.Log(GetReward());
       maskActions = true;

   }

Is there something wrong with the code?

Kyushik commented on August 17, 2024

When I tested GridWorld-style games myself, I also noticed that this tends to happen occasionally with DQN-family algorithms. On-policy algorithms like PPO don't seem to have this problem, so it is a somewhat odd phenomenon. For now, I'd recommend giving a slightly larger penalty for moving into a wall, lowering the discount factor (gamma) a bit, or simply training for longer!
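
Just to show where gamma comes in, below is a toy sketch of the standard one-step DQN target (not code from this repo; the Q-values and rewards are made up). A smaller gamma discounts future reward harder, with an effective horizon on the order of 1/(1 - gamma) steps, so the per-step penalties weigh more relative to the distant goal reward.

import numpy as np

# One-step DQN target: r + gamma * max_a' Q(s', a'), zeroed at terminal states.
def dqn_target(reward, next_q_values, done, gamma):
    return reward + (1.0 - done) * gamma * np.max(next_q_values)

q_next = np.array([2.0, 0.5, -0.3])   # made-up Q-values for the next state

# With gamma = 0.99 the effective horizon is ~100 steps; with gamma = 0.9 it is ~10,
# so distant rewards are discounted much harder and per-step penalties matter more.
print(dqn_target(reward=-1.0, next_q_values=q_next, done=0.0, gamma=0.99))   # -1 + 0.99 * 2.0 = 0.98
print(dqn_target(reward=-1.0, next_q_values=q_next, done=0.0, gamma=0.9))    # -1 + 0.9  * 2.0 = 0.8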

woongdam commented on August 17, 2024

Okay, I'll give it another try and ask for advice again if needed.
Thank you!

woongdam commented on August 17, 2024

Traceback (most recent call last):
  File "c:\users\youngsukim\anaconda3\envs\rein\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\youngsukim\anaconda3\envs\rein\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\youngsukim\anaconda3\envs\rein\Scripts\mlagents-learn.exe\__main__.py", line 7, in <module>
  File "c:\users\youngsukim\anaconda3\envs\rein\lib\site-packages\mlagents\trainers\learn.py", line 262, in main
    run_training(0, run_seed, options, Queue())
  File "c:\users\youngsukim\anaconda3\envs\rein\lib\site-packages\mlagents\trainers\learn.py", line 95, in run_training
    tc.start_learning(env, trainer_config)
  File "c:\users\youngsukim\anaconda3\envs\rein\lib\site-packages\mlagents\trainers\trainer_controller.py", line 218, in start_learning
    new_info = self.take_step(env, curr_info)
  File "c:\users\youngsukim\anaconda3\envs\rein\lib\site-packages\mlagents\trainers\trainer_controller.py", line 297, in take_step
    trainer.update_policy()
  File "c:\users\youngsukim\anaconda3\envs\rein\lib\site-packages\mlagents\trainers\ppo\trainer.py", line 334, in update_policy
    run_out = self.policy.update(buffer.make_mini_batch(start, end), n_sequences)
  File "c:\users\youngsukim\anaconda3\envs\rein\lib\site-packages\mlagents\trainers\buffer.py", line 197, in make_mini_batch
    mini_batch[key] = np.array(self[key][start:end])
ValueError: could not broadcast input array from shape (65) into shape (1)

UnityEnvironment worker: keyboard interrupt
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "c:\users\youngsukim\anaconda3\envs\rein\lib\multiprocessing\util.py", line 319, in _exit_function
    p.join()
  File "c:\users\youngsukim\anaconda3\envs\rein\lib\multiprocessing\process.py", line 124, in join
    res = self._popen.wait(timeout)
  File "c:\users\youngsukim\anaconda3\envs\rein\lib\multiprocessing\popen_spawn_win32.py", line 80, in wait
    res = _winapi.WaitForSingleObject(int(self._handle), msecs)
KeyboardInterrupt

I tried using the built-in PPO algorithm, and this error shows up after about 10,000 steps.

Why does this error occur?

Kyushik commented on August 17, 2024

Hmm... this is a situation I haven't seen before either. Let me share it with the other authors and get back to you!

woongdam commented on August 17, 2024

Author!!! I deleted everything and reinstalled, and now it works fine!!!

I'll keep working on the 3D GridWorld and reach out again if I get stuck on anything.

Thank you as always for the great advice :)))

Kyushik commented on August 17, 2024

Glad to hear it's resolved! If you run into anything else as you go, please feel free to open another issue! :)
