linyilyi / street-fighter-ai

This is an AI agent for Street Fighter II Champion Edition.
License: Apache License 2.0
Does anyone know?
Props to you! I'm also interested in learning with the help of AI, but I don't know how. I watched your Street Fighter video three times today — my skill level is low, nothing I can do about that — and I'd also like to use AI for explosively efficient learning.
state file: Champion.Level1.RyuVsGuile.state
model file: ppo_ryu_7000000_steps_updated.zip
Where might the problem be, or what could be optimized?
Boss, I got the program running.
How do I operate it so I can see the ending of the game shown in the video? Many thanks.
This AI project is really fun, but for a beginner, getting it to run is basically as far as you can go. Understanding the project more deeply, let alone modifying it, is too hard.
For example, I first tried adding game audio in test.py and failed — I only got intermittent bits of sound, and I couldn't even tell whether they were related to the game.
My GPU is a 4090, and Brother Lin's training parameters don't saturate it. I asked ChatGPT how to maximize training efficiency; it told me to tune a few parameters. After tuning them, the CPU was maxed out while the GPU stayed mostly idle. When I asked again, it said some parameters load the CPU and some load the GPU — "try it yourself." Without a basic understanding of these parameters, blind guessing is very inefficient.
How should I interpret the curves and numbers in TensorBoard? In theory they should guide the training-parameter settings, right? I watched a few TensorBoard tutorials, but they don't map onto this AI fighting project.
I'd also like to change the training setup so that each episode ends after losing one round or winning two rounds, instead of ending after a single round, but I couldn't find the right place in the code.
Where do the training results go, and how do I turn them into something like Brother Lin's ppo_ryu_2500000_steps_updated.zip that test.py can load? No idea.
The codebase isn't large, but the knowledge behind it runs deep. T_T
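On the question of where training results go: stable-baselines3's standard mechanism for producing files like `ppo_ryu_2500000_steps.zip` is `CheckpointCallback`, which writes a `.zip` every `save_freq` steps; `PPO.load` then reads one back in test.py. A minimal sketch under that assumption (the `save_freq`, directory, and helper below are my own, not the project's exact code):

```python
import os
import re

# During training, a CheckpointCallback saves numbered .zip checkpoints:
#
#   from stable_baselines3.common.callbacks import CheckpointCallback
#   checkpoint_callback = CheckpointCallback(
#       save_freq=500_000, save_path="trained_models", name_prefix="ppo_ryu")
#   model.learn(total_timesteps=10_000_000, callback=checkpoint_callback)
#
# test.py then loads one with PPO.load(os.path.join(MODEL_DIR, MODEL_NAME)).

def latest_checkpoint(model_dir: str, prefix: str = "ppo_ryu") -> str:
    """Return the checkpoint filename with the highest step count."""
    pattern = re.compile(rf"{re.escape(prefix)}_(\d+)_steps.*\.zip$")
    candidates = []
    for name in os.listdir(model_dir):
        m = pattern.match(name)
        if m:
            candidates.append((int(m.group(1)), name))
    if not candidates:
        raise FileNotFoundError(f"no '{prefix}_*_steps*.zip' in {model_dir}")
    return max(candidates)[1]
```

With that helper, pointing test.py at the newest checkpoint is one call instead of hand-editing MODEL_NAME after each run.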
Most of the PDF files in the chats directory cannot be displayed on my computer (Ubuntu 20.14).
If a Shoryuken would solve the problem, why insist on hiding in the corner and doing shoulder throws?
Could this be extended to Melty Blood?
I'm on Win10, on a very old home-built machine with no high-end GPU.
Thank you, Mr. Lin, for the excellent share.
Wrapping the env with a `Monitor` wrapper
Wrapping the env in a DummyVecEnv.
Wrapping the env in a VecTransposeImage.
Fighting Begins!
Traceback (most recent call last):
  File "test.py", line 79, in <module>
    obs, reward, done, info = env.step(action)
  File "D:\Last\street-fighter-ai\main\street_fighter_custom_wrapper.py", line 83, in step
    curr_player_health = info['agent_hp']
KeyError: 'agent_hp'
It ran successfully from cmd yesterday. Today I tried launching it from VS Code, and after entering `python test.py` I get this error.
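A hedged guess at the cause: cmd was probably started inside `main/`, while VS Code launches from the workspace root, so relative paths (the custom data.json, ROM, and .state files) resolve differently and gym-retro falls back to an integration whose info dict has no `agent_hp` key. Anchoring the working directory to the script's own folder makes both launch methods equivalent:

```python
import os

# Ensure relative paths resolve the same way regardless of which
# directory the IDE or terminal launched us from: chdir to the folder
# that contains this script before calling retro.make(...).
os.chdir(os.path.dirname(os.path.abspath(__file__)))
print(os.getcwd())
```

Alternatively, set the "cwd" field of the VS Code launch configuration to the `main/` directory.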
When I first saw the video I thought, damn, this is impressive — then I looked at the code and it's so small!
Opened it up and... it turns out this AI just mashes buttons. It doesn't capture the screen at all; the so-called training just reads the memory addresses for both fighters' health bars and the win/loss state.
In other words, the AI mashed its way into a pattern that beats Bison/Vega in this particular save state — it has no generality at all.
Still, the fact that ChatGPT could write this code is already very impressive.
Also, can anyone tell me where the memory addresses in data/data.json come from?
PS: this approach also feels better suited to training on games with little randomness, like Mario.
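On where the data.json addresses come from: gym-retro reads RAM variables declared in a per-game data.json, and the addresses are typically found by RAM-searching in an emulator (watching which byte changes as a health bar drops, e.g. with BizHawk's RAM Search) or with gym-retro's Integration UI. The snippet below only illustrates the file's general shape — the `0` addresses are placeholders, not the project's real values:

```json
{
  "info": {
    "agent_hp": {"address": 0, "type": ">u2"},
    "enemy_hp": {"address": 0, "type": ">u2"}
  }
}
```

Each entry maps a variable name (surfaced in the `info` dict that `env.step()` returns) to a RAM address and a numpy-style dtype string.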
Judging by how many conversations there are, it was probably quite expensive.
Below is the error message:
C:\Users\xqjco\anaconda3\envs\StreetFighterAI\lib\site-packages\stable_baselines3\common\save_util.py:166: UserWarning: Could not deserialize object observation_space. Consider using `custom_objects` argument to replace this object. Exception: __randomstate_ctor() takes from 0 to 1 positional arguments but 2 were given
  warnings.warn(
Traceback (most recent call last):
  File "test.py", line 53, in <module>
    model = PPO.load(os.path.join(MODEL_DIR, MODEL_NAME), env=env)
  File "C:\Users\xqjco\anaconda3\envs\StreetFighterAI\lib\site-packages\stable_baselines3\common\base_class.py", line 678, in load
    raise KeyError("The observation_space and action_space were not given, can't verify new environments")
KeyError: "The observation_space and action_space were not given, can't verify new environments"
May I ask how you got it to write the code into GitHub?
(StreetFighterAI) D:\street-fighter-ai>cd main
(StreetFighterAI) D:\street-fighter-ai\main>python test.py
Wrapping the env with a `Monitor` wrapper
Wrapping the env in a DummyVecEnv.
Wrapping the env in a VecTransposeImage.
Fighting Begins!
What should I do? The game screen is frozen and completely still, and there's no error message either.
After running test.py the emulator window opens and shows the match screen, but it's static and doesn't move; the console reports no errors. What could be the cause?
Where can I find Champion.Level12.RyuVsBison.state and the other save-state files?
Please provide a Markdown or plain-text version of the chat conversations; the images are too large and hard to read and translate.
C:\Users\my.conda\envs\StreetFighterAI\lib\site-packages\stable_baselines3\common\save_util.py:166: UserWarning: Could not deserialize object observation_space. Consider using `custom_objects` argument to replace this object.
Exception: __randomstate_ctor() takes from 0 to 1 positional arguments but 2 were given
  warnings.warn(
Traceback (most recent call last):
  File "test.py", line 53, in <module>
    model = PPO.load(os.path.join(MODEL_DIR, MODEL_NAME), env=env)
  File "C:\Users\my.conda\envs\StreetFighterAI\lib\site-packages\stable_baselines3\common\base_class.py", line 678, in load
    raise KeyError("The observation_space and action_space were not given, can't verify new environments")
KeyError: "The observation_space and action_space were not given, can't verify new environments"
Above is the error message — can any expert explain where the problem is? Thanks!
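This error is typical of loading a checkpoint that was saved under different numpy/gym versions: the pickled spaces fail to deserialize (the `__randomstate_ctor` warning), so `PPO.load` then has no spaces to verify the env against. Stable-baselines3's documented escape hatch is the `custom_objects` argument of `load`, which substitutes live objects for the broken pickled entries. A minimal sketch, assuming `env`, `MODEL_DIR`, and `MODEL_NAME` are set up as in the project's test.py (the helper name is mine):

```python
from typing import Any, Dict

def spaces_override(env: Any) -> Dict[str, Any]:
    """Build a custom_objects mapping that tells PPO.load to skip
    deserializing the pickled spaces and use the live env's spaces
    instead."""
    return {
        "observation_space": env.observation_space,
        "action_space": env.action_space,
    }

# Usage sketch (requires stable-baselines3 and the env from test.py):
# model = PPO.load(os.path.join(MODEL_DIR, MODEL_NAME), env=env,
#                  custom_objects=spaces_override(env))
```

Because the overridden spaces come from the live env, the env's observation preprocessing must match what the model was trained on.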
It's just that the matches run a bit slowly. I don't have a desktop, so I set up the environment on a laptop (AMD Ryzen 7 4800U, 1.8 GHz).
Why does the same fight keep repeating without ever reaching the ending? Is the save state reloaded at the start of every match?
A complete tutorial on subscribing to ChatGPT Plus and using GPT-4: a ChatGPT Plus subscription guide
Wrapping the env with a `Monitor` wrapper
Wrapping the env in a DummyVecEnv.
Wrapping the env in a VecTransposeImage.
Traceback (most recent call last):
  File "D:/Projects/street-fighter-ai/main/test.py", line 78, in <module>
    action, _states = model.predict(obs)
  File "D:\Anaconda3\envs\street\lib\site-packages\stable_baselines3\common\base_class.py", line 535, in predict
    return self.policy.predict(observation, state, episode_start, deterministic)
  File "D:\Anaconda3\envs\street\lib\site-packages\stable_baselines3\common\policies.py", line 340, in predict
    observation, vectorized_env = self.obs_to_tensor(observation)
  File "D:\Anaconda3\envs\street\lib\site-packages\stable_baselines3\common\policies.py", line 255, in obs_to_tensor
    vectorized_env = is_vectorized_observation(observation, self.observation_space)
  File "D:\Anaconda3\envs\street\lib\site-packages\stable_baselines3\common\utils.py", line 380, in is_vectorized_observation
    return is_vec_obs_func(observation, observation_space)
  File "D:\Anaconda3\envs\street\lib\site-packages\stable_baselines3\common\utils.py", line 247, in is_vectorized_box_observation
    raise ValueError(
ValueError: Error: Unexpected observation shape (112, 160, 3) for Box environment, please use (3, 100, 128) or (n_env, 3, 100, 128) for the observation shape.
Fighting Begins!
Runtime environment:
Python 3.8.10、Windows 10 家庭中文版
Package Version
----------------------- ---------
absl-py 1.4.0
cachetools 5.3.0
certifi 2022.12.7
charset-normalizer 3.1.0
cloudpickle 2.2.1
contourpy 1.0.7
cycler 0.11.0
filelock 3.12.0
fonttools 4.39.3
google-auth 2.17.3
google-auth-oauthlib 1.0.0
grpcio 1.54.0
gym 0.21.0
gym-retro 0.8.0
idna 3.4
importlib-metadata 4.13.0
importlib-resources 5.12.0
Jinja2 3.1.2
kiwisolver 1.4.4
Markdown 3.4.3
MarkupSafe 2.1.2
matplotlib 3.7.1
mpmath 1.3.0
networkx 3.1
numpy 1.24.2
oauthlib 3.2.2
packaging 23.1
pandas 2.0.0
Pillow 9.5.0
pip 23.0.1
protobuf 4.22.3
pyasn1 0.5.0
pyasn1-modules 0.3.0
pyglet 1.5.27
pyparsing 3.0.9
python-dateutil 2.8.2
pytz 2023.3
requests 2.28.2
requests-oauthlib 1.3.1
rsa 4.9
setuptools 66.0.0
six 1.16.0
stable-baselines3 1.7.0
sympy 1.11.1
tensorboard 2.12.1
tensorboard-data-server 0.7.0
tensorboard-plugin-wit 1.8.1
torch 2.0.0
typing_extensions 4.5.0
tzdata 2023.3
urllib3 1.26.15
Werkzeug 2.2.3
wheel 0.38.4
zipp 3.15.0
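The shape mismatch above suggests the frame handed to `model.predict()` bypassed the project's resize step: (112, 160, 3) looks like a raw (downscaled) emulator frame, while the policy was trained on (100, 128, 3) frames, which `VecTransposeImage` presents channels-first as (3, 100, 128). The usual fix is to wrap the test env with the same custom wrapper used in training before calling `reset()`/`predict()`. Purely as an illustration of the shape involved (the project presumably does its own resize inside street_fighter_custom_wrapper.py), a nearest-neighbour resize to the expected shape:

```python
import numpy as np

def resize_nearest(frame: np.ndarray, h: int = 100, w: int = 128) -> np.ndarray:
    """Nearest-neighbour resize of an HxWx3 frame to (h, w, 3) by
    index selection (no interpolation library needed)."""
    ys = (np.arange(h) * frame.shape[0]) // h  # source row per target row
    xs = (np.arange(w) * frame.shape[1]) // w  # source col per target col
    return frame[ys][:, xs]
```

If the observation still arrives as (112, 160, 3), the custom wrapper is not in the env chain at all — check where `make_env` builds the env in test.py.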
Hi,
Thanks for open-sourcing this great project!
I'm wondering if it's possible to support controlling both players in the game, so that env.step() takes two actions per step, one for each player. Any suggestions on how to support that?
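For what it's worth, gym-retro has built-in support for this: `retro.make(..., players=2)` doubles the button space, `env.step()` takes both players' button arrays concatenated (player 1's buttons first), and the reward comes back per-player. A sketch under those assumptions (the helper name is mine, and the usage lines need gym-retro plus the ROM installed):

```python
from typing import List, Sequence

def combine_actions(p1: Sequence[int], p2: Sequence[int]) -> List[int]:
    """Concatenate two per-player button arrays into the single action
    array a players=2 retro env expects (player 1 first)."""
    return list(p1) + list(p2)

# Usage sketch:
# env = retro.make(game="StreetFighterIISpecialChampionEdition-Genesis",
#                  state="Champion.Level1.RyuVsGuile", players=2)
# obs = env.reset()
# obs, (r1, r2), done, info = env.step(combine_actions(a1, a2))
```

The existing save states would still need to be in a two-player match for player 2's inputs to have any effect.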
Brother Lin,
Will the 3070's 8 GB of VRAM meet the minimum training requirements?
I would also like to know the hardware requirements for the parallel training shown in your video.
The gym and stable-baselines3 libraries won't install because of a setuptools problem; even downgrading setuptools doesn't help.
Collecting stable-baselines3==1.7.0
Using cached stable_baselines3-1.7.0-py3-none-any.whl (171 kB)
Collecting gym==0.21 (from stable-baselines3==1.7.0)
Using cached gym-0.21.0.tar.gz (1.5 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [1 lines of output]
error in gym setup command: 'extras_require' must be a dictionary whose values are strings or lists of strings containing valid project/version requirement specifiers.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
I tried the fixes suggested in these gym/setuptools issues, with no luck:
DLR-RM/stable-baselines3#1324
openai/gym#3176
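One possible explanation for why downgrading setuptools inside the env doesn't help: pip builds the gym 0.21 sdist in an *isolated* build environment with its own (latest) setuptools, so the env's version never gets used. A workaround that has worked for others is to downgrade and then disable build isolation (the exact versions below are a guess that matches this project's pins):

```shell
# gym 0.21's setup.py is rejected by setuptools >= 67 ("extras_require"
# validation), so pin older build tools first.
pip install "setuptools==65.5.0" "wheel==0.38.4"
# --no-build-isolation makes pip build gym with the env's setuptools
# instead of a fresh isolated one.
pip install "gym==0.21.0" --no-build-isolation
pip install "stable-baselines3==1.7.0"
```

Once gym 0.21.0 is already installed, the stable-baselines3 install should see its dependency satisfied and skip rebuilding it.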
Hello, I'd like to study the ChatGPT conversation part, but after downloading I found the PDFs are all blank, with no content.
Could you please re-upload the chat PDFs?
Sorry to bother you, Brother Lin — may I ask how your data.json was produced?
If the stock gym-retro data.json were used instead, would training still work this well?
(StreetFighterAI) D:\街霸ai\street-fighter-ai\main>python train.py
Using cpu device
Wrapping the env in a VecTransposeImage.
The training currently runs on the CPU — how do I make it use the GPU?
Looking at your GPT conversation, GPT never gave a working method, and Brother Lin said at the end that he solved it himself. How did you solve it, Brother Lin?
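"Using cpu device" from stable-baselines3 usually means `torch.cuda.is_available()` is False — most often because the installed PyTorch build is CPU-only — rather than a missing PPO setting (PPO defaults to `device="auto"` and takes the GPU whenever torch can see one). A quick diagnostic, with the explicit `device` argument shown as a comment:

```python
# Check whether PyTorch can see a CUDA GPU; stable-baselines3 will only
# print "Using cuda device" when this is True.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:  # torch not installed in this interpreter
    device = "cpu"
print(device)

# If this prints "cpu" on a CUDA machine, reinstall a GPU build of torch
# (matching your CUDA version), then optionally be explicit:
# model = PPO("CnnPolicy", env, device="cuda", ...)
```

Note that with a CNN policy much of the per-step cost is emulator stepping on the CPU, so some GPU idle time is expected even when CUDA is working.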
I modified the code a bit to add a 2P mode with keyboard support — I'd like to play against the trained AI myself. Unfortunately the existing save states are all 1P; I tried pressing 2P's START button but couldn't join as player 2 (maybe no coin was inserted).
So there are probably only two approaches:
Good work, but I'm worried that linking to the ROM will have negative ramifications for you. Be careful!
How did you handle it? I didn't see anything in your GPT conversation other than replacing that dll file.
I replaced it too, but the game still isn't found.
I want to play shooters like CS:GO. I've seen Brother Lin's earlier videos, but as a beginner I can't figure out the setup — could some expert write a tutorial that even a newbie can follow?
I had GPT generate a simple Blackjack game with two different goals: 1) learn the optimal strategy; 2) control the win/loss/draw probabilities. Right now the gradient descent seems off, but I haven't figured out how to fix it: https://note.youdao.com/s/FSCsYQJs
➜ ai /opt/homebrew/anaconda3/bin/conda create -n StreetFighterAI python=3.8.10
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- python=3.8.10
Current channels:
- https://repo.anaconda.com/pkgs/main/osx-arm64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/osx-arm64
- https://repo.anaconda.com/pkgs/r/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
ERROR: Could not find a version that satisfies the requirement gym-retro==0.8.0 (from versions: 0.5.3, 0.5.4, 0.5.5, 0.5.6, 0.6.0, 0.7.0, 0.7.1)
ERROR: No matching distribution found for gym-retro==0.8.0
The training results feel pretty impressive. What's the basic logic behind this? Does running train.py simply start accumulating training data for the model?
First of all, thanks to LinYi for providing such a good introductory example. It really is well balanced in every respect: realistic, bounded, fun, and modest in its hardware requirements — and above all valuable, because it demonstrates what RL can do.
I've written compiler front-ends before and am very interested in this, but never had a chance to get into AI. So I intend to study this example thoroughly; I've found my interest really does lie in the AI direction.
I can already run it under single-step debugging, so the environment is set up.
My question is: how did everyone obtain and run the Street Fighter II Special Champion Edition ROM and its runtime environment?
If there is a detailed write-up, could some kind soul find it for me? If it doesn't touch on anything legal, please reply under this issue; if you have concerns, please email me: [email protected]
Many thanks.
(.venv) F:\street-fighter-ai\main>python test.py
Traceback (most recent call last):
  File "test.py", line 49, in <module>
    env = make_env(game, state="Champion.Level12.RyuVsBison")()
  File "test.py", line 38, in _init
    env = retro.make(
  File "F:\street-fighter-ai\main\.venv\lib\site-packages\retro\__init__.py", line 55, in make
    return RetroEnv(game, state, inttype=inttype, **kwargs)
  File "F:\street-fighter-ai\main\.venv\lib\site-packages\retro\retro_env.py", line 60, in __init__
    self.load_state(self.statename, inttype)
  File "F:\street-fighter-ai\main\.venv\lib\site-packages\retro\retro_env.py", line 271, in load_state
    with gzip.open(retro.data.get_file_path(self.gamename, statename, inttype), 'rb') as fh:
  File "C:\Users\Datou\AppData\Local\Programs\Python\Python38\lib\gzip.py", line 62, in open
    raise TypeError("filename must be a str or bytes object, or a file")
TypeError: filename must be a str or bytes object, or a file
Why is this file missing?
Could there be sound output, and a higher resolution?
Hello, Brother Lin. For the reinforcement learning used in this project, which .py file defines your loss function?
In street_fighter_custom_wrapper.py I only see the custom_reward variable updated when a round is won or lost; I'd like to know the details of your reward/penalty algorithm for the AI.
For example: does successfully landing a difficult, high-damage technique like a shoulder throw earn x bonus points, and so on? Code details would be even better.
I'm a graduate student specializing in computer vision and deep learning. I really enjoy your videos and hope you get a chance to answer my questions, Brother Lin.
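On the reward question generally: in PPO the loss function itself lives inside stable-baselines3, so the project-specific part is only the per-step reward computed in the wrapper. A common shaping for fighting games is the HP delta read from the info dict that gym-retro fills from data.json. The sketch below is illustrative only, not the project's exact code — `agent_hp` follows the key seen in the project's tracebacks, while `enemy_hp` is my assumption:

```python
class ShapedRewardWrapper:
    """Illustrative reward shaping: each step, reward damage dealt minus
    damage taken, using HP values exposed via the env's info dict."""

    def __init__(self, env):
        self.env = env
        self.prev = None  # (agent_hp, enemy_hp) from the previous step

    def reset(self, **kwargs):
        self.prev = None
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, _, done, info = self.env.step(action)
        agent_hp, enemy_hp = info["agent_hp"], info["enemy_hp"]
        reward = 0.0
        if self.prev is not None:
            prev_agent, prev_enemy = self.prev
            # damage dealt minus damage taken this step
            reward = (prev_enemy - enemy_hp) - (prev_agent - agent_hp)
        self.prev = (agent_hp, enemy_hp)
        return obs, reward, done, info
```

Rewarding a *specific* technique such as a throw would additionally require a RAM variable in data.json that identifies the current move or animation; nothing in the wrapper described above suggests such a bonus exists in the project.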
If the AI learns entirely on its own, I'm afraid it will never throw fireballs or Shoryukens, because it can win without them. Would it be possible to intervene manually so the AI learns fireballs and Shoryukens and wins even faster?