Note using issue
gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv 9BDB3D89CE49EC21
gpg --export --armor 9BDB3D89CE49EC21 | sudo apt-key add -
apt-get install python-software-properties software-properties-common
docker pull tensorflow/tensorflow:latest-gpu-py3
docker run -it --rm --runtime=nvidia tensorflow/tensorflow:latest-gpu-py3 python
docker run --gpus all -it -v /home/jinsc/workspace/:/share -v /home/guolab/WIN_Z/jinscZ:/jinscZ --name tfgpu --ipc=host --network=host -p 6006:6006 -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3 /bin/bash
http://mccipc.ustc.edu.cn/mediawiki/index.php/Docker-images
landslide
https://towardsdatascience.com/deep-learning-with-satellite-data-b78b20708de
https://pmm.nasa.gov/landslides/projects.html
https://github.com/openalea/openalea/wiki/install
https://openalea.readthedocs.io/en/latest/install.html
conda create -n openalea python=2.7
activate openalea
conda install -y -c openalea openalea.sconsx openalea.visualea openalea.components openalea.oalab
conda install -c openalea/label/unstable openalea.plantgl
conda install -y -c openalea openalea.mtg openalea.lpy pyqglviewer
To install plantgl, I successfully used the command below:
conda install -c openalea/label/unstable openalea.plantgl
https://openalea.readthedocs.io/en/latest/tutorials/visualea/beginner.html
conda create -n visualea_tuto
activate visualea_tuto
conda install -y -c openalea openalea.visualea openalea.components openalea.plantgl boost=1.66 -c openalea/label/unstable
After the settings above, restart PyCharm; otherwise PyCharm may still have problems.
If problems remain, see https://blog.csdn.net/bhcgdh/article/details/81357204
https://blog.csdn.net/lq_520/article/details/83856429?fps=1&locationNum=2
cat /etc/issue
https://blog.csdn.net/Bleachswh/article/details/51334661
vim /etc/profile
source /etc/profile
Switch the download mirror (USTC): https://www.linuxidc.com/Linux/2017-01/139319.htm
Switch the download mirror (Aliyun): https://www.cnblogs.com/fhychzu/p/7542754.html
https://blog.csdn.net/silentwolfyh/article/details/53909518
for i in *.tar; do tar xvf "$i"; done
for i in *.7z; do 7z x "$i"; done
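A self-contained sketch of the batch-extraction loop, with sample data created on the fly (globbing instead of `$(ls ...)` handles filenames that contain spaces):

```shell
# Create a throwaway workspace with one sample tar archive.
workdir=$(mktemp -d)
cd "$workdir"
mkdir payload && echo "hello" > payload/a.txt
tar cf sample.tar payload && rm -r payload

# Extract every .tar in the current directory; the -e guard
# skips the literal pattern when no .tar files match.
for f in *.tar; do
  [ -e "$f" ] || continue
  tar xf "$f"
done
```

The same pattern works for 7z archives: `for f in *.7z; do 7z x "$f"; done`.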
Count the files in a directory:
ls -l | grep "^-" | wc -l
Count the subdirectories in a directory:
ls -l | grep "^d" | wc -l
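Parsing `ls -l` works for quick checks, but `find` counts the same things more robustly. A sketch against a temporary directory:

```shell
# Build a small sample directory: two regular files, one subdirectory.
demo=$(mktemp -d)
cd "$demo"
touch a.txt b.txt
mkdir sub

# Count regular files and subdirectories (non-recursive).
files=$(find . -maxdepth 1 -type f | wc -l)
dirs=$(find . -maxdepth 1 -mindepth 1 -type d | wc -l)
echo "files=$files dirs=$dirs"
```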
http://blog.sina.com.cn/s/blog_464f6dba01012vwv.html
https://blog.csdn.net/boonya/article/details/74906927
docker pull jinsc/imagename:tag
https://blog.csdn.net/anxpp/article/details/51810776
docker save -o /home/jinsc/dockertar/ubuntu16py36torch12cuda10.tar ubuntu16py36torch12cuda10:20190914
Or:
docker export 7691a814370e > ubuntu.tar
docker load < ubuntu16py36torch12cuda10.tar
You can use docker import to re-import a container snapshot file as an image, for example:
$ cat ubuntu.tar | docker import - test/ubuntu:v1.0
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
test/ubuntu v1.0 9d37a6082e97 About a minute ago 171.3 MB
https://yeasy.gitbooks.io/docker_practice/container/import_export.html
DeepLearning_ubuntu16.0.4-server.docx
Add the Graphics Drivers PPA:
$ sudo add-apt-repository ppa:graphics-drivers/ppa
Refresh the package lists:
$ sudo apt-get update
In System Settings -> Software & Updates -> Additional Drivers, select the matching driver and click Apply Changes.
2. Install CUDA (includes the NVIDIA driver)
https://developer.nvidia.com/cuda-90-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1604&target_type=runfilelocal
sudo chmod a+x cuda_9.0.176_384.81_linux.run
sudo sh cuda_9.0.176_384.81_linux.run
After installation, declare the environment variables and append them to the end of ~/.bashrc:
export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
To add environment variables permanently: https://blog.csdn.net/Bleachswh/article/details/51334661
vim /etc/profile
export PATH=/usr/local/cuda-9.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
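The `${VAR:+:${VAR}}` idiom in these exports appends `:$OLD_VALUE` only when the variable is already set, avoiding a stray trailing colon (which the linker would treat as the current directory). A quick check:

```shell
# With the variable unset, the expansion contributes nothing:
unset LD_LIBRARY_PATH
new=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
echo "$new"   # /usr/local/cuda-9.0/lib64

# With the variable set, the old value is appended after a colon:
LD_LIBRARY_PATH=/opt/lib
new=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
echo "$new"   # /usr/local/cuda-9.0/lib64:/opt/lib
```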
3. Install cuDNN (from the tar.gz archive)
tar -zxf cudnn-9.0-linux-x64-v7.6.2.24.tgz
cd cuda
ls
sudo cp lib64/* /usr/local/cuda/lib64/
sudo cp include/cudnn.h /usr/local/cuda/include/
cd /usr/local/cuda/lib64/
ls
sudo ln -sf /usr/local/cuda/lib64/libcudnn.so.7.6.2 /usr/local/cuda/lib64/libcudnn.so.7
sudo ln -sf /usr/local/cuda/lib64/libcudnn.so.7 /usr/local/cuda/lib64/libcudnn.so
sudo ldconfig
sudo reboot
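The two `ln -sf` commands above build the soname chain the linker expects (`libcudnn.so -> libcudnn.so.7 -> libcudnn.so.7.6.2`). A dry run against dummy files in a scratch directory (paths here are illustrative, not the real install location):

```shell
# Simulate the versioned-library layout.
d=$(mktemp -d)
cd "$d"
touch libcudnn.so.7.6.2                  # stand-in for the real versioned library
ln -sf libcudnn.so.7.6.2 libcudnn.so.7   # major-version symlink
ln -sf libcudnn.so.7 libcudnn.so         # development symlink
readlink libcudnn.so                     # -> libcudnn.so.7
```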
Install the Python environment (Anaconda):
sudo bash Anaconda3-2019.07-Linux-x86_64.sh
Edit the environment variables:
sudo gedit /etc/profile
source /etc/profile
conda deactivate
Or:
export Anaconda=/anaconda3/bin/
export PATH=$PATH:$Anaconda
source ~/.bashrc
Install the PyTorch environment:
pip install --user torch torchvision
Other environments such as TensorFlow can be installed the same way.
1. Install Docker
https://yeasy.gitbooks.io/docker_practice/content/install/ubuntu.html
The difference between docker and nvidia-docker:
https://blog.csdn.net/u013355826/article/details/89633619
4. Install nvidia-docker
The following installs Docker and nvidia-docker for Linux-based deep learning:
https://blog.csdn.net/junxiacaocao/article/details/79471770
https://github.com/NVIDIA/nvidia-docker
Official installation method:
Add the package repositories:
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
$ sudo systemctl restart docker
Usage
$ docker run --gpus all nvidia/cuda:9.0-base nvidia-smi
$ docker run --gpus 2 nvidia/cuda:9.0-base nvidia-smi
$ docker run --gpus '"device=1,2"' nvidia/cuda:9.0-base nvidia-smi
$ docker run --gpus '"device=UUID-ABCDEF,1"' nvidia/cuda:9.0-base nvidia-smi
$ docker run --gpus all,capabilities=utility nvidia/cuda:9.0-base nvidia-smi
Otherwise, a non-official installation method:
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb
sudo service nvidia-docker start
sudo nvidia-docker-plugin
5. Install Rancher, to pull and manage images
https://www.cnblogs.com/gentleman-c/p/7387856.html
1.1 run a rancher
sudo docker run -d --name rancher-server -p 8080:8080 rancher/server
-d runs in the background; --name names the container; -p maps ports
docker run -it --rm --name ds -p 8888:8888 jupyter/datascience-notebook
-it allocates an interactive terminal; --rm removes the container after it exits
Log in to Rancher: http://10.11.30.151:8080
Set username jinsc, password XXXX
# Check the public IP: 159.226.89.150
curl cip.cc, or curl ip.cip.cc, or curl http://members.3322.org/dyndns/getip
# Add the host to Rancher
nvidia-docker run -it -p 8888:8888 tensorflow/tensorflow:1.10.0-gpu
http://(e5cfdd71b9e0 or 127.0.0.1):8888/?token=437d92fda454fe9759e65a9c26f155ea6b2a50b41bac4299
docker start containerID # docker attach containerID
docker volume create my-vol
docker volume inspect my-vol
docker ps # get the target container's ID or name; here the container ID is 52261df2fab6
docker inspect -f '{{.Id}}' <containerID> # get the container's full ID
docker inspect -f '{{.Id}}' c996838f46c1 # returns the long ID
With the long ID, transferring files between host and container is straightforward.
docker cp <local file path> <full ID>:<container path>
docker cp /home/jinsc/test.abc c996838f46c1261c4422054e3aeda7e301b7645439d3b850b908223751c68cc4:/workspace
docker cp <full ID>:<container file path> <local path>
docker cp c996838f46c1261c4422054e3aeda7e301b7645439d3b850b908223751c68cc4:/workspace/fromdocker.file /home/jinsc/workspace/
nvidia-docker run -it -p 8888:8888 tensorflow/tensorflow:latest-gpu
nvidia-docker run -p 0.0.0.0:6006:6006 -it tensorflow/tensorflow:latest-gpu bash
nvidia-docker run -it --name PCNN2 --network=host -p 6006:6006 -p 8888:8888 -v /home/jinsc/workspace/:/share ImageID /bin/bash
The host itself can then access it via the interface below; other machines can view it via the host IP, e.g. 10.11.30.151:6007.
By default Docker cannot access any device. To access the GPU inside Docker, add --privileged=true:
sudo docker run --privileged=true -i -t -v $PWD:/data centos /bin/bash
https://zhuanlan.zhihu.com/p/64493662
nvidia-docker run -it -v /home/jinsc/workspace/:/share --name ubuntu16cuda10torch12 --network=host -p 6006:6006 -p 8888:8888 anibali/pytorch:cuda-10.0 /bin/bash
docker run --gpus all -it -v /home/jinsc/workspace/:/share --name ubuntu16cuda10torch12 --network=host -p 6006:6006 -p 8888:8888 anibali/pytorch:cuda-10.0 /bin/bash
docker run --gpus all -it -v /home/jinsc/workspace/:/share -v /home/guolab/WIN_Z/jinscZ:/jinscZ --name superpointgraph --network=host -p 6006:6006 -p 8888:8888 jinsc/ubuntu16cuda9cudnn7py37torch11:superpointgraph /bin/bash
docker run --gpus all -it -v /home/jinsc/workspace/:/share -v /home/guolab/WIN_Z/jinscZ:/jinscZ --name pytorchgeometric --network=host -p 6006:6006 -p 8888:8888 jinsc/ubuntu16cuda9cudnn7py37torch11:pytorchgeometric /bin/bash
docker run --gpus all -it -v /home/jinsc/workspace/:/share -v /home/guolab/WIN_Z/jinscZ:/jinscZ --name pytorchgeometric --ipc=host --network=host -p 6006:6006 -p 8888:8888 jinsc/ubuntu16cuda9cudnn7py37torch11:pytorchgeometricVIM /bin/bash
docker run --gpus all -it -v /home/jinsc/workspace/:/share -v /home/guolab/WIN_Z/jinscZ:/jinscZ --name superpointgraph2 --ipc=host --network=host -p 6006:6006 -p 8888:8888 jinsc/ubuntu16cuda9cudnn7py37torch11:superpointgraphVIM /bin/bash
http://briancaffey.github.io/2017/11/20/using-tensorflow-and-tensor-board-with-docker.html
https://www.youtube.com/watch?v=sB07YOwW0SY
docker search ubuntu
docker pull ubuntu:16.04
docker images
docker image ls
docker run -it --name ubuntu16py3tf16 ubuntu:16.04
Or (bash is the default command):
docker run -it --name ubuntu16py3tf16 ubuntu:16.04 bash
Or add --privileged to give the container root privileges:
docker run -it --name ubuntu16py3tf16 --privileged ubuntu:16.04 bash
nvidia-docker run -it -d -v /home/jinsc/workspace/:/root/workspace --name nvidiatf ubuntu:16.04
apt-get update
apt-get install libssl-dev
apt-get install make
apt-get install gcc
apt-get install wget
mkdir tools
cd tools
wget https://www.python.org/ftp/python/3.6.0/Python-3.6.0.tgz
tar -zxvf Python-3.6.0.tgz
cd Python-3.6.0
./configure
make
make install
apt-get install python-pip python-dev
pip3 install tensorflow-gpu
pip3 install tensorflow-gpu==1.6 # pin the TensorFlow version
python3
import tensorflow as tf
tf.__version__
apt-get install git
apt-get install vim
docker commit containerID
docker images # list images
docker save <image name> > /tmp/givename.tar
docker load < /tmp/givename.tar
docker container ls
docker container ls -a
docker ps
docker ps -a
docker start containerID
docker attach containerID
docker exec containerID bash
docker cp nvidiatf:/workspace/hello.py /home/jinsc/workspace/
docker cp <container>:/path/in/container/file /host/target/path/
docker image rm XXX
docker container stop XX
docker container rm XXX
https://blog.csdn.net/CSDN_duomaomao/article/details/78587103
sudo docker run -d --name rancher-server -p 8080:8080 rancher/server
Check the public IP:
curl cip.cc 或者 curl ip.cip.cc 或者 curl http://members.3322.org/dyndns/getip
https://www.tensorflow.org/install/docker?hl=zh-cn
docker run -it -d --rm --name XX -v ~/hostdir:/notebooks -p 8888:8888 -p 6006:6006 tensorflow/tensorflow
docker exec -it tf tensorboard --logdir tf_logs/
Meaning of the flags:
-it interactive terminal
--rm remove the container automatically once it stops
-d run the container as a background daemon
--name the container's name; any name works
-v maps a shared directory between host and container; the container-side directory is the one declared with VOLUME in the Dockerfile
-p maps host and container ports; the container-side port is the one exposed with EXPOSE in the Dockerfile
docker exec -it 83fd3bb36025 bash
Compared with docker attach containerID, which enters the container's default command, docker exec is more flexible.
1. Download the Windows server ISO and make a bootable USB drive with UltraISO or Rufus. Plug it into the server; while it reboots, press Del (or F2, F11, or whatever key your motherboard uses) to enter the boot menu, set the USB drive as the boot device, and boot into the system-reinstall interface.
windows2016server64bit.iso link: https://pan.baidu.com/s/1yenJ71THEwShPxxLYWzlpw extraction code: 1m7l
windows2019server64bit.iso link: https://pan.baidu.com/s/1O0KOSF8GUAdoZY_fTXnRhQ extraction code: nqm3
Windows 2016--2019 image notes: link: https://pan.baidu.com/s/1SljjynVfqb5x1TkH35uVRw extraction code: 5scb
**2. Install the NVIDIA driver.** Pick the driver for your platform from the link below (the machine will reboot several times during installation; save open work in other programs first).
https://www.nvidia.com/Download/index.aspx
**3. Install CUDA.** Pick the package for your platform from the link below:
https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64
# Optional: install cuDNN.
cuDNN download (registration/login required; WeChat login works): https://developer.nvidia.com/rdp/form/cudnn-download-survey
# Optional: Visual Studio 2015 (C++).
4. Install Anaconda3 Python from https://www.anaconda.com/distribution/; installing into a drive root such as D:\anaconda3 is convenient.
#5. On the PyTorch site select your platform and install with pip install torch and torchvision (for the two commands shown in the screenshot, use pip instead of pip3; pip lives at D:\anaconda3\Scripts\pip.exe).
After installation, run in Python:
import torch
print(torch.cuda.is_available()) # True means the GPU build of torch was installed successfully
#6. Configure EditPlus.
https://thecustomizewindows.com/2017/07/install-anaconda-ubuntu-16-04/
apt update
apt upgrade
cd tmp/
curl -O https://repo.continuum.io/archive/Anaconda3-4.4.0-Linux-x86_64.sh
chmod +x Anaconda3-4.4.0-Linux-x86_64.sh
bash Anaconda3-4.4.0-Linux-x86_64.sh
source ~/.bashrc
https://blog.csdn.net/ydyang1126/article/details/77247654
import matplotlib as mpl
mpl.use('Agg')  # headless backend: render without a display
from matplotlib import pyplot as plt
import numpy as np

x = np.linspace(0, 3, 100)
y = x
plt.plot(x, y)
plt.savefig('3.png')
apt update
apt install vim
apt install curl vim exuberant-ctags git ack-grep
pip install pep8 flake8 pyflakes isort yapf
git clone https://github.com/fisadev/fisa-vim-config
cd fisa-vim-config/ # .vimrc is a hidden file; list it with ls -a
cp .vimrc ~/.vimrc
Map F5 to run the current file: https://stackoverflow.com/questions/18948491/running-python-code-in-vim
map <F5> :exec '!python' shellescape(@%, 1)<CR>
Install the plugins you need:
Put each plugin's address, in the usual address format, between vundle#begin and vundle#end.
After saving, there are two ways to install the plugins:
(1) Open vim and run :PluginInstall
(2) Install directly from the command line: vim +PluginInstall +qall
Remove unneeded plugins:
Edit .vimrc and delete the Plugin line for the plugin you want to remove.
Save and quit vim.
Reopen vim and run :BundleClean.
Other common commands:
Update plugins: :BundleUpdate
List all plugins: :BundleList
Search for plugins: :BundleSearch
touch a.py
vim a.py
A "cmake clean" command to clear CMake output:
Just as make clean removes all files a Makefile generates, I want the same for CMake. I often find myself manually deleting files such as cmake_install.cmake and CMakeCache.txt, plus the CMakeFiles folder:
rm -rf CMakeCache.txt cmake_install.cmake CMakeFiles/
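A self-contained sketch of that cleanup against dummy artifacts (the file names are the ones listed above). Out-of-source builds, as in the cut-pursuit steps below, sidestep the problem: the whole build/ directory can simply be deleted.

```shell
# Simulate a polluted in-source build directory.
src=$(mktemp -d)
cd "$src"
touch CMakeCache.txt cmake_install.cmake
mkdir CMakeFiles

# The manual "cmake clean": remove everything CMake generated.
rm -rf CMakeCache.txt cmake_install.cmake CMakeFiles/
```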
CONDAENV=/root/anaconda3 #YOUR_CONDA_ENVIRONMENT_LOCATION
cd partition/ply_c
cmake . -DPYTHON_LIBRARY=$CONDAENV/lib/libpython3.7m.so -DPYTHON_INCLUDE_DIR=$CONDAENV/include/python3.7m -DBOOST_INCLUDEDIR=$CONDAENV/include -DEIGEN3_INCLUDE_DIR=$CONDAENV/include/eigen3
make
cd ..
cd cut-pursuit
mkdir build
cd build
cmake .. -DPYTHON_LIBRARY=$CONDAENV/lib/libpython3.7m.so -DPYTHON_INCLUDE_DIR=$CONDAENV/include/python3.7m -DBOOST_INCLUDEDIR=$CONDAENV/include -DEIGEN3_INCLUDE_DIR=$CONDAENV/include/eigen3
make
SEMA3D_DIR=/jinscZ/superpoint_graph/learning/datasets/semantic3d
python partition/partition.py --dataset sema3d --ROOT_PATH $SEMA3D_DIR --voxel_width 0.05 --reg_strength 0.8 --ver_batch 5000000
python learning/sema3d_dataset.py --SEMA3D_PATH $SEMA3D_DIR
https://jupyterlab.readthedocs.io/en/stable/user/extensions.html
(base) > jupyter-lab
The browser will then open automatically.
jupyter labextension install @jupyter-widgets/jupyterlab-manager
jupyter labextension install jupyter-matplotlib
https://www.cnblogs.com/qianzf/p/10620407.html
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/noarch
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/win-64
conda config --add channels https://conda.anaconda.org/openalea/noarch
conda config --add channels https://conda.anaconda.org/openalea/win-64
conda config --remove channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/noarch
conda config --show-sources
conda info
anaconda login # username jinsc, password F--A3
anaconda logout
## Create a conda virtual environment
conda create -n envname python=3.6
conda install pyntcloud -c conda-forge
conda list
conda install python=3.6
conda install pytorch=0.4.1 cuda90 -c pytorch
https://pytorch.org/get-started/previous-versions/