course22-web's Introduction

Practical Deep Learning for Coders

Welcome to Practical Deep Learning for Coders. This web site covers the book and the 2022 version of the course, which are designed to work closely together. If you haven’t yet got the book, you can buy it here. It’s also freely available as interactive Jupyter Notebooks; read on to learn how to access them.

How do I get started?

If you’re ready to dive in right now, here’s how to get started. If you want to know more about this course, read the next sections, and then come back here.

To watch the videos, click on the Lessons section in the navigation sidebar. The lessons all have searchable transcripts; click “Transcript Search” in the top right panel to search for a word or phrase, then click a result to jump straight to that point in the video. The videos are all captioned; while watching, click the “CC” button to turn captions on and off.

Each video is designed to go with various chapters from the book. The entirety of every chapter of the book is available as an interactive Jupyter Notebook. Jupyter Notebook is the most popular tool for doing data science in Python, for good reason. It is powerful, flexible, and easy to use. We think you will love it! Since the most important thing for learning deep learning is writing code and experimenting, it’s important that you have a great platform for experimenting with code.

In the course we mainly use Kaggle Notebooks and Paperspace Gradient, because we’ve found they work really well for this course and have good free options. Some parts of the course will also run on your own laptop. (If you don’t have a Paperspace account yet, sign up with this link to get $10 credit – and we get a credit too.) We strongly suggest not using your own computer for training models in this course, unless you’re very experienced with Linux system administration and handling GPU drivers, CUDA, and so forth.

If you need help, there’s a wonderful online community ready to help you at forums.fast.ai. Before asking a question on the forums, search carefully to see if your question has been answered before. (The forum system won’t let you post until you’ve spent a few minutes on the site reading existing topics.)

Is this course for me?

Thank you for letting us join you on your deep learning journey, however far along you may be! Previous fast.ai courses have been studied by hundreds of thousands of students, from all walks of life, from all parts of the world. Many students have told us how they’ve won multiple gold medals in international machine learning competitions, received offers from top companies, and had research papers published. For instance, Isaac Dimitrovsky told us that he had “been playing around with ML for a couple of years without really grokking it… [then] went through the fast.ai part 1 course late last year, and it clicked for me”. He went on to achieve first place in the prestigious international RA2-DREAM Challenge competition! He developed a multistage deep learning method for scoring radiographic hand and foot joint damage in rheumatoid arthritis, taking advantage of the fastai library.

It doesn’t matter if you don’t come from a technical or a mathematical background (though it’s okay if you do too!); we wrote this course to make deep learning accessible to as many people as possible. The only prerequisite is that you know how to code (a year of experience is enough), preferably in Python, and that you have at least followed a high school math course. The first three chapters have been explicitly written in a way that will allow executives, product managers, etc. to understand the most important things they’ll need to know about deep learning – if that’s you, just skip over the code in those sections.

Deep learning is a computer technique to extract and transform data, with use cases ranging from human speech recognition to animal imagery classification, by using multiple layers of neural networks. A lot of people assume that you need all kinds of hard-to-find stuff to get great results with deep learning, but as you’ll see in this course, those people are wrong. Here are a few things you absolutely don’t need to do world-class deep learning:

Myth (don’t need)            Truth
Lots of math                 Just high school math is sufficient
Lots of data                 We’ve seen record-breaking results with <50 items of data
Lots of expensive computers  You can get what you need for state-of-the-art work for free

Deep learning has power, flexibility, and simplicity. That’s why we believe it should be applied across many disciplines. These include the social and physical sciences, the arts, medicine, finance, scientific research, and many more. Here’s a list of some of the thousands of tasks in different areas at which deep learning, or methods heavily using deep learning, is now the best in the world:

  • Natural language processing (NLP): Answering questions; speech recognition; summarizing documents; classifying documents; finding names, dates, etc. in documents; searching for articles mentioning a concept
  • Computer vision: Satellite and drone imagery interpretation (e.g., for disaster resilience); face recognition; image captioning; reading traffic signs; locating pedestrians and vehicles in autonomous vehicles
  • Medicine: Finding anomalies in radiology images, including CT, MRI, and X-ray images; counting features in pathology slides; measuring features in ultrasounds; diagnosing diabetic retinopathy
  • Biology: Folding proteins; classifying proteins; many genomics tasks, such as tumor-normal sequencing and classifying clinically actionable genetic mutations; cell classification; analyzing protein/protein interactions
  • Image generation: Colorizing images; increasing image resolution; removing noise from images; converting images to art in the style of famous artists
  • Recommendation systems: Web search; product recommendations; home page layout
  • Playing games: Chess, Go, most Atari video games, and many real-time strategy games
  • Robotics: Handling objects that are challenging to locate (e.g., transparent, shiny, lacking texture) or hard to pick up
  • Other applications: Financial and logistical forecasting, text to speech, and much more…

Your teacher

I am Jeremy Howard, your guide on this journey. I lead the development of fastai, the software that you’ll be using throughout this course.

I have been using and teaching machine learning for around 30 years. I started using neural networks 25 years ago. During this time, I have led many companies and projects that have machine learning at their core, including founding the first company to focus on deep learning and medicine, Enlitic, and taking on the role of President and Chief Scientist of the world’s largest machine learning community, Kaggle. I am the co-founder, along with Dr. Rachel Thomas, of fast.ai, the organization behind this course.

At fast.ai we care a lot about teaching. In this course, I start by showing how to use a complete, working, very usable, state-of-the-art deep learning network to solve real-world problems, using simple, expressive tools. Then we gradually dig deeper and deeper into understanding how those tools are made, and how the tools that make those tools are made, and so on… We always teach through examples, ensuring that there is a context and a purpose that you can understand intuitively, rather than starting with algebraic symbol manipulation.

The software you will be using

In this course, you’ll be using PyTorch, fastai, Hugging Face Transformers, and Gradio.

We’ve completed hundreds of machine learning projects using dozens of different packages, and many different programming languages. At fast.ai, we have written courses using most of the main deep learning and machine learning packages used today. We spent over a thousand hours testing PyTorch before deciding that we would use it for future courses, software development, and research. PyTorch is now the world’s fastest-growing deep learning library and is already used for most research papers at top conferences.

PyTorch works best as a low-level foundation library, providing the basic operations for higher-level functionality. The fastai library is one of the most popular libraries for adding this higher-level functionality on top of PyTorch. In this course, as we go deeper and deeper into the foundations of deep learning, we will also go deeper and deeper into the layers of fastai.
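
To give a concrete sense of that higher-level API, here is a minimal sketch of training an image classifier with fastai. The dataset (the Oxford-IIIT Pets sample that fastai can download) and the hyperparameters are illustrative choices rather than a prescribed part of the course:

    from fastai.vision.all import *

    # Download a sample image dataset (Oxford-IIIT Pets) - an illustrative choice
    path = untar_data(URLs.PETS)/'images'

    # In this dataset, cat images have filenames starting with an uppercase letter
    def is_cat(x): return x[0].isupper()

    # Build DataLoaders: 20% validation split, every image resized to 224x224
    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2, seed=42,
        label_func=is_cat, item_tfms=Resize(224))

    # Fine-tune a pretrained ResNet for one epoch and report the error rate
    learn = vision_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)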

Transformers is a popular library focused on natural language processing (NLP) using transformer models. In the course you’ll see how to create a cutting-edge transformer model using this library to detect similar concepts in patent applications.
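
For a flavor of the Transformers library, here is a minimal sketch using a ready-made pretrained pipeline for text classification; the sentiment task and example sentence are illustrative, not the patent-similarity model built in the course:

    from transformers import pipeline

    # Load a sentiment-analysis pipeline; the default pretrained checkpoint is
    # downloaded automatically on first use (an illustrative example only)
    classifier = pipeline("sentiment-analysis")

    print(classifier("Deep learning is surprisingly accessible."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]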

What you will learn

After finishing this course you will know:

  • How to train models that achieve state-of-the-art results in:
    • Computer vision, including image classification (e.g., classifying pet photos by breed), and image localization and detection (e.g., finding where the animals in an image are)
    • Natural language processing (NLP), including document classification (e.g., movie review sentiment analysis) and phrase similarity
    • Tabular data with categorical data, continuous data, and mixed data
    • Collaborative filtering (e.g., movie recommendation)
  • How to turn your models into web applications, and deploy them
  • Why and how deep learning models work, and how to use that knowledge to improve the accuracy, speed, and reliability of your models
  • The latest deep learning techniques that really matter in practice
  • How to implement stochastic gradient descent and a complete training loop from scratch
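
To give a sense of what “from scratch” means in the last point above, here is a minimal sketch of stochastic gradient descent and a training loop written in plain PyTorch; the toy linear-regression data is purely illustrative:

    import torch

    # Toy data: learn y = 3x + 2 from noisy samples (illustrative)
    x = torch.randn(100, 1)
    y = 3 * x + 2 + 0.1 * torch.randn(100, 1)

    # Parameters to learn
    w = torch.randn(1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)

    lr = 0.1
    for epoch in range(100):
        # Shuffle and iterate over mini-batches of 10 (the "stochastic" part)
        idx = torch.randperm(100)
        for i in range(0, 100, 10):
            batch = idx[i:i+10]
            pred = x[batch] * w + b                  # forward pass
            loss = ((pred - y[batch]) ** 2).mean()   # mean squared error
            loss.backward()                          # compute gradients
            with torch.no_grad():                    # SGD step: move against the gradient
                w -= lr * w.grad
                b -= lr * b.grad
                w.grad.zero_()
                b.grad.zero_()

    print(w.item(), b.item())  # should approach 3 and 2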

Here are some of the techniques covered (don’t worry if none of these words mean anything to you yet; you’ll learn them all soon); a short sketch after the list shows a few of them in action:

  • Random forests and gradient boosting
  • Affine functions and nonlinearities
  • Parameters and activations
  • Transfer learning
  • Stochastic gradient descent (SGD)
  • Data augmentation
  • Weight decay
  • Image classification
  • Entity and word embeddings
  • And much more
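
Here is that sketch: a minimal example, assuming fastai’s high-level API, combining transfer learning from a pretrained ResNet, data augmentation, and weight decay. The dataset and the hyperparameter values are illustrative:

    from fastai.vision.all import *

    # Illustrative dataset: fastai's downloadable CIFAR sample, organised as
    # train/ and test/ folders of labelled images
    path = untar_data(URLs.CIFAR)

    # Data augmentation: random flips, rotations, and zooms applied per batch
    dls = ImageDataLoaders.from_folder(
        path, valid='test', item_tfms=Resize(128),
        batch_tfms=aug_transforms())

    # Transfer learning from a pretrained ResNet, regularised with weight decay (wd)
    learn = vision_learner(dls, resnet18, metrics=accuracy, wd=0.1)
    learn.fine_tune(2)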

course22-web's People

Contributors

childsev, hamelsmu, imflash217, jph00, kelvinsjk, lotharschulz, v-gar

course22-web's Issues

Colab doesn't display text inside angle brackets in notebook

When opening the textbook chapter notebooks in Google Colab, the markdown text has angle brackets that appear empty.
For example: "...We'll consider this topic more in detail in <>." - 01_intro.ipynb
I can only see the content of <> when I double-click the markdown cell.
Could you fix this? It's kind of annoying to keep having to double-click markdown cells.

Examples don't work with current Python/Kaggle

Doing the bird example from lesson 1 (on Kaggle), at the stage where it's installing fastai, I get this:
    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    tensorflow-io 0.21.0 requires tensorflow-io-gcs-filesystem==0.21.0, which is not installed.
    explainable-ai-sdk 1.3.2 requires xai-image-widget, which is not installed.
    tensorflow 2.6.2 requires numpy~=1.19.2, but you have numpy 1.20.3 which is incompatible.
    tensorflow 2.6.2 requires six~=1.15.0, but you have six 1.16.0 which is incompatible.
    tensorflow 2.6.2 requires typing-extensions~=3.7.4, but you have typing-extensions 3.10.0.2 which is incompatible.
    tensorflow 2.6.2 requires wrapt~=1.12.1, but you have wrapt 1.13.3 which is incompatible.
    tensorflow-transform 1.5.0 requires absl-py<0.13,>=0.9, but you have absl-py 0.15.0 which is incompatible.
    tensorflow-transform 1.5.0 requires numpy<1.20,>=1.16, but you have numpy 1.20.3 which is incompatible.
    tensorflow-transform 1.5.0 requires pyarrow<6,>=1, but you have pyarrow 6.0.1 which is incompatible.
    tensorflow-transform 1.5.0 requires tensorflow!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,<2.8,>=1.15.2, but you have tensorflow 2.6.2 which is incompatible.
    tensorflow-serving-api 2.7.0 requires tensorflow<3,>=2.7.0, but you have tensorflow 2.6.2 which is incompatible.
    flake8 4.0.1 requires importlib-metadata<4.3; python_version < "3.8", but you have importlib-metadata 4.11.3 which is incompatible.
    apache-beam 2.34.0 requires dill<0.3.2,>=0.3.1.1, but you have dill 0.3.4 which is incompatible.
    apache-beam 2.34.0 requires httplib2<0.20.0,>=0.8, but you have httplib2 0.20.2 which is incompatible.
    apache-beam 2.34.0 requires pyarrow<6.0.0,>=0.15.1, but you have pyarrow 6.0.1 which is incompatible.
    aioitertools 0.10.0 requires typing_extensions>=4.0; python_version < "3.10", but you have typing-extensions 3.10.0.2 which is incompatible.
    aiobotocore 2.1.2 requires botocore<1.23.25,>=1.23.24, but you have botocore 1.24.20 which is incompatible.
I've tried pre-installing some packages, and downgrading other packages, but whatever I try just gives a longer list of errors.

persistent HTTPError from search_images('bird photos')

I'm running the "Is it a bird?" notebook (first lesson) and getting an HTTPError when searching for bird photos in Step 1: Download images of birds and non-birds. The cell recommends retrying upon a JSON error, but retries have not helped for the past couple of days. Traceback follows:

---------------------------------------------------------------------------
HTTPError                                 Traceback (most recent call last)
/tmp/ipykernel_17/2432147335.py in <module>
      1 #NB: `search_images` depends on duckduckgo.com, which doesn't always return correct responses.
      2 #    If you get a JSON error, just try running it again (it may take a couple of tries).
----> 3 urls = search_images('bird photos', max_images=1)
      4 urls[0]

/tmp/ipykernel_17/1717929076.py in search_images(term, max_images)
      4 def search_images(term, max_images=30):
      5     print(f"Searching for '{term}'")
----> 6     return L(ddg_images(term, max_results=max_images)).itemgot('image')

/opt/conda/lib/python3.7/site-packages/duckduckgo_search/compat.py in ddg_images(keywords, region, safesearch, time, size, color, type_image, layout, license_image, max_results, page, output, download)
     80         type_image=type_image,
     81         layout=layout,
---> 82         license_image=license_image,
     83     ):
     84         results.append(r)

/opt/conda/lib/python3.7/site-packages/duckduckgo_search/duckduckgo_search.py in images(self, keywords, region, safesearch, timelimit, size, color, type_image, layout, license_image)
    403         assert keywords, "keywords is mandatory"
    404 
--> 405         vqd = self._get_vqd(keywords)
    406         assert vqd, "error in getting vqd"
    407 

/opt/conda/lib/python3.7/site-packages/duckduckgo_search/duckduckgo_search.py in _get_vqd(self, keywords)
     93     def _get_vqd(self, keywords: str) -> Optional[str]:
     94         """Get vqd value for a search query."""
---> 95         resp = self._get_url("POST", "https://duckduckgo.com/", data={"q": keywords})
     96         if resp:
     97             for c1, c2 in (

/opt/conda/lib/python3.7/site-packages/duckduckgo_search/duckduckgo_search.py in _get_url(self, method, url, **kwargs)
     87                 logger.warning(f"_get_url() {url} {type(ex).__name__} {ex}")
     88                 if i >= 2 or "418" in str(ex):
---> 89                     raise ex
     90             sleep(3)
     91         return None

/opt/conda/lib/python3.7/site-packages/duckduckgo_search/duckduckgo_search.py in _get_url(self, method, url, **kwargs)
     80                 )
     81                 if self._is_500_in_url(str(resp.url)) or resp.status_code == 202:
---> 82                     raise httpx._exceptions.HTTPError("")
     83                 resp.raise_for_status()
     84                 if resp.status_code == 200:

HTTPError:

Problem with ClassificationInterpretation

I'm getting an error using ClassificationInterpretation the way described in the book and video.

Code:
interp = ClassificationInterpretation(learn)
interp.plot_confusion_matrix()

Error:
"TypeError: ClassificationInterpretation.init() missing 2 required positional arguments: 'dl' and 'losses'"

Suggestion: add/update a link to HuggingFace/jph00/testing

Hello fastai team,

While following along with the class, I cannot find a direct link to the testing repo on Hugging Face demonstrated in the middle of the lesson, which would be really helpful for a student who wants to create the same demo on their own. So I would suggest adding/updating a link on the course page. Thank you so much!

Tung-Lin Wu

ERROR: pip's dependency resolver

When executing step #2 of the "Is it a bird?" notebook, I get this error:

    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    tensorflow-io 0.21.0 requires tensorflow-io-gcs-filesystem==0.21.0, which is not installed.
    explainable-ai-sdk 1.3.2 requires xai-image-widget, which is not installed.
    dask-cudf 21.10.1 requires cupy-cuda114, which is not installed.
    tensorflow 2.6.2 requires numpy~=1.19.2, but you have numpy 1.20.3 which is incompatible.
    tensorflow 2.6.2 requires six~=1.15.0, but you have six 1.16.0 which is incompatible.
    tensorflow 2.6.2 requires typing-extensions~=3.7.4, but you have typing-extensions 3.10.0.2 which is incompatible.
    tensorflow 2.6.2 requires wrapt~=1.12.1, but you have wrapt 1.13.3 which is incompatible.
    tensorflow-transform 1.5.0 requires absl-py<0.13,>=0.9, but you have absl-py 0.15.0 which is incompatible.
    tensorflow-transform 1.5.0 requires numpy<1.20,>=1.16, but you have numpy 1.20.3 which is incompatible.
    tensorflow-transform 1.5.0 requires pyarrow<6,>=1, but you have pyarrow 6.0.1 which is incompatible.
    tensorflow-transform 1.5.0 requires tensorflow!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,<2.8,>=1.15.2, but you have tensorflow 2.6.2 which is incompatible.
    tensorflow-serving-api 2.7.0 requires tensorflow<3,>=2.7.0, but you have tensorflow 2.6.2 which is incompatible.
    flake8 4.0.1 requires importlib-metadata<4.3; python_version < "3.8", but you have importlib-metadata 4.11.3 which is incompatible.
    featuretools 1.6.0 requires numpy>=1.21.0, but you have numpy 1.20.3 which is incompatible.
    dask-cudf 21.10.1 requires dask==2021.09.1, but you have dask 2022.2.0 which is incompatible.
    dask-cudf 21.10.1 requires distributed==2021.09.1, but you have distributed 2022.2.0 which is incompatible.
    apache-beam 2.34.0 requires dill<0.3.2,>=0.3.1.1, but you have dill 0.3.4 which is incompatible.
    apache-beam 2.34.0 requires httplib2<0.20.0,>=0.8, but you have httplib2 0.20.2 which is incompatible.
    apache-beam 2.34.0 requires pyarrow<6.0.0,>=0.15.1, but you have pyarrow 6.0.1 which is incompatible.
    aioitertools 0.10.0 requires typing_extensions>=4.0; python_version < "3.10", but you have typing-extensions 3.10.0.2 which is incompatible.
    aiobotocore 2.1.2 requires botocore<1.23.25,>=1.23.24, but you have botocore 1.24.20 which is incompatible.

How can I resolve this?
