jeffheaton / t81_558_deep_learning

T81-558: Keras - Applications of Deep Neural Networks @Washington University in St. Louis

Home Page: https://sites.wustl.edu/jeffheaton/t81-558/

License: Other

Languages: Jupyter Notebook 99.93%, Python 0.02%, HTML 0.01%, JavaScript 0.01%, CSS 0.01%, TeX 0.04%
Topics: neural-network, machine-learning, tensorflow, keras, deeplearning, gan, convolutional-neural-networks

t81_558_deep_learning's Introduction

Important Note

Current students of this course at Washington University should refer to the PyTorch version of this course, which is what is currently offered at the university. This repository contains the previous Keras/TensorFlow version.

T81-558: Applications of Deep Neural Networks - TensorFlow

Washington University in St. Louis

Instructor: Jeff Heaton

The content of this course changes as technology evolves. To keep up to date with changes, follow me on GitHub.

  • Section 1. Spring 2023, Monday, 2:30 PM, Location: Eads / 216
  • Section 2. Spring 2023, Online

Course Description

Deep learning is a group of exciting new technologies for neural networks. Through a combination of advanced training techniques and neural network architectural components, it is now possible to create neural networks that can handle tabular data, images, text, and audio as both input and output. Deep learning allows a neural network to learn hierarchies of information in a way that is similar to the function of the human brain. This course will introduce the student to classic neural network structures, Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), Generative Adversarial Networks (GAN), and reinforcement learning. Application of these architectures to computer vision, time series, security, natural language processing (NLP), and data generation will be covered. High Performance Computing (HPC) aspects will demonstrate how deep learning can be leveraged both on graphical processing units (GPUs) and on grids. Focus is primarily upon the application of deep learning to problems, with some introduction to mathematical foundations. Students will use the Python programming language to implement deep learning using Google TensorFlow and Keras. It is not necessary to know Python prior to this course; however, familiarity with at least one programming language is assumed. This course will be delivered in a hybrid format that includes both classroom and online instruction.

Textbook

The complete text for this course is here on GitHub. The same material is also available in book format. The course textbook is “Applications of Deep Neural Networks with Keras“, ISBN 9798416344269.

If you would like to cite the material from this course/book, please use the following BibTeX citation:

@misc{heaton2020applications,
    title={Applications of Deep Neural Networks},
    author={Jeff Heaton},
    year={2020},
    eprint={2009.05673},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

Objectives

  1. Explain how neural networks (deep and otherwise) compare to other machine learning models.
  2. Determine when a deep neural network would be a good choice for a particular problem.
  3. Demonstrate your understanding of the material through a final project uploaded to GitHub.

Syllabus

This syllabus presents the expected class schedule, due dates, and reading assignments. Download current syllabus.

Module Content
Module 1
Meet on 01/23/2023
Module 1: Python Preliminaries
  • Part 1.1: Course Overview
  • Part 1.2: Introduction to Python
  • Part 1.3: Python Lists, Dictionaries, Sets & JSON
  • Part 1.4: File Handling
  • Part 1.5: Functions, Lambdas, and Map/Reduce
  • We will meet on campus this week! (first meeting)
Module 2
Week of 01/30/2023
Module 2: Python for Machine Learning
  • Part 2.1: Introduction to Pandas for Deep Learning
  • Part 2.2: Encoding Categorical Values in Pandas
  • Part 2.3: Grouping, Sorting, and Shuffling
  • Part 2.4: Using Apply and Map in Pandas
  • Part 2.5: Feature Engineering in Pandas
  • Module 1 Program due: 01/31/2023
  • Icebreaker due: 01/31/2023
Module 3
Week of 02/06/2023
Module 3: TensorFlow and Keras for Neural Networks
  • Part 3.1: Deep Learning and Neural Network Introduction
  • Part 3.2: Introduction to TensorFlow & Keras
  • Part 3.3: Saving and Loading a Keras Neural Network
  • Part 3.4: Early Stopping in Keras to Prevent Overfitting
  • Part 3.5: Extracting Keras Weights and Manual Neural Network Calculation
  • Module 2 Program due: 02/07/2023
Module 4
Week of 02/13/2023
Module 4: Training for Tabular Data
  • Part 4.1: Encoding a Feature Vector for Keras Deep Learning
  • Part 4.2: Keras Multiclass Classification for Deep Neural Networks with ROC and AUC
  • Part 4.3: Keras Regression for Deep Neural Networks with RMSE
  • Part 4.4: Backpropagation, Nesterov Momentum, and ADAM Training
  • Part 4.5: Neural Network RMSE and Log Loss Error Calculation from Scratch
  • Module 3 Program due: 02/14/2023
Module 5
Meet on 02/20/2023
Module 5: Regularization and Dropout
  • Part 5.1: Introduction to Regularization: Ridge and Lasso
  • Part 5.2: Using K-Fold Cross Validation with Keras
  • Part 5.3: Using L1 and L2 Regularization with Keras to Decrease Overfitting
  • Part 5.4: Drop Out for Keras to Decrease Overfitting
  • Part 5.5: Bootstrapping and Benchmarking Hyperparameters
  • Module 4 Program due: 02/21/2023
  • We will meet on campus this week! (second meeting)
Module 6
Week of 02/27/2023
Module 6: CNN for Vision
  • Part 6.1: Image Processing in Python
  • Part 6.2: Using Convolutional Networks with Keras
  • Part 6.3: Using Pretrained Neural Networks
  • Part 6.4: Looking at Keras Generators and Image Augmentation
  • Part 6.5: Recognizing Multiple Images with YOLOv5
  • Module 5 Program due: 02/28/2023
Module 7
Week of 03/06/2023
Module 7: Generative Adversarial Networks (GANs)
  • Part 7.1: Introduction to GANs for Image and Data Generation
  • Part 7.2: Train StyleGAN3 with your Own Images
  • Part 7.3: Exploring the StyleGAN Latent Vector
  • Part 7.4: GANs to Enhance Old Photographs with Deoldify
  • Part 7.5: GANs for Tabular Synthetic Data Generation
  • Module 6 Assignment due: 03/07/2023
Module 8
Week of 03/20/2023
Module 8: Kaggle
  • Part 8.1: Introduction to Kaggle
  • Part 8.2: Building Ensembles with Scikit-Learn and Keras
  • Part 8.3: How Should you Architect Your Keras Neural Network: Hyperparameters
  • Part 8.4: Bayesian Hyperparameter Optimization for Keras
  • Part 8.5: Current Semester's Kaggle
  • Module 7 Assignment due: 03/21/2023
Module 9
Meet on 03/27/2023
Module 9: Transfer Learning
  • Part 9.1: Introduction to Keras Transfer Learning
  • Part 9.2: Keras Transfer Learning for Computer Vision
  • Part 9.3: Transfer Learning for NLP with Keras
  • Part 9.4: Transfer Learning for Facial Feature Recognition
  • Part 9.5: Transfer Learning for Style Transfer
  • We will meet on campus this week! (third meeting)
  • Module 8 Assignment due: 03/28/2023
Module 10
Week of 04/03/2023
Module 10: Time Series in Keras
  • Part 10.1: Time Series Data Encoding for Deep Learning, Keras
  • Part 10.2: Programming LSTM with Keras and TensorFlow
  • Part 10.3: Text Generation with Keras
  • Part 10.4: Introduction to Transformers
  • Part 10.5: Transformers for Timeseries
  • Module 9 Assignment due: 04/04/2023
Module 11
Week of 04/10/2023
Module 11: Natural Language Processing
  • Part 11.1: Hugging Face Introduction
  • Part 11.2: Hugging Face Tokenizers
  • Part 11.3: Hugging Face Data Sets
  • Part 11.4: Training a Model in Hugging Face
  • Part 11.5: What are Embedding Layers in Keras
  • Module 10 Assignment due: 04/11/2023
Module 12
Week of 04/17/2023
Module 12: Reinforcement Learning
  • Kaggle Assignment due: 04/17/2023 (approx 4-6PM, due to Kaggle GMT timezone)
  • Part 12.1: Introduction to the OpenAI Gym
  • Part 12.2: Introduction to Q-Learning for Keras
  • Part 12.3: Keras Q-Learning in the OpenAI Gym
  • Part 12.4: Atari Games with Keras Neural Networks
  • Part 12.5: Application of Reinforcement Learning
Module 13
Meet on 04/24/2023
Module 13: Deployment and Monitoring
  • Part 13.1: Flask and Deep Learning Web Services
  • Part 13.2: Interrupting and Continuing Training
  • Part 13.3: Using a Keras Deep Neural Network with a Web Application
  • Part 13.4: When to Retrain Your Neural Network
  • Part 13.5: Tensor Processing Units (TPUs)
  • Final Project due 05/08/2023
  • We will meet on campus this week! (fourth meeting)

Datasets

t81_558_deep_learning's People

Contributors

abanoubn, aguadotzn, akramsystems, ambrosiussen, amolgm, andraseros, anindyamanna, anmoltomer, anongeorge, billyciam, blackwolf08, blurrd, cadavidf, chiphip, chizi-p, doublelg, jeffheaton, jeffheaton-rga, joearrowsmith, lacossarnold, lucisu, lvnilesh, mhmoodlan, rayaun, rlzijdeman, sanyamswami123, shishand1811, shuvoxcd01, tommyneeld, vaibhavthapli


t81_558_deep_learning's Issues

Wrong year

Hello, I think the correct year is 2017 in your README.md for this class:

Class 9
03/27/2016 -> 03/27/2017

training on t81_558_class_07_2_Keras_gan.ipynb

Hi!

I tried running this in Colab, and it's just stuck on epoch 0. Here's what it shows:

/usr/local/lib/python3.6/dist-packages/keras/engine/training.py:493: UserWarning: Discrepancy between trainable weights and collected trainable weights, did you set model.trainable without calling model.compile after ?
'Discrepancy between trainable weights and collected trainable'
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1033: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1020: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.

/usr/local/lib/python3.6/dist-packages/keras/engine/training.py:493: UserWarning: Discrepancy between trainable weights and collected trainable weights, did you set model.trainable without calling model.compile after ?
'Discrepancy between trainable weights and collected trainable'
Epoch 0, Discriminator accuarcy: 0.1875, Generator accuracy: 0.4375
/usr/local/lib/python3.6/dist-packages/keras/engine/training.py:493: UserWarning: Discrepancy between trainable weights and collected trainable weights, did you set model.trainable without calling model.compile after ?
'Discrepancy between trainable weights and collected trainable'

Any help with this would be great. Thank you!

Error running in Colab


AttributeError Traceback (most recent call last)
in ()
10 import numpy as np
11
---> 12 import tensorflow as tf
13
14 from tf_agents.agents.ddpg import actor_network

3 frames
/usr/local/lib/python3.6/dist-packages/google/protobuf/any_pb2.py in ()
19 syntax='proto3',
20 serialized_options=b'\n\023com.google.protobufB\010AnyProtoP\001Z%github.com/golang/protobuf/ptypes/any\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes',
---> 21 create_key=_descriptor._internal_create_key,
22 serialized_pb=b'\n\x19google/protobuf/any.proto\x12\x0fgoogle.protobuf"&\n\x03\x41ny\x12\x10\n\x08type_url\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\x0c\x42o\n\x13\x63om.google.protobufB\x08\x41nyProtoP\x01Z%github.com/golang/protobuf/ptypes/any\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3'
23 )

AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'

Predict from Single Row of Dataset for Anomaly

How can we predict anomalies from a single row of the dataset?

For example:
0,tcp,private,S0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,128,20,1.00,1.00,0.00,0.00,0.16,0.07,0.00,255,20,0.08,0.06,0.00,0.00,1.00,1.00,0.00,0.00,neptune.

Please consider this request a priority.
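
A minimal sketch of scoring one row, assuming model is the trained autoencoder from the anomaly notebook and x is the already-encoded feature matrix (both names are assumptions from that notebook):

import numpy as np
from sklearn import metrics

row = x[100]                # any single preprocessed row
row = row.reshape(1, -1)    # Keras expects a 2D batch: (1, n_features)

reconstruction = model.predict(row)
score = np.sqrt(metrics.mean_squared_error(reconstruction, row))
print(f"Anomaly score (RMSE) for this row: {score}")
# A score well above those of normal training rows suggests an anomaly.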

Don't understand the output

Sorry for asking a basic question, but I don't understand the output. What does the score mean here, and how does the model react if an anomaly occurs? A bit of explanation would be appreciated. Thanks for such nice, clean code.

NameError: name 'image_shape' is not defined

Hey,

when building the Generator and the Discriminator with the Adam optimizer I receive the following error:

NameError Traceback (most recent call last)

in ()
163 optimizer = Adam(1.5e-4,0.5) # learning rate and momentum adjusted from paper
164
--> 165 discriminator = build_discriminator(image_shape)
166 discriminator.compile(loss="binary_crossentropy",optimizer=optimizer,metrics=["accuracy"])
167 generator = build_generator(SEED_SIZE,IMAGE_CHANNELS)

NameError: name 'image_shape' is not defined

How could I solve this problem?
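
A likely fix, assuming GENERATE_SQUARE and IMAGE_CHANNELS are the constants defined earlier in that GAN notebook: image_shape was simply never defined before build_discriminator() is called.

# Define the shape the discriminator expects (GENERATE_SQUARE and IMAGE_CHANNELS
# are assumed to come from the earlier cells of the notebook)
image_shape = (GENERATE_SQUARE, GENERATE_SQUARE, IMAGE_CHANNELS)

optimizer = Adam(1.5e-4, 0.5)  # learning rate and momentum adjusted from paper
discriminator = build_discriminator(image_shape)
discriminator.compile(loss="binary_crossentropy", optimizer=optimizer,
                      metrics=["accuracy"])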

Error in running t81_558_class_10_4_captioning.ipynb

I am getting an error while training the model in t81_558_class_10_4_captioning.ipynb on this line:

caption_model.fit_generator(generator, epochs=1, steps_per_epoch=steps, verbose=1)

It says

ValueError: could not broadcast input array from the shape (168,2048) into shape (168).

Not sure what I am doing wrong; everything up to that line works and produces the desired output.

How To Convert Fine Tuned Pre-Trained Keras Model To TF Estimator And Use On AWS Sagemaker

Thank you so much for your contributions. I am your biggest fan.

Please, can you share a tutorial on fine-tuning a pre-trained Keras model, converting it to a TF Estimator, and then using it on AWS SageMaker?

I believe this suggestion will be of great help to the community.

I have researched and attempted fine-tuning a pre-trained Keras model for use on AWS SageMaker. However, I am stuck on using the custom Estimator model (which has frozen weights) on AWS SageMaker.

My goal: instead of reinventing the wheel, I intended to:

  • Fine-tune a pre-trained Keras model such as VGG16 by:
      • Freezing the base layers, which have already learned general features.
      • Adding/customizing the last layers for the specific problem to solve; these customized last layers will be retrained on TFRecord data.
  • Compile the model with the necessary optimizer, loss, and metrics.
  • Convert the custom model into a TF Estimator, because:
      • Estimator-based models can run on localhost or in a distributed multi-server environment.
      • They can run on CPUs, GPUs, or TPUs.
      • Estimator models can be shared with other developers.
      • They can be used on AWS SageMaker (this is where I am stuck).

In the code below, I created my custom TF Estimator using the Keras pre-trained model VGG16. I froze all but the last 4 layers and customized the last layers to suit my problem. I compiled with the necessary optimizer, loss, and metrics, then finally converted the model to a TF Estimator.

# Imported libraries
from tensorflow.python.keras.applications.vgg16 import VGG16
from tensorflow.python.keras import models
from tensorflow.python.keras import layers
from tensorflow.python.keras import optimizers
import numpy as np
import tensorflow as tf
import os

print(tf.__version__)

# Loading the VGG model
img_size = (150, 150, 3)
conv_base = VGG16(weights='imagenet', include_top=False, input_shape=img_size)

# Freezing all layers except the last 4
for layer in conv_base.layers[:-4]:
    layer.trainable = False

# Checking the trainable status of the individual layers
for layer in conv_base.layers:
    print(layer, layer.trainable)

# Creating the model
model = models.Sequential()

# Adding the conv_base base model
model.add(conv_base)

# Adding the custom layers
model.add(layers.Flatten())
model.add(layers.Dense(1024, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(5, activation='softmax'))

# Show a summary of the new model; check the trainable parameters
model.summary()

# Compile with the necessary optimizer, loss, and metrics to train with
model.compile(optimizer=optimizers.Adam(lr=1e-4),
              loss='categorical_crossentropy', metrics=['accuracy'])

# Convert the custom model to a TF Estimator (est_model) and save it to model_dir,
# for later use on AWS SageMaker
model_dir = os.path.join(os.getcwd(), "models", "catvsdog1")
os.makedirs(model_dir, exist_ok=True)
print("model_dir:", model_dir)
est_model = tf.keras.estimator.model_to_estimator(keras_model=model,
                                                  model_dir=model_dir)

Your assistance will be hugely appreciated.

Error in manual installation manual_setup.ipynb

Hi @jeffheaton, thanks for your awesome deep learning resources. I am a new GitHub user and a newcomer to deep learning. I have started to subscribe to and follow your YouTube channel; it will really help me in my work.

I followed the manual installation instructions in your GitHub manual_setup.ipynb. Unfortunately, even though I followed all the instructions as written, I get an error when I run this code in Jupyter:

# What version of Python do you have?
import sys

import tensorflow.keras
import pandas as pd
import sklearn as sk
import tensorflow as tf

print(f"Tensor Flow Version: {tf.__version__}")
print(f"Keras Version: {tensorflow.keras.__version__}")
print()
print(f"Python {sys.version}")
print(f"Pandas {pd.__version__}")
print(f"Scikit-Learn {sk.__version__}")
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")


and the output are:

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
~\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\pywrap_tensorflow.py in <module>
     57 
---> 58   from tensorflow.python.pywrap_tensorflow_internal import *
     59 

~\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\pywrap_tensorflow_internal.py in <module>
     27             return _mod
---> 28     _pywrap_tensorflow_internal = swig_import_helper()
     29     del swig_import_helper

~\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\pywrap_tensorflow_internal.py in swig_import_helper()
     23             try:
---> 24                 _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
     25             finally:

C:\ProgramData\Miniconda3\envs\py36\lib\imp.py in load_module(name, file, filename, details)
    242         else:
--> 243             return load_dynamic(name, filename, file)
    244     elif type_ == PKG_DIRECTORY:

C:\ProgramData\Miniconda3\envs\py36\lib\imp.py in load_dynamic(name, path, file)
    342             name=name, loader=loader, origin=path)
--> 343         return _load(spec)
    344 

ImportError: DLL load failed: The specified module could not be found.

During handling of the above exception, another exception occurred:

ImportError                               Traceback (most recent call last)
<ipython-input-1-812fc96d3476> in <module>
      2 import sys
      3 
----> 4 import tensorflow.keras
      5 import pandas as pd
      6 import sklearn as sk

~\AppData\Roaming\Python\Python36\site-packages\tensorflow\__init__.py in <module>
     39 import sys as _sys
     40 
---> 41 from tensorflow.python.tools import module_util as _module_util
     42 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader
     43 

~\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\__init__.py in <module>
     48 import numpy as np
     49 
---> 50 from tensorflow.python import pywrap_tensorflow
     51 
     52 # Protocol buffers

~\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\pywrap_tensorflow.py in <module>
     67 for some common reasons and solutions.  Include the entire stack trace
     68 above this error message when asking for help.""" % traceback.format_exc()
---> 69   raise ImportError(msg)
     70 
     71 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-long

ImportError: Traceback (most recent call last):
  File "C:\Users\ASDUser\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "C:\Users\ASDUser\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "C:\Users\ASDUser\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "C:\ProgramData\Miniconda3\envs\py36\lib\imp.py", line 243, in load_module
    return load_dynamic(name, filename, file)
  File "C:\ProgramData\Miniconda3\envs\py36\lib\imp.py", line 343, in load_dynamic
    return _load(spec)
ImportError: DLL load failed: The specified module could not be found.


Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/errors

for some common reasons and solutions.  Include the entire stack trace
above this error message when asking for help.

For your information, I use Windows 7 with Miniconda3 (64-bit) and Python 3.6.10.

Newer version of sklearn issues

Great lecture. I'm having issues with the class 3 code in IBM DSWB. I keep getting the error below, but if I change it to cross_validation, it works:

ImportError: No module named 'sklearn.model_selection'

Thanks
Sunder
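
For context, sklearn.model_selection only exists in scikit-learn 0.18 and later; on an older installation the class 3 import fails exactly like this, while the deprecated sklearn.cross_validation still works. A quick way to check what the environment provides:

import sklearn
print(sklearn.__version__)  # sklearn.model_selection requires >= 0.18

# On 0.18+ this import succeeds; older versions only ship the deprecated
# sklearn.cross_validation module.
from sklearn.model_selection import train_test_split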

Ranking problem with DLL

I want to solve the ranking problem with DLL.

For example, I have 20 students (in random order) in a class, and based on their profiles (grades from different curricula, achievements, etc.) I want to create a ranked list.

Basically, I need a model that takes 20 rows x 10 features at every step and outputs a label for every student.

Unable to download the tensorflow

(base) C:\Users\admin>conda env create -v -f tensorflow.yml
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done

==> WARNING: A newer version of conda exists. <==
current version: 4.8.2
latest version: 4.8.3

Please update conda by running

$ conda update -n base -c defaults conda

initializing UnlinkLinkTransaction with
target_prefix: C:\Users\admin\miniconda3\envs\tensorflow
unlink_precs:

link_precs:
defaults/win-64::_tflow_select-2.2.0-eigen
defaults/win-64::blas-1.0-mkl
defaults/win-64::ca-certificates-2020.1.1-0
defaults/win-64::icc_rt-2019.0.0-h0cc432a_1
defaults/win-64::intel-openmp-2020.0-166
defaults/win-64::msys2-conda-epoch-20160418-1
defaults/win-64::pandoc-2.2.3.2-0
defaults/win-64::vs2015_runtime-14.16.27012-hf0eaf9b_1
defaults/win-64::winpty-0.4.3-4
defaults/win-64::m2w64-gmp-6.1.0-2
defaults/win-64::m2w64-libwinpthread-git-5.0.0.4634.697f757-2
defaults/win-64::mkl-2020.0-166
defaults/win-64::vc-14.1-h0510ff6_4
defaults/win-64::icu-58.2-ha66f8fd_1
defaults/win-64::jpeg-9b-hb83a4c4_2
defaults/win-64::libiconv-1.15-h1df5818_7
defaults/win-64::libsodium-1.0.16-h9d3ae62_0
defaults/win-64::m2w64-gcc-libs-core-5.3.0-7
defaults/win-64::openssl-1.1.1f-he774522_0
defaults/win-64::sqlite-3.31.1-he774522_0
defaults/win-64::tk-8.6.8-hfa6e2cd_0
defaults/win-64::xz-5.2.5-h62dcd97_0
defaults/win-64::yaml-0.1.7-hc54c509_2
defaults/win-64::zlib-1.2.11-h62dcd97_4
defaults/win-64::hdf5-1.10.4-h7ebc959_0
defaults/win-64::libpng-1.6.37-h2a8f88b_0
defaults/win-64::libprotobuf-3.11.4-h7bd577a_0
defaults/win-64::libxml2-2.9.9-h464c3ec_0
defaults/win-64::m2w64-gcc-libgfortran-5.3.0-6
defaults/win-64::python-3.7.7-h60c2a47_2
defaults/win-64::zeromq-4.3.1-h33f27b4_3
defaults/win-64::zstd-1.3.7-h508b16e_0
defaults/win-64::asn1crypto-1.3.0-py37_0
defaults/win-64::astor-0.8.0-py37_0
defaults/noarch::attrs-19.3.0-py_0
defaults/win-64::backcall-0.1.0-py37_0
defaults/win-64::blinker-1.4-py37_0
defaults/noarch::cachetools-3.1.1-py_0
defaults/win-64::certifi-2020.4.5.1-py37_0
defaults/win-64::chardet-3.0.4-py37_1003
defaults/noarch::click-7.1.1-py_0
defaults/noarch::colorama-0.4.3-py_0
defaults/noarch::decorator-4.4.2-py_0
defaults/noarch::defusedxml-0.6.0-py_0
defaults/win-64::docutils-0.15.2-py37_0
defaults/win-64::entrypoints-0.3-py37_0
defaults/win-64::freetype-2.9.1-ha9979f8_1
defaults/win-64::gast-0.2.2-py37_0
defaults/noarch::idna-2.9-py_1
defaults/win-64::ipython_genutils-0.2.0-py37_0
defaults/win-64::itsdangerous-1.1.0-py37_0
defaults/noarch::jmespath-0.9.4-py_0
defaults/win-64::kiwisolver-1.1.0-py37ha925a31_0
defaults/win-64::libtiff-4.1.0-h56a325e_0
defaults/win-64::libxslt-1.1.33-h579f668_0
defaults/win-64::m2w64-gcc-libs-5.3.0-7
defaults/win-64::markupsafe-1.1.1-py37he774522_0
defaults/win-64::mistune-0.8.4-py37he774522_0
defaults/win-64::olefile-0.46-py37_0
defaults/win-64::pandocfilters-1.4.2-py37_1
defaults/noarch::parso-0.6.2-py_0
defaults/win-64::pickleshare-0.7.5-py37_0
defaults/noarch::prometheus_client-0.7.1-py_0
defaults/noarch::pyasn1-0.4.8-py_0
defaults/noarch::pycparser-2.20-py_0
defaults/noarch::pyparsing-2.4.6-py_0
defaults/win-64::pyreadline-2.1-py37_1
defaults/noarch::pytz-2019.3-py_0
defaults/win-64::pywin32-227-py37he774522_1
defaults/win-64::pyyaml-5.3.1-py37he774522_0
defaults/win-64::pyzmq-18.1.1-py37ha925a31_0
defaults/win-64::qt-5.9.7-vc14h73c81de_0
defaults/noarch::qtpy-1.9.0-py_0
defaults/win-64::send2trash-1.5.0-py37_0
defaults/win-64::sip-4.19.8-py37h6538335_0
defaults/win-64::six-1.14.0-py37_0
defaults/win-64::termcolor-1.1.0-py37_1
defaults/noarch::testpath-0.4.4-py_0
defaults/win-64::tornado-6.0.4-py37he774522_1
defaults/noarch::tqdm-4.44.1-py_0
defaults/noarch::wcwidth-0.1.9-py_0
defaults/win-64::webencodings-0.5.1-py37_1
defaults/noarch::werkzeug-0.16.1-py_0
defaults/win-64::win_inet_pton-1.1.0-py37_0
defaults/win-64::wincertstore-0.2-py37_0
defaults/win-64::wrapt-1.12.1-py37he774522_1
defaults/noarch::zipp-2.2.0-py_0
defaults/win-64::absl-py-0.9.0-py37_0
defaults/win-64::cffi-1.14.0-py37h7a1dbc1_0
defaults/win-64::cycler-0.10.0-py37_0
defaults/noarch::google-pasta-0.2.0-py_0
defaults/win-64::importlib_metadata-1.5.0-py37_0
defaults/win-64::jedi-0.16.0-py37_1
defaults/win-64::lxml-4.5.0-py37h1350720_0
defaults/win-64::mkl-service-2.3.0-py37hb782905_0
defaults/win-64::pillow-7.0.0-py37hcc1f983_0
defaults/noarch::pyasn1-modules-0.2.7-py_0
defaults/win-64::pyqt-5.9.2-py37h6538335_2
defaults/win-64::pyrsistent-0.16.0-py37he774522_0
defaults/win-64::pysocks-1.7.1-py37_0
defaults/noarch::python-dateutil-2.8.1-py_0
defaults/win-64::pywinpty-0.5.7-py37_0
defaults/noarch::rsa-4.0-py_0
defaults/win-64::setuptools-46.1.3-py37_0
defaults/win-64::traitlets-4.3.3-py37_0
defaults/noarch::bleach-3.1.4-py_0
defaults/win-64::cryptography-2.8-py37h7a1dbc1_0
defaults/noarch::google-auth-1.13.1-py_0
defaults/win-64::grpcio-1.27.2-py37h351948d_0
defaults/noarch::jinja2-2.11.1-py_0
defaults/noarch::joblib-0.14.1-py_0
defaults/win-64::jsonschema-3.2.0-py37_0
defaults/win-64::jupyter_core-4.6.3-py37_0
defaults/win-64::markdown-3.1.1-py37_0
defaults/win-64::numpy-base-1.18.1-py37hc3f5095_1
defaults/win-64::protobuf-3.11.4-py37h33f27b4_0
defaults/noarch::pygments-2.6.1-py_0
defaults/win-64::terminado-0.8.3-py37_0
defaults/win-64::wheel-0.34.2-py37_0
defaults/noarch::flask-1.1.2-py_0
defaults/noarch::jupyter_client-6.1.2-py_0
defaults/noarch::nbformat-5.0.4-py_0
defaults/win-64::pip-20.0.2-py37_1
defaults/noarch::prompt-toolkit-3.0.4-py_0
defaults/win-64::pyjwt-1.7.1-py37_0
defaults/win-64::pyopenssl-19.1.0-py37_0
defaults/win-64::nbconvert-5.6.1-py37_0
defaults/noarch::oauthlib-3.1.0-py_0
defaults/noarch::prompt_toolkit-3.0.4-0
defaults/win-64::urllib3-1.25.8-py37_0
defaults/noarch::botocore-1.15.39-py_0
defaults/win-64::ipython-7.13.0-py37h5ca1d4c_0
defaults/win-64::requests-2.23.0-py37_0
defaults/win-64::ipykernel-5.1.4-py37h39e3cac_0
defaults/noarch::requests-oauthlib-1.3.0-py_0
defaults/win-64::s3transfer-0.3.3-py37_0
defaults/noarch::boto3-1.12.39-py_0
defaults/noarch::google-auth-oauthlib-0.4.1-py_2
defaults/noarch::jupyter_console-6.1.0-py_0
defaults/win-64::notebook-6.0.3-py37_0
defaults/noarch::qtconsole-4.7.2-py_0
defaults/win-64::widgetsnbextension-3.5.1-py37_0
defaults/noarch::ipywidgets-7.5.1-py_0
defaults/win-64::jupyter-1.0.0-py37_7
defaults/noarch::opt_einsum-3.1.0-py_0
defaults/noarch::pandas-datareader-0.8.1-py_0
defaults/noarch::tensorboard-2.1.0-py3_0
defaults/noarch::tensorflow-estimator-2.0.0-pyh2649769_0
defaults/win-64::h5py-2.10.0-py37h5e291fa_0
defaults/noarch::keras-applications-1.0.8-py_0
defaults/win-64::matplotlib-3.1.3-py37_0
defaults/win-64::matplotlib-base-3.1.3-py37h64f37c6_0
defaults/win-64::mkl_fft-1.0.15-py37h14836fe_0
defaults/win-64::mkl_random-1.1.0-py37h675688f_0
defaults/win-64::numpy-1.18.1-py37h93ca92e_0
defaults/win-64::pandas-1.0.3-py37h47e9c7a_0
defaults/win-64::scipy-1.4.1-py37h9439919_0
defaults/noarch::keras-preprocessing-1.1.0-py_1
defaults/win-64::scikit-learn-0.22.1-py37h6288b17_0
defaults/win-64::tensorflow-base-2.0.0-eigen_py37h01553b8_0
defaults/win-64::tensorflow-2.0.0-eigen_py37hbfc5123_0

Preparing transaction: ...working... done
Verifying transaction: ...working... failed
Traceback (most recent call last):
File "C:\Users\admin\miniconda3\lib\site-packages\conda\exceptions.py", line 1079, in call
return func(*args, **kwargs)
File "C:\Users\admin\miniconda3\lib\site-packages\conda_env\cli\main.py", line 80, in do_call
exit_code = getattr(module, func_name)(args, parser)
File "C:\Users\admin\miniconda3\lib\site-packages\conda_env\cli\main_create.py", line 111, in execute
result[installer_type] = installer.install(prefix, pkg_specs, args, env)
File "C:\Users\admin\miniconda3\lib\site-packages\conda_env\installers\conda.py", line 40, in install
unlink_link_transaction.execute()
File "C:\Users\admin\miniconda3\lib\site-packages\conda\core\link.py", line 244, in execute
self.verify()
File "C:\Users\admin\miniconda3\lib\site-packages\conda\common\io.py", line 88, in decorated
return f(*args, **kwds)
File "C:\Users\admin\miniconda3\lib\site-packages\conda\core\link.py", line 234, in verify
maybe_raise(CondaMultiError(exceptions), context)
File "C:\Users\admin\miniconda3\lib\site-packages\conda\exceptions.py", line 1019, in maybe_raise
raise error
conda.CondaMultiError: The package for hdf5 located at C:\Users\admin\miniconda3\pkgs\hdf5-1.10.4-h7ebc959_0
appears to be corrupted. The path 'Library/bin/h5unjam.exe'
specified in the package manifest cannot be found.

The package for qt located at C:\Users\admin\miniconda3\pkgs\qt-5.9.7-vc14h73c81de_0
appears to be corrupted. The path 'Library/bin/assistant.exe'
specified in the package manifest cannot be found.

The package for qt located at C:\Users\admin\miniconda3\pkgs\qt-5.9.7-vc14h73c81de_0
appears to be corrupted. The path 'Library/bin/designer.exe'
specified in the package manifest cannot be found.

The package for qt located at C:\Users\admin\miniconda3\pkgs\qt-5.9.7-vc14h73c81de_0
appears to be corrupted. The path 'Library/bin/qmlcachegen.exe'
specified in the package manifest cannot be found.

The package for qt located at C:\Users\admin\miniconda3\pkgs\qt-5.9.7-vc14h73c81de_0
appears to be corrupted. The path 'Library/bin/qmlplugindump.exe'
specified in the package manifest cannot be found.

This transaction has incompatible packages due to a shared path.
packages: defaults/win-64::notebook-6.0.3-py37_0, defaults/win-64::notebook-6.0.3-py37_0
path: 'menu/notebook.json'


Rework GANs to be 2.0 compatible

Need to rework some of the GAN code to deal with the fact that keras-rl2 seems to be abandoned. I will likely just use my own simple implementation of GANs (once I finish it).

Error while invoking endpoint

support

Hi!

While calling the model endpoint, I'm encountering the error mentioned below. I'm using a Keras/TensorFlow model, and I used custom code to train and deploy it. I tried to use the CloudWatch link provided in the error but couldn't trace the problem.

Any help will be appreciated.

Code

data = train_df.iloc[0,:186].values.tolist()

response = client.invoke_endpoint(EndpointName=endpoint_name, Body=json.dumps(data))

response_body = response['Body']

print(response_body.read())

Error

ModelError                                Traceback (most recent call last)
in ()
      1 data = train_df.iloc[0,:186].values
----> 2 response = client.invoke_endpoint(EndpointName=endpoint_name, Body=json.dumps(data.tolist()))
      3 response_body = response['Body']
      4 print(response_body.read())

~/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/botocore/client.py in _api_call(self, *args, **kwargs)
    314                     "%s() only accepts keyword arguments." % py_operation_name)
    315             # The "self" in this scope is referring to the BaseClient.
--> 316             return self._make_api_call(operation_name, kwargs)
    317
    318         api_call.__name__ = str(py_operation_name)

~/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/botocore/client.py in _make_api_call(self, operation_name, api_params)
    624             error_code = parsed_response.get("Error", {}).get("Code")
    625             error_class = self.exceptions.from_code(error_code)
--> 626             raise error_class(parsed_response, operation_name)
    627         else:
    628             return parsed_response

ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (500) from model with message "". See https://us-west-1.console.aws.amazon.com/cloudwatch/home?region=us-west-1#logEventViewer:group=/aws/sagemaker/Endpoints/sagemaker-tensorflow-2020-03-28-17-26-02-111 in account 805885464583 for more information.

Simple TensorFlow LSTM Example

On the example -- listed as cell number 9.

max_features = 4 # 0,1,2,3 (total of 4)
x = [
    [[0],[1],[1],[0],[0],[0]],
    [[0],[0],[0],[2],[2],[0]],
    [[0],[0],[0],[0],[3],[3]],
    [[0],[2],[2],[0],[0],[0]],
    [[0],[0],[3],[3],[0],[0]],
    [[0],[0],[0],[0],[1],[1]]
]
x = np.array(x,dtype=np.float32)
y = np.array([1,2,3,2,3,1],dtype=np.int32)

Can x be represented as two blocks/arrays, one being the length within the "window" and the second being a one-hot encoded array of the color?

x2 = [
    [[0],[1],[1],[0],[0],[0]],
    [[0],[0],[0],[1],[1],[0]],
    [[0],[0],[0],[0],[1],[1]],
    [[0],[1],[1],[0],[0],[0]],
    [[0],[0],[1],[1],[0],[0]],
    [[0],[0],[0],[0],[1],[1]]
]
x3 = [
    [[1],[0],[0]],
    [[0],[1],[0]],
    [[0],[0],[1]],
    [[0],[1],[0]],
    [[0],[0],[1]],
    [[1],[0],[0]]
]

This thought comes from working with the blank data array [[0],[0],[0],[0],[0],[0]]: one decision is where to start the sequence, the second is what color. But combining the two arrays gives an incorrect length of seven elements in the starting array.

I can be reached at newbox dot me at gmail dot com. thanks

Running Automl Error

Is the .lower() method supposed to be applied to the column names or to the entire DataFrame?

AttributeError: 'DataFrame' object has no attribute 'lower'
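
The .lower() method belongs to strings, so calling it on a whole DataFrame raises exactly this AttributeError. A hedged guess at the intent, lowercasing the column names, would look like this (df is the DataFrame in question):

# Lowercase the column names (a guess at the intent)
df.columns = [c.lower() for c in df.columns]

# Or, to lowercase every string value in the DataFrame instead:
# df = df.applymap(lambda v: v.lower() if isinstance(v, str) else v)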

IsADirectoryError: [Errno 21] Is a directory:

training_binary_path = os.path.join(DATA_PATH,
    f'training_data_{GENERATE_SQUARE}_{GENERATE_SQUARE}.npy')

print(f"Looking for file: {training_binary_path}")

if not os.path.isfile(training_binary_path):
    print("Loading training images...")

    training_data = []
    faces_path = os.path.join(DATA_PATH, 'lfw')
    for filename in tqdm(os.listdir(faces_path)):
        path = os.path.join(faces_path, filename)
        image = Image.open(path).resize((GENERATE_SQUARE, GENERATE_SQUARE),
                                        Image.ANTIALIAS)
        training_data.append(np.asarray(image))
    training_data = np.reshape(training_data,
        (-1, GENERATE_SQUARE, GENERATE_SQUARE, IMAGE_CHANNELS))
    training_data = training_data / 127.5 - 1.

    print("Saving training image binary...")
    np.save(training_binary_path, training_data)
else:
    print("Loading previous training pickle...")
    training_data = np.load(training_binary_path)

IsADirectoryError Traceback (most recent call last)
in
14 for filename in tqdm(os.listdir(faces_path)):
15 path = os.path.join(faces_path, filename)
---> 16 image = Image.open(path).resize((GENERATE_SQUARE,GENERATE_SQUARE),Image.ANTIALIAS)
17 training_data.append(np.asarray(image))
18 training_data = np.reshape(training_data,(-1,GENERATE_SQUARE,GENERATE_SQUARE,IMAGE_CHANNELS))

~/anaconda3/envs/tf/lib/python3.6/site-packages/PIL/Image.py in open(fp, mode)
2768
2769 if filename:
-> 2770 fp = builtins.open(filename, "rb")
2771 exclusive_fp = True
2772

IsADirectoryError: [Errno 21] Is a directory: '/home/mihuzz/PycharmProjects/dcgan/lfw/Pham_Thi_Mai_Phuong'
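
The lfw archive contains one subdirectory per person, so os.listdir(faces_path) returns directory names and Image.open() fails on them. A sketch of a workaround (not the notebook's own code) that walks the tree and only opens regular files, assuming the notebook's DATA_PATH and GENERATE_SQUARE constants:

import os
import numpy as np
from PIL import Image
from tqdm import tqdm

training_data = []
faces_path = os.path.join(DATA_PATH, 'lfw')
for root, _, files in os.walk(faces_path):   # recurse into the per-person folders
    for filename in tqdm(files):
        path = os.path.join(root, filename)
        if not os.path.isfile(path):         # skip anything that is not a file
            continue
        image = Image.open(path).resize((GENERATE_SQUARE, GENERATE_SQUARE),
                                        Image.ANTIALIAS)
        training_data.append(np.asarray(image))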

Keras not found. Runs fine the first time.

I am new to tensorflow and jupyter.
I ran through your video and installed everything perfectly. The code (01 python intro file) ran the first time pretty fine. I added a new print statement with a "hello" text, and it started giving me this error:
ModuleNotFoundError Traceback (most recent call last)
in
2 import sys
3
----> 4 import keras
5 import pandas as pd
6 import sklearn as sk

ModuleNotFoundError: No module named 'keras'

I re-cloned the repo from Git and ran Jupyter Notebook again, and the same error occurred. Now it's not working at all. Please help.

titles of notebooks..

Can you please rename notebooks
from:
t81_558_class1_intro_python.ipynb
to:
t81_558_class01_intro_python.ipynb

It would be easier to follow. I have watched this repo for three years; it's the best repo on the topic!

Shape mismatch

Hi, I am trying to follow along with your tutorial on YouTube and I am getting an error. I am not an expert at this; can you please help? Thank you.

Epoch 1/5


ValueError Traceback (most recent call last)
in
4 model.fit_generator(generator=train_generator,
5 steps_per_epoch=step_size_train,
----> 6 epochs=5)

E:\Anaconda\lib\site-packages\tensorflow_core\python\keras\engine\training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
1295 shuffle=shuffle,
1296 initial_epoch=initial_epoch,
-> 1297 steps_name='steps_per_epoch')
1298
1299 def evaluate_generator(self,

E:\Anaconda\lib\site-packages\tensorflow_core\python\keras\engine\training_generator.py in model_iteration(model, data, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch, mode, batch_size, steps_name, **kwargs)
263
264 is_deferred = not model._is_compiled
--> 265 batch_outs = batch_function(*batch_data)
266 if not isinstance(batch_outs, list):
267 batch_outs = [batch_outs]

E:\Anaconda\lib\site-packages\tensorflow_core\python\keras\engine\training.py in train_on_batch(self, x, y, sample_weight, class_weight, reset_metrics)
971 outputs = training_v2_utils.train_on_batch(
972 self, x, y=y, sample_weight=sample_weight,
--> 973 class_weight=class_weight, reset_metrics=reset_metrics)
974 outputs = (outputs['total_loss'] + outputs['output_losses'] +
975 outputs['metrics'])

E:\Anaconda\lib\site-packages\tensorflow_core\python\keras\engine\training_v2_utils.py in train_on_batch(model, x, y, sample_weight, class_weight, reset_metrics)
251 x, y, sample_weights = model._standardize_user_data(
252 x, y, sample_weight=sample_weight, class_weight=class_weight,
--> 253 extract_tensors_from_dataset=True)
254 batch_size = array_ops.shape(nest.flatten(x, expand_composites=True)[0])[0]
255 # If model._distribution_strategy is True, then we are in a replica context

E:\Anaconda\lib\site-packages\tensorflow_core\python\keras\engine\training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
2536 # Additional checks to avoid users mistakenly using improper loss fns.
2537 training_utils.check_loss_and_target_compatibility(
-> 2538 y, self._feed_loss_fns, feed_output_shapes)
2539
2540 # If sample weight mode has not been set and weights are None for all the

E:\Anaconda\lib\site-packages\tensorflow_core\python\keras\engine\training_utils.py in check_loss_and_target_compatibility(targets, loss_fns, output_shapes)
741 raise ValueError('A target array with shape ' + str(y.shape) +
742 ' was passed for an output of shape ' + str(shape) +
--> 743 ' while using as loss ' + loss_name + '. '
744 'This loss expects targets to have the same shape '
745 'as the output.')

ValueError: A target array with shape (1, 4) was passed for an output of shape (None, 3) while using as loss categorical_crossentropy. This loss expects targets to have the same shape as the output.

An existing connection was forcibly closed by the remote host

After running this command, I got the error below:

pip install --upgrade pandas

WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))': /packages/d0/4e/9db3468e504ac9aeadb37eb32bcf0a74d063d24ad1471104bd8a7ba20c97/pandas-0.24.2-cp36-cp36m-win_amd64.whl

ERROR: Could not install packages due to an EnvironmentError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Max retries exceeded with url: /packages/d0/4e/9db3468e504ac9aeadb37eb32bcf0a74d063d24ad1471104bd8a7ba20c97/pandas-0.24.2-cp36-cp36m-win_amd64.whl (Caused by ProtocolError('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)))

How to get the inverse of the function "encode_text_index(df, name)"?

Hello sir,
How do I get the inverse of the function encode_text_index(df, name)? I mean, how do I know that 0 is the class "normal" and 1 is "DoS", for example? The result only displays the code of each class (0, 1, 2, 3, 4), but I want to know the name of the class. How can I do this?
Thank you.
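
If encode_text_index is the helper from the course notebooks, which wraps scikit-learn's LabelEncoder and returns le.classes_ (an assumption about that helper), then the array it returns is exactly the code-to-name mapping:

# encode_text_index returns the LabelEncoder's classes_ array, indexed by code
outcomes = encode_text_index(df, 'outcome')  # e.g. array(['DoS', 'normal', ...])

code = 0
print(outcomes[code])   # name of the class that was encoded as 0

# Equivalently, keeping the LabelEncoder itself around allows:
# le.inverse_transform([0, 1, 2])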

error while installing "keras-rl2"

The following error results from executing the list of pip install dependencies:

ERROR: tensorflow 2.0.0b1 has requirement tb-nightly<1.14.0a20190604,>=1.14.0a20190603, but you'll have tb-nightly 1.15.0a20190720 which is incompatible.

installing instead "keras-2.2.4", which was the case of previous committ, solves the error. However, will that break things later in the tutorial?

Part# 6.4 square chopping

Thanks for the great notebooks to learn from, especially during the COVID-19 stay-at-home period.
I don't know if I misunderstood, but the function below seems to make the image a rectangle instead of a square.
Did you mean e.g. crop((pad,0,pad+cols,cols)) like in the previous course?

def make_square(img):
    cols, rows = img.size

    if rows > cols:
        pad = (rows - cols) / 2
        img = img.crop((pad, 0, cols, cols))
    else:
        pad = (cols - rows) / 2
        img = img.crop((0, pad, rows, rows))
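
For reference, a sketch of the correction along the lines the reporter suggests (PIL's crop box is (left, upper, right, lower), and img.size is (width, height)):

def make_square(img):
    cols, rows = img.size            # (width, height)

    if rows > cols:                  # taller than wide: crop vertically
        pad = (rows - cols) // 2
        img = img.crop((0, pad, cols, pad + cols))
    else:                            # wider than tall: crop horizontally
        pad = (cols - rows) // 2
        img = img.crop((pad, 0, pad + rows, rows))
    return img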

"Error" in file t81_558_class04_training.ipynb

Hi, in the code cell for "Training with a Validation Set and Early Stopping", the dataset is split into training and validation sets, but the model is fitted with x, y instead of x_train, y_train:

# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(    
    x, y, test_size=0.25, random_state=42)
...
model.fit(x, y, validation_data=(x_test, y_test), callbacks=[monitor], verbose=2, epochs=1000)
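
The corrected call would presumably train on the split data rather than the full x, y:

# Presumed fix: fit on the training split, validate on the held-out split
model.fit(x_train, y_train, validation_data=(x_test, y_test),
          callbacks=[monitor], verbose=2, epochs=1000)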

I am facing an error while installing gym

Using:
pip install --exists-action i --upgrade gym

The errors I get:
ERROR: spyder 4.1.3 requires pyqt5<5.13; python_version >= "3", which is not installed.
ERROR: spyder 4.1.3 requires pyqtwebengine<5.13; python_version >= "3", which is not installed.

Miniconda env MacOS issue

Error

(screenshot of the error attached: 2020-06-03, 12:56 AM)

Files tree

I have a directory by the name of Miniconda3 in the user directory. I have uploaded an image below:

(screenshot of the file tree attached: 2020-06-03, 1:00 AM)

MacOS version

macOS Catalina 10.15.5

Problem with class 12 - Atari

Hi,
Thank you for the course. I tried to run the Atari example on Google Colab; however, there seems to be an issue. I restarted the session as mentioned in the lecture video, but I still get an error.

The problem arises in the Agent section of the Jupyter notebook, in the cell that starts by defining the optimizer:

optimizer = tf.compat.v1.train.RMSPropOptimizer(

The issue happens with the last part of the cell; I have copied and pasted the error below:

ValueError Traceback (most recent call last)

in ()
33 debug_summaries=False,
34 summarize_grads_and_vars=False,
---> 35 train_step_counter=_global_step)
36
37

11 frames

/usr/local/lib/python3.6/dist-packages/tf_agents/utils/nest_utils.py in assert_matching_dtypes_and_inner_shapes(tensors, specs, caller, tensors_name, specs_name, allow_extra_fields)
334 get_dtypes(specs),
335 get_shapes(tensors),
--> 336 get_shapes(specs)))
337
338

ValueError: <tf_agents.networks.encoding_network.EncodingNetwork object at 0x7f902e3a8630>: Inconsistent dtypes or shapes between inputs and input_tensor_spec.
dtypes:
<dtype: 'float32'>
vs.
<dtype: 'uint8'>.
shapes:
(1, 84, 84, 4)
vs.
(84, 84, 4).
In call to configurable 'DqnAgent' (<class 'tf_agents.agents.dqn.dqn_agent.DqnAgent'>)

I hope this is enough information to resolve the issue. Thanks.

typo found in `t81_558_class_14_03_anomaly.ipynb`

First of all, thank you for posting the notebooks. They're a nice, concise way for me to test out a new concept :)

The notebook t81_558_class_14_03_anomaly.ipynb has typos in the last cell:

score1 = np.sqrt(metrics.mean_squared_error(pred,x_normal_test))
print(f"Insample Normal Score (RMSE): {score1}".format(score1))
# score is the test set
# score2 is the whole dataset (- attacks) 

Only the 2nd- and 3rd-to-last print statements need to be changed.

Error in running t81_558_class_10_4_captioning.ipynb

Problem

When training the NN, it throws this error:

ValueError Traceback (most recent call last)
in ()
3 for i in tqdm(range(EPOCHS*2)):
4 generator = data_generator(train_descriptions, encoding_train, wordtoidx, max_length, number_pics_per_bath)
----> 5 caption_model.fit_generator(generator, epochs=1, steps_per_epoch=steps, verbose=1)
6
7 caption_model.optimizer.lr = 1e-4

2 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training_generator.py in _get_next_batch(generator, mode)
370 raise ValueError('Output of generator should be '
371 'a tuple (x, y, sample_weight) '
--> 372 'or (x, y). Found: ' + str(generator_output))
373
374 if len(generator_output) < 1 or len(generator_output) > 3:

ValueError: Output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: [[array([[0.1227757 , 0.33294907, 0.7527157 , ..., 0.21939711, 0.30216414,
0.40283215],
[0.1227757 , 0.33294907, 0.7527157 , ..., 0.21939711, 0.30216414,
0.40283215],
[0.1227757 , 0.33294907, 0.7527157 , ..., 0.21939711, 0.30216414,
0.40283215],
...,
[0.37351927, 0.24596639, 0.96352935, ..., 1.1459347 , 0.26539996,
0.01983135],
[0.37351927, 0.24596639, 0.96352935, ..., 1.1459347 , 0.26539996,
0.01983135],
[0.37351927, 0.24596639, 0.96352935, ..., 1.1459347 , 0.26539996,
0.01983135]], dtype=float32), array([[ 0, 0, 0, ..., 0, 0, 1],
[ 0, 0, 0, ..., 0, 1, 2],
[ 0, 0, 0, ..., 1, 2, 3],
...,
[ 0, 0, 0, ..., 66, 68, 3],
[ 0, 0, 0, ..., 68, 3, 21],
[ 0, 0, 0, ..., 3, 21, 61]], dtype=int32)], array([[0., 0., 1., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]], dtype=float32)]

Solution

TensorFlow requires the generator to return a tuple.
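
A minimal illustration of the requirement (the shapes and names below are placeholders, not the notebook's actual data): the generator passed to fit_generator must yield a tuple (inputs, targets); yielding a list triggers exactly the ValueError above.

import numpy as np

def good_generator():
    while True:
        x1 = np.zeros((4, 2048), dtype=np.float32)  # e.g. image features
        x2 = np.zeros((4, 10), dtype=np.int32)      # e.g. partial caption sequences
        y = np.zeros((4, 100), dtype=np.float32)    # e.g. next-word one-hot targets
        yield ([x1, x2], y)      # tuple -> accepted by Keras
        # yield [[x1, x2], y]    # list  -> "Output of generator should be a tuple ..."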

Typo in notebook t81_558_class09_regularization.ipynb

Hi Jeff, in notebook t81_558_class09_regularization.ipynb, chapter "TensorFlow and L1/L2", the last sentence above the "L1 vs L2" graph, "L1 will force the weights into a pattern similar to a Gaussian distribution; the L2 will force the weights into a pattern similar to a Laplace distribution, as demonstrated the following:". It is actually vice-versa and correctly labeled in the graph.

Value Error while broadcasting

Expected Output : Saving the training dataset
Traceback:

Traceback (most recent call last):
  File "loadimages.py", line 28, in <module>
    training_ds = np.reshape(training_ds,(-1,GENERATE_SQUARE,GENERATE_SQUARE,IMAGE_CHANNELS))
  File "<__array_function__ internals>", line 5, in reshape
  File "/home/chr0m0s0m3s/.local/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 299, in reshape
    return _wrapfunc(a, 'reshape', newshape, order=order)
  File "/home/chr0m0s0m3s/.local/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 55, in _wrapfunc
    return _wrapit(obj, method, *args, **kwds)
  File "/home/chr0m0s0m3s/.local/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 44, in _wrapit
    result = getattr(asarray(obj), method)(*args, **kwds)
  File "/home/chr0m0s0m3s/.local/lib/python3.8/site-packages/numpy/core/_asarray.py", line 83, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: could not broadcast input array from shape (480,480,3) into shape (480,480)
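
A likely cause, judging only from the error: the image folder mixes grayscale (480, 480) and color (480, 480, 3) files, so the collected list cannot be reshaped into one uniform array. Forcing every image to RGB before appending usually resolves this (path, GENERATE_SQUARE, and training_ds are assumed from the reporter's script):

from PIL import Image
import numpy as np

image = Image.open(path).convert('RGB')   # force 3 channels, even for grayscale files
image = image.resize((GENERATE_SQUARE, GENERATE_SQUARE), Image.ANTIALIAS)
training_ds.append(np.asarray(image))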

Typo in t81_558_class_02_4_pandas_functional.ipynb

There is a typo in cell 4:

efficiency = df.apply(lambda x: x['displacement']/x['horsepower'], axis=1)
display(effi[0:10])

should be:

efficiency = df.apply(lambda x: x['displacement']/x['horsepower'], axis=1)
display(efficiency[0:10])

07_2_Keras_gan Input shape mismatch

The code works fine when GENERATE_RES = 2, but when GENERATE_RES = 3, the generator outputs a (128, 128, 3) image, whereas the discriminator expects a (96, 96, 3) image.
Here is a summary of the generator when GENERATE_RES = 3:
Layer (type)                  Output Shape           Param #
dense_2 (Dense)               (None, 4096)           413696
reshape_1 (Reshape)           (None, 4, 4, 256)      0
up_sampling2d_1 (UpSampling2  (None, 8, 8, 256)      0
conv2d_6 (Conv2D)             (None, 8, 8, 256)      590080
batch_normalization_5 (Batch  (None, 8, 8, 256)      1024
activation_1 (Activation)     (None, 8, 8, 256)      0
up_sampling2d_2 (UpSampling2  (None, 16, 16, 256)    0
conv2d_7 (Conv2D)             (None, 16, 16, 256)    590080
batch_normalization_6 (Batch  (None, 16, 16, 256)    1024
activation_2 (Activation)     (None, 16, 16, 256)    0
up_sampling2d_3 (UpSampling2  (None, 32, 32, 256)    0
conv2d_8 (Conv2D)             (None, 32, 32, 128)    295040
batch_normalization_7 (Batch  (None, 32, 32, 128)    512
activation_3 (Activation)     (None, 32, 32, 128)    0
up_sampling2d_4 (UpSampling2  (None, 64, 64, 128)    0
conv2d_9 (Conv2D)             (None, 64, 64, 128)    147584
batch_normalization_8 (Batch  (None, 64, 64, 128)    512
activation_4 (Activation)     (None, 64, 64, 128)    0
up_sampling2d_5 (UpSampling2  (None, 128, 128, 128)  0
conv2d_10 (Conv2D)            (None, 128, 128, 128)  147584
batch_normalization_9 (Batch  (None, 128, 128, 128)  512
activation_5 (Activation)     (None, 128, 128, 128)  0
conv2d_11 (Conv2D)            (None, 128, 128, 3)    3459
activation_6 (Activation)     (None, 128, 128, 3)    0


Also, here is the summary of the generator when GENERATE_RES = 2:

Layer (type)                  Output Shape           Param #
dense_2 (Dense)               (None, 4096)           413696
reshape_1 (Reshape)           (None, 4, 4, 256)      0
up_sampling2d_1 (UpSampling2  (None, 8, 8, 256)      0
conv2d_6 (Conv2D)             (None, 8, 8, 256)      590080
batch_normalization_5 (Batch  (None, 8, 8, 256)      1024
activation_1 (Activation)     (None, 8, 8, 256)      0
up_sampling2d_2 (UpSampling2  (None, 16, 16, 256)    0
conv2d_7 (Conv2D)             (None, 16, 16, 256)    590080
batch_normalization_6 (Batch  (None, 16, 16, 256)    1024
activation_2 (Activation)     (None, 16, 16, 256)    0
up_sampling2d_3 (UpSampling2  (None, 32, 32, 256)    0
conv2d_8 (Conv2D)             (None, 32, 32, 128)    295040
batch_normalization_7 (Batch  (None, 32, 32, 128)    512
activation_3 (Activation)     (None, 32, 32, 128)    0
up_sampling2d_4 (UpSampling2  (None, 64, 64, 128)    0
conv2d_9 (Conv2D)             (None, 64, 64, 128)    147584
batch_normalization_8 (Batch  (None, 64, 64, 128)    512
activation_4 (Activation)     (None, 64, 64, 128)    0
conv2d_10 (Conv2D)            (None, 64, 64, 3)      3459
activation_5 (Activation)     (None, 64, 64, 3)      0

GAN Resolution

This was submitted to me in email, adding here so I do not lose track of the request.

I did encounter a problem when trying to run the GAN through Colab with an image resolution of 96 or 128.

The problem starts when I try to start the training.
If image_res is set to 3, we generate training data with a resolution of 96; however, the training block at the bottom gives an error, as it expects data with a resolution of 128.
The same happens with image_res set to 4: we generate data at 128 resolution, but training expects 256.
If I set image_res to 1 or 2, there are no problems; the expected resolutions of 32 and 64, respectively, are produced in the training block.
I think the training block derives the expected resolution from the image_res constant using a different formula than the one we use to define the resolution (generate_square).

The current definition for our image resolution is 32 * image_res. I believe the training block uses the following definition 32 * 2^(image_res-1).
This formula has the same result as our current definition for image_res 1 and 2, and outputs a resolution of 128 and 256 for an image_res of 3 and 4 respectively.

When I tried altering the generate_square formula from 32 * image_Res to 32 * ( 2 ** (Image_res - 1 ) ) it resolved the issue and allowed me to run the training block for images with a 128 resolution, with image_res set to 3.

As I have no knowledge of the source code or the programming language, I'm not 100% sure that this is the correct solution.
I just wanted to inform you about this issue.

Thank you again for the fantastic lectures on youtube and the availability of the lessons on Github.
Kind Regards.
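
For reference, the change described in this report would look roughly like this (variable names assumed from the GAN notebook):

# Original definition (as reported):
# GENERATE_SQUARE = 32 * GENERATE_RES               # 96 when GENERATE_RES = 3
# Suggested definition, matching what the training block appears to expect:
GENERATE_SQUARE = 32 * (2 ** (GENERATE_RES - 1))    # 128 when GENERATE_RES = 3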

Would it possible to extend the logloss example to include a multi-class classification?

The videos and the code are all wonderful. Thank you so much for taking so much time and trouble.

I was wondering if it would be possible to do a hand example of the following calculation for a multi-class classification. I can't figure out how to do it by hand and get the same result as:

from tensorflow.python.keras.utils import losses_utils
cce = tf.keras.losses.CategoricalCrossentropy(reduction=losses_utils.ReductionV2.AUTO)
truth = tf.constant([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
predictions = tf.constant([[.9, .05, .05], [.05, .89, .06], [.05, .01, .94]])
print('truth', truth)
print('predictions', predictions)
loss = cce(truth, predictions)
print('CategoricalCrossentropy Loss: ', loss.numpy()) # Loss: 0.0945

What is the equivalent hand calculation?
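
For what it's worth, a hand calculation with the standard categorical cross-entropy formula, averaged over the batch, reproduces that value:

import numpy as np

truth = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
predictions = np.array([[.9, .05, .05], [.05, .89, .06], [.05, .01, .94]])

# Per-sample loss: -sum(truth * log(prediction)) over the classes. With one-hot
# targets this is just -log(predicted probability of the true class).
per_sample = -np.sum(truth * np.log(predictions), axis=1)
print(per_sample)          # [0.1054, 0.1165, 0.0619] = [-log(.9), -log(.89), -log(.94)]

# Keras' AUTO reduction then averages over the batch:
print(per_sample.mean())   # ~0.0946, matching the Keras loss up to float32 rounding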
