
ccm-site: Computational Cognitive Modeling

Instructors: Brenden Lake and Todd Gureckis

Teaching Assistants: Reuben Feinman and Anselm Rothe

Meeting time and location:
Lecture
Mondays 1:50-3:30 PM
*ROOM CHANGE*: Silver Center for Arts & Science, 100 Washington Sq East, Room 520

Lab
Tuesdays 2:40-3:30 PM
60 Fifth Ave. Room 110

Course numbers:
DS-GA 3001.005 (Data Science)
PSYCH-GA 3405.002 (Psychology)

Contact information and Piazza:
We use Piazza for questions and class discussion. Piazza gets you help efficiently from classmates, the TAs, and the instructors. Rather than emailing questions to the teaching staff, please post them on Piazza.

The signup link for our Piazza page is available here (https://piazza.com/nyu/spring2019/dsga3001005).

Once signed up, our class Piazza page is available here (https://piazza.com/nyu/spring2019/dsga3001005/home).

If there is a need to email the teaching staff directly, please use the following email address: [email protected]

Office hours:
Todd Gureckis
[email protected] (Thursdays 2-4pm; 6 Washington Place, Meyer, Room 859)  

Brenden Lake
[email protected] (Wednesdays 10-11:00 am, or by appointment; 60 5th Ave., Room 610)

Reuben Feinman
[email protected] (Wednesdays 3-4pm; 60 5th Ave., Room 609)

Anselm Rothe
[email protected] (Fridays 2:30-3:30pm; 6 Washington Place, Meyer, Room 852) (Exception: In the week of Feb 21, Anselm's office hour is on Thursday, Feb 21, 2:30-3:30pm in Room 856.)

Summary: This course surveys the leading computational frameworks for understanding human intelligence and cognition. Both psychologists and data scientists are working with increasingly large quantities of human behavioral data. Computational cognitive modeling aims to understand behavioral data and the mind and brain, more generally, by building computational models of the cognitive processes that produce the data. This course introduces the goals, philosophy, and technical concepts behind computational cognitive modeling.

The lectures cover artificial neural networks (deep learning), reinforcement learning, Bayesian modeling, model comparison and fitting, classification, probabilistic graphical models, and program induction. Modeling examples span a broad set of psychological abilities including learning, categorization, language, memory, decision making, and reasoning. The homework assignments include examining and implementing the models surveyed in class. Students will leave the course with a richer understanding of how computational modeling advances cognitive science, how cognitive science can inform research in machine learning and AI, and how to fit and evaluate cognitive models to understand behavioral data.

Please note that this syllabus is not final and there may be further adjustments.

Pre-requisites

  • Math: If you had linear algebra and calculus as an undergrad, or if you have taken Math Tools in the psychology department, you will be in a good position for approaching the material. Familiarity with probability is also assumed. We will, when needed, review some of the basic technical concepts in lab.
  • Programming: For the homework assignments, we will assume basic familiarity with programming in Python using the Jupyter Notebook system (http://jupyter.org). We will review some of the programming basics in lab. A helpful tutorial for learning the basics of Python is available here (http://openbookproject.net/thinkcs/python/english3e/). We recommend Python 3 for use in this course.
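As a rough self-check of the assumed Python and math background, a snippet along these lines (the names and values are purely illustrative) should look familiar before the first homework:

```python
import numpy as np

# A discrete probability distribution (probability prerequisite)
probs = np.array([0.2, 0.3, 0.5])
assert np.isclose(probs.sum(), 1.0)  # probabilities must sum to 1

# Core Python: comprehensions and dictionaries
squares = {x: x ** 2 for x in range(5)}

# Simple matrix algebra (linear-algebra prerequisite)
W = np.eye(3)                    # 3x3 identity matrix
v = np.array([1.0, 2.0, 3.0])
print(W @ v)                     # matrix-vector product -> [1. 2. 3.]
```

If any of this is unfamiliar, the lab sessions and the tutorial linked above are good places to catch up.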

Grading

The final grade is based on homeworks (50%), final project (35%), and attendance/participation (15%).
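For concreteness, the weighting above works out as in this short sketch (the component scores are hypothetical):

```python
# Final grade weights from the syllabus: homeworks 50%, project 35%, participation 15%
weights = {"homeworks": 0.50, "project": 0.35, "participation": 0.15}
scores = {"homeworks": 90.0, "project": 85.0, "participation": 100.0}  # hypothetical scores

final_grade = sum(weights[k] * scores[k] for k in weights)
print(final_grade)  # 0.50*90 + 0.35*85 + 0.15*100 = 89.75
```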

Final Project

The final project proposal is due on Monday, April 1 (one half-page write-up). Please submit via email to [email protected] with the file name lastname1-lastname2-lastname3-ccm-proposal.pdf.

The final project is due on Tuesday, May 14. Please submit via email to [email protected] with the file name lastname1-lastname2-lastname3-ccm-final.pdf.

The final project will be done in groups of 1-4 students. A short paper (approximately 6 pages) describing the project will be turned in. The project will be either a substantial extension of one of the homeworks (e.g., exploring some new aspect of one of the assignments), an implementation and extension of an existing cognitive modeling paper, or a cognitive modeling project related to your research. Possible project ideas are listed here, but of course you do not have to choose from this list (these are just examples).

Write-ups should be organized and written as a scientific paper. It must include the following sections: Introduction (with review of related work), Methods/Models, Results, and Discussion/Conclusion. A good example would be to follow the structure of this paper from the class readings:

  • Peterson, J., Abbott, J., & Griffiths, T. (2016). Adapting Deep Network Features to Capture Psychological Representations. Presented at the 38th Annual Conference of the Cognitive Science Society. link here

Lecture schedule

Mondays 1:50-3:30 PM
Silver Center for Arts & Science, 100 Washington Sq East, Room 520 (see room change above)

  • 1/28 Introduction ( slides )
  • 2/4 Neural networks / Deep learning (part 1) ( slides )
    • Homework 1 assigned (Due 2/25) (instructions for accessing here)
  • 2/11 Neural networks / Deep learning (part 2) ( slides )
  • 2/18 PRESIDENTS' DAY - NO CLASS
  • 2/25 Reinforcement learning (part 1) ( slides )
  • 3/4 (class canceled due to snow day)
  • 3/5 Reinforcement learning (part 2) ( slides )
    • Homework 2 assigned (Due 3/27) (instructions for accessing here)
  • 3/11 Reinforcement learning (part 3) ( slides )
  • 3/18 SPRING RECESS - NO CLASS
  • 3/25 Bayesian modeling (part 1) ( slides )
    • Homework 3 assigned (Due 4/10) (instructions for accessing here)
  • 4/1 Bayesian modeling (part 2) (see part 1 above for combined slides)
    • Final project proposal due
  • 4/8 Rational vs. mechanistic modeling ( slides )
  • 4/15 Model comparison and fitting, tricks of the trade ( slides )
  • 4/22 Categorization ( slides )
    • Homework 4 assigned (Due TBD) (instructions for accessing here)
  • 4/29 Probabilistic Graphical models
  • 5/6 Program induction and language of thought models
  • 5/13 Computational Cognitive Neuroscience
  • Final project due (Tuesday 5/14)

Lab schedule

Tuesdays 2:40-3:30 PM
60 Fifth Ave. Room 110

  • 1/29 No lab
  • 2/5 Python and Jupyter notebooks review
  • 2/12 Introduction to PyTorch
  • 2/19 HW 1 Review
  • 2/26 No lab
  • (3/5 Make-up for the lecture missed on 3/4 due to snow; see above)
  • 3/12 HW 2 Review
  • 3/19 SPRING RECESS
  • 3/26 HW 2 Review (again)
  • 4/2 Probability review
  • 4/9 HW 3 Review
  • 4/16 No lab
  • 4/23 No lab
  • 4/30 HW 4 Review
  • 5/7 TBD
  • 5/14 No lab

Readings and slides

Papers are available for download on NYU Classes in the "Resources" folder.

Neural networks and deep learning

  • McClelland, J. L., Rumelhart, D. E., & Hinton, G. E. The Appeal of Parallel Distributed Processing. Vol I, Ch 1.
  • LeCun, Y., Bengio, Y. & Hinton, G. (2015). Deep learning. Nature 521:436–44.
  • McClelland, J. L., & Rogers, T. T. (2003). The parallel distributed processing approach to semantic cognition. Nature Reviews Neuroscience, 4(4), 310-322.
  • Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179-211.
  • Peterson, J., Abbott, J., & Griffiths, T. (2016). Adapting Deep Network Features to Capture Psychological Representations. Presented at the 38th Annual Conference of the Cognitive Science Society.

Reinforcement learning and decision making

  • Gureckis, T.M. and Love, B.C. (2015) Reinforcement learning: A computational perspective. Oxford Handbook of Computational and Mathematical Psychology, Edited by Busemeyer, J.R., Townsend, J., Zheng, W., and Eidels, A., Oxford University Press, New York, NY.
  • Daw, N.D. (2013). "Advanced Reinforcement Learning." Chapter in Neuroeconomics: Decision Making and the Brain, 2nd edition.
  • Niv, Y. and Schoenbaum, G. (2008). "Dialogues on prediction errors." Trends in Cognitive Sciences, 12(7), 265-272.
  • Daw, N.D., O'Doherty, J.P., Dayan, P., Seymour, B., & Dolan, R.J. (2006). Cortical substrates for exploratory decisions in humans. Nature, 441, 876-879.

Bayesian modeling

  • Russell, S. J., and Norvig, P. Artificial Intelligence: A Modern Approach. Chapter 13, Uncertainty.
  • Tenenbaum, J. B., and Griffiths, T. L. (2001). Generalization, similarity, and Bayesian inference. Behavioral and Brain Sciences, 24(4), 629-640.
  • Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022), 1279-1285.
  • Ghahramani, Z. (2015). Probabilistic machine learning and artificial intelligence. Nature, 521(7553), 452.
  • MacKay, D. (2003). Chapter 29: Monte Carlo Methods. In Information Theory, Inference, and Learning Algorithms.

Rational versus mechanistic modeling approaches

  • Jones, M. & Love, B.C. (2011). Bayesian Fundamentalism or Enlightenment? On the Explanatory Status and Theoretical Contributions of Bayesian Models of Cognition. Behavioral and Brain Sciences (target article).
  • Griffiths, T.L., Lieder, F., & Goodman, N.D. (2015). Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic. Topics in Cognitive Science, 7(2), 217-229.

Model comparison and fitting, tricks of trade

  • Pitt, M.A. and Myung, I.J. (2002). When a good fit can be bad. Trends in Cognitive Sciences, 6(10), 421-425.
  • Roberts, S. & Pashler, H. (2000) How persuasive is a good fit? A comment on theory testing. Psychological Review, 107, 358-367.
  • [optional] Myung, I.J. (2003). Tutorial on maximum likelihood estimation. Journal of Mathematical Psychology, 47, 90-100.

Probabilistic graphical models

  • Charniak, E. (1991). Bayesian networks without tears. AI Magazine, 50-63.
  • Kemp, C., & Tenenbaum, J. B. (2008). The discovery of structural form. Proceedings of the National Academy of Sciences, 105(31), 10687-10692.
  • [optional] Russell, S. J., and Norvig, P. Artificial Intelligence: A Modern Approach. Chapter 14, Probabilistic reasoning systems.

Program induction and language of thought models

  • Ghahramani, Z. (2015). Probabilistic machine learning and artificial intelligence. Nature, 521(7553), 452.
  • Goodman, N. D., Tenenbaum, J. B., & Gerstenberg, T. (2014). Concepts in a probabilistic language of thought. Center for Brains, Minds and Machines (CBMM).
  • Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266), 1332-1338.

Course policies

Auditing:
Please email the instructors to see whether seats are available. Priority goes to registered students, then to auditors by date of request.

Collaboration and honor code:
We encourage you to discuss the homework assignments with your classmates. However, you must run the simulations and complete the write-ups for the homeworks on your own. Under no circumstances should students look at each other's write-ups or code, or at write-ups or code from previous years.

Late work:
We will take off 10% for each day a homework or final project is late.
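A minimal sketch of how the penalty accumulates, assuming "10% for each day" means 10% of the raw score deducted per full day late (check with the teaching staff for the exact interpretation):

```python
def late_score(raw_score: float, days_late: int) -> float:
    """One plausible reading of the late policy: deduct 10% of the
    raw score for each full day late, never going below zero."""
    penalty = 0.10 * raw_score * days_late
    return max(raw_score - penalty, 0.0)

print(late_score(90.0, 2))  # 90 - 2 * 9 = 72.0
```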

Extra credit:
No extra credit will be given, in the interest of fairness.

Laptops in class:
Laptops in class are discouraged. We know many try to take notes on their laptops, but it’s easy to get distracted (social media, etc.). This can also distract everyone behind you! We encourage you to engage with the class and material, and engage with us as the instructors. Ask questions! All slides are posted so there is no need to copy things down, and paper notes are great too.

Preconfigured cloud environment

Students registered for the course have the option of completing homework assignments on their personal computers, or in a cloud Jupyter environment with all required packages pre-installed. Students can log into the environment here using their GitHub login information, provided they have contacted the TAs with their GitHub username.
