NYU PSYCH-GA 3405.002 / DS-GS 3001.006 : Computational cognitive modeling

Archive - Computational cognitive modeling

Instructors: Brenden Lake and Todd Gureckis

Lecture schedule

Live lectures are held on Thursdays:

  • Thurs. Jan. 25: Introduction (slides)
  • Thurs. Feb. 1: Neural networks / Deep learning (part 1) (slides)
    • Homework 1 assigned (Due 2/15) (instructions for accessing here)
  • Thurs. Feb. 8: Neural networks / Deep learning (part 2) (slides)
  • Thurs. Feb. 15: Reinforcement learning (part 1)
  • Thurs. Feb. 22: Reinforcement learning (part 2)
    • Homework 2 assigned (Due 3/7) (instructions for accessing here)
  • Thurs. Feb. 29: Reinforcement learning (part 3)
  • Thurs. Mar. 7: Bayesian modeling (part 1)
    • Homework 3 assigned (Due 3/28) (instructions for accessing here)
  • Thurs. Mar. 14: Bayesian modeling (part 2) (same slides as part 1)
  • Thurs. Mar. 21: No class, Spring break
  • Thurs. Mar. 28: Model comparison and fitting, tricks of the trade
  • Thurs. Apr. 4: Categorization (slides)
    • Project proposal is due
    • Homework 4 assigned (Due 4/18) (instructions for accessing here)
  • Thurs. Apr. 11: Probabilistic graphical models
  • Thurs. Apr. 18: Information sampling and active learning
  • Thurs. Apr. 25: Program induction and language of thought models
  • Thurs. May 2: Computational Cognitive Neuroscience
  • Final project due May 8

Lab schedule

Fridays, 12:30-1:20 PM (in person or on Zoom)

  • Fri. Jan 26, Python and Jupyter notebooks review
  • Fri. Feb 2, Introduction to PyTorch (see the short PyTorch sketch after this list)
  • Fri. Feb 9, HW 1 Review
  • Fri. Feb 16, No lab
  • Fri. Feb 23, Reinforcement learning
  • Fri. Mar 1, HW 2 review
  • Fri. Mar 8, Probability Review
  • Fri. Mar 15, HW 3 Review
  • Fri. Mar 22, No lab (Spring break)
  • Fri. Mar 29, No lab
  • Fri. Apr 5, TBD
  • Fri. Apr 12, HW 4 Review
  • Fri. Apr 19, TBD
  • Fri. Apr 26, TBD
  • Fri. May 3, TBD
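
As a taste of what the "Introduction to PyTorch" lab covers, here is a minimal sketch of defining and training a small network in PyTorch. It is illustrative only: the architecture, the random regression data, and the hyperparameters are assumptions, not course code.

    # Minimal PyTorch sketch: a tiny two-layer network trained on random data.
    # Illustrative only -- the model, data, and hyperparameters are made up.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Fake regression data: 100 examples, 10 input features, 1 target.
    X = torch.randn(100, 10)
    y = torch.randn(100, 1)

    model = nn.Sequential(
        nn.Linear(10, 32),
        nn.ReLU(),
        nn.Linear(32, 1),
    )
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for epoch in range(100):
        optimizer.zero_grad()        # clear accumulated gradients
        loss = loss_fn(model(X), y)  # forward pass + mean-squared error
        loss.backward()              # backpropagation
        optimizer.step()             # one gradient-descent update

    print(f"final training loss: {loss.item():.4f}")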

Readings

For each major topic, there are assigned readings that go with the lectures. The papers were selected as fundamental readings on each topic that any computational cognitive scientist would be expected to know. You should aim to read them over the course of the semester, especially while we are covering the corresponding topic. The papers should be downloadable from Google Scholar via the NYU Library; they are also available for download on the EdStem website under the "resources" tab (see the down-pointing arrow along the top bar or this link).

Neural networks and deep learning

  • McClelland, J. L., Rumelhart, D. E., & Hinton, G. E. The Appeal of Parallel Distributed Processing. Vol I, Ch 1.
  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436-444.
  • McClelland, J. L., & Rogers, T. T. (2003). The parallel distributed processing approach to semantic cognition. Nature Reviews Neuroscience, 4(4), 310-322.
  • Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179-211.
  • Peterson, J., Abbott, J., & Griffiths, T. (2016). Adapting Deep Network Features to Capture Psychological Representations. Presented at the 38th Annual Conference of the Cognitive Science Society.

Reinforcement learning and decision making

  • Gureckis, T.M. and Love, B.C. (2015) Reinforcement learning: A computational perspective. Oxford Handbook of Computational and Mathematical Psychology, Edited by Busemeyer, J.R., Townsend, J., Zheng, W., and Eidels, A., Oxford University Press, New York, NY.
  • Daw, N.D. (2013). "Advanced Reinforcement Learning." Chapter in Neuroeconomics: Decision making and the brain, 2nd edition.
  • Niv, Y. and Schoenbaum, G. (2008). "Dialogues on prediction errors." Trends in Cognitive Sciences, 12(7), 265-272.
  • Daw, N.D., O'Doherty, J.P., Dayan, P., Seymour, B., & Dolan, R.J. (2006). Cortical substrates for exploratory decisions in humans. Nature, 441, 876-879.
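
To make the flavor of these readings concrete, here is a minimal sketch of a delta-rule (Q-learning) agent with softmax choice on a two-armed bandit. The reward probabilities, learning rate, and temperature below are made-up values for illustration, not parameters from the course or the readings.

    # Minimal Q-learning agent on a two-armed bandit (illustrative values only).
    import numpy as np

    rng = np.random.default_rng(0)

    p_reward = [0.3, 0.7]  # assumed reward probability of each arm
    alpha = 0.1            # learning rate
    beta = 3.0             # softmax inverse temperature
    Q = np.zeros(2)        # learned action values

    for t in range(1000):
        probs = np.exp(beta * Q) / np.exp(beta * Q).sum()  # softmax choice rule
        a = rng.choice(2, p=probs)
        r = float(rng.random() < p_reward[a])              # Bernoulli reward
        Q[a] += alpha * (r - Q[a])                         # prediction-error update

    print("learned values:", Q.round(2))  # should approach the true reward rates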

Bayesian modeling

  • Russell, S. J., and Norvig, P. Artificial Intelligence: A Modern Approach. Chapter 13, Uncertainty.
  • Tenenbaum, J. B., and Griffiths, T. L. (2001). Generalization, similarity, and Bayesian inference. Behavioral and Brain Sciences, 24(4), 629-640.
  • Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022), 1279-1285.
  • Ghahramani, Z. (2015). Probabilistic machine learning and artificial intelligence. Nature, 521(7553), 452.
  • MacKay, D. (2003). Chapter 29: Monte Carlo Methods. In Information Theory, Inference, and Learning Algorithms.
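
As a small, concrete companion to these readings, here is a grid approximation of the posterior over a coin's bias. The flips and the uniform prior are made up for illustration; this is not code from the course.

    # Grid approximation of the posterior over a coin's bias theta.
    # The observed flips and the uniform prior are assumptions for illustration.
    import numpy as np

    theta = np.linspace(0, 1, 101)            # candidate values of the bias
    prior = np.ones_like(theta) / len(theta)  # uniform prior

    heads, tails = 7, 3                       # made-up data: 7 heads, 3 tails
    likelihood = theta**heads * (1 - theta)**tails

    posterior = prior * likelihood
    posterior /= posterior.sum()              # normalize (Bayes' rule)

    print("posterior mean of theta:", round(float((theta * posterior).sum()), 3))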

Rational versus mechanistic modeling approaches

  • Jones, M. & Love, B.C. (2011). Bayesian Fundamentalism or Enlightenment? On the Explanatory Status and Theoretical Contributions of Bayesian Models of Cognition. Behavioral and Brain Sciences (target article).
  • Griffiths, T.L., Lieder, F., & Goodman, N.D. (2015). Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic. Topics in Cognitive Science, 7(2), 217-229.

Model comparison and fitting, tricks of the trade

  • Wilson, R.C. and Collins, A.G.E. (2019). Ten simple rules for the computational modeling of behavioral data. eLife, 8, e49547.
  • Pitt, M.A. and Myung, I.J. (2002). When a good fit can be bad. Trends in Cognitive Sciences, 6(10), 421-425.
  • Roberts, S. & Pashler, H. (2000) How persuasive is a good fit? A comment on theory testing. Psychological Review, 107, 358-367.
  • [optional] Myung, I.J. (2003). Tutorial on maximum likelihood estimation. Journal of Mathematical Psychology, 47, 90-100.
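
In the spirit of the Myung (2003) tutorial listed above, here is a minimal maximum-likelihood fit of a single parameter using scipy. The simulated Bernoulli data and the true rate of 0.65 are assumptions for illustration only.

    # Maximum-likelihood estimate of a Bernoulli success probability.
    # The simulated data are an assumption for illustration.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    data = rng.random(200) < 0.65   # simulate 200 Bernoulli(0.65) trials

    def negative_log_likelihood(p):
        eps = 1e-12  # avoid log(0) at the boundaries
        return -np.sum(data * np.log(p + eps) + (~data) * np.log(1 - p + eps))

    result = minimize_scalar(negative_log_likelihood, bounds=(0, 1), method="bounded")
    print(f"ML estimate: {result.x:.3f} (true value used to simulate: 0.65)")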

Probabilistic graphical models

  • Charniak, E. (1991). Bayesian networks without tears. AI Magazine, 50-63.
  • Kemp, C., & Tenenbaum, J. B. (2008). The discovery of structural form. Proceedings of the National Academy of Sciences, 105(31), 10687-10692.
  • [optional] Russell, S. J., and Norvig, P. Artificial Intelligence: A Modern Approach. Chapter 14, Probabilistic reasoning systems.
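
To ground the graphical-models readings, here is a tiny Bayesian network with exact inference by enumeration. The network structure (Rain and Sprinkler both influence WetGrass) and all probabilities are made up for illustration.

    # Exact inference by enumeration in a tiny Bayesian network:
    #   Rain -> WetGrass <- Sprinkler
    # All probabilities are made-up illustrations, not values from the readings.
    from itertools import product

    P_rain = {True: 0.2, False: 0.8}
    P_sprinkler = {True: 0.1, False: 0.9}
    P_wet_given = {  # P(WetGrass=True | Rain, Sprinkler)
        (True, True): 0.99, (True, False): 0.90,
        (False, True): 0.80, (False, False): 0.05,
    }

    def joint(rain, sprinkler, wet):
        # Joint probability factorizes according to the network structure.
        p_wet = P_wet_given[(rain, sprinkler)]
        return P_rain[rain] * P_sprinkler[sprinkler] * (p_wet if wet else 1 - p_wet)

    # P(Rain=True | WetGrass=True): sum out Sprinkler, then normalize.
    numerator = sum(joint(True, s, True) for s in (True, False))
    evidence = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    print(f"P(Rain | WetGrass) = {numerator / evidence:.3f}")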

Program induction and language of thought models

  • Ghahramani, Z. (2015). Probabilistic machine learning and artificial intelligence. Nature, 521(7553), 452.
  • Goodman, N. D., Tenenbaum, J. B., & Gerstenberg, T. (2014). Concepts in a probabilistic language of thought. Center for Brains, Minds and Machines (CBMM).
  • Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266), 1332-1338.

Computational Cognitive Neuroscience

  • Kriegeskorte, N. and Douglas, P.K. (2018). Cognitive computational neuroscience. Nature Neuroscience, 21(9), 1148-1160. doi:10.1038/s41593-018-0210-5
  • Turner, B.M., Forstmann, B.U., Love, B.C., Palmeri, T.J., & Van Maanen, L. (2017). Approaches to analysis in model-based cognitive neuroscience. Journal of Mathematical Psychology, 76(B), 65-79.