
[Illinois]: Perturbative Reinforcement Learning Using Directed Drift

By AbderRahman N Sobh¹, Jessica S Johnson¹, NanoBio Node¹

1. University of Illinois at Urbana-Champaign

This tool trains two-layered networks of sigmoidal units to associate patterns using a real-valued adaptation of the directed drift algorithm.
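To make the idea concrete, here is a hypothetical Python sketch of a real-valued perturbative rule for a two-layer sigmoidal network: one randomly chosen weight is nudged by a small random amount, and the perturbation is kept only if the mean squared output error does not increase. The function names, network size, and acceptance rule are illustrative assumptions, not the tool's actual code.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_perturbative(X, T, n_hidden=4, scale=0.1, max_steps=20_000, tol=0.05):
    """Perturb one randomly chosen real-valued weight at a time; keep the
    perturbation only if the mean squared output error does not increase,
    otherwise revert it (a hill-climbing analogue of directed drift)."""
    V = rng.normal(0.0, 1.0, (n_hidden, X.shape[1]))  # input -> hidden weights
    U = rng.normal(0.0, 1.0, (1, n_hidden))           # hidden -> output weights

    def error():
        Y = sigmoid(sigmoid(X @ V.T) @ U.T).ravel()
        return np.mean((T - Y) ** 2)

    e = error()
    for _ in range(max_steps):
        if e < tol:
            break
        W = V if rng.random() < 0.5 else U      # pick a layer at random
        r = rng.integers(W.shape[0])
        c = rng.integers(W.shape[1])
        old = W[r, c]
        W[r, c] += rng.normal(0.0, scale)       # small random perturbation
        e_new = error()
        if e_new <= e:
            e = e_new                           # keep it: error did not rise
        else:
            W[r, c] = old                       # revert it

    return V, U, e

# Toy association: map [0, 0] -> 0 and [1, 1] -> 1.
X = np.array([[0.0, 0.0], [1.0, 1.0]])
T = np.array([0.0, 1.0])
V, U, e = train_perturbative(X, T)
```

Because perturbations that raise the error are always reverted, the error is non-increasing over training, which is what makes the otherwise random drift "directed."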


Version 1.0d - published on 06 Aug 2014

doi:10.4231/D39P2W68W

Open source




From Tutorial on Neural Systems Modeling, Chapter 7: In the directed drift algorithm (Venkatesh 1993), input patterns are presented to the network, and one or several randomly chosen weights have their binary values flipped if the output is in error, but the weights are left unperturbed otherwise. Directed drift is proven to work in this restricted context (Venkatesh 1993). We explore its use for real-valued weights in the next example.
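The binary-weight rule described in the excerpt can be sketched as follows. This is a hypothetical Python illustration for a single ±1-weight threshold unit; the function name, flip count, and convergence check are assumptions, not Venkatesh's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def directed_drift_binary(patterns, targets, n_flips=1, max_steps=10_000):
    """Directed drift for a single threshold unit with +/-1 weights.

    Present a randomly chosen pattern; if the output is in error, flip
    the sign of a few randomly chosen weights, otherwise leave the
    weights unperturbed.
    """
    n = patterns.shape[1]
    w = rng.choice([-1, 1], size=n)
    for step in range(max_steps):
        i = rng.integers(len(patterns))
        x, t = patterns[i], targets[i]
        y = 1 if w @ x > 0 else -1
        if y != t:
            idx = rng.choice(n, size=n_flips, replace=False)
            w[idx] *= -1                    # flip binary values on error
        if np.array_equal(np.where(patterns @ w > 0, 1, -1), targets):
            return w, step                  # all patterns classified correctly
    return w, max_steps

patterns = np.array([[1, 1, -1, -1], [-1, -1, 1, 1]])
targets = np.array([1, -1])
w, steps = directed_drift_binary(patterns, targets)
```

On this tiny separable problem the random walk over the ±1 weight hypercube typically finds a solution in a handful of steps; the tool generalizes the same perturb-on-error idea to real-valued weights.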

Sponsored by

NanoBio Node, University of Illinois at Urbana-Champaign

Cite this work

Researchers should cite this work as follows:

  • Tutorial on Neural Systems Modeling, Copyright 2010 Sinauer Associates Inc. Author: Thomas J. Anastasio
  • AbderRahman N Sobh; Jessica S Johnson; NanoBio Node (2014), "[Illinois]: Perturbative Reinforcement Learning Using Directed Drift," (DOI: 10.4231/D39P2W68W).


nanoHUB.org, a resource for nanoscience and nanotechnology, is supported by the National Science Foundation and other funding agencies. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.