[Illinois]: Perturbative Reinforcement Learning Using Directed Drift

By AbderRahman N Sobh

University of Illinois at Urbana-Champaign

This tool trains two-layer networks of sigmoidal units to associate patterns using a real-valued adaptation of the directed drift algorithm.

Version 1.0c - published on 19 Aug 2013

DOI: 10.4231/D3MP4VN4X

This tool is closed source.

Supporting Documents


Default Input Simulation

Category: Tools

Published on: 19 Aug 2013

Abstract

From Tutorial on Neural Systems Modeling, Chapter 7:

In the directed drift algorithm (Venkatesh 1993), input patterns are presented to the network, and one or several randomly chosen weights have their binary values flipped if the output is in error, but the weights are left unperturbed otherwise. Directed drift is proven to work in this restricted context (Venkatesh 1993). We explore its use for real-valued weights in the next example.
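A minimal sketch of one way such a real-valued perturbative update could be implemented is shown below (Python/NumPy). The network size, the number and magnitude of perturbations, the error tolerance, and the toy association task are all illustrative assumptions, not the tool's or the textbook's actual settings.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(x, W_hid, W_out):
        # two-layer network of sigmoidal units
        hidden = sigmoid(W_hid @ x)
        return sigmoid(W_out @ hidden)

    def directed_drift(patterns, targets, n_hidden=4, n_perturb=2,
                       step=0.1, tol=0.1, n_presentations=20000):
        # Real-valued directed drift (sketch): when the output for a
        # presented pattern is in error, nudge a few randomly chosen
        # weights by a small random amount; otherwise leave the
        # weights unperturbed.
        n_in = patterns.shape[1]
        n_out = targets.shape[1]
        W_hid = rng.uniform(-1.0, 1.0, (n_hidden, n_in))
        W_out = rng.uniform(-1.0, 1.0, (n_out, n_hidden))
        for _ in range(n_presentations):
            i = rng.integers(len(patterns))        # present one pattern
            y = forward(patterns[i], W_hid, W_out)
            if np.max(np.abs(y - targets[i])) <= tol:
                continue                           # output correct: no change
            for W in (W_hid, W_out):               # output in error: perturb
                rows = rng.integers(W.shape[0], size=n_perturb)
                cols = rng.integers(W.shape[1], size=n_perturb)
                W[rows, cols] += step * rng.standard_normal(n_perturb)
        return W_hid, W_out

    # toy pattern-association task (purely illustrative)
    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    T = np.array([[0.0], [0.0], [0.0], [1.0]])
    W_hid, W_out = directed_drift(X, T)
    for x, t in zip(X, T):
        print(x, "->", forward(x, W_hid, W_out), "target", t)

Because perturbations are applied only when the output is in error, correct responses are never disturbed; that asymmetry is the essential "directed" aspect of the drift.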

Sponsored by

NanoBio Node, University of Illinois at Urbana-Champaign

Cite this work

Researchers should cite this work as follows:

  • Thomas J. Anastasio (2010), Tutorial on Neural Systems Modeling, Sinauer Associates Inc.
  • AbderRahman N Sobh (2013), "[Illinois]: Perturbative Reinforcement Learning Using Directed Drift," http://nanohub.org/resources/pertdd. (DOI: 10.4231/D3MP4VN4X).



nanoHUB.org, a resource for nanoscience and nanotechnology, is supported by the National Science Foundation and other funding agencies. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.