Inspired by related psychological theory, reinforcement learning is a sub-area of machine learning concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. Reinforcement learning algorithms attempt to find a policy that maps states of the world to the actions the agent ought to take in those states. In economics and game theory, reinforcement learning is considered a boundedly rational interpretation of how equilibrium may arise.

The environment is typically formulated as a finite-state Markov decision process (MDP), and reinforcement learning algorithms for this context are closely related to dynamic programming techniques. State transition probabilities and reward probabilities in the MDP are typically stochastic but stationary over the course of the problem.

Reinforcement learning differs from the supervised learning problem in that correct input/output pairs are never presented, nor are sub-optimal actions explicitly corrected. Further, there is a focus on on-line performance, which involves finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge). The exploration versus exploitation trade-off in reinforcement learning has mostly been studied through the multi-armed bandit problem.

Formally, the basic reinforcement learning model consists of:

1. a set of environment states S;
2. a set of actions A; and
3. a set of scalar "rewards" in $\Bbb{R}$.

At each time t, the agent perceives its state $s_t \in S$ and the set of possible actions $A(s_t)$. It chooses an action $a \in A(s_t)$ and receives from the environment the new state $s_{t+1}$ and a reward $r_{t+1}$. Based on these interactions, the reinforcement learning agent must develop a policy $\pi:S\rightarrow A$ which maximizes the quantity $R=r_0 + r_1 + \cdots + r_n$ for MDPs which have a terminal state, or the quantity

$R=\sum_t \gamma^t r_t$

for MDPs without terminal states (where $0\leq\gamma<1$ is some "future reward" discounting factor).
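The interaction loop and the discounted return can be sketched as follows. Everything environment-specific here (the two states, the `step` dynamics, the `go` action) is a made-up illustration; only the accumulation $R=\sum_t \gamma^t r_t$ comes from the definition above.

```python
# A minimal sketch of the agent-environment loop, assuming a
# hypothetical two-state environment with made-up dynamics.

GAMMA = 0.9  # "future reward" discounting factor, 0 <= gamma < 1

def step(state, action):
    """Hypothetical environment dynamics: returns (next_state, reward)."""
    if state == "A" and action == "go":
        return "B", 1.0
    return "A", 0.0

def rollout(policy, start="A", horizon=50):
    """Follow `policy` for `horizon` steps, accumulating gamma^t * r_t."""
    state, R = start, 0.0
    for t in range(horizon):
        state, reward = step(state, policy(state))
        R += GAMMA ** t * reward
    return R

always_go = lambda s: "go"  # a trivial policy pi: S -> A
print(rollout(always_go))
```

The truncation at a finite horizon stands in for the infinite sum; with $\gamma<1$ the neglected tail is bounded by $\gamma^{50}/(1-\gamma)$.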

Thus, reinforcement learning is particularly well suited to problems which include a long-term versus short-term reward trade-off. It has been applied successfully to various problems, including robot control, elevator scheduling, telecommunications, backgammon and chess (Sutton 1998, Chapter 11).

## Algorithms

After we have defined an appropriate return function to be maximized, we need to specify the algorithm that will be used to find the policy with the maximum return.

The naive brute force approach entails the following two steps: a) For each possible policy, sample returns while following it. b) Choose the policy with the largest expected return. One problem with this is that the number of policies can be extremely large, or even infinite. Another is that returns might be stochastic, in which case a large number of samples will be required to accurately estimate the return of each policy. These problems can be ameliorated if we assume some structure and perhaps allow samples generated from one policy to influence the estimates made for another. The two main approaches for achieving this are value function estimation and direct policy optimization.
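The two brute-force steps can be sketched on a hypothetical two-state, two-action MDP; the environment, states, and action names below are invented for illustration, and the stochastic transitions show why many samples per policy are needed.

```python
import itertools
import random

# Brute force sketch: enumerate every deterministic policy on a
# hypothetical 2-state, 2-action MDP, sample returns while following
# each one, and keep the policy with the largest average return.

STATES, ACTIONS = ["s0", "s1"], ["left", "right"]
GAMMA, HORIZON, SAMPLES = 0.9, 20, 30

def step(state, action, rng):
    """Made-up stochastic but stationary dynamics: (next_state, reward)."""
    if state == "s0" and action == "right":
        return ("s1", 1.0) if rng.random() < 0.8 else ("s0", 0.0)
    if state == "s1" and action == "right":
        return ("s1", 1.0)
    return ("s0", 0.0)

def sampled_return(policy, rng):
    """One noisy sample of the discounted return under `policy`."""
    state, R = "s0", 0.0
    for t in range(HORIZON):
        state, r = step(state, policy[state], rng)
        R += GAMMA ** t * r
    return R

rng = random.Random(0)
best = max(
    (dict(zip(STATES, choice))
     for choice in itertools.product(ACTIONS, repeat=len(STATES))),
    key=lambda pi: sum(sampled_return(pi, rng) for _ in range(SAMPLES)) / SAMPLES,
)
print(best)
```

Even here the policy count is $|A|^{|S|} = 4$; for realistic state spaces this enumeration is exactly the blow-up the text warns about.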

Value function approaches do this by maintaining a set of estimates of expected returns for only one policy π (usually either the current or the optimal one). In such approaches one attempts to estimate either the expected return starting from state s and following π thereafter,

V(s) = E[R | s,π],

or the expected return when taking action a in state s and following π thereafter,

Q(s,a) = E[R | s,π,a].

If someone gives us Q for the optimal policy, we can always choose optimal actions by simply choosing the action with the highest value at each state. In order to do this using V, we must either have a model of the environment, in the form of probabilities P(s' | s,a), which allows us to calculate Q simply through

$Q(s,a) = \sum_{s'} V(s')P(s'|s,a),$

or we can employ so-called Actor-Critic methods, in which the model is split into two parts: the critic, which maintains the state value estimate V, and the actor, which is responsible for choosing the appropriate actions at each state.
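The model-based route, computing Q from V and the transition probabilities and then acting greedily, can be sketched directly. The model `P` and the value estimates `V` below are made-up numbers; following the formula above, immediate rewards are folded into V for simplicity.

```python
# A sketch of deriving Q from V and a known model P(s'|s,a), then
# choosing the greedy action. All numbers here are hypothetical.

P = {  # P[(s, a)] -> {s': probability of reaching s'}
    ("s0", "a0"): {"s0": 0.5, "s1": 0.5},
    ("s0", "a1"): {"s1": 1.0},
    ("s1", "a0"): {"s0": 1.0},
    ("s1", "a1"): {"s1": 1.0},
}
V = {"s0": 1.0, "s1": 3.0}  # assumed state-value estimates

def Q(s, a):
    """Q(s,a) = sum over s' of V(s') * P(s'|s,a)."""
    return sum(V[s2] * p for s2, p in P[(s, a)].items())

def greedy_action(s, actions=("a0", "a1")):
    """Pick the action with the highest Q-value in state s."""
    return max(actions, key=lambda a: Q(s, a))

print(Q("s0", "a0"), greedy_action("s0"))
```

Note that this requires the model P; with only V and no model, the sum over successor states cannot be evaluated, which is what motivates the Actor-Critic split.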

Given a fixed policy π, estimating $E[R|\cdot]$ for γ = 0 is trivial, as one only has to average the immediate rewards. The most obvious way to do this for γ > 0 is to average the total return after each state. However, this type of Monte Carlo sampling requires the MDP to terminate.
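The Monte Carlo idea, averaging the total discounted return observed after each visit to a state, can be sketched on a small terminating MDP. The episode structure and reward numbers below are invented for illustration.

```python
import random

# A sketch of Monte Carlo value estimation under a fixed policy:
# run episodes to termination, then average, per state, the
# discounted return following each visit. The MDP is hypothetical.

GAMMA = 0.9

def episode(rng):
    """Made-up terminating MDP: from 'start' we either terminate with
    reward 1, or move to 'mid' with reward 0, which then terminates
    with reward 2. Returns a list of (state, reward) pairs."""
    if rng.random() < 0.5:
        return [("start", 1.0)]
    return [("start", 0.0), ("mid", 2.0)]

def mc_values(n_episodes=10000, seed=0):
    rng = random.Random(seed)
    returns = {}  # state -> list of sampled returns
    for _ in range(n_episodes):
        G = 0.0
        # Work backwards so G is the return following each visit.
        for state, reward in reversed(episode(rng)):
            G = reward + GAMMA * G
            returns.setdefault(state, []).append(G)
    return {s: sum(g) / len(g) for s, g in returns.items()}

V = mc_values()
print(V)
```

The true values here are V(mid) = 2 and V(start) = 0.5 · 1 + 0.5 · (0 + 0.9 · 2) = 1.4, and the sample averages approach them; without a terminal state the inner sum would never finish, which is the limitation the text notes.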

Thus, carrying out this estimation for γ > 0 in the general case does not seem obvious. In fact, it is quite simple once one realises that the expectation of R forms a recursive Bellman equation:

$E[R|s_t] = r_t + \gamma E[R|s_{t+1}]$

By replacing those expectations with our estimates, V, and performing gradient descent with a squared error cost function, we obtain the temporal difference learning algorithm TD(0). In the simplest case, the set of states and actions are both discrete and we maintain tabular estimates for each state. Similar state-action pair methods are the Adaptive Heuristic Critic (AHC), SARSA and Q-learning. All methods feature extensions whereby some approximating architecture is used, though in some cases convergence is not guaranteed. The estimates are usually updated with some form of gradient descent, though there have been recent developments with least squares methods for the linear approximation case.
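Substituting the estimate V into the Bellman equation gives the tabular TD(0) update V(s) ← V(s) + α(r + γV(s') − V(s)), which can be sketched on a deterministic two-state chain; the chain itself and the step size α are made-up choices for illustration.

```python
# A sketch of tabular TD(0) policy evaluation on a hypothetical
# deterministic two-state chain: 'A' goes to 'B' with reward 0,
# 'B' returns to 'A' with reward 1 (a fixed policy is assumed).

GAMMA, ALPHA = 0.9, 0.1

def step(state):
    """Environment under the fixed policy: (next_state, reward)."""
    return ("B", 0.0) if state == "A" else ("A", 1.0)

def td0(n_steps=20000):
    V = {"A": 0.0, "B": 0.0}
    state = "A"
    for _ in range(n_steps):
        nxt, r = step(state)
        # TD(0): move V(s) toward the bootstrapped target r + gamma*V(s').
        V[state] += ALPHA * (r + GAMMA * V[nxt] - V[state])
        state = nxt
    return V

V = td0()
print(V)
```

Solving the Bellman equation exactly for this chain gives V(B) = 1/(1 − γ²) ≈ 5.26 and V(A) = γV(B) ≈ 4.74, and the TD(0) estimates converge there without ever waiting for the (non-terminating) MDP to end.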

The above methods not only converge to the correct estimates for a fixed policy, but can also be used to find the optimal policy. This is usually done by following a policy π that is somehow derived from the current value estimates, i.e. by choosing the action with the highest evaluation most of the time, while still occasionally taking random actions in order to explore the space. Proofs of convergence to the optimal policy also exist for the algorithms mentioned above, under certain conditions. However, all those proofs only demonstrate asymptotic convergence, and little is known theoretically about the behaviour of RL algorithms in the small-sample case, apart from within very restricted settings.
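The "highest evaluation most of the time, random actions occasionally" scheme is the ε-greedy rule, sketched here with tabular Q-learning (one of the methods named above) on a made-up two-state MDP; the states, actions, and constants are all invented for illustration.

```python
import random

# A sketch of epsilon-greedy control with tabular Q-learning on a
# hypothetical MDP where being in 's1' pays reward 1 each step.

GAMMA, ALPHA, EPS = 0.9, 0.1, 0.1
STATES, ACTIONS = ("s0", "s1"), ("stay", "move")

def step(state, action):
    """Made-up dynamics: 'move' toggles the state; landing in 's1' pays 1."""
    nxt = ("s1" if state == "s0" else "s0") if action == "move" else state
    return nxt, (1.0 if nxt == "s1" else 0.0)

def q_learning(n_steps=20000, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = "s0"
    for _ in range(n_steps):
        if rng.random() < EPS:                           # explore
            action = rng.choice(ACTIONS)
        else:                                            # exploit
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, r = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
        state = nxt
    return Q

Q = q_learning()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

The occasional random action guarantees every state-action pair keeps being visited, which is one of the "certain conditions" the asymptotic convergence proofs require.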

An alternative method to find the optimal policy is to search directly in policy space. Policy space methods define the policy as a parameterised function π(s,θ) with parameters θ. Commonly, a gradient method is employed to adjust the parameters. However, the application of gradient methods is not trivial, since no gradient information is assumed; rather, the gradient itself must be estimated from noisy samples of the return. Since this greatly increases the computational cost, it can be advantageous to use a more powerful gradient method than steepest gradient descent. Policy space gradient methods have received a lot of attention in the last five years and have now reached a relatively mature stage, but they remain an active field of research. There are many other approaches, such as simulated annealing, that can be taken to explore the policy space. Other direct optimization techniques, such as evolutionary computation, are used in evolutionary robotics.
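Estimating the gradient from noisy return samples can be sketched with the simplest possible setup: a one-parameter stochastic policy and a central finite-difference estimate of dE[R]/dθ. The sigmoid parameterisation, the bandit-like environment, and all constants are assumptions made for illustration (practical methods use smarter estimators, which is the "more powerful gradient method" point above).

```python
import math
import random

# A sketch of direct policy search: pi(move | s; theta) = sigmoid(theta),
# with the gradient of the expected return estimated by finite
# differences over noisy sampled returns. Environment is hypothetical.

GAMMA, HORIZON = 0.9, 20

def sampled_return(theta, rng):
    """One rollout: each step, 'move' (reward 1) is chosen with
    probability sigmoid(theta); otherwise 'stay' (reward 0)."""
    p_move = 1.0 / (1.0 + math.exp(-theta))
    return sum(GAMMA ** t * (1.0 if rng.random() < p_move else 0.0)
               for t in range(HORIZON))

def estimate_gradient(theta, rng, eps=0.5, samples=200):
    """Central finite-difference estimate of dE[R]/dtheta from samples."""
    up = sum(sampled_return(theta + eps, rng) for _ in range(samples))
    dn = sum(sampled_return(theta - eps, rng) for _ in range(samples))
    return (up - dn) / (2 * eps * samples)

rng = random.Random(0)
theta = 0.0
for _ in range(50):  # gradient ascent on the estimated gradient
    theta += 0.5 * estimate_gradient(theta, rng)
print(round(theta, 2))
```

Each gradient estimate here costs 400 rollouts, which illustrates concretely why sample cost dominates policy-gradient methods.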

## Current research

Current research topics include: alternative representations (such as the Predictive State Representation approach), gradient descent in policy space, small-sample convergence results, algorithms and convergence results for partially observable MDPs (POMDPs), and modular and hierarchical reinforcement learning. Recently, reinforcement learning has been used in the domain of psychology to explain human learning and performance. In particular, it has been used in cognitive models that simulate human performance during problem solving and/or skill acquisition (e.g., Gray, Sims, Fu, & Schoelles, 2006; Fu & Anderson, 2006). It has also been used to propose a model of the human error-processing system (Holroyd & Coles, 2002). Multiagent or distributed reinforcement learning is also a topic of interest in current research in this field.

## References

• Kaelbling, Leslie P.; Michael L. Littman; Andrew W. Moore (1996). "Reinforcement Learning: A Survey". Journal of Artificial Intelligence Research 4: 237–285.
• Bertsekas, Dimitri P.; John Tsitsiklis (1996). Neuro-Dynamic Programming. Nashua, NH: Athena Scientific. ISBN 1-886529-10-8.
• Ron Sun, E. Merrill, and T. Peterson (2001). "From implicit skills to explicit knowledge: A bottom-up model of skill learning". Cognitive Science, Vol. 25, No. 2, pp. 203–244. http://www.cogsci.rpi.edu/~rsun/sun.cs01.pdf
• Ron Sun, P. Slusarz, and C. Terry (2005). "The interaction of the explicit and the implicit in skill learning: A dual-process approach". Psychological Review, Vol. 112, No. 1, pp. 159–192. http://www.cogsci.rpi.edu/~rsun/sun-pr2005-f.pdf