

How Does the Stock Forecast Model Work?


Decision Making

AI-Based Risk Map

How to Read Graph

Case Study

Solving Risk Problems

Further Reading



DECISION MAKING


Stock price realization can be modeled as a decision-making process among multiple investors, each of whom controls a subset of design variables and seeks to minimize their own cost function subject to future forecast constraints. That is, investors act like players in a game, cooperating to achieve a set of overall goals. Ensemble machine learning combines multiple learning algorithms to obtain better predictive power; in our research, we use it to combine the forecasts of a neural network and a support vector machine. The AI-based risk heat map is a tool that presents the results of an investment-risk assessment process visually, in a meaningful and concise way.
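As a rough illustration of this ensemble idea, the sketch below averages the forecasts of a neural network and a support vector machine using scikit-learn's VotingRegressor. The toy features, targets, and hyperparameters are placeholders for illustration, not our production configuration.

import numpy as np
from sklearn.ensemble import VotingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

# Toy data standing in for lagged-price features and a next-period target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=500)

# Average the two base forecasts; weights could be tuned on validation data.
ensemble = VotingRegressor([
    ("nn", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
    ("svm", SVR(kernel="rbf", C=1.0)),
])
ensemble.fit(X, y)
print(ensemble.predict(X[:3]))  # combined forecasts for the first three samples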

HOW TO READ GRAPH: CASE STUDY


Dominant strategy: STRONG BUY

n parameter (the time series to forecast):

n+1    now + 1 day
n+7    now + 7 days
n+15   now + 15 days
n+3m   now + 3 months
n+6m   now + 6 months
n+1y   now + 1 year

x-axis: Likelihood (%)
y-axis: Potential Impact (%)
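To make the axes concrete, here is a minimal matplotlib sketch of such a heat map. The horizon points and their likelihood/impact coordinates are invented for illustration; only the axis conventions follow the legend above.

import numpy as np
import matplotlib.pyplot as plt

horizons = ["n+1", "n+7", "n+15", "n+3m", "n+6m", "n+1y"]
likelihood = [20, 35, 45, 60, 70, 80]   # hypothetical likelihoods, %
impact = [10, 25, 30, 50, 65, 75]       # hypothetical potential impacts, %

# Background shading: risk grows with likelihood x impact (green = low, red = high).
gx, gy = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))
plt.imshow(gx * gy, origin="lower", extent=(0, 100, 0, 100), cmap="RdYlGn_r", alpha=0.5)

plt.scatter(likelihood, impact, color="black")
for h, x, y in zip(horizons, likelihood, impact):
    plt.annotate(h, (x, y), textcoords="offset points", xytext=(5, 5))

plt.xlabel("Likelihood (%)")
plt.ylabel("Potential Impact (%)")
plt.title("AI-based risk heat map (illustrative)")
plt.show()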

SOLVING RISK PROBLEMS


In many sequential decision-making problems one is interested in minimizing an expected cumulative cost while taking risk into account, i.e., placing increased weight on events of small probability and high consequence. Accordingly, the objective of this project is to present efficient reinforcement learning algorithms for risk-constrained Markov decision processes (MDPs), where risk is represented via a chance constraint or a constraint on the conditional value-at-risk (CVaR) of the cumulative cost. We collectively refer to such problems as percentile risk-constrained MDPs. Specifically, we first derive a formula for computing the gradient of the Lagrangian function for percentile risk-constrained MDPs. Then, we devise policy gradient and actor-critic algorithms that estimate this gradient, update the policy in the descent direction, and update the Lagrange multiplier in the ascent direction. For these algorithms we prove convergence to locally optimal policies. Finally, we demonstrate the effectiveness of our algorithms in an optimal stopping problem and an online forecast application.
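The primal-dual pattern described above can be sketched in a few lines. The toy example below is not the project's actor-critic algorithm: it uses a plain score-function (REINFORCE-style) gradient on a one-step problem with a made-up cost distribution, an empirical CVaR estimator, and assumed values for the confidence level alpha and the CVaR budget beta. It only illustrates descending on the Lagrangian in the policy parameter while ascending in the multiplier.

import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 0.9, 2.0             # assumed CVaR level and budget
theta, lam = 0.0, 0.0              # policy parameter and Lagrange multiplier
lr_theta, lr_lam, batch = 0.05, 0.01, 2000

def sample_costs(p_risky, n):
    # Hypothetical one-step problem: a safe action with cost ~ N(1, 0.1) and a
    # risky action that is cheaper on average but carries a heavy-tailed cost.
    a = rng.random(n) < p_risky
    safe = rng.normal(1.0, 0.1, n)
    risky = np.where(rng.random(n) < 0.1, 5.0, 0.5)
    return a, np.where(a, risky, safe)

for step in range(3000):
    p = 1.0 / (1.0 + np.exp(-theta))             # P(risky action) under the policy
    a, c = sample_costs(p, batch)
    score = np.where(a, 1.0 - p, -p)             # d/dtheta of log pi(a)
    var = np.quantile(c, alpha)                  # empirical value-at-risk
    cvar = c[c >= var].mean()                    # empirical CVaR_alpha of the cost
    g_cost = np.mean(score * (c - c.mean()))     # gradient of E[cost], with baseline
    g_cvar = np.mean(score * np.maximum(c - var, 0.0)) / (1.0 - alpha)
    theta -= lr_theta * (g_cost + lam * g_cvar)  # primal descent on the Lagrangian
    lam = max(0.0, lam + lr_lam * (cvar - beta)) # dual ascent, projected to lam >= 0

print(f"P(risky) = {1/(1+np.exp(-theta)):.3f}, lambda = {lam:.3f}, CVaR ~ {cvar:.2f}")

As the constraint is violated, the multiplier grows and the policy is pushed toward the safe action even though the risky action has lower expected cost.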

FURTHER READING


Deep Reinforcement Learning in Large Discrete Action Spaces
Applying reasoning in an environment with a large number of discrete actions to bring reinforcement learning to a wider class of problems.


Deep Reinforcement Learning with Attention for Slate Markov Decision Processes with High-Dimensional States and Actions
Introducing slate Markov Decision Processes (MDPs), a formulation that allows reinforcement learning to be applied to recommender system problems.


Massively Parallel Methods for Deep Reinforcement Learning
Presenting the first massively distributed architecture for deep reinforcement learning.


Adaptive Lambda Least-Squares Temporal Difference Learning
Learning to select the best value of λ (which controls the timescale of updates) for TD(λ) to ensure the best result when trading off bias against variance. 


Learning from Demonstrations for Real World Reinforcement Learning
Presenting Deep Q-learning from Demonstrations (DQfD), an algorithm that leverages data from previous control of a system to accelerate learning.


Value-Decomposition Networks For Cooperative Multi-Agent Learning
Studying the problem of cooperative multi-agent reinforcement learning with a single joint reward signal.


Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step
Demonstrating an alternative view of the training of GANs.


Risk-Constrained Reinforcement Learning with Percentile Risk Criteria
Presenting efficient reinforcement learning algorithms for risk-constrained Markov decision processes (MDPs) and demonstrating their effectiveness in an optimal stopping problem and an online marketing application.


