Publications

Research Reports

1-Deep Reinforcement Learning in Large Discrete Action Spaces
Applying reasoning in an environment with a large number of discrete actions to bring reinforcement learning to a wider class of problems.
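
For intuition, here is a minimal sketch (not the paper's exact architecture) of one way to act over a very large discrete action set: embed every action in a continuous space, propose a continuous "proto-action", and evaluate only its k nearest discrete neighbours with a critic. The embedding matrix, the toy Q-function, and k below are illustrative assumptions.

```python
# Illustrative sketch: handling a huge discrete action set by embedding actions
# in a continuous space, proposing a continuous "proto-action", and then
# evaluating only the k nearest discrete actions. The embeddings, the toy
# Q-function, and k are assumptions for illustration, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS, DIM, K = 100_000, 8, 10
action_embeddings = rng.normal(size=(N_ACTIONS, DIM))  # one vector per discrete action


def toy_q(state, action_vec):
    """Stand-in critic: score a (state, action-embedding) pair."""
    return float(state @ action_vec)


def select_action(state, proto_action):
    """Map a continuous proto-action to a concrete discrete action."""
    # 1. Find the k nearest discrete actions to the proto-action (brute force
    #    here; an approximate nearest-neighbour index would be used at scale).
    dists = np.linalg.norm(action_embeddings - proto_action, axis=1)
    candidates = np.argpartition(dists, K)[:K]
    # 2. Re-rank the small candidate set with the critic and pick the best.
    scores = [toy_q(state, action_embeddings[a]) for a in candidates]
    return int(candidates[int(np.argmax(scores))])


state = rng.normal(size=DIM)
proto = rng.normal(size=DIM)          # in practice, the output of an actor network
print("chosen action id:", select_action(state, proto))
```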

2-Deep Reinforcement Learning with Attention for Slate Markov Decision Processes with High-Dimensional States and Actions
Introducing slate Markov Decision Processes (MDPs), a formulation that allows reinforcement learning to be applied to recommender system problems.
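
As a toy illustration of the slate setting, the sketch below treats the action as an ordered slate of item ids consumed by a simple simulated user. The user model, rewards, and sizes are invented for illustration and are not the paper's formulation.

```python
# Toy illustration of a slate-style environment: the agent's action is an
# ordered slate of item ids, and the environment (a very simple user model)
# consumes the whole slate. The user model and rewards are invented here.
import random

N_ITEMS, SLATE_SIZE = 50, 3


class ToySlateEnv:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        # Hidden per-item appeal, unknown to the agent.
        self.appeal = [self.rng.random() for _ in range(N_ITEMS)]
        self.interest = 1.0  # crude "user engagement" state

    def step(self, slate):
        """slate: ordered list of SLATE_SIZE item ids."""
        assert len(slate) == SLATE_SIZE
        # The user clicks the first sufficiently appealing item, or nothing.
        for item in slate:
            if self.rng.random() < self.appeal[item] * self.interest:
                self.interest = min(1.0, self.interest + 0.05)
                return self.interest, 1.0, False   # next state, reward, done
        self.interest -= 0.1                        # no click: engagement drops
        return self.interest, 0.0, self.interest <= 0.0


env = ToySlateEnv()
total = 0.0
for t in range(100):                                   # short fixed-length episode
    slate = random.sample(range(N_ITEMS), SLATE_SIZE)  # random slate policy
    _, reward, done = env.step(slate)
    total += reward
    if done:
        break
print("return of a random slate policy over one episode:", total)
```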

3-Massively Parallel Methods for Deep Reinforcement Learning
Presenting the first massively distributed architecture for deep reinforcement learning.

4-Adaptive Lambda Least-Squares Temporal Difference Learning
Learning to select the best value of λ (which controls the timescale of updates) for TD(λ) to ensure the best result when trading off bias against variance. 
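
For context, the sketch below runs plain tabular TD(λ) with accumulating eligibility traces on a toy random-walk chain, showing how λ sets the credit-assignment timescale (λ = 0 is one-step TD, λ = 1 approaches Monte Carlo). The chain MDP and the fixed λ values compared are illustrative assumptions; the paper's contribution, adapting λ automatically for least-squares TD, is not reproduced here.

```python
# Minimal tabular TD(lambda) on a toy random-walk chain, illustrating how the
# trace-decay parameter lambda controls the timescale of credit assignment.
# The chain MDP and the fixed lambda values are illustrative assumptions.
import numpy as np

N_STATES, ALPHA, GAMMA = 19, 0.1, 1.0   # 19-state random walk, terminal at both ends
rng = np.random.default_rng(0)


def run_episode(values, lam):
    state = N_STATES // 2                # start in the middle
    traces = np.zeros(N_STATES)          # accumulating eligibility traces
    while True:
        next_state = state + rng.choice([-1, 1])
        done = next_state < 0 or next_state >= N_STATES
        reward = 1.0 if next_state >= N_STATES else 0.0   # +1 only off the right end
        target = reward + (0.0 if done else GAMMA * values[next_state])
        td_error = target - values[state]
        traces[state] += 1.0             # bump the trace for the visited state
        values += ALPHA * td_error * traces
        traces *= GAMMA * lam            # decay all traces by gamma * lambda
        if done:
            return
        state = next_state


true_values = np.arange(1, N_STATES + 1) / (N_STATES + 1)  # known exact solution
for lam in (0.0, 0.5, 0.9, 1.0):
    values = np.zeros(N_STATES)
    for _ in range(100):
        run_episode(values, lam)
    rmse = np.sqrt(np.mean((values - true_values) ** 2))
    print(f"lambda={lam:.1f}  RMSE after 100 episodes: {rmse:.3f}")
```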

5-Learning from Demonstrations for Real World Reinforcement Learning
Presenting Deep Q-learning from Demonstrations (DQfD), an algorithm that leverages data from previous control of a system to accelerate learning.
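
One ingredient commonly used when learning from demonstrations is a large-margin supervised loss that pushes the Q-value of the demonstrated action above all alternatives by a margin, added to the usual TD loss on demonstration samples. The sketch below illustrates that term only; the Q-values, margin, and weighting are toy assumptions rather than DQfD's exact loss or hyperparameters.

```python
# Illustrative sketch of a large-margin supervised loss on demonstration
# transitions: it pushes the Q-value of the demonstrated action above every
# other action by at least a margin. Values and weights here are toy choices.
import numpy as np


def margin_loss(q_values, demo_action, margin=0.8):
    """Large-margin classification loss for a single demonstration transition.

    q_values   : array of Q(s, a) over all actions from the current network
    demo_action: index of the action the demonstrator actually took
    """
    # Add `margin` to every non-demonstrated action, then compare the best
    # augmented value against the demonstrated action's own value.
    augmented = q_values + margin
    augmented[demo_action] = q_values[demo_action]
    return float(np.max(augmented) - q_values[demo_action])


def combined_loss(td_loss, q_values, demo_action, is_demo, lambda_margin=1.0):
    """TD loss plus the margin term, applied only when the sample is a demo."""
    supervised = margin_loss(q_values, demo_action) if is_demo else 0.0
    return td_loss + lambda_margin * supervised


q = np.array([0.2, 2.5, 0.9, 1.4])
print("margin loss, demo action already dominant:", margin_loss(q, demo_action=1))
print("margin loss, demo action not dominant:   ", margin_loss(q, demo_action=0))
print("combined loss on a demo sample:          ",
      combined_loss(td_loss=0.25, q_values=q, demo_action=0, is_demo=True))
```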

6-Value-Decomposition Networks For Cooperative Multi-Agent Learning
Studying the problem of cooperative multi-agent reinforcement learning with a single joint reward signal.
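
The core decomposition idea can be sketched very simply: represent the joint action-value as a sum of per-agent terms trained against the single team reward, so each agent can act greedily on its own term. The linear per-agent Q-functions and the TD(0) update below are illustrative assumptions, not the paper's network architecture or training setup.

```python
# Minimal sketch of value decomposition for cooperative multi-agent RL with a
# single shared team reward: Q_total = sum_i Q_i(o_i, a_i). The linear
# per-agent Q-functions and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, OBS_DIM, N_ACTIONS = 2, 4, 3

# One small linear Q-function per agent: Q_i(o_i, a) = W_i[a] . o_i
weights = [rng.normal(size=(N_ACTIONS, OBS_DIM)) * 0.1 for _ in range(N_AGENTS)]


def per_agent_q(i, obs):
    return weights[i] @ obs                      # Q_i(o_i, a) for every action a


def joint_q(observations, actions):
    # Value decomposition: the joint value is the sum of per-agent terms.
    return sum(per_agent_q(i, observations[i])[actions[i]] for i in range(N_AGENTS))


def greedy_joint_action(observations):
    # Each agent maximises its own term; the sum is then maximised too.
    return [int(np.argmax(per_agent_q(i, observations[i]))) for i in range(N_AGENTS)]


def td_update(observations, actions, reward, next_observations, gamma=0.95, lr=0.05):
    # One TD(0) step on the *joint* value, using only the shared team reward;
    # the gradient naturally splits across the per-agent terms.
    target = reward + gamma * joint_q(next_observations, greedy_joint_action(next_observations))
    td_error = target - joint_q(observations, actions)
    for i in range(N_AGENTS):
        weights[i][actions[i]] += lr * td_error * observations[i]   # dQ_i/dW_i[a_i] = o_i


obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
next_obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
acts = greedy_joint_action(obs)
td_update(obs, acts, reward=1.0, next_observations=next_obs)
print(f"joint greedy action: {acts}   Q_total: {joint_q(obs, acts):.3f}")
```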

7-Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step
Demonstrating an alternative view of the training of GANs.

8-Risk-Constrained Reinforcement Learning with Percentile Risk Criteria
Presenting efficient reinforcement learning algorithms for risk-constrained Markov decision processes (MDPs) and demonstrating their effectiveness in an optimal stopping problem and an online marketing application.
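
A percentile risk criterion bounds what happens in the worst tail of the return distribution. The sketch below estimates two such measures from sampled returns, VaR (the α-quantile) and CVaR (the mean return within the worst α-fraction); the simulated returns, α, and the example threshold are illustrative assumptions, not the paper's algorithms.

```python
# Minimal sketch of percentile risk measures on sampled returns: VaR is the
# alpha-quantile of the return distribution and CVaR is the average return in
# the worst alpha-fraction of outcomes. The simulated returns are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are Monte Carlo returns of some policy (with a heavy left tail).
returns = np.concatenate([
    rng.normal(loc=10.0, scale=2.0, size=950),    # typical outcomes
    rng.normal(loc=-30.0, scale=5.0, size=50),    # rare large losses
])


def var_cvar(samples, alpha=0.05):
    """Value-at-Risk and Conditional Value-at-Risk at level alpha (lower tail)."""
    var = np.quantile(samples, alpha)             # alpha-percentile of returns
    cvar = samples[samples <= var].mean()         # mean of the worst alpha tail
    return float(var), float(cvar)


mean_return = returns.mean()
var, cvar = var_cvar(returns, alpha=0.05)
print(f"expected return: {mean_return:.2f}")
print(f"5% VaR:  {var:.2f}   (return falls below this 5% of the time)")
print(f"5% CVaR: {cvar:.2f}   (average return within that worst 5%)")
# A risk-constrained policy search would, e.g., require CVaR >= -10 while
# maximising the expected return.
```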


9-How Does a Rating Model Work?
Ratings are forward-looking opinions about the ability and willingness of debt issuers, such as corporations or governments, to meet their financial obligations on time and in full. They provide a common, transparent global language for investors and other market participants, corporations, and governments, and are one of many inputs these parties can consider in their decision-making processes.

Case Studies


Books

1-Çetinkaya, Adem (2010). Calculus: For Economics. Cambridge: CSIP.

2-Çetinkaya, Adem (2011). Speculative Growth: Extreme Stock Market Valuations. London: CSIP.

3-Çetinkaya, Adem (2017). The Price Theory. Cambridge: CSIP.

4-Çetinkaya, Adem (2014). Probability Theory. Cambridge: CSIP.

Our Mission

As AC Investment Research, our goal is to conduct fundamental research, develop new scientific technology, and build frameworks for objective forecasting using machine learning and the fundamentals of Game Theory.

301 Massachusetts Avenue Cambridge, MA 02139 667-253-1000 pr@ademcetinkaya.com
