Posts

[Paper Reading] In-sample and Out-of-sample Sharpe Ratios of Multi-factor Asset Pricing Models

Title: In-sample and Out-of-sample Sharpe Ratios of Multi-factor Asset Pricing Models
Author(s): Raymond Kan*, Xiaolu Wang, Xinghua Zheng
Year: 2019 (accessed the version revised on 22 Feb 2021)
URL: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3454628

Definitions
Optimal portfolio weight: for a mean-variance investor who wants to hold a portfolio with a target standard deviation of $\sigma$, the optimal portfolio has weights
$$ w^* = \frac{\sigma}{\theta} \Sigma^{-1} \mu, $$
where $\mu = \mathbb{E}[r_t]$, $\Sigma = \mathrm{Var}[r_t]$, $r_t$ is the return of the risky assets in excess of the risk-free rate, and $\theta$ is the Sharpe ratio.
Sharpe ratio: $\theta = \sqrt{\mu' \Sigma^{-1}\mu}$.
In practice, the investor does not know the mean and covariance matrix of the factors and has to estimate $\theta$ from historical data. This gives rise to the in-sample and out-of-sample Sharpe ratios, defined as follows. In-sample...
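As a quick numerical illustration of these two definitions, the sketch below computes $\theta$ and $w^*$ for a made-up two-asset example; all numbers are placeholders, not figures from the paper.

```python
import numpy as np

# Made-up example: two risky assets with (assumed known) excess-return
# mean vector mu and covariance matrix Sigma.
mu = np.array([0.05, 0.08])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])

# Sharpe ratio of the optimal portfolio: theta = sqrt(mu' Sigma^{-1} mu)
Sigma_inv = np.linalg.inv(Sigma)
theta = np.sqrt(mu @ Sigma_inv @ mu)

# Optimal weights for a target standard deviation sigma:
# w* = (sigma / theta) * Sigma^{-1} mu
sigma_target = 0.10
w_star = (sigma_target / theta) * (Sigma_inv @ mu)

print("Sharpe ratio:", theta)
print("Optimal weights:", w_star)
# Sanity check: the resulting portfolio has std exactly sigma_target.
print("Portfolio std:", np.sqrt(w_star @ Sigma @ w_star))
```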

[Literature Review] A Literature Review on How to Measure Loan Similarity

The main streams of research in P2P lending can be divided into credit risk, profit scoring, and portfolio optimization. The first two are fairly straightforward: credit risk assessment derives the default probability of individual loans, and profit scoring predicts the expected return of individual loans. Based on these predictions, investors can identify potentially low-risk and profitable loans to invest in. Some researchers also treat investment in P2P lending as a portfolio optimization problem. In this regard, researchers must know the return distribution of loans, i.e., the expected return AND variance of individual loans, and then construct an investment portfolio based on the predicted distribution. To the best of my knowledge, there are two ways to derive the return distribution of loans in the literature: the instance-based credit risk assessment framework and mean-variance estimation. The mean-variance estimation appears in Babaei et al.'s paper in 2020 [1], where they use mac...
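Since the post is about measuring loan similarity, a hypothetical baseline (my own illustration, not taken from any of the cited papers) is to standardize the loan features and rank loans by Euclidean distance, e.g. to find a loan's nearest neighbors:

```python
import numpy as np

def nearest_loans(features, query_idx, k=5):
    """Return indices of the k loans most similar to loan `query_idx`,
    using Euclidean distance on z-score-standardized features.
    `features` is an (n_loans, n_features) array. This is a hypothetical
    baseline, not the exact similarity measure used in the papers above."""
    # Standardize each feature so distances are not dominated by scale.
    z = (features - features.mean(axis=0)) / features.std(axis=0)
    dists = np.linalg.norm(z - z[query_idx], axis=1)
    order = np.argsort(dists)
    return order[order != query_idx][:k]  # exclude the query loan itself
```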

[Paper Reading] Instance-based Credit Risk Assessment for Investment Decisions in P2P Lending

Title: Instance-based Credit Risk Assessment for Investment Decisions in P2P Lending
Author(s): Yanhong Guo, Wenjun Zhou, Chunyu Luo, Chuanren Liu, Hui Xiong
Journal: European Journal of Operational Research
Year: 2015
URL: https://www.sciencedirect.com/science/article/abs/pii/S0377221715004610

Abstract
Objective: effective allocation of personal investors' money across different loans by accurately assessing the credit risk of each loan.
Key contributions: Guo et al. proposed a data-driven investment decision-making framework for the P2P market. They designed an instance-based credit risk assessment model that can evaluate the return and risk of each individual loan. Given these estimates of return and risk, they formulated the investment decision in P2P lending as a portfolio optimization problem with boundary constraints.

Data Description
2016 loan samples from Lending Club
4128 loan samples from Prosper
Features include the borrower's credi...
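To make the portfolio-optimization step concrete, here is a minimal mean-variance sketch with boundary constraints solved with scipy. The return and variance estimates would come from the instance-based model; the numbers, the diagonal risk model, and the risk-aversion constant here are placeholders, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder estimates for 4 loans: expected returns and variances,
# as an instance-based model might produce.
mu = np.array([0.07, 0.05, 0.09, 0.06])
var = np.array([0.02, 0.01, 0.05, 0.015])
risk_aversion = 5.0

def objective(w):
    # Maximize mu'w - (gamma/2) w' diag(var) w  <=>  minimize its negative.
    return -(mu @ w - 0.5 * risk_aversion * (var * w**2).sum())

n = len(mu)
constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # fully invested
bounds = [(0.0, 0.5)] * n  # boundary constraints: at most 50% in any one loan

res = minimize(objective, x0=np.full(n, 1.0 / n),
               bounds=bounds, constraints=constraints)
print("Optimal allocation:", res.x)
```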

Online Optimization Specialization (4/4): Review of 'Projection-free Online Learning'

Specialization Introduction
This specialization covers five selected grounding papers in online optimization. In each blog, I will discuss one paper, where I aim to include:
- A brief introduction and summary of the paper
- Key takeaways from the paper
Notice that all the discussion and summary in this specialization are based on the reviewed papers. None of the algorithms or theorems are proposed by me.

Summary
Paper Detail
Title: Projection-free Online Learning
Author(s): Elad Hazan, Satyen Kale
URL: https://icml.cc/2012/papers/292.pdf

Abstract
The computational bottleneck in applying online learning to massive data sets is usually the projection step. We present efficient online learning algorithms that eschew projections in favor of much more efficient linear optimization steps using the Frank-Wolfe technique. We obtain a range of regret bounds for online convex optimization, with better bounds for specific cases such as stochastic online smooth convex optimization...
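To see what "eschewing projections in favor of linear optimization steps" means in code, here is a minimal Frank-Wolfe-style sketch over the probability simplex, where the linear subproblem has a closed-form vertex solution. This illustrates the projection-free idea only; it is not the paper's exact algorithm or step-size schedule.

```python
import numpy as np

def online_frank_wolfe(grads, dim):
    """Sketch of a projection-free update over the probability simplex.
    `grads` yields gradient vectors g_t of the convex losses.
    Instead of projecting, each step solves a LINEAR problem over the
    feasible set, min_{s in simplex} <g, s>, whose solution is a vertex
    (a coordinate basis vector). Step sizes are illustrative."""
    x = np.full(dim, 1.0 / dim)          # start at the simplex center
    iterates = []
    for t, g in enumerate(grads, start=1):
        # Linear optimization step: best vertex of the simplex w.r.t. g.
        s = np.zeros(dim)
        s[np.argmin(g)] = 1.0
        gamma = 1.0 / (t + 1)            # illustrative decaying step size
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
        iterates.append(x.copy())
    return iterates
```

The key point is that no projection is ever computed: feasibility is preserved because each iterate is a convex combination of points in the feasible set.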

Online Optimization Specialization (3/4): Review of 'Competing in the Dark: An Efficient Algorithm for Bandit Linear Optimization'

Specialization Introduction
This specialization covers five selected grounding papers in online optimization. In each blog, I will discuss one paper, where I aim to include:
- A brief introduction and summary of the paper
- Key takeaways from the paper
Notice that all the discussion and summary in this specialization are based on the reviewed papers. None of the algorithms or theorems are proposed by me.

Summary
Paper Detail
Title: Competing in the Dark: An Efficient Algorithm for Bandit Linear Optimization
Author(s): Jacob Abernethy, Elad Hazan, Alexander Rakhlin
URL: http://web.eecs.umich.edu/~jabernet/123-Abernethy.pdf

Abstract
We introduce an efficient algorithm for the problem of online linear optimization in the bandit setting which achieves the optimal $O^*(\sqrt{T})$ regret. The setting is a natural generalization of the non-stochastic multi-armed bandit problem, and the existence of an efficient optimal algorithm has been posed as an open problem in a number of rece...
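In the bandit setting the learner observes only the scalar loss $\ell_t^\top y_t$ of the point it played, never the loss vector itself, so gradients must be estimated from that single number. The sketch below uses the classic one-point gradient estimator over the unit Euclidean ball to convey the setting; it is a simpler baseline, not the paper's algorithm, and all constants are illustrative.

```python
import numpy as np

def bandit_linear_opt(loss_oracle, dim, T, delta=0.1, eta=0.01):
    """One-point gradient-estimate baseline for bandit linear optimization
    over the unit Euclidean ball. `loss_oracle(y)` returns only the scalar
    loss <l_t, y>. Illustrative constants; not the paper's algorithm."""
    x = np.zeros(dim)
    for t in range(T):
        u = np.random.randn(dim)
        u /= np.linalg.norm(u)            # uniform direction on the sphere
        y = x + delta * u                 # play a randomly perturbed point
        loss = loss_oracle(y)             # only the scalar loss is observed
        g_hat = (dim / delta) * loss * u  # unbiased estimate of l_t (linear case)
        x = x - eta * g_hat               # gradient step on the estimate
        norm = np.linalg.norm(x)
        if norm > 1.0 - delta:            # keep x inside so y stays feasible
            x *= (1.0 - delta) / norm
    return x
```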

Online Optimization Specialization (2/4): Review of 'Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization'

Specialization Introduction
This specialization covers five selected grounding papers in online optimization. In each blog, I will discuss one paper, where I aim to include:
- A brief introduction and summary of the paper
- Key takeaways from the paper
Notice that all the discussion and summary in this specialization are based on the reviewed papers. None of the algorithms or theorems are proposed by me.

Summary
Paper Detail
Title: Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization
Author(s): Alexander Rakhlin, Ohad Shamir, Karthik Sridharan
URL: https://icml.cc/2012/papers/261.pdf

Abstract
Stochastic gradient descent (SGD) is a simple and popular method to solve stochastic optimization problems which arise in machine learning. For strongly convex problems, its convergence rate was known to be $\mathcal{O}(\log(T)/T)$, by running SGD for T iterations and returning the average point. However, recent results showed that using a different algorithm...
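The contrast at stake is between averaging all $T$ iterates, which yields the $\mathcal{O}(\log(T)/T)$ rate above, and one of the paper's remedies: averaging only the last $\alpha$-fraction of the iterates ($\alpha$-suffix averaging), which recovers the optimal $\mathcal{O}(1/T)$ rate. Here is a minimal sketch under illustrative assumptions, not the paper's full analysis:

```python
import numpy as np

def sgd_suffix_average(stoch_grad, x0, T, lam, alpha=0.5):
    """SGD for a lam-strongly-convex objective with the standard step
    size 1/(lam * t), returning the average of the last alpha-fraction
    of iterates (alpha-suffix averaging). `stoch_grad(x)` returns a
    stochastic gradient at x. Sketch only; constants are illustrative."""
    x = np.asarray(x0, dtype=float)
    suffix_start = int((1 - alpha) * T)
    suffix_sum = np.zeros_like(x)
    for t in range(1, T + 1):
        x = x - stoch_grad(x) / (lam * t)  # strongly convex step size
        if t > suffix_start:
            suffix_sum += x                # accumulate only the suffix
    return suffix_sum / (T - suffix_start)
```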

Online Optimization Specialization (1/4): Review of 'Online Convex Programming and Generalized Infinitesimal Gradient Ascent' by Martin Zinkevich

Specialization Introduction
This specialization covers five selected grounding papers in online optimization. In each blog, I will discuss one paper, where I aim to include:
- A brief introduction and summary of the paper
- Key takeaways from the paper
Notice that all the discussion and summary in this specialization are based on the reviewed papers. None of the algorithms or theorems are proposed by me.

Summary
Paper Detail
Title: Online Convex Programming and Generalized Infinitesimal Gradient Ascent
Author(s): Martin Zinkevich
URL: https://www.aaai.org/Library/ICML/2003/icml03-120.php

Abstract
Convex programming involves a convex set $F \subseteq \mathbb{R}^n$ and a convex function $c : F \to \mathbb{R}$. The goal of convex programming is to find a point in $F$ which minimizes $c$. In this paper, we introduce online convex programming. In online convex programming, the convex set is known in advance, but in each step of some repeated optimization problem, one must select a point in F ...
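The algorithm the paper introduces, greedy projection (online gradient descent), is short enough to sketch: take a gradient step on the current loss, then project back onto $F$. Below, $F$ is taken to be a Euclidean ball for concreteness, since the projection depends on the feasible set; the $1/\sqrt{t}$ step size matches the schedule that yields $O(\sqrt{T})$ regret for convex losses (constants omitted).

```python
import numpy as np

def online_gradient_descent(grads, x0, radius=1.0):
    """Greedy-projection sketch over a Euclidean ball of given radius.
    `grads` yields the gradient g_t of the loss revealed at round t.
    Each round: gradient step with eta_t = 1/sqrt(t), then Euclidean
    projection back onto the feasible set."""
    x = np.asarray(x0, dtype=float)
    for t, g in enumerate(grads, start=1):
        x = x - g / np.sqrt(t)    # gradient step with eta_t = 1/sqrt(t)
        norm = np.linalg.norm(x)
        if norm > radius:         # projection onto the ball: rescale
            x *= radius / norm
        yield x.copy()            # the point played at the next round
```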