

Project 1. Monte Carlo Simulation for an Asset Pricing Model
In this small project I estimate the stationary density of a certain heteroskedastic AR(1) process. The problem arose in the context of the consumption-based asset pricing model proposed by Campbell and Cochrane (1999). Monte Carlo simulation is used to generate a large number of paths for the process, and the terminal values serve as input for a kernel density estimator. The simulation procedure is wrapped into a programmatically created Matlab GUI, which can be launched by running the *.m file. Below you can download:
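The simulation step itself is easy to sketch. The snippet below is a Python sketch rather than the project's Matlab code, and the AR(1) parameters and the volatility function are placeholders, not the actual Campbell–Cochrane specification: many paths are simulated and the terminal values are passed to a Gaussian kernel density estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Heteroskedastic AR(1): x_{t+1} = (1 - phi)*mu + phi*x_t + vol(x_t)*eps_{t+1}.
# phi, mu and vol(.) are illustrative placeholders.
phi, mu = 0.95, 0.0

def vol(x):
    return 0.1 * np.sqrt(1.0 + x**2)      # state-dependent volatility (assumed form)

def simulate_terminal(n_paths=10_000, n_steps=500, x0=0.0):
    """Simulate n_paths paths for n_steps and return the terminal values.

    n_steps is chosen large enough for the process to be close to its
    stationary distribution, so the terminal values sample that density.
    """
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        x = (1 - phi) * mu + phi * x + vol(x) * rng.standard_normal(n_paths)
    return x

terminal = simulate_terminal()
kde = gaussian_kde(terminal)              # kernel density estimator
grid = np.linspace(terminal.min(), terminal.max(), 200)
density = kde(grid)                       # estimated stationary density
```

Increasing `n_paths` reduces the Monte Carlo noise in the density estimate, while `n_steps` only needs to exceed the mixing time of the process.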
Project 2. Pricing of European and American Contingent Claims: Finite Difference Method
In this project, I implement the algorithms for pricing simple European and American contingent claims described in Chapter 5 of Lamberton & Lapeyre. The underlying asset follows the standard Black-Scholes model with constant drift and diffusion coefficients. The main question of interest is how convergence depends on the grid parameters and on the parameter theta, which distinguishes the explicit and implicit schemes. The results are wrapped into a Matlab GUI, which you can see by downloading the Matlab files below and launching Finite_Difference_Pricing.m. For a detailed description, please see the project report below.
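To illustrate the role of theta, here is a sketch in Python rather than the project's Matlab code. It discretizes the S-grid directly for brevity (Lamberton & Lapeyre work in log-price coordinates), and prices a European put: theta = 0 gives the explicit scheme, theta = 1 the fully implicit one, and theta = 0.5 Crank-Nicolson.

```python
import numpy as np

def theta_scheme_european_put(S_max=200.0, K=100.0, r=0.05, sigma=0.2,
                              T=1.0, M=200, N=200, theta=0.5):
    """Theta-scheme for a European put under Black-Scholes on a uniform S-grid."""
    dS, dt = S_max / M, T / N
    S = np.linspace(0.0, S_max, M + 1)
    V = np.maximum(K - S, 0.0)                     # payoff at maturity

    i = np.arange(1, M)                            # interior nodes, S_i = i*dS
    a = 0.5 * (sigma**2 * i**2 - r * i)            # sub-diagonal of the BS operator
    b = -(sigma**2 * i**2 + r)                     # diagonal
    c = 0.5 * (sigma**2 * i**2 + r * i)            # super-diagonal

    L = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    A = np.eye(M - 1) - theta * dt * L             # implicit part
    Bmat = np.eye(M - 1) + (1 - theta) * dt * L    # explicit part

    for n in range(N):                             # march in time-to-maturity tau
        tau_old, tau_new = n * dt, (n + 1) * dt
        rhs = Bmat @ V[1:-1]
        # boundary values: V(0, tau) = K*exp(-r*tau), V(S_max, tau) = 0
        rhs[0] += dt * a[0] * ((1 - theta) * K * np.exp(-r * tau_old)
                               + theta * K * np.exp(-r * tau_new))
        V[1:-1] = np.linalg.solve(A, rhs)
        V[0] = K * np.exp(-r * tau_new)
    return S, V

S, V = theta_scheme_european_put()
# V[100] is the price at S = 100; the analytic Black-Scholes value is about 5.57
```

For an American put one would additionally apply the early-exercise constraint `V = np.maximum(V, K - S)` after each time step; for theta > 0 this projection is only an approximation to the true obstacle problem, which is solved more carefully in the book.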
Project 3. Neural Networks in Business: Building a Movie Recommendation System
In this project, I enhance the model proposed by Agarwal & Merugu (2007). Their method is a powerful blend of supervised and unsupervised learning approaches. It can be used, among other things, to build a movie recommendation system. Suppose that M users rate a total of N movies. As a result, we get an M-by-N matrix of movie ratings, which is very sparse because each user rates only a fraction of the N movies. Our purpose is to fill in the missing ratings. The unsupervised approach does not use any extra information about the movies or users. It fills in the gaps by creating blocks of similar ratings inside the matrix, which is achieved by iteratively regrouping ("co-clustering") the rows and columns:
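The iterative regrouping can be sketched as follows (a simplified Python version rather than the project's Matlab; the cluster counts, the squared-error criterion, and the toy 2-by-2 block example are my choices for illustration): each block is summarized by the mean of its observed ratings, and rows and columns are repeatedly reassigned to the cluster whose block means best fit their observed entries.

```python
import numpy as np

def cocluster(R, mask, k_rows=2, k_cols=2, n_iter=20, seed=0):
    """Fill missing ratings by alternating row/column co-clustering.

    R    : M x N ratings matrix (zeros where unknown)
    mask : boolean M x N, True where a rating is observed
    Returns the completed matrix of block means plus the cluster labels.
    """
    rng = np.random.default_rng(seed)
    M, N = R.shape
    rc = rng.integers(k_rows, size=M)            # row cluster labels
    cc = rng.integers(k_cols, size=N)            # column cluster labels
    for _ in range(n_iter):
        # block means over observed entries (global mean for empty blocks)
        B = np.zeros((k_rows, k_cols))
        for p in range(k_rows):
            for q in range(k_cols):
                blk = mask[np.ix_(rc == p, cc == q)]
                vals = R[np.ix_(rc == p, cc == q)][blk]
                B[p, q] = vals.mean() if vals.size else R[mask].mean()
        # reassign each row to the best-fitting row cluster
        for u in range(M):
            errs = [np.sum((R[u, mask[u]] - B[p, cc[mask[u]]]) ** 2)
                    for p in range(k_rows)]
            rc[u] = int(np.argmin(errs))
        # reassign each column symmetrically
        for m in range(N):
            errs = [np.sum((R[mask[:, m], m] - B[rc[mask[:, m]], q]) ** 2)
                    for q in range(k_cols)]
            cc[m] = int(np.argmin(errs))
    return B[rc][:, cc], rc, cc                  # completed matrix, labels

# toy example: a clean 2x2 block structure with 60% of entries observed
rng = np.random.default_rng(0)
true = np.kron(np.array([[5.0, 1.0], [1.0, 5.0]]), np.ones((10, 8)))
mask = rng.random(true.shape) < 0.6
R = np.where(mask, true, 0.0)
P, rc, cc = cocluster(R, mask)
```

Every prediction in `P` is a block mean of observed ratings, so the completed matrix stays inside the range of the data; in the full method these block means become the offsets on top of which the supervised model operates.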
For instance, the reddest block corresponds to high ratings. It may reflect the fact that the corresponding rows (users) come from country XYZ and give high ratings to the movies (columns) produced in XYZ. However, the unsupervised approach doesn't use that sort of information and acts only on the basis of the known ratings. The supervised approach is better known: it simply predicts the rating from the known attributes ("covariates") of users and movies, such as age, occupation, movie genre, etc. The method of Agarwal & Merugu (video presentation) combines both approaches, with the supervised part implemented via a GLM model (e.g. logistic regression). My idea is to try to improve the predictive power by replacing the GLM part with a Neural Network. The latter uses extra attributes ("linear features") derived from the original covariates. The GLM model is a particular case of a Neural Network with only one linear feature, while a Neural Network can use an arbitrary number of features. The model is applied to build a movie recommendation system for a large dataset. Compared to the GLM-based model, there is little increase in predictive power. The main reason is that the additional information contained in the extra features of the Neural Network is similar to the information obtained via unsupervised co-clustering. In particular, this means that using a GLM with co-clustering gives about the same quality of prediction as using a Neural Network without co-clustering. Of course, this is true only for the given dataset; in other applications the Neural Network enhancement can provide a significant gain on top of co-clustering (and vice versa). Below you can download:
Note: the Matlab code will work only if you have the Neural Network Toolbox installed.
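The relation between the two models can be made concrete with a toy sketch (plain-numpy Python rather than the Matlab toolbox; the data, architecture, and training details are invented for illustration): a one-hidden-layer network whose hidden units are the "linear features", so that with a single hidden unit the model has exactly the GLM structure of one linear score passed through a monotone link.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy covariates and a ratings-like target -- purely illustrative,
# not the dataset used in the project.
X = rng.standard_normal((500, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(500)

def fit_mse(n_hidden, steps=2000, lr=0.05):
    """One hidden tanh layer trained by full-batch gradient descent on MSE.

    Each hidden unit is one 'linear feature' of the covariates; n_hidden = 1
    reproduces the GLM structure, larger n_hidden adds more features.
    """
    W1 = 0.5 * rng.standard_normal((4, n_hidden)); b1 = np.zeros(n_hidden)
    w2 = 0.5 * rng.standard_normal(n_hidden);      b2 = 0.0
    for _ in range(steps):
        H = np.tanh(X @ W1 + b1)                  # hidden features
        pred = H @ w2 + b2
        g = 2.0 * (pred - y) / len(y)             # dMSE/dpred
        gH = np.outer(g, w2) * (1.0 - H**2)       # backprop through tanh
        w2 -= lr * (H.T @ g);  b2 -= lr * g.sum()
        W1 -= lr * (X.T @ gH); b1 -= lr * gH.sum(axis=0)
    return np.mean((np.tanh(X @ W1 + b1) @ w2 + b2 - y) ** 2)

mse_glm = fit_mse(n_hidden=1)   # GLM-like: a single linear feature
mse_nn  = fit_mse(n_hidden=8)   # many linear features
```

On data with interactions between covariates the extra features help; whether they help on top of co-clustering depends, as discussed above, on how much of that information the block structure already captures.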

