Posts

Locality Sensitive Hashing for Maximum Inner Product Search

(Maximum Inner Product Search) Given a set of points $S \subset R^D$ of size $N$ and a query point $q$, determine $x^{*} = \mathrm{arg}\max_{x \in S} q^T x$. In this presentation, I introduce the problem of Maximum Inner Product Search (MIPS) and a series of Locality Sensitive Hashing (LSH) methods that solve MIPS in sub-linear time. Reference papers are included in the slides below.

Presentation slides: Locality Sensitive Hashing for Maximum Inner Product Search.

Presenter: Dung Le.
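For concreteness, a brute-force sketch of the MIPS objective above (the $O(ND)$ linear scan that LSH-based methods aim to beat); the data and shapes here are illustrative, not from the slides:

```python
import numpy as np

# Illustrative database and query; any N points in R^D would do.
rng = np.random.default_rng(0)
N, D = 1000, 16
S = rng.standard_normal((N, D))   # the set S of N points in R^D
q = rng.standard_normal(D)        # the query point q

scores = S @ q                    # all inner products q^T x at once
best = int(np.argmax(scores))     # index of x* = argmax_{x in S} q^T x
```

LSH methods replace this exhaustive scan with hash tables that, with high probability, retrieve a near-maximal $x$ while touching only a fraction of $S$.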

Word vector

We present the word vector representation used in deep learning. The definition of word vectors and the co-occurrence matrix are briefly introduced. word vector
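As a small illustration of the co-occurrence matrix mentioned above, here is a window-based count on a toy corpus (the corpus and window size are illustrative); each row of the resulting matrix is a sparse, count-based word vector:

```python
from collections import Counter

# Toy corpus; a window of 1 counts only adjacent words.
corpus = ["i like deep learning", "i like nlp", "i enjoy flying"]
window = 1

vocab = sorted({w for sent in corpus for w in sent.split()})
counts = Counter()
for sent in corpus:
    toks = sent.split()
    for i, w in enumerate(toks):
        for j in range(max(0, i - window), min(len(toks), i + window + 1)):
            if j != i:
                counts[(w, toks[j])] += 1

# Co-occurrence matrix: row w = the count-based word vector of w.
matrix = [[counts[(w, v)] for v in vocab] for w in vocab]
```

Dense word vectors (e.g. via SVD of this matrix, or learned embeddings) compress these rows into low-dimensional representations.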

Exponential family distribution

A summary report on the exponential family of distributions. In particular, the definition of the exponential family and its properties (i.e. convexity, mean and variance, sufficiency, conjugate prior) are presented in the report. Exponential family distribution
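As a quick worked example of the family and its mean property: the Bernoulli distribution in exponential-family form is $p(x \mid \eta) = \exp(\eta x - A(\eta))$ with natural parameter $\eta = \log\frac{p}{1-p}$ and log-partition function $A(\eta) = \log(1 + e^{\eta})$, and the mean is recovered as $A'(\eta)$. The numeric check below is illustrative:

```python
import math

p = 0.3
eta = math.log(p / (1 - p))                 # natural parameter
A = lambda e: math.log(1 + math.exp(e))     # log-partition function

# Exponential-family form agrees with the usual parameterization on {0, 1}.
usual = [p**x * (1 - p)**(1 - x) for x in (0, 1)]
expfam = [math.exp(eta * x - A(eta)) for x in (0, 1)]

# Mean property: E[x] = A'(eta), checked by a numerical derivative.
h = 1e-6
mean = (A(eta + h) - A(eta - h)) / (2 * h)
```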

Notes on Convex Optimization

1.1 Sub-level sets.
- Definition: The $\alpha$-sublevel set of a function $f$ contains all feasible inputs $x$ of $f$ with $f(x) \leq \alpha$.
- Properties: Sublevel sets of a convex function are convex for all $\alpha$: if $f(x) \leq \alpha$ and $f(y) \leq \alpha$, then $f(\theta x + (1-\theta) y) \leq \alpha$. If $f$ is concave, then its $\alpha$-superlevel set is convex.
- Usage: To establish convexity of a set by expressing it as a sublevel set of a convex function or a superlevel set of a concave function.

1.2 Epigraph.
- Definition: $\mathrm{epi}\, f = \{ (x, t) \mid x \in \mathrm{dom}\, f, f(x) \leq t \}$. "Epi" means "above".
- Properties: A function is convex iff its epigraph is a convex set. A function is concave iff its hypograph is a convex set.
- Usage: Many results for convex functions can be proved (or interpreted) geometrically using epigraphs and results for convex sets.

1.3 Closed set.
- Definition: A set is closed if it contains its boundary.
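The sublevel-set property in 1.1 can be sanity-checked numerically. The sketch below samples random pairs in the $\alpha$-sublevel set of the convex function $f(x) = x^2$ and confirms that every convex combination stays in the set (an illustration, not a proof):

```python
import random

random.seed(0)
f = lambda x: x * x      # convex test function
alpha = 4.0              # its alpha-sublevel set is the interval [-2, 2]

violations = 0
for _ in range(10_000):
    x = random.uniform(-2.0, 2.0)   # f(x) <= alpha
    y = random.uniform(-2.0, 2.0)   # f(y) <= alpha
    t = random.random()
    # Convex combination must also satisfy f(.) <= alpha.
    if f(t * x + (1 - t) * y) > alpha + 1e-12:
        violations += 1
```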

[PAPER PRESENTATION] Steering User Behavior with Badges - WWW 2013

Title: Steering User Behavior with Badges - WWW 2013. Abstract: An increasingly common feature of online communities and social media sites is a mechanism for rewarding user achievements based on a system of badges. Badges are given to users for particular contributions to a site, such as performing a certain number of actions of a given type. They have been employed in many domains, including news sites like the Huffington Post, educational sites like Khan Academy, and knowledge-creation sites like Wikipedia and Stack Overflow. At the most basic level, badges serve as a summary of a user’s key accomplishments; however, experience with these sites also shows that users will put in non-trivial amounts of work to achieve particular badges, and as such, badges can act as powerful incentives. Thus far, however, the incentive structures created by badges have not been well understood, making it difficult to deploy badges with an eye toward the incentives they are likely to create.

Optimization on Sphere

Problem: Given the unit sphere $S^n$ in $R^{n+1}$ and a smooth function $f: S^n \rightarrow R$, find an element $x_{*} \in S^n$ such that there is a neighbourhood $V$ of $x_{*}$ with $f(x_{*}) \leq f(x), \forall x \in V$. In these slides, I introduce some notation and the line-search method for performing minimization on the sphere. Slides: CSE_SMU seminar 30/01/2016
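A minimal sketch of line-search minimization on the sphere, assuming the test objective $f(x) = x^T A x$ (illustrative, not from the slides); its minimum over the unit sphere is the smallest eigenvalue of $A$. One step: project the Euclidean gradient onto the tangent space at $x$, backtrack along that direction, and retract back onto the sphere by renormalizing:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
A = A + A.T                                # symmetric test matrix

x = rng.standard_normal(5)
x /= np.linalg.norm(x)                     # start on the sphere
for _ in range(1000):
    g = 2 * A @ x                          # Euclidean gradient of f
    rg = g - (x @ g) * x                   # project onto tangent space at x
    fx = x @ A @ x
    step = 1.0
    while True:                            # backtracking (Armijo) line search
        xn = x - step * rg
        xn /= np.linalg.norm(xn)           # retraction onto the sphere
        if xn @ A @ xn <= fx - 1e-4 * step * (rg @ rg) or step < 1e-12:
            break
        step *= 0.5
    x = xn

fmin = float(x @ A @ x)                    # approx. the smallest eigenvalue of A
```

The projection/retraction pair here is the simplest choice for the sphere; the slides' line-search framework admits other retractions.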

MM Algorithm [unfinished]

1. Overview of the MM algorithm
The MM algorithm is not a single algorithm but a prescription for constructing optimization algorithms. The EM algorithm from statistics is a special case. An MM algorithm operates by creating a surrogate function that minorizes or majorizes the objective function. When the surrogate function is optimized, the objective function is driven uphill or downhill as needed. In minimization, MM stands for majorize/minimize; in maximization, MM stands for minorize/maximize.

2. Rationale for the MM principle
- It can generate an algorithm that avoids matrix inversion.
- It can separate the parameters of a problem.
- It can linearize an optimization problem.
- It can deal gracefully with equality and inequality constraints.
- It can turn a non-differentiable problem into a smooth problem.

3. Majorization and definition of the algorithm
A function $g(\theta \mid \theta^n)$ is said to majorize the function $f(\theta)$ at $\theta^n$ provided $f(\theta) \leq g(\theta \mid \theta^n)$ for all $\theta$, with equality when $\theta = \theta^n$.
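A small worked MM example for a non-smooth problem (my own illustration): minimize $f(x) = |x| + (x-3)^2$ using the standard quadratic majorizer $|x| \leq \frac{x^2}{2|x^n|} + \frac{|x^n|}{2}$, which touches $|x|$ at $x = x^n$. Minimizing the surrogate in closed form gives the update $x^{n+1} = 6 / (1/|x^n| + 2)$, and each MM step drives $f$ downhill:

```python
# Non-smooth objective; its minimizer is x = 2.5 (where 1 + 2(x - 3) = 0).
f = lambda x: abs(x) + (x - 3) ** 2

x = 1.0
values = [f(x)]
for _ in range(100):
    # Closed-form minimizer of the surrogate x^2/(2|x^n|) + (x - 3)^2 + const.
    x = 6.0 / (1.0 / abs(x) + 2.0)
    values.append(f(x))
```

Note the MM descent guarantee in action: the surrogate lies above $f$ everywhere and touches it at the current iterate, so optimizing the surrogate can never increase $f$.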