1. What is feature scaling?

Feature scaling is a method used to normalize the independent variables of our data. It is also called data normalization and is generally performed before running machine learning algorithms.

2. Why do we need feature scaling?

In practice the ranges of raw features can differ widely, and without normalization the objective functions may not work properly (for example, they can get stuck at a local optimum) or may be slow to optimize. K-Means, for instance, can give totally different solutions depending on the preprocessing you used. This is because an affine transformation implies a change in the metric space: the Euclidean distance between two samples is different after the transformation. When we apply gradient descent, feature scaling also helps it converge much faster than it would on unnormalized data.

[Figure: gradient descent with and without feature scaling]

3. Methods of feature scaling

Rescaling: The simp...
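As a minimal sketch of the two most common scaling methods discussed above (min-max rescaling and standardization), assuming NumPy is available; the function names here are illustrative, not part of any library:

```python
import numpy as np

def min_max_rescale(X):
    """Rescale each feature (column) to the [0, 1] range."""
    X = np.asarray(X, dtype=float)
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

def standardize(X):
    """Center each feature to zero mean and unit variance."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Two features with very different ranges.
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])
print(min_max_rescale(X))  # each column now spans [0, 1]
print(standardize(X))      # each column now has mean 0 and unit variance
```

After either transformation, the Euclidean distances between samples no longer depend on the original units of each feature, which is exactly why K-Means and gradient descent behave differently before and after scaling.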
Problem: Write a program to find all pairs of integers whose sum is equal to a given number. For example, if the input integer array is {1, 2, 3, 4, 5} and the given sum is 5, the code should return the two pairs {1, 4} and {2, 3}. This is a common interview question on arrays. A quick Google search returns several pages discussing the problem, to list but a few [1, 2, 3, 4]. Those pages discuss the solutions mainly at the conceptual level and, although they also provide code, they do not back the comparison with measured numbers. In this note I re-implement all these approaches and compare their efficiency in practice.

Common approaches

There are four approaches that are often mentioned together: brute force, HashSet, sorting with search, and sorting with binary search. Each has its own pros and cons; interested readers can find the details in the list of sources below. Here is a summary of the storage complexity and computational complexity of each approach.

Approach Storage co...
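For concreteness, here is a short sketch of the HashSet approach mentioned above (written in Python for brevity; the function name is illustrative). It trades O(n) extra storage for a single O(n) pass over the array:

```python
def find_pairs_hashset(arr, target):
    """Return all pairs (a, b) from arr with a + b == target, using a set of seen values."""
    seen = set()
    pairs = []
    for x in arr:
        complement = target - x
        if complement in seen:
            # A previously seen value completes a pair with the current one.
            pairs.append((complement, x))
        seen.add(x)
    return pairs

print(find_pairs_hashset([1, 2, 3, 4, 5], 5))  # [(2, 3), (1, 4)]
```

The brute-force approach would instead compare every pair of indices in O(n^2) time with O(1) extra storage; the sorting-based approaches fall in between, which is what the table above summarizes.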
1.1 Sublevel sets.
- Definition: The $\alpha$-sublevel set of a function $f$ is the set of all $x \in \operatorname{dom} f$ with $f(x) \leq \alpha$.
- Properties: Sublevel sets of a convex function are convex for every $\alpha$: if $f(x) \leq \alpha$ and $f(y) \leq \alpha$, then $f(\theta x + (1-\theta) y) \leq \theta f(x) + (1-\theta) f(y) \leq \alpha$ for $\theta \in [0,1]$. If $f$ is concave, then its $\alpha$-superlevel set is convex.
- Usage: To establish convexity of a set by expressing it as a sublevel set of a convex function or as a superlevel set of a concave function (a small worked example is given after these notes).

1.2 Epigraph.
- Definition: $\operatorname{epi} f = \{ (x, t) \mid x \in \operatorname{dom} f,\ f(x) \leq t \}$. "Epi" means 'above'.
- Properties: A function is convex iff its epigraph is a convex set. A function is concave iff its hypograph is a convex set.
- Usage: Many results for convex functions can be proved (or interpreted) geometrically by working with epigraphs and applying results for convex sets.

1.3 Closed set.
- Definition: A set is closed if it contains its bo...
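As a small worked example of the sublevel-set technique from 1.1 (the example is mine, not from the original notes): the Euclidean unit ball is convex because it is the 1-sublevel set of a convex function, namely the norm.

```latex
% The unit ball as the 1-sublevel set of the convex function f(x) = ||x||_2.
\[
  B = \{\, x \in \mathbb{R}^n \mid \|x\|_2 \leq 1 \,\}
    = \{\, x \mid f(x) \leq 1 \,\}, \qquad f(x) = \|x\|_2 .
\]
% Every norm is convex, so B is convex as a sublevel set of f.
% Directly: for x, y in B and theta in [0, 1], the triangle inequality gives
\[
  \|\theta x + (1-\theta) y\|_2 \;\leq\; \theta \|x\|_2 + (1-\theta)\|y\|_2 \;\leq\; 1 .
\]
```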