Necessary and Sufficient Conditions for Unrestricted Optimization Problems

In a previous post we saw conditions under which extremum values are guaranteed to exist. Nonetheless, there are cases in which the extremum values exist even though the Weierstrass extreme value theorem does not tell us much. The function x^2, for example, has a global minimum on the open interval (-1,1). Further, the function

g(x) = \displaystyle \frac{1}{\sqrt{2 \pi}} e^{-\frac{x^2}{2}} - \frac{1}{\sqrt{2\pi}} e^{-\frac{(x-2)^2}{2}}

has a global maximum and a global minimum on the open interval (-5,5).
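One way to convince ourselves of this claim is to sample g densely inside the open interval and locate the approximate extrema numerically. A minimal Python sketch (the grid size and the exclusion of the endpoints are choices made for this illustration):

```python
import numpy as np

# g(x) is the difference of two Gaussian bumps centred at 0 and 2.
def g(x):
    return (np.exp(-x**2 / 2) - np.exp(-(x - 2)**2 / 2)) / np.sqrt(2 * np.pi)

# Dense grid strictly inside the open interval (-5, 5).
xs = np.linspace(-5, 5, 100_001)[1:-1]
ys = g(xs)

x_max, x_min = xs[np.argmax(ys)], xs[np.argmin(ys)]
print(x_max, g(x_max))  # approximate global maximiser, g > 0 there
print(x_min, g(x_min))  # approximate global minimiser, g < 0 there
```

Note that g(2 - x) = -g(x), so the maximiser and minimiser sit symmetrically around x = 1, well inside the interval; that is why the extrema are attained even though the interval is open.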




Existence of Extremum Values

At the end of the day we are all about extremes. We want to know the best we can achieve as well as the worst that can happen. How can we be sure a function attains a maximum or a minimum value? Do such values always exist?

Some intuition

Consider the function f:(0,1)\rightarrow \mathbb{R} defined as:

\displaystyle f(x) = x^3.

This function has neither a maximum nor a minimum value. To see this, suppose it has a maximum at some z < 1, and let \delta = 1 - z > 0. If we set z^\star = 1 - \delta/2, then z < z^\star < 1 and f(z^\star) > f(z), which is a contradiction. A similar argument shows that no minimum exists.

If instead of the domain (0,1) we define the function f on [0,1], then f attains both a maximum and a minimum in its domain. The difference between the two domains is that the second is a closed set, i.e., it contains all its accumulation (cluster) points.
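The contrast between the two domains can be checked directly. A small Python sketch (the point z = 0.999 is an arbitrary choice for the illustration):

```python
import numpy as np

# f(x) = x^3 on [0, 1]: the supremum 1 is attained at the endpoint x = 1.
f = lambda x: x**3

closed = np.linspace(0.0, 1.0, 1001)  # grid including both endpoints
print(f(closed).max())                # 1.0, attained at x = 1

# On (0, 1), every candidate maximiser z is beaten by the midpoint
# between z and 1, exactly as in the contradiction argument above.
z = 0.999
z_star = 1 - (1 - z) / 2              # still strictly inside (0, 1)
print(f(z_star) > f(z))               # True: z was not a maximiser
```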


Conjugate Direction Methods – Part I

Let’s say we have the following unrestricted quadratic problem to solve:

\displaystyle \min_{x}{x^TQx - b^Tx} \quad \quad (P)

We further assume that Q \in \mathbb{R}^{n \times n} is a symmetric positive definite matrix.

Let’s say that we have an iterative algorithm to find the solution x^\star of problem (P). The algorithm works by selecting a “good” direction d_k at iteration k and then taking a step of size \alpha _k in that direction:

x_{k+1} = x_k + \alpha _k d_k.

Moreover, let’s say that we know how to choose \alpha _k so as to solve the one-dimensional minimization problem:

\displaystyle f(x_{k+1}) = \min_{\alpha _k}{f(x_k + \alpha _k d_k)}.
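For the quadratic objective above, this line search has a closed form: writing g_k = \nabla f(x_k) = 2Qx_k - b and setting the derivative of f(x_k + \alpha d_k) with respect to \alpha to zero gives \alpha_k = -d_k^T g_k / (2\, d_k^T Q d_k). A minimal Python sketch of the iteration, using plain steepest-descent directions d_k = -g_k just to illustrate the update (the matrix Q and vector b below are made up for the example):

```python
import numpy as np

# Exact line search for f(x) = x^T Q x - b^T x: minimising f(x + a*d)
# over a gives a = -(d^T g) / (2 d^T Q d), where g = 2 Q x - b.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite (assumed)
b = np.array([1.0, 1.0])

f = lambda x: x @ Q @ x - b @ x
x = np.zeros(2)
for k in range(50):
    g = 2 * Q @ x - b                    # gradient of f at x_k
    if g @ g < 1e-20:                    # already at the minimiser
        break
    d = -g                               # descent direction d_k
    alpha = -(d @ g) / (2 * d @ Q @ d)   # exact minimiser along d_k
    x = x + alpha * d                    # x_{k+1} = x_k + alpha_k d_k

x_star = np.linalg.solve(2 * Q, b)       # stationary point: 2 Q x = b
print(np.allclose(x, x_star))            # True
```

Steepest descent converges here but can zig-zag; the conjugate-direction choice of d_k developed in the post avoids that.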


Asymptotic Notation – The Family-O

In mathematical analysis there are two notations that greatly simplify writing. The big-O notation \mathcal{O} is used to represent a class of functions that satisfy a particular property.

f(x) \in \mathcal{O} \big( g(x) \big) \; \Leftrightarrow \; \exists \; N,x_0 \: : \: |f(x)| \leq N |g(x)| \: \forall x \geq x_0 .

Thus, a function f(x) is in the class of functions \mathcal{O}\big( g(x) \big) if it is possible to find constants N and x_0 such that |f(x)| is majorized by N |g(x)| for all x \geq x_0. This is the definition usually found in books on the analysis of algorithms, but there exists a more general one.
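The definition is easy to check numerically for a concrete witness pair (N, x_0). For instance, 3x^2 + 5x + 7 \in \mathcal{O}(x^2) with N = 4 and x_0 = 7, since x^2 \geq 5x + 7 for x \geq (5 + \sqrt{53})/2 \approx 6.14. A small Python sketch (the functions and constants are made up for the illustration):

```python
import numpy as np

# Check |f(x)| <= N |g(x)| for all sampled x >= x0, with the hand-chosen
# witness N = 4, x0 = 7 for f(x) = 3x^2 + 5x + 7 and g(x) = x^2.
f = lambda x: 3 * x**2 + 5 * x + 7
g = lambda x: x**2

xs = np.linspace(7, 1000, 10_000)  # sample points with x >= x0
print(bool(np.all(np.abs(f(xs)) <= 4 * np.abs(g(xs)))))  # True
```

A finite sample does not prove the inequality, of course; it only illustrates what the witness pair asserts.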

\displaystyle f(x) \in \mathcal{O} \big( g(x) \big) \text{ as } x \rightarrow a \; \Leftrightarrow \; \exists \; N, \delta > 0 \: : \: |f(x)| \leq N|g(x)| \text{ when } 0 < |x-a| \leq \delta .
