The Brachistochrone Curve

Let B be a ball positioned at some point A. Assuming that gravity is the only force acting on B, what trajectory should the ball follow in order to reach a point C in the shortest time? The solution to this problem is known as the brachistochrone curve.

Although similar, the brachistochrone problem is different from the tautochrone problem. Amusingly, the solutions of both problems turn out to be the same curve. The brilliant argument presented next is due to Johann Bernoulli.

The principle of least time

Light travels at different speeds in different media. That is why we perceive the phenomenon of refraction when observing objects underwater while standing outside of it. Willebrord Snellius was the first to give a relation between the speed of light in different media and the angle of refraction. Denote by n_1 and n_2 constants related to the speed of light in media M_1 and M_2 (also called the refraction indices of M_1 and M_2), by \theta_1 the angle the incoming ray makes with the normal to the plane P separating M_1 and M_2, and by \theta_2 the angle of the outgoing ray. The relation is written as:

\displaystyle \frac{n_1}{n_2} = \frac{ \sin \theta_2 }{\sin \theta_1}
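As a quick numerical check, here is a small sketch of Snell's law, n_1 \sin \theta_1 = n_2 \sin \theta_2 (the function name and the specific indices for air and water are our illustrative choices, not part of the original argument):

```python
import math

def refraction_angle(n1, n2, theta1):
    """Given refraction indices n1, n2 and the incidence angle theta1
    (measured from the normal, in radians), return the refraction angle
    theta2 from Snell's law: n1 * sin(theta1) = n2 * sin(theta2)."""
    s = n1 * math.sin(theta1) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection: no refracted ray")
    return math.asin(s)

# Light entering water (n ~ 1.33) from air (n ~ 1.0) at 45 degrees
# bends toward the normal, to roughly 32 degrees:
theta2 = refraction_angle(1.0, 1.33, math.radians(45.0))
print(math.degrees(theta2))
```

The denser medium (larger index) slows the light down, so the ray bends toward the normal; this trade-off between path length and speed is exactly what Bernoulli exploits in his argument.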



The Tautochrone Curve

The tautochrone is the curve along which a ball, placed anywhere on the curve, takes the same time to reach the curve's lowest point.

Source: Wikipedia

In a physical model where uniform gravity is the only force acting on the ball, the tautochrone curve is a cycloid (the curve described by a fixed point on a rolling circle), as proven by Huygens in 1673 [BOYDE] using purely geometric arguments. We will present two different ways to arrive at this result.
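For concreteness, the standard parametrization of the cycloid traced by a point on a circle of radius r rolling along a line is x(t) = r(t - \sin t), y(t) = r(1 - \cos t). A minimal sketch (the radius and sample points are our choices for illustration):

```python
import math

def cycloid_point(r, t):
    """Position of a fixed point on a circle of radius r after the
    circle has rolled through angle t (radians) along a line."""
    x = r * (t - math.sin(t))
    y = r * (1.0 - math.cos(t))
    return x, y

# Sample one full arch of the cycloid (t from 0 to 2*pi):
points = [cycloid_point(1.0, 2.0 * math.pi * k / 10) for k in range(11)]

# At t = pi the tracing point sits at the top of the arch:
print(cycloid_point(1.0, math.pi))  # approximately (pi, 2)
```

For the tautochrone one uses this curve flipped upside down, so that the cusp points up and the ball rolls toward the arch's lowest point.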


Principles of Fixed Point Algorithms – Theory and Practice

A fixed-point algorithm is easy to explain: given a fixed-point function f suited to your problem, choose any point x_0 from dom(f) and compute f(x_0) to obtain the next point x_1. Proceed until you find a point x_k such that x_k = f(x_k).
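The iteration above can be sketched in a few lines of Python (the example map cos(x), the tolerance, and the iteration cap are our choices, not from the post):

```python
import math

def fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{k+1} = f(x_k) until successive points differ by
    less than tol, and return the (approximate) fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# cos is a contraction near its unique fixed point, so the
# iteration converges from any starting point:
x_star = fixed_point(math.cos, 1.0)
print(x_star)  # approximately 0.7390851 (the Dottie number)
```

The conditions under which this works (for instance, f being a contraction) are exactly what the post goes on to discuss.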

In what follows, we present the conditions under which this strategy works, and why. The famous gradient method can be seen as a fixed-point algorithm.

A simple example

Let’s say you have n pairs of socks that are initially stored in n different drawers. The drawer in which pair s is stored is given by d(s). Assume we start with d(s)=s \quad \forall s. Now we wish to change the drawer of each pair s so that d(s) \neq s \quad \forall s. Is it possible? Sure it is!

Red dots represent the fixed points d(s)=s

d(s) = \left\{ \begin{array}{ll} s+1 & s < n\\[0.15in] 1 & s=n \end{array}\right.

The function d is a permutation with no fixed point, i.e., there exists no s such that d(s) = s. Fixed-point-free functions can be constructed similarly for any countable set (see Hilbert’s Hotel). Let’s see what we can do with uncountable sets.
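To see concretely that d is a fixed-point-free permutation, a quick check (the choice n = 5 is arbitrary):

```python
def d(s, n):
    """Shifted drawer assignment: pair s moves to drawer s+1,
    and the last pair wraps around to drawer 1."""
    return s + 1 if s < n else 1

n = 5
drawers = [d(s, n) for s in range(1, n + 1)]

# d is a bijection of {1, ..., n} onto itself ...
assert sorted(drawers) == list(range(1, n + 1))
# ... and no pair stays in its own drawer:
assert all(d(s, n) != s for s in range(1, n + 1))
```

In combinatorics, such a fixed-point-free permutation is called a derangement.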


Necessary and Sufficient Conditions for Unrestricted Optimization Problems

In a previous post we saw conditions under which extreme values are guaranteed to exist. Nonetheless, there are cases in which the extreme values exist even though the Weierstrass extreme value theorem tells us nothing. The function x^2 has a global minimum on the open interval (-1,1), for example. Further, the function

g(x) = \displaystyle \frac{1}{\sqrt{2 \pi}} e^{-x^2/2} - \frac{1}{\sqrt{2\pi}}  e^{-(x-2)^2/2}

has a global maximum and a global minimum on the open interval (-5,5).
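A crude grid search over (-5,5) locates these extrema numerically (the step size is our choice; this is a sketch, not a substitute for the optimality conditions the post develops). Note the symmetry g(2 - x) = -g(x), which forces the maximizer and minimizer to be mirror images about x = 1:

```python
import math

def g(x):
    """Difference of two standard normal bumps, centered at 0 and 2."""
    c = 1.0 / math.sqrt(2.0 * math.pi)
    return c * math.exp(-x**2 / 2.0) - c * math.exp(-(x - 2.0)**2 / 2.0)

# Evaluate g on a fine grid over the open interval (-5, 5):
xs = [-5.0 + 10.0 * k / 10000 for k in range(1, 10000)]
x_max = max(xs, key=g)
x_min = min(xs, key=g)
print(x_max, x_min)  # the two arguments sum to 2 by symmetry
```

Neither extremum occurs at a boundary (the boundary points are not even in the open interval), so both must be interior critical points, which is precisely the situation the necessary conditions address.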

