While by no means perfect (and quite possibly rather poorly coded), the following class provides some basic functions necessary for working with fractions.

```
<?php
class frac {
    public $n; // numerator
    public $d; // denominator
    public function __construct($num, $den){
        $this->n = $num;
        $this->d = $den;
    }
    // Euclid's algorithm: repeatedly take remainders until one reaches zero
    public static function gcf($n1, $n2){
        if ($n2 > $n1){
            $tmp = $n1;
            $n1 = $n2;
            $n2 = $tmp;
        }
        do {
            $rem = $n1 % $n2;
            $n1 = $n2;
            $n2 = $rem;
        } while ($rem != 0);
        return $n1;
    }
    public static function lcm($n1, $n2){
        return $n1 * ($n2 / frac::gcf($n1, $n2));
    }
    public function reduce(){
        $g = frac::gcf($this->n, $this->d);
        $this->n /= $g;
        $this->d /= $g;
    }
    public static function multiply(frac $n1, frac $n2){
        $f = new frac($n1->n * $n2->n, $n1->d * $n2->d);
        $f->reduce();
        return $f;
    }
    public static function divide(frac $n1, frac $n2){
        return frac::multiply($n1, new frac($n2->d, $n2->n));
    }
    public static function add(frac $n1, frac $n2){
        $g = frac::lcm($n1->d, $n2->d);
        $f = new frac($n1->n * ($g / $n1->d) + $n2->n * ($g / $n2->d), $g);
        $f->reduce();
        return $f;
    }
    public static function subtract(frac $n1, frac $n2){
        return frac::add($n1, new frac(-1 * $n2->n, $n2->d));
    }
    public function display(){
        return $this->n . "/" . $this->d;
    }
}
?>
```

Examples of use:

1/3 + 1/2:

```
<?php
$f = frac::add(new frac(1,3), new frac(1,2));
echo $f->display(); // prints 5/6
?>
```

1/8 * 2/5:

```
<?php
$f = frac::multiply(new frac(1,8), new frac(2,5));
echo $f->display(); // prints 1/20
?>
```

The `gcf` function uses Euclid's algorithm, and the `lcm` function (used to find the common denominator) calls the `gcf` function.
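For comparison, the same procedure can be sketched in a few lines of Python (this sketch is mine, not from the class above):

```python
def gcf(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    while b != 0:
        a, b = b, a % b
    return a

def lcm(a, b):
    # The lowest common multiple follows directly from the GCF
    return a * (b // gcf(a, b))

print(gcf(12, 18))  # prints 6
print(lcm(4, 6))    # prints 12
```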

Given the significant disparity between the ease with which a computer can 'learn' fractions and the difficulty encountered by most students, perhaps it is time to consider teaching fractions as a series of concrete steps (an algorithm) instead of by the current method. Granted, most current methods do provide a way of arriving at an answer, but a procedural methodology (e.g. prime factoring, Euclid's algorithm) is rarely given, especially for finding the lowest common denominator or reducing fractions.

A traditional display has an aspect ratio (width:height) of 4:3; screen size is typically given as the diagonal (or, if we are thinking in triangles, the hypotenuse).

Let:

d represent the diagonal

w represent the width

h represent the height

A represent the area

We get two equations.

Using the Pythagorean theorem:

1: d² = w² + h²

Using the ratio of width to height:

2: w/h = 4/3, i.e. w = (4/3)h

Equation 3 comes from substituting 2 into 1:

3: d² = (16/9)h² + h² = (25/9)h², so h = (3/5)d

Equation 4 comes from substituting 3 into 2; this gives w, which we will use to find the area:

4: w = (4/3)h = (4/5)d

Therefore, the area of the 4:3 display (equation 5) is:

5: A = w·h = (4/5)d · (3/5)d = (12/25)d²

Following the same procedure for the 16:9 display, we get:

2: w/h = 16/9, i.e. w = (16/9)h

3: d² = (256/81)h² + h² = (337/81)h², so h = (9/√337)d

4: w = (16/√337)d

5: A = w·h = (144/337)d²

The ratio of the area of the 4:3 display to that of the 16:9 display (for the same diagonal d), therefore, is:

(12/25)d² ÷ (144/337)d² = 337/300 ≈ 1.123

In other words, you get about 12% more screen area on a traditional 4:3 display than on a widescreen 16:9 display of the same diagonal size. Not exactly a revelation, perhaps, but not something that immediately springs to mind either. The advantage of the widescreen perhaps comes in the form of usable area for an image displayed in widescreen format, where the 4:3 display will likely have less usable area.
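The arithmetic can be double-checked numerically; here is a short Python sketch (the function name is mine, chosen for illustration):

```python
import math

def area_for_diagonal(d, rw, rh):
    """Area of a display with diagonal d and aspect ratio rw:rh."""
    # By the Pythagorean theorem, the diagonal of the rw:rh unit
    # rectangle is hypot(rw, rh); scale width and height to match d.
    w = rw * d / math.hypot(rw, rh)
    h = rh * d / math.hypot(rw, rh)
    return w * h

a43 = area_for_diagonal(1.0, 4, 3)    # 12/25 = 0.48
a169 = area_for_diagonal(1.0, 16, 9)  # 144/337 ≈ 0.4273
print(a43 / a169)                     # ≈ 1.1233, i.e. about 12% more area
```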

What exactly is this problem that is inherent in the human race, you ask? It is one of the elements key to Darwinian evolution – the fact that traits acquired during the life of an organism are not passed to offspring.

How is this a problem, you ask? Very simply, the amount of knowledge in the world is increasing, while the capacity of the human mind to absorb information has not kept pace. 500 years ago, a person could master an entire field, accumulating everything known about it within a lifetime. The significance of this is not the acquisition of data, for surely with the advent of the Internet we can find information on any topic faster than ever before. The significance lies in the fact that without existing knowledge, one cannot expand on theories and discover new things. When one knows everything about a field, it is possible to see the connections between elements in a way that technology has yet to replicate, to interpret the significance of results (not that we cannot interpret results today, but rather that a specialist in one field may not attach significance to certain results which, if seen by another, would have obvious ramifications), and to pursue goals along more diverse avenues.

At the current time, this may not pose a significant problem, but the effects can already be seen. Students must learn more material in a shorter time. People are expected to have a higher level of knowledge at a younger age. Individuals must attend school for longer before they are considered qualified in their field. As the amount of information we acquire increases, people must either spend more time in school or narrow their field of concentration. Already, people are generally in their mid-to-late twenties before they have acquired sufficient knowledge to add to the sum of what we know, and this age will undoubtedly increase.

Computers, made by man to serve man, have the advantage over us in this area. Data can be instantly transferred between computers and can be replicated without loss of integrity. Electronic data outlives the machine and can be accessible to all machines. A new computer does not need to learn the data its ancestors had, but can acquire it in seconds. If humans cannot attain this ability, our obsolescence is unavoidable. We must either learn faster, increase our life spans, or devise a way whereby information can be assimilated into our minds without the necessity of learning. Having information available on an electronic device is great, but not sufficient: without background, one neither knows what to look up nor can make use of the information found.

Should we be unable to conquer this deficiency, we will undoubtedly end up as merely the instruments for executing the suggestions of the machines we created. Surely, man will devise a solution to this dilemma whereby a computer can map similarities in a fashion similar to the human mind. Already, the number of transistors that comprise central processing units is comparable to the number of neurons forming the human brain. Of course, the human brain is much more interconnected, is capable of parallel processing, can rewrite itself, and can be influenced by 'analog' elements; however, at a base level it still accepts the same data – zeros and ones; on and off – the action potential. Once computers can draw connections as the human brain can, we will be redundant. With limitless information accessible to a computer, we will turn to computers for analysis and future direction more than ever. This is not to say that the world will be taken over by machines, but merely that we will have willingly 'outsourced' many of our higher-level functions – those that shape our future – to the machines we have built. From a different perspective, this in and of itself is the solution to the problem. Unlike other organisms, the evolution of man is now more than biological; it is also technological. We can add a new ability simply by devising a machine to perform it, something that no other organism can do. Creating machines to think for us is simply another step in our evolution, the use of our biological advantage to overcome our biological deficiencies.

The human brain is the most complex object known to man. Despite numerous advancements, we know very little of its secrets. And while our obsolescence is, without intervention, imminent, we face many other obstacles that are more likely to endanger our existence first. Furthermore, I would like to believe that we have the ability to devise a solution to this problem and to continue to have a degree of usefulness for much time to come.

I recently encountered an instance where network connectivity had dropped considerably, but appeared to be gradually improving without anyone doing anything to actually address the problem. It was mentioned to me that perhaps the 'Internet was healing itself', and I wonder, why not? An organism, when injured, will repair itself – but surely damaged hardware cannot do the same. On the other hand, when pathways in the brain are damaged or minor blood vessels obstructed, it is not uncommon for new paths to form over time. The same is true of a network, to some extent. If one path is unavailable, another path (which may not be as efficient) will be used. In the same way, new routes are continually added and optimized.

While this sort of autonomous repair might seem almost biological in nature, many networks of sufficient complexity do exhibit the ability to heal themselves – to slowly redirect traffic and re-optimize pathways – eventually achieving a new norm, and demonstrating an ability to withstand significant damage.

Given that real scenarios have been modelled with considerable accuracy using complex networks (e.g. MMORPGs), it appears that the combination of the virtual world and its real elements does indeed exhibit behaviour that transcends the inanimate, and while not quite sentient, this behaviour is quite possibly beyond the reactionary. After all, even organisms function in a defined (albeit extremely complex) manner – and consciousness may well be a function of complexity (think 'Star Trek – V'Ger').
