Chapter 4

There is no 'electric current'

Electricity and magnetism have been known since antiquity as mysterious curiosities of Nature. Until the nineteenth century there is no record of their being applied to any practical end beyond entertainment and, in the case of magnetism, navigation; otherwise they prompted only puzzled speculation. They offered proof of mysterious forces in the natural world that seemed connected with the deeper mysteries of life and living things.

Anyone who plays about with a few magnetized objects quickly realizes that their mysterious force emanates from two opposite points or poles, and that these are complementary: like poles repel, whilst unlike poles attract. The same is soon discovered about electricity. The easiest way to generate it is by friction, and the electric charges produced by rubbing rods of glass and amber with a cloth, for example, can be used to give two metallic objects opposite charges, which disappear when the objects are touched together.

Such charges are called static electricity, the only type known until 1800 when Alessandro Volta discovered low voltage current electricity and announced his 'Voltaic pile': the first electric battery. Even a tiny spark of static electricity has a potential of a thousand volts or more, but is only a minuscule quantity: perhaps a millionth of a coulomb. Volta's pile produced a few tens of volts, but did so continuously at rates up to a few coulombs per second: a few amps in modern terminology. The fact of its continuous supply allowed current electricity to be investigated in detail, whereas the instantaneous – and often alarming – flash of static discharges made repetitive, controlled experimentation impossible.
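The difference in scale is easy to put into numbers. The short Python sketch below uses purely illustrative figures (a one-microcoulomb spark at a thousand volts, and a pile supplying two amps at thirty volts, are assumptions, not measurements):

    # Rough comparison of a frictional spark with Volta's pile.
    # All figures are illustrative assumptions, not historical measurements.
    spark_charge = 1e-6        # coulombs: roughly a millionth of a coulomb
    spark_potential = 1000.0   # volts: a thousand volts or more

    pile_current = 2.0         # amps: a few coulombs per second
    pile_potential = 30.0      # volts: a few tens of volts

    # Charge delivered by the pile in a single second, compared with the
    # entire charge of the spark.
    charge_ratio = (pile_current * 1.0) / spark_charge
    print(f"{charge_ratio:,.0f} times the spark's charge every second")  # ~2,000,000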

Theorists soon realized that static and current electricity were identical phenomena at different orders of magnitude, but opinions differed as to their constitution. The two dominant schools of thought were the 'single fluid' and 'two fluid' theories, the former holding that only a single 'electric fluid' existed, and that charged bodies possessed either an excess of it – a 'positive charge' – or a deficit – a 'negative charge'. The two-fluid theory proposed the existence of two distinct fluids carrying opposite charges.

The truth eventually emerged as a combination of the two. Two opposite charges are now known to exist within all atoms: their nuclei are assigned a positive charge, and the cloud of electrons surrounding them a negative one. An excess of electrons gives an object an overall negative charge, a deficiency gives it a positive one, and a flow of electrons from the former to the latter – the flow of 'electric fluid' – neutralizes both.

Elementary science courses generally begin with an example such as a battery powering an electric light, typically a small globe with a thin metal filament heated to incandescence by the passage of a current through it. Measurement and calculation being fundamental to science, students are taught to apply these to such simple examples. The battery may have a potential of 12 volts, the lamp a resistance of 20 ohms, and Ohm's Law is used to calculate a current of 0.6 amps. Multiplying the voltage by the current yields a power of 7.2 watts. So far, so good.
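The calculation is simple enough to set out explicitly. The short Python sketch below merely restates the figures quoted above:

    # Ohm's Law and power for the battery-and-lamp example.
    voltage = 12.0                  # volts: the battery's potential
    resistance = 20.0               # ohms: the lamp's resistance

    current = voltage / resistance  # I = V / R  ->  0.6 amps
    power = voltage * current       # P = V * I  ->  7.2 watts

    print(current, power)           # 0.6 7.2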

Confusion arises, however, in a very simple matter. Current is said to flow from the battery's positive terminal to its negative. This is necessary in order to give the magnitudes their correct signs, positive or negative. For example, if the lamp is dimmed by inserting a 20 ohm resistor in series with it, the current is halved and the lamp, receiving only half the voltage, operates at a quarter of its former power. Half of the voltage is dropped across the lamp, half across the resistor. Adding these together gives the battery's potential, and Kirchhoff's voltage law is satisfied. This appears to be equally satisfactory.
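Again the figures can be checked directly. The sketch below repeats the dimmed-lamp calculation and confirms that the two voltage drops sum to the battery's potential, as Kirchhoff's voltage law requires:

    # The same lamp dimmed by a 20 ohm series resistor.
    voltage = 12.0
    r_lamp = 20.0
    r_series = 20.0

    current = voltage / (r_lamp + r_series)  # 0.3 amps: half the previous current
    v_lamp = current * r_lamp                # 6 volts dropped across the lamp
    v_series = current * r_series            # 6 volts dropped across the resistor
    lamp_power = v_lamp * current            # 1.8 watts: a quarter of the original 7.2

    # Kirchhoff's voltage law: the drops sum to the battery's potential.
    assert abs((v_lamp + v_series) - voltage) < 1e-9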

Problems arise when overly curious students question the given explanation. An electric current consists in a flow of electrons, and these move from the battery's negative terminal to its positive, not from positive to negative. What, then, is this 'electric current' that moves from the positive terminal to the negative? Both teachers and textbooks would prefer that this question did not arise, but it must be anticipated and addressed, however summarily. Electrons, it is explained, have a negative charge, and the same calculation can be performed for electron flow by inverting all of the signs to obtain an identical result. The two are mathematically equivalent, so there is no problem.
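The equivalence of the two descriptions is easy to demonstrate numerically. In the sketch below the same circuit is worked out twice, once with the conventional current taken as flowing from positive to negative and once with the electron flow taken in the opposite direction; every sign is inverted, and the results are identical:

    # Conventional current (positive to negative) versus electron flow
    # (negative to positive): inverting every sign changes nothing physical.
    voltage = 12.0
    resistance = 20.0

    # Conventional description: potential and current both taken as positive.
    i_conventional = voltage / resistance       # +0.6 amps
    p_conventional = voltage * i_conventional   # +7.2 watts

    # Electron-flow description: reverse the reference direction, so the
    # potential difference and the current both change sign.
    i_electron = -voltage / resistance          # -0.6 amps
    p_electron = -voltage * i_electron          # +7.2 watts: identical power

    assert p_conventional == p_electron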

This explanation is perfectly rational, logically correct, and must therefore be accepted. Nonetheless, it leaves thoughtful students with an uneasy feeling that something is amiss. None are able to put their reservation into words; were they so able, they would object that the explanation constitutes a conceptual error, and if asked why they would reply, "Why don't electrons have a positive charge? That way, everything still works, but is now conceptually correct." It is likely that no teacher has ever met this challenge, nor does any textbook present it. Instead, students who cannot accept what is taught are criticized for failing to understand simple physics and basic mathematics.

Occurring as it does so early in the study of electricity, this simple, obvious dilemma has caused many students to doubt their own ability to understand physics, to fear the mysteries of mathematics, and eventually to decide against science in favour of other subjects. It has been and remains a significant early deterrent to a clear understanding of Nature's fundamental essentials. Its resolution can only be discovered by returning to the period of the fluid theories, or perhaps a little earlier in order to establish a context for clear understanding.

In 1733 Charles du Fay formalized the two-fluid theory by proposing that electricity comes in two varieties that cancel each other. In his terminology, a glass rod rubbed with silk was charged with vitreous electricity, while an amber rod rubbed with fur was charged with resinous electricity. Benjamin Franklin demurred, since he imagined electricity to be an invisible fluid present in all matter. Rubbing insulating surfaces together caused this fluid to flow between them, this flow being an electric current. In Franklin's terminology, matter that contained too little of the fluid was 'negatively' charged, and matter that held an excess was 'positively' charged. By 1750 he had identified the term 'positive' with vitreous electricity and 'negative' with resinous electricity, but without giving a reason for the choice: it appears to have been purely arbitrary.

Remembering that atoms were still in dispute at the time, and electrons quite unknown, it is readily appreciated that the names for the two electrical polarities could not be tied to any underlying physical carrier and rested on convention alone. Du Fay's nomenclature at least referred to the materials that produced each charge, and was valuable for that reason, since it permitted consistency. Franklin was not only an enthusiastic experimenter, but a successful publisher and politician. Established scientists relied for their authority on professional standing, social status and collegial recognition, none of which guaranteed widespread acceptance of purely personal preferences. Franklin relied instead on the printed word, his scientific achievements, and political fame. If serendipity guided his choice of terminology, it deserted him: he got it wrong.

By the time cathode rays were discovered in 1869, Franklin's names for electrical polarity were solidly cemented into the foundations of scientific terminology. Cathode rays were soon shown to be negatively charged by deflection in a magnetic field, but their nature was uncertain: some thought they were charged atoms, others that they were a type of electromagnetic radiation. Thomson measured their charge-to-mass ratio in 1897, which implied a mass roughly two thousand times smaller than that of a hydrogen atom; they were a stream of unknown particles that were soon named electrons. The foundations of modern electrical theory and the guaranteed confusion of generations of students were irrevocably laid at the same time.

Consider now what might have happened had Franklin's choice been more fortunate. Nothing of consequence would have changed at the time, nor thereafter until Thomson's famous discovery. From that time forth, however, mathematical expressions and physical theory would have had a far closer and more natural correspondence. This may seem of trivial import to experienced practitioners today, but they will reflexively ignore the very significant psychological consequences of the change, and dismiss it as meaningless if so challenged.

The truth is surely otherwise. The first consequence would have been generations of students who found electrical theory as simple and straightforward as its elementary stages certainly are, and who were far more comfortable studying both it and its mathematical formalism. A considerable number might thereby have been persuaded to choose science as a career, producing a larger body of more confident practitioners and researchers, with undoubted benefits for the entire field.

A second consequence will be more controversial to some. The study of science requires competence in several abilities, among them the formation of clear mental images. This is the basis of all true understanding of material phenomena, and is of essential assistance in mastering abstractions. The closer our concepts come to being an accurate reflection of reality, the more confident our understanding and the more critical our reasoning.

As obvious as this will seem to laymen, many scientists will not only contest it, but reject it outright. Since 1927 it has been held that conceptual understanding of physical phenomena is ultimately impossible; that only mathematical models of physical events are of use in comprehending them; and that conceptual analysis should be discouraged in undergraduate studies and abandoned thereafter. In practice, this belief is honoured more in the breach than the observance, though never so confessed by those with an eye to their reputations.

A third result is too controversial for discussion here, but concerns the biological aspects of electricity. All living organisms rely on chemical reactions for their functioning, and all chemical interactions involve electrical processes. Higher animals possess nervous systems that control their bodily functions, and these operate directly by means of tiny electric currents. The question as to whether electrical or chemical processes are primary is moot, but it is obvious that electricity is crucial to life. Any improvement in our understanding of electricity must therefore offer deeper insights into biology.

In the ideal case, MWS would have reversed its assignment of electrical polarity following Thomson's discovery. In practice this would have cost so much time and money, and caused so much confusion, that it could never have been considered worthwhile, despite the long-term benefits. At the very least, however, the above explanation should be an essential component of all elementary instruction in electrical theory. That it is not says much about the state of modern education, the characters of those who become scientists, and MWS itself. Those who either question the existence of electric current or have trouble accepting it are deemed unsuitable for training as scientists. Only those who accept the fiction, for whatever reasons, are suitably qualified. Any discipline that favours belief in doctrine and dogma over belief in reality is, by definition, a religion, not a science. It is with good reason that MWS is now regarded by many as the religion of Scientism.

Electric current, too, has been reified, but without even the poor excuse that it is a useful concept. It is merely a fiction used to disguise an historical error, and should never have been invented.