The Brain: A Kind of Computer...

The human brain

Author : Dr. P. D. GUPTA

Former Director Grade Scientist, Centre for Cellular and Molecular Biology, Hyderabad, India

www.daylife.page

The brain is made up of neurons, which are the fundamental units of the brain and nervous system. It is estimated that the brain contains between 86 billion and 100 billion neurons, and it weighs between 1.0 and 1.5 kilograms, encased in the skull. The brain is the control unit of the body, acting through the nervous system, and is responsible for many functions, such as thought, interpretation, body movement, emotion, decision-making, discovering new things, remembering, and understanding. The brain is divided into three main parts: the forebrain, midbrain, and hindbrain:

Forebrain: The anterior part of the brain, which includes the cerebrum, hypothalamus, and thalamus. It is the main thinking part of the brain. Midbrain: A smaller part of the brain located between the forebrain and the hindbrain. It includes the tectum and tegmentum. Hindbrain: The lower, posterior region of the brain, which includes the cerebellum, medulla, and pons.

The human brain is solving quite difficult computational problems at every moment, just in seeing, recognizing a voice, or moving in a coordinated fashion on four limbs, or two limbs, or two wings. Most of these problems are so complex that they have yet to be formulated in explicit terms by computer scientists, which is why machines that can perceive, move, and communicate as animals do, performing all these functions at once, are still largely the stuff of science fiction. If computers are not really brains, what does it mean to call the brain a kind of computer? Many scientists who focus on computer models of cognition and brain structure answer this question by pointing to a simple device, built at MIT, designed to do one thing optimally, and one thing only: play tic-tac-toe. In this "computer," built from simple electronic components, a subset of game positions, each with its one optimal response, is encoded as the machine's memory. When presented with a particular position, the computer matches it to one in its subset and produces the correct response. By contrast, a digital computer would meet the challenge with a set of programmed instructions that it would run through recursively at each move to arrive at the optimal response.

The MIT device does not carry out a string of calculations or algorithms, the kind of task we generally think of a computer performing; instead, what it offers is essentially a "look-up table," with the correct answer precomputed and readily available. To obtain swift access to that answer, however, one must present a problem that exactly matches one of the problems originally encoded in the computer's memory. Beyond that pre-encoded set, the computer cannot provide any correct answer—or even a partial answer—unlike a digital computer, which can be reprogrammed for new problems because of its more general mode of operation. Still, within the realm of its pre-encoded problems and responses, the "look-up table" is extremely fast and effective.
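The contrast between a look-up table and a programmed computation can be sketched in a modern idiom. The positions, responses, and rules below are illustrative assumptions, not the actual encoding used in the MIT device:

```python
# Illustrative sketch: a look-up table vs. computing a response on demand.
# Board squares are numbered 0-8; "." marks an empty square.

# The look-up table: each pre-encoded position maps directly to one
# precomputed optimal move. Anything outside this set has no answer.
LOOKUP = {
    ".........": 4,   # empty board -> take the centre
    "....X....": 0,   # opponent took the centre -> take a corner
}

def lookup_move(board):
    """Fast, but works only for positions encoded in advance."""
    return LOOKUP.get(board)  # None if the position was never encoded

def computed_move(board):
    """The digital computer's approach: run through general rules
    at every move to derive a response for any position."""
    if board[4] == ".":          # rule 1: prefer the centre
        return 4
    for corner in (0, 2, 6, 8):  # rule 2: otherwise prefer a corner
        if board[corner] == ".":
            return corner
    return board.index(".")      # rule 3: otherwise take any free square

print(lookup_move("........."))    # found in the table
print(lookup_move("X........"))    # not encoded: the table has no answer
print(computed_move("X........"))  # but the general procedure still responds
```

The point of the sketch is the trade-off described above: the table answers instantly but only within its pre-encoded set, while the rule-based procedure is slower in principle but general.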

This kind of device, however, requires a great deal of memory, since every significant aspect of each pre-encoded problem must be specified if the match is to be accurate. For the game of tic-tac-toe this is manageable; for chess, with its roughly 10^40 possible game positions, or for real-life contexts in which the rules are less clear, it is impossible, at least at present. As a practical device, the look-up table is strictly limited. However, the principle remains attractive: precompute certain responses and retrieve them with minimal additional effort. True, the abundant memory required by a look-up table was extremely expensive in the first computers and still poses a practical challenge today. But if the amount of memory were tremendously expanded, it would be possible to store many more solutions, in other words, to address many more and different kinds of problems.

It is hardly a revelation at this point that the human brain exhibits just such a tremendous capacity to store information. With somewhere between a hundred billion and a trillion neurons, the human brain already looks fairly impressive, but what really expands its storage capacity far beyond anything we can yet envision on an engineer's drawing board is the brain's proliferation of synapses. Each neuron forms several thousand synapses, points at which signals can be transmitted. Even if the brain were to store information at the low average rate of one bit per synapse (in terms comparable to a digital code, the synapse would be either active or inactive), the structure as a whole could still build up vast stores of memory, on the order of 10^14 bits. Meanwhile, today's most advanced supercomputers command a memory of about 10^9 bits. The human brain is memory-rich by comparison.
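The storage estimate above is simple order-of-magnitude arithmetic, sketched here using the rough figures quoted in the text (a hundred billion neurons, several thousand synapses each, one bit per synapse):

```python
# Back-of-envelope estimate from the text: neurons x synapses x bits.
neurons = 10**11             # roughly a hundred billion neurons
synapses_per_neuron = 10**3  # several thousand synapses, order of magnitude
bits_per_synapse = 1         # the text's low estimate: active or inactive

brain_bits = neurons * synapses_per_neuron * bits_per_synapse
supercomputer_bits = 10**9   # the figure the text quotes for supercomputers

print(brain_bits)                      # 10^14 bits
print(brain_bits // supercomputer_bits)  # the brain's memory advantage
```

The ratio works out to a factor of about 100,000, which is what makes the brain "memory-rich by comparison."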

Of course, organization is crucial to managing such a vast resource, and the brain exhibits this feature at several levels, as discussed throughout this book. Research conducted on the simpler nervous system of invertebrates, as well as on nonhuman primates, other vertebrates, and humans, has indicated how learning brings about structural changes in nerve cells and how the neurons in turn form regions, which take part in networks. The networks are organized into distributed systems, which collaborate with other systems, both sensory and associative, to produce the total working effect.

Memory itself is organized so as to take advantage of these many levels of information: it appears to be arranged along associative paths, by the principle of contiguity. That is, the brain associates bits of information in such a way that we can recall items either on their own or by being "reminded" of them by a cue. The name of an acquaintance may come to mind when needed, or we may search for it under one heading or another: the name sounded like that of another friend, or the person looked like a former co-worker, or the meeting took place at the lunch following a difficult business negotiation. Considering the brain in purely physical terms, researchers have suggested that another form of contiguity may apply as well, that is, the simple proximity that builds up into maps. It may be that neurons close enough to one another to be activated together keep some trace of that contiguity as part of their bit of information.

Just what the memory-forming mechanisms might be, at a physiological level, has long puzzled psychologists as well as neurobiologists. Evidence of several kinds is gathering, however, in support of a model first suggested in 1949 by Donald Hebb: that a memory forms as a result of at least two kinds of activity taking place at a synapse simultaneously. The activities would have to involve both the pre- and postsynaptic elements, the neuron transmitting the signal and the one receiving it. Hebb reasoned that the strength of the signal received in the postsynaptic cell would depend on the interaction of many details: the amount of transmitter released, the presence or absence of neuromodulators that affect the postsynaptic cell's excitability, the number of receptor sites on the receiving cell, and other such variables. Whatever the specifics, the underlying principle would be that information is stored as a result of two or more biochemical factors coming together in time, at the same instant, and in space, at the same synapse. (The author has his own studies and views.) ...to be continued...
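Hebb's principle, that a synapse strengthens only when pre- and postsynaptic activity coincide, is often summarized as a simple weight-update rule. The sketch below is a standard textbook formulation of that rule, not Hebb's own notation, and the learning-rate value is an arbitrary assumption:

```python
# A minimal Hebbian update: the synaptic weight grows in proportion to
# the product of presynaptic and postsynaptic activity, so only
# coincident activity (both nonzero) strengthens the connection.
def hebbian_update(weight, pre, post, learning_rate=0.1):
    return weight + learning_rate * pre * post

w = 0.5
w = hebbian_update(w, pre=1.0, post=1.0)  # coincident activity: strengthened
print(w)
w = hebbian_update(w, pre=1.0, post=0.0)  # no postsynaptic activity: unchanged
print(w)
```

The product term captures the "two kinds of activity taking place at a synapse simultaneously": if either side is silent, the product is zero and no trace is stored.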