How smart can we become?

The current interest in individuals’ intellectual abilities was born when a new bourgeois class began to challenge the society into which it was born. When it was no longer ancestry and position but the individual’s own talent that dictated how far one could go in life, it became more important to find out what that talent looked like. The image of a quick-witted new middle class pitted against a sluggish old aristocracy became so popular that it soon became a cliché. It is the one we are amused by when reading P. G. Wodehouse’s stories of how the brilliant Jeeves always had to help Bertie Wooster and his upper-class fool friends. The real explosion did not come until the late 1800s, when Darwin’s theory of evolution and statistical science gave scientists the tools to speculate. Darwin believed that intelligence was not uniquely human. An animal that lacks the ability to learn from experience – how to find food, where danger lurks – cannot survive and pass this inability on. Humans were the most intelligent, and this helped them to beat their closest rivals.
It was Darwin’s cousin, the ingenious Sir Francis Galton, who laid the foundations for the modern theories of intelligence through his obsession with measuring the world. His attempt to rate women’s beauty on a three-point scale (the most beautiful living in London, the ugliest in Aberdeen) is well known, as is his ambition to evaluate whether prayer works, which included an examination of whether members of the royal family lived longer because so many prayed for their prosperity. Galton was interested in the origins of intelligence, perhaps because he suffered from a severe performance complex. Unlike his cousin, Galton believed that intelligence differed dramatically between people, and in one paper he mapped the leading British families and thought he could prove that talent was inherited. He also sought more direct measurements of intelligence. In the Hobbesian tradition, he saw intelligence as a form of mental agility, so he developed a series of tests of reaction time. One of these entailed him going out with a stopwatch to measure how many thoughts he could think in one minute. The method now seems ridiculous, but the idea of measuring intelligence would never die. During the first years of the 1900s, the French psychologist Alfred Binet developed a successful test that was more focused on solving problems, discovering patterns and reasoning analytically. Binet saw intelligence as a diverse characteristic and thought that it could, to a great extent, be trained. He was driven by the desire to identify children who had problems in school and give them an education that developed their talents.
Binet’s tests provided the scientific basis for conclusions quite different from his own. The psychologist Charles Spearman combined Binet’s versatile tests with Galton’s view of intelligence as a single, mainly inherited ability. Using statistical techniques, he analysed the results and found a strong correlation between how well an individual handles various kinds of intellectual problems, especially those that require abstract thinking. He called it ‘general intelligence’, the ‘g-factor’. Spearman did not see it as the ability to learn, but as the ability to think abstractly, which in turn affects learning ability. It was a kind of mental energy, a ‘brain power’ that Spearman explicitly compared with the ‘horsepower’ measure of engine power. The German psychologist William Stern developed the idea of relating a person’s talent to the age group she belonged to, and hence we got the concept of IQ – the intelligence quotient. In 1916, the Stanford psychologist Lewis Terman created the first modern IQ test. His ambition was not only to identify problem children, but to rate the entire population’s intelligence to see who was above and below the average for their age group – above and below 100 IQ points – to help everyone to their proper place in life.
Just one year later, the U.S. military began using the tests on a massive scale. The IQ of all 1.75 million recruits was tested during the First World War. Those who received an ‘A’ would be given officer training, whereas those who received a ‘D’ or ‘E’ were not even trusted to read or understand written instructions. The army tests quickly made IQ a widely known concept in the United States. The search for the g-factor soon came to overshadow all other attempts to assess people and their capabilities. Authorities and companies began using intelligence tests to get the right man in the right place. After World War II there also came standardised tests that could be evaluated automatically, without the presence of trained staff, and the measurement industry reaped considerable rewards. Rarely has a new method had such a quick impact on our view of society and each other.
The IQ tests could be used for meritocratic purposes. If the environment governs everything, parents simply pass on their financial and intellectual capital to their children; but if intelligence is partly innate, talent may exist in the most unexpected places, and social mobility is possible. In many cases the tests were used to find talents whose development had been hampered by poverty and poor schools. In his book on education and psychology in the UK, Measuring the Mind, Adrian Wooldridge documents how IQ tests after World War II opened the elite British schools to larger segments of the population. An old Cambridge teacher reacted bitterly when he heard that a student was interested in IQ tests: ”Ha. This is something Jews have invented for the benefit of Jews.” As the tests spread, however, a stronger reaction grew, and not just from the old elite. Reluctance to put a figure on a person’s talent was as old as the idea that one could do it. ”I hate the impudence of a claim that in fifty minutes you can judge and classify a human being’s predestined fitness in life,” wrote the author and journalist Walter Lippmann in 1923. Such reluctance was reinforced when intelligence tests were in many cases used to justify elitism and discrimination – an aspect of the research that had existed from the beginning.
One of Francis Galton’s motivations when he sought to measure intelligence was the fear that the world would be filled with idiots – not an insult, but the scientific term of the day for a person with an IQ below 30. The less talented had more children, and because modern society had effectively reduced infant death, hunger and disease, these children now survived and could have children of their own. They were on the way to supplanting good characteristics. It was Galton who coined the term eugenics (from the Greek for ‘good birth’) and who wanted to find ways to get the intelligent to marry and procreate, so that a better race could be created. Spearman was even harsher and suggested that those who did not score highly enough on a ‘g-index’ should be deprived of the right to vote and reproduce. Terman believed that blacks and immigrants from Eastern and Southern Europe had lower intelligence than the original immigrants from the intelligent Nordic race. The early researchers on intelligence saw a greater role for women than society wanted to give them, but they came to consolidate and legitimise other forms of stratification, especially in America, where many whites felt concern over mass immigration and the large black population. ”How can you have anything like social equality with these large differences in mental capacity?” asked one of the pioneers.
The results of the army tests raised concerns throughout the United States. They suggested that the recruits had a much lower level of intelligence than scientists had imagined when they had mostly tested students – and that it was diminishing rapidly. Many thought this was because blacks and non-Nordic immigrants were mingling with the population; according to the tests, they had a shockingly low intelligence. The average person in many countries is an imbecile, said several psychologists. The early tests were limited, and even those meant to measure only abstract intelligence often required some general knowledge and understanding of language. Psychologists could have explained the poor performance of the immigrants by the fact that they had not mastered the language, that blacks had very limited schooling, and that feelings of inferiority led to lower motivation and greater nervousness. But they read the results through their belief in genetic differences between races and, instead of seeing the potential, they saw a threat to the nation. Campaigns to limit immigration revived, and in 1924 the USA imposed a new immigration law with ethnic quotas, which reduced or stopped immigration from countries with less promising populations.
In the early 1900s, eugenics in some form was a fashionable idea among intellectuals in the West. Opposition came only from conservatives who saw it as a crime against creation, liberals who saw it as an interference in the personal sphere, and socialists in solidarity with the working class that would be affected – but for a long time these were minorities within their own ideologies. At one time, eugenics was considered a universal remedy, the policy tool that would solve all of society’s problems: poverty, crime, unemployment and prostitution. If only we had intelligent children, they would live good and happy lives. Even progressives were frightened by the threat of the imbecile, even if they less often accepted the idea of sharp IQ differences between races. As Adrian Wooldridge observes, it was the intellectual left in the UK that was most excited about eugenics, and the leading Fabian, Sidney Webb, noted gleefully that this was an issue on which laissez-faire liberals were at a loss, because all eugenics was based on the idea that the state must ”interfere, interfere, interfere”.
John Maynard Keynes was one of the leaders of the Eugenics Society from 1937 to 1944, George Bernard Shaw wanted to ”socialise human reproduction”, and the father of the British welfare state, William Beveridge, found that the mentally defective should lose the right to vote and to have children. Bertrand Russell was, as usual, the most inventive. He suggested that the state should hand out tickets for reproduction in different colours, so that people were paired with those who were just as smart. Those who had children with someone holding a ticket of a different colour would be fined. Here in Sweden, the Social Democratic thinkers Gunnar and Alva Myrdal were no less worried. In The Crisis in the Population Question, they estimated that society’s and the workplace’s increasing demands for knowledge and efficiency meant that between 10 and 20 percent of the population could no longer manage their lives. Interestingly, they were less concerned about physical and mental illnesses than about the ”mild idiots’ reproductive freedom”, which was assumed to lead to asocial behaviour, welfare dependency and crime. Mr and Mrs Myrdal therefore advocated ”a fairly ruthless sterilisation procedure” and especially ”the right of the community agencies to sterilise even the legally capable against their will”. In 1934, Sweden got a sterilisation law uniquely comprehensive for a democracy, which led to around 63,000 Swedes being sterilised, more than 20,000 of them against their will. Alva Myrdal expressed early concern that the number was too low.
As late as 1946, Keynes spoke of eugenics as ”the most important, most meaningful, and I would add, the most genuine branch of sociology.” But by now he was alone in this view. After the world had seen National Socialism’s bestial use of ideas about heredity and eugenics, they fell into disrepute, and with them the intelligence research with which they had been associated. Soon the U.S. ended its official racial segregation, and country after country opened up its education system, making social mobility possible. Because the science of intelligence had been misused to consolidate differences, the whole science was now thrown out.
Behaviourism’s popularity contributed to a growing disinterest in our interior. This psychological school devoted itself to our actions and how they are affected by external circumstances; how we are inside, congenital conditions and differences between people were not considered interesting. The theory was perfectly compatible with the 1960s’ craze for equality. The emerging egalitarianism, which in its radical versions might recall Lysenkoism, argued that anyone could become anything – that it was mainly the environment that made one person elite and another a bum. It is society’s fault (or virtue). Theories that until recently had been taken almost for granted suddenly felt incredibly controversial. In 1969, the Berkeley psychologist Arthur Jensen wrote an article arguing that the new school reforms had not lived up to expectations because they were aimed at students with lower IQs, which he considered largely hereditary. Jensen was reviled and threatened, and was greeted by a veritable storm of protest every time he tried to lecture anywhere. ”The whole thing reminded one rather of religious persecution than a scientific exchange,” writes Nils Uddenberg in his book The Soul’s Shamans.
In 1971, the U.S. Supreme Court prohibited employers from using IQ tests unless they were clearly linked to the position to be filled. The teachers’ union urged schools to stop all testing, because minorities had worse results. In 1981, the palaeontologist Stephen Jay Gould, in his acclaimed book The Mismeasure of Man, attacked the idea that intelligence is a meaningful concept that can be measured, and claimed that those still trying to do so want to show that minorities deserve their inferior position. It showed how far the pendulum had swung. It is also a prime historical example of how science and the social climate can go out of sync. During the same period, the research on intelligence developed rapidly. Around the turn of the century there had been little empirical evidence, yet the researchers nonetheless drew far-reaching conclusions. But while prominent intellectuals did away with their ideas about the g-factor, inheritance and differences, the tests became more scientific and the body of knowledge greater – and it pointed in a different direction.
In 1994, a bomb hit the debate: The Bell Curve, by the psychologist Richard Herrnstein and the social scientist Charles Murray. Across its 800 pages, packed with references yet easily accessible, it argued that there is a g-factor, that it can largely (40-80 percent) be explained in terms of genetics, that it can be detected using IQ tests, and that it matters greatly for what happens to a person in life. Most controversially, it addressed average differences between ethnic groups (white Americans have a higher IQ than blacks, but lower than those of Asian origin) and even argued that these were partially due to genetic causes. The book sold in huge numbers, and was violently debated and reviled. What is interesting, however, was not the book’s controversial statements about race and IQ, but everything in it that was not really controversial. What probably made the biggest impression on readers was that so much of the information that violated everyday perceptions proved to be scientifically established. In December 1994, a piece signed by 52 intelligence researchers, entitled Mainstream Science on Intelligence, was published in the Wall Street Journal, a document that explained the research behind many of the claims made in The Bell Curve.
The controversy prompted the American Psychological Association (APA), the large organisation of American psychologists, to initiate an inquiry into IQ. The APA said that the debate about Herrnstein and Murray’s book had caused so many misunderstandings that it was time, once and for all, to define what science knows and does not know. A commission of eleven respected scientists, representing a broad range of expertise and views, was appointed, and reported unanimously in August 1995. The group noted that there was no evidence that ethnic differences in intelligence have genetic causes, but otherwise its conclusions were close to those of The Bell Curve’s authors: there is a relatively stable g-factor, it can be measured with intelligence tests, and these show that individuals differ from one another and that the differences are relevant to how they fare, on average, in education and professional life. The 1900s thus ended with the classic intelligence research restored, including the importance of genes and differences. But the fluctuations in the debate give us reason to believe that it will continue to evolve, and maybe we are headed for a picture of a dynamic intelligence, subject to constant change. The evidence is now indisputable that IQ levels are not constant, but have grown quite rapidly in almost all groups over the last century. Whether this ‘Flynn effect’ is because we have become more intelligent, or because more and more of us live in an information society in which we constantly train ourselves to think abstractly, it raises the question of why we should dramatise the differences between groups. They may simply reflect the false start some got.
Genetics has taught us that the interaction between genes, and between genes and our environment, is rather more complicated than we previously thought. Everyone believes that intelligence to some extent has a genetic basis, but no one believes any more that we will find a single gene for intelligence. Modern brain research has also shown that the brain is not a static machine. Each day brings new brain cells, and every microsecond the brain’s networks change because of what we think, feel and experience. The parts of the brain that we use a lot grow, and the brain cells that we do not use, we lose. Then there is the technological revolution that could reach our brain. Soon we may well have minute microprocessors in the brain that enhance our memory, nano-sized robots that diagnose and repair brain damage, and drugs that improve brain function. And our minds will naturally be wirelessly connected to the network. The central question is no longer how intelligent we are, but how intelligent we can become. Not even with our ingenious minds can we predict where this will lead, but it is conceivable that future understanding of the brain will be modelled not on industrial machines or the computers of the information society, but on the modern, bustling city, with its frequent contacts and cooperation, where we meet new visitors in unfamiliar neighbourhoods, and where we are constantly tearing down, renovating and building anew.