Of the various methods that purport to measure intelligence, the most famous is the IQ (Intelligence Quotient) test, a standardized test designed to measure human intelligence as distinct from attainments. Intelligence quotient is an age-related measure of intelligence level: the word quotient means the result of dividing one quantity by another, and one definition of intelligence is mental ability or quickness of mind. IQ tests usually consist of a graded series of tasks, each of which has been standardized on a large representative population of individuals in order to establish an average IQ of 100 for each test. It is generally accepted that a person's mental ability develops at a constant rate until about the age of 13, after which development has been shown to slow down; beyond the age of 18 little or no improvement is found. When the IQ of a child is measured, the subject attempts an IQ test that has been standardized, with an average score recorded for each age group.
Thus a 10-year-old child who scored the result that would be expected of a 12-year-old would have an IQ of 120, or 12/10 × 100.
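The classical ratio calculation described above (mental age divided by chronological age, multiplied by 100) can be sketched as follows; the function name is chosen for illustration only:

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Classical ratio IQ: mental age divided by chronological age, times 100."""
    return mental_age / chronological_age * 100

# The example from the text: a 10-year-old scoring at the level
# expected of a 12-year-old.
print(ratio_iq(12, 10))  # 120.0

# A child performing exactly at the level expected for their age
# scores the average of 100.
print(ratio_iq(10, 10))  # 100.0
```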
Because little or no improvement is found after the age of 18, adults have to be judged on an IQ test whose average score is 100, with results graded above and below this norm according to known test scores. Like so many distributions found in nature, the distribution of IQ takes the form of a fairly regular bell curve (see Figure 0.1 below), in which the average score is 100 and similar proportions occur above and below this norm. There are a number of different types of intelligence test, for example the Cattell, Stanford-Binet and Wechsler tests, each of which has its own scale of intelligence. The Stanford-Binet is heavily weighted with questions involving verbal abilities and is widely used in the United States.
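The bell curve described above can be modelled as a normal distribution centred on 100. A minimal sketch, assuming a standard deviation of 15 (the Wechsler convention; the text notes that other tests such as Cattell use different scales):

```python
from statistics import NormalDist

# Model the IQ distribution as a bell curve with mean 100.
# The standard deviation of 15 is an assumption (Wechsler scale).
iq = NormalDist(mu=100, sigma=15)

# Symmetry of the bell curve: equal proportions of the population
# fall above and below the norm of 100.
below = iq.cdf(100)
above = 1 - iq.cdf(100)
print(below, above)  # 0.5 0.5

# Proportion of the population scoring within one standard
# deviation of the average (85 to 115).
within_one_sd = iq.cdf(115) - iq.cdf(85)
print(round(within_one_sd, 3))  # 0.683
```

The symmetry check makes the "similar proportions above and below this norm" point concrete: exactly half the modelled population lies on either side of 100.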