If IQ is a measure of the relative speed and accuracy of an individual’s verbal/numerical and abstract reasoning under controlled conditions, how does it differ from ability testing? Ability tests are stand-alone measures of a person’s potential in a specific area (e.g. Verbal, Numerical or Spatial). Typically they are used to assess the ability to learn the skills needed for a new job or to cope with the demands of a training course. There is no widely accepted definition of the difference between ability and aptitude. Aptitude tests tend to be related to very specific jobs and have names that include job titles, such as the Programmers Aptitude Series (SHL). Ability tests, on the other hand, are designed to measure the abilities or mental processes that underlie aptitude. An ability test such as the General Ability Test (GAT) is made up of four tests of specific ability: numerical ability, verbal ability, non-verbal ability and spatial ability. You will find with experience that some tests fall into more than one category and that the distinction between the various categories is not always an easy one to define.
Ability or Aptitude tests are definitely not the same thing as Tests of Attainment. A test of attainment is concerned with your understanding of a curriculum or syllabus. Ability tests, by contrast, are prospective: they focus on what a person is capable of achieving in the future, or their potential to learn. School examinations are one example of measures of achievement or attainment, and while we might draw some conclusions about an individual’s ability on the basis of Leaving Certificate or GCSE results, we would not use them as a direct measure of ability, since a less able student may work harder than a more able student to produce a better score.
The following pages include example items from:
(1) The verbal reasoning subtests from an Executive, Graduate and Managerial assessment published by ASE.
(2) The numerical reasoning subtests from an Executive, Graduate and Managerial assessment published by ASE.
(3) The non-verbal reasoning subtests from the Differential Ability Test Series published by ASE.
The questions in this test are designed to assess your ability to reason with non-verbal figures or designs. The examples below are designed to ensure you understand what is required in the assessment itself. The examples are untimed; however, if you are asked to take the test itself as part of an assessment process, you will have 20 minutes to answer as many questions correctly as possible.
Example 1
Look at the diagram below. There are two figures inside a large oval. Decide how they are alike (this may be in one way or in several ways) and select the figure at the bottom that also has all these qualities.
Examples 2 & 3
In these questions one section is missing from a grid containing an arrangement of shapes. The missing section is marked by a question mark. You have to work out how the shapes are related to each other and select which of the six possibilities is the missing section.
Research on the Relevance of Ability Tests
Prior to the 1970s, many industrial and organizational psychologists believed that selection instruments were situationally specific, in that test validity varied not only from job to job but also from location to location (Guion, 1965). The implication was that an organization would have to conduct a separate validity study for each specific situation to ensure accuracy in testing. This proved to be difficult and costly, and resulted in a preponderance of small-scale research. In such small samples, much of the variation in both test scores and performance measures can be due to idiosyncratic fluctuations in the data (Ghiselli, 1966; Guion, 1965; Lubinski, 1996). By the early 1980s, statistical research started to indicate that virtually all of the differences in validity outcomes were produced not by actual differences in the validities of the tests, but by statistical and measurement error brought about by small sample sizes (Schmidt & Hunter, 1998; Schmidt, Hunter, McKenzie, & Muldrow, 1979). These proponents of meta-analysis used statistics to pool the data across studies, thereby eliminating much of the impact of sampling bias. Results of these studies supported the concept of validity generalization, eliminated much of the need to perform in-house validity studies, and provided evidence to support the application of commercially available selection tests validated on different populations.
The “body of knowledge” concerning different types of assessments generated by meta-analysis is today considered the most robust data on the relevance of different selection tools. These studies provide evidence of effect sizes and demonstrate consistencies in validities across situations. The table below is based on a meta-analysis of 85 years of research by Schmidt & Hunter and is used today by the US Office of Personnel Management to summarise research findings on the validities of various assessment tools.