An Actuarial System of Effective Regime & Sentence Management

David Longley, former Principal Psychologist, HM Prison Service

David Longley's paper, An Actuarial System of Effective Regime & Sentence Management, offers many interesting insights such as the following:

"The majority of staff employed by the Prison Service are already performing tasks which could be classed as work in behaviour management, but all too many confuse behavioural measures with psychological factors, or, alternatively, equate the word 'behaviour' with a limited class of actions, usually with a social consequence. In reality, skills with vocabulary, grammar, counting, and all other skills taught by instructors and teachers are behaviours, and can be recorded as skills. Behaviour, in this sense, is no more or less than observable, recordable action or performance. What is required is a professional service in analysis of such performance using the same quantitative technology brought to bear in other areas of physical science and technology. If psychologists limited themselves to helping other staff to record and analyse measures of behaviour as functions of the regime in which they occur, the Prison Service would have an effective science and technology of behaviour along with a clear framework for both recruitment and staff training of such professionals."

An Actuarial System of Effective Regime & Sentence Management

David Longley, former Principal Psychologist, HM Prison Service

The following paper comprises four elements.

The first is an extract from HM Inspectorate of Prisons' "Thematic Review" (November 1997), which recommends a wider implementation of the actuarially based system of Sentence Management throughout the estate.

The second provides an edited extract from the original Sentence Management paper (see regimes.pdf for the full paper).

The third provides an Executive Summary of the larger project (of which the Sentence Management System is a substantial part). The larger project is referred to as the 1994 system within the above report.

The fourth provides an academic context, reviewing diverse themes drawn from a number of areas of research within behavioural science which, the current author believes, lend some support to the assertion that, in the interests of effective regime and inmate management, the Prison Service should look to the implementation of a properly resourced, actuarially based system of skill assessment and programming.

Extract 1

Young Prisoners: A Thematic Review by HM Chief Inspector of Prisons for England and Wales

4. Least Harm - Most Gain

"The majority of establishments holding children and young adults have been forced to operate as human warehouses rather than reforming institutions."

Introduction

4.01 At one time I considered publishing this chapter as a separate volume, because there is so much to be said about the principles and practice which should characterise the approach to children and young adults in custody. In the event I decided on making a summary only, setting out the Agenda for the person I hope will be appointed to action, not just process, my recommendations. This follows visits which I and my team have made to every Prison Service establishment (41 in all) holding young people between the ages of 15 and 21.

4.75 Sentence plans (and custody plans for unconvicted prisoners, in the exceptional circumstances where these existed) tended to show no clear links between targets set, or achieved, and incentives and earned privileges schemes, where these were in operation. Incentives schemes tended to stand or fall on the availability of real incentives, and, while some establishments had shown considerable imagination in developing rewards, in other places there were very few real incentives.

4.76 The lack of coherent linking between incentives schemes and sentence planning is symptomatic of a more general lack of integration; for example we found no consistent evidence of links between sentence plans and pre release programmes. The notion of throughcare starting at entry into custody and continuing through imprisonment and into the community as a "seamless process" is far from a reality. Yet the framework for integration exists in the information management system designed in 1994 in the then Directorate of Inmate Programmes of the Prison Service. My team were delighted to see that at least a small number of establishments had adopted a data collection process and were adapting sentence management questionnaire forms developed at HMP Garth. By making staff focus their attention on these questions, the system steers even those who were unaccustomed to interacting with prisoners into closer contact with them. Ultimately, sentence management stands or falls on this interaction between staff and young people in custody. All establishments holding young offenders should introduce the sentence management scheme developed at HMP Garth.

Programmes for Tackling Offending Behaviour

4.77 I noted a trend towards replacing some traditional activities with offending behaviour programmes, for example, reducing education courses and substituting drug awareness or cognitive skills courses. However, I also noted the wide gap between the need for offending behaviour programmes and their provision. What is more important is that my team found little evidence of offending behaviour programmes specifically designed for young offenders, adopted, applied and evaluated throughout the system. There were examples of good, locally grown programmes, and good examples of adoption of centrally designed programmes, but no norm. Some locally developed offending behaviour programmes appeared to suffer from a lack of training on the part of those devising and delivering them. Much can be accomplished by commitment and enthusiasm, but there needs to be a balance between energy and expertise, as inappropriate programmes can damage the recipient. A few programmes devised locally had been accredited but more often, they had not. This suggests a natural responsibility for a Director to ensure consistency throughout the system and with after-care in the community.

4.78 Many young offender institutions initially involved specialist departments in setting up and providing offending behaviour programmes, but cost cutting has meant, in some cases, a reduction in their input. There is every reason why suitably selected and trained Prison Officers should be involved in delivering offending behaviour programmes; the best of them do so already in their daily interaction with young people in custody, which amounts to informal offending behaviour work. It is, however, a sign of poor management, as well as being demotivating and unfair to staff, to assign to them the major role in delivering formal offending behaviour programmes without equipping them with the necessary resources (principally time, training and support) to enable them to do so effectively. Some of the best examples we found of offending behaviour programmes, as in the adult system, involved a productive partnership between several departments (Prison Officers, Probation Officers, education staff and psychologists).

A Lack Of Direction By The Prison Service

4.83 This summary of the different stages that young people in custody experience underscores the lack of a coherent system for them. They are scattered across the prison estate; the Prison Service is struggling to cope with dramatically rising numbers; and there is no concerted attempt at needs assessment and provision either for children or young adults. The majority of establishments holding children and young adults have been forced to operate more like human warehouses than reforming institutions. Despite this I found some outstanding examples of good practice, but these exist in isolation, and largely unsupported by the system. My team were told continually of the understandable frustrations felt by those working with this very difficult and demanding group of young people, without adequate recognition, and frequently with inadequate resources. The Prison Service has not helped itself by failing to make someone accountable and responsible, not just for designing what should happen to young prisoners, but for overseeing the consistent delivery of what is done, wherever they are held. Young people are a distinct group with distinct needs. These are not addressed consistently at present, which suggests that current arrangements are inadequate. It is depressing to find that many Governors of establishments holding young prisoners recognise this, yet the action taken by the Prisons Board has not appeared to do so adequately. I have contended throughout this study that the needs of young people in custody are different from those of other prisoners. I hope, therefore, that the improvements contained in the Prison Service Review that is shortly to be published will include the appointment of someone accountable and responsible solely for young prisoners, who will have appropriate authority to deliver consistently high quality regimes across the estate.

Extract 2 - 1991/1994 Sentence Management System

Delinquency, simply construed, is a failure to co-operate with some social requirements. As an alternative to a purely custodial model, the following outlines a positive incentive approach to structuring time in custody. It is designed to map onto all elements of inmate programmes, providing a systematic way of collating and managing progress as reported by experts, which is analysed objectively to produce reports based on actual behaviour rather than casual judgement. It is, by design, a system which will allow management of behaviour to be based on individual merit and performance.

Regime & Sentence Management as a POSITIVE Behaviour Management System

Introduction

Research between 1989 and 1991 led to the conclusion that Sentence Planning will require a fundamental, systematic, and nationally implemented information base, and that this can most efficiently be derived from the management of inmate activities throughout the estate. According to this view, Sentence Planning needs to be supported by a system of 'Sentence Management' which focuses on the structure and functions of available and potential inmate activities.

In this way, Sentence Planning would be integrated with the Regime Monitoring System, effectively developing within the framework of an 'accountable regime'. This implies that the most effective way to launch Sentence Planning is not as an additional task grafted onto the regime, but as a natural development and improvement of inmate review and reporting practices.

The system specified below is efficient and cost-effective with the potential infrastructure to support and integrate several initiatives which have begun since the re-organisation. Although not covered in this note, two of the most significant are Prisoners Pay, and The Place of Work in the Regime.

In broad outline, what is proposed has much in common with the Department of Education and Science's 1984 initiative Records of Achievement and has the benefit of using this nationally implemented programme in behaviour assessment as a source of best practice. Whilst the initiative outlined below is an independent development which took its cue from recommendations published in the 1984 HMSO CRC Report, from which the PROBE (PROfiling Behaviour) project developed, results of R&D work over the past 6 years are reassuringly compatible with the work done throughout the English education system during the same period. In this context, what is outlined below focuses on what the Department of Education and Science referred to as Formative Profiling (continuous assessment and interactive profiling involving the inmate throughout his career) rather than Summative Profiling (which provides a review somewhat akin to the parole review, or more locally, Long Term Reviews). In all that follows, the recommendations of the 1984 HMSO CRC Report are seen to be integrally related.

Broad Outline

The system, for national implementation across all sentence groups, can be specified as a 5-step cycle:

1. Inmates are observed under natural conditions of activities.
2. Observed behaviour is rated and recorded (continuous assessment).
3. Profiles of behaviour become the focus for interview dialogues/contracts.
4. Inmates are set targets based on the behaviour ratings/observations.
5. Elements of problem behaviour are addressed by apposite allocation.
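
Purely by way of illustration, the cycle can be read as a weekly data flow. The sketch below is a hypothetical rendering in Python; none of the type or field names is drawn from the system's actual forms.

```python
# A hypothetical sketch of the 5-step cycle as a weekly data flow.
# All type and field names are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class WeeklyRating:
    """Steps 1-2: behaviour observed in an activity is rated and recorded."""
    inmate_id: str
    activity: str
    week: int
    criteria_met: list[str] = field(default_factory=list)

def profile(ratings: list[WeeklyRating]) -> dict[str, int]:
    """Step 3: collate ratings into a profile for interview dialogue/contract."""
    counts: dict[str, int] = {}
    for r in ratings:
        for criterion in r.criteria_met:
            counts[criterion] = counts.get(criterion, 0) + 1
    return counts

def set_targets(prof: dict[str, int], catalogue: list[str]) -> list[str]:
    """Step 4: propose as targets those criteria not yet demonstrated."""
    return [c for c in catalogue if prof.get(c, 0) == 0]

# Step 5 (apposite allocation) would then match outstanding targets
# against the Attainment Areas offered by activities in the regime.
```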

Some immediate comments follow.

With little intrusion into the running of Inmate Activities, behaviour which is central to these activities can be monitored and recorded more directly to identify levels of inmate competence across the range of activities. The records of competence would guide the setting and auditing of individual targets.

Targets will be identified within the Activity Areas supported by the regime. This requires continuous assessment of inmates within activities, and the setting of targets based on a set of possible attainments drawn from those activities. Such attainment profiles would serve to identify and audit targets and would enable allocation staff to judge the general standard of attainment within and across activities, thereby enhancing both target-setting and auditing.

The frequency of behaviour assessment within activities and routines, and the auditing of the whole process must be driven by what is practicable. The system requires assessment of attainment to be undertaken weekly and according to an explicit timetable, in order to ensure standardisation. Targets set are to be based on observations of behaviour which are already fundamental to the running of activities and routines, and the progress in achieving targets will be discussed with the inmate, guiding allocation to activities within and between prisons. These steps are in accordance with the policy guidelines. Whilst the targets set will be individual, and when collated will comprise a set of short and long term objectives defining the 'Sentence Plan', they will fall into some broad areas (social behaviour, health, performance at work, and so on).

By making more systematic use of the information which is already being used to select, deselect and manage inmates within activities and with respect to routines, Sentence Planning will become a natural co-ordinating feature of the prison's regime.

Specific programmes for problem behaviour (e.g. sex offenders) can be seen as particular inmate activities with their own, more intensive assessment, activity and target setting procedures explicitly designed to address problem behaviour. Development of, and allocation to such programmes will be integrated with other activities. These programmes are seen as both drawing on and informing 'Risk Assessment'.

Specific Details

Fundamental to the system outlined above is the fact that classes of behaviour (as opposed to properties of inmates) are taken as the basic data. These classes of behaviour are demanded by activities and routines, and should serve as basic data for Regime Monitoring.

Observations of inmate behaviour are observations of an inmate's level of attainment with respect to characteristics that staff responsible for the activities have specified in advance as essential to the task.

Activities and routines have a structure quite independent of the particular inmates who are subject to the demands of activities and routines. Perhaps the defining feature of Sentence Management is that it comprises a process of objective continuous assessment, where what are assessed are levels of attainment with respect to pre-set aims and objectives, themselves defining activities and routines. Since the focus is on classes of behaviour rather than attributes of inmates, all of the assessments are of progress with respect to pre-determined classes of behaviour which are requirements of activities and routines.

RM-1 Attainment Areas

Each activity area can be specified in terms of classes of behaviour which the activity requires. These classes of behaviour are basic skill areas which are fundamental to the nature of the activity, and which in combination account for activities being distinguishable from each other. These basic skill areas will be referred to as Attainment Areas. They need to be carefully selected as they will be taken to be the defining features of the activity. From this point of view, any part of the daily routines should be specifiable in these terms, and staff should be encouraged to think about how best their area of inmate supervision could be so sub-classified. Whilst the identification of Attainment Areas may, at first glance, seem a demanding or unfamiliar task, it is soon appreciated that the identification of Attainment Areas is in fact a pre-requisite to the establishment of any activity in prison, be it an education course, industrial activity or simple housework.

RM-1 Attainment Criteria

Each Attainment Area can be further classified into up to five levels of attainment. These are levels of the same skill, progressing from a low level of competence to a high level of competence. These must be described in a series of direct statements, specifying particular skills of graded sophistication which can be observed, and checked as having been observed. Levels of competence are therefore NOT to be specified as a scale from LOW to HIGH, but rather as a series of specific, and observable behavioural predicates. These are the Attainment Criteria (or Occasion Sentences) of an activity or routine. Just as Attainment Areas are naturally identified by staff who design activities, so too are Attainment Criteria natural pre-requisites for day to day supervision.
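
To make the notion of graded, observable criteria concrete, the following sketch shows one invented Attainment Area broken into five Attainment Criteria. Neither the area nor the wording is drawn from any actual checklist; it illustrates only that each level is a checkable behavioural predicate rather than a point on an abstract scale.

```python
# An invented example: one Attainment Area with five graded criteria.
# Each level is a specific, observable behavioural predicate, not a
# point on an abstract LOW-to-HIGH scale.
ATTAINMENT_AREA = {
    "activity": "Workshop: light assembly",   # hypothetical activity
    "area": "Tool handling",                  # hypothetical Attainment Area
    "criteria": [                             # lower to higher competence
        "Names each tool required for the task",
        "Selects the correct tool without prompting",
        "Uses the tool safely under direct supervision",
        "Completes work to specification unsupervised",
        "Demonstrates safe, accurate use to another inmate",
    ],
}

def checklist_row(observed_levels: set[int]) -> list[bool]:
    """One week's entry: True where criterion i was actually observed."""
    return [i in observed_levels
            for i in range(len(ATTAINMENT_AREA["criteria"]))]
```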

Competence Checklists (SM-1s)

For each set of Attainment Areas, the Attainment Criteria comprise a COMPETENCE CHECKLIST, against which performance can be monitored. Competence Checklists are referred to within the system as SM-1s.

Record of Targets (SM-2s)

Targets are identified using a second form, referred to as SM-2. Targets will generally be identified from the profile of Attainment Criteria within Activities (Competence Checklists, completed on a weekly basis, provide a record of progress). But Targets may also be identified outside of standard activities, based on an analysis of what is available within the Regime Digest, or Directory, which will be a natural product of the process of defining Attainment Areas and Attainment Criteria and of printing the Competence Checklists.

The two forms, ATTAINMENTS (SM-1) and RECORD OF TARGETS (SM-2) comprise the building blocks of the system. These forms are now available as final drafts (and will incidentally be machine readable). Both forms are designed to be stored in the third element of the system, the inmate's Sentence Management Dossier. This is simply a 'pocket file' to hold the sets of the two forms, and the proposal is that the Head of Inmate Activities and his staff be responsible for maintaining the system.
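
As an illustration of how the two building blocks might be held as records once computerised (every field name below is invented; the actual machine-readable layouts are those of the printed forms), consider:

```python
# Hypothetical record sketches of the two forms; all field names invented.
from dataclasses import dataclass, field

@dataclass
class SM1:                        # ATTAINMENTS: the weekly Competence Checklist
    inmate_id: str
    reporting_point: str          # activity or wing identifier
    checklist_code: str           # permits several checklists per Reporting Point
    week: int
    attendance: list[bool] = field(default_factory=list)    # attendance register
    criteria_met: list[bool] = field(default_factory=list)  # one flag per criterion

@dataclass
class SM2:                        # RECORD OF TARGETS
    inmate_id: str
    target: str                   # an Attainment Criterion to be reached
    source: str                   # activity profile, or the Regime Digest/Directory
    agreed: str                   # date negotiated with the inmate
    achieved: bool = False

# The inmate's Sentence Management Dossier is then simply the
# accumulated SM-1s and SM-2s, filed together.
```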

Through an analysis of the SM-1s both within and across activity areas, Heads of Inmate Activities would have a better picture of the structure of the activities, and of the relative progress of inmates within activities. With inmates actively involved in the process of target negotiation, and with the system being objective, problems of confidentiality, so characteristic of subjective reports, would be substantially reduced. Whilst the system can run as a paper system, once computerised, the data collected via SM-1s and SM-2s will form the basis of automated reports.
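
A minimal sketch, with invented data, of the kind of collation such automated reports would perform:

```python
# Sketch: collate a week's SM-1 checklists into a per-activity summary
# of the kind a Head of Inmate Activities might review. Data invented.
from collections import defaultdict

# (inmate_id, activity, criteria_met, criteria_total)
sm1_rows = [
    ("A1234", "Workshop",  3, 5),
    ("B5678", "Workshop",  5, 5),
    ("A1234", "Education", 2, 5),
]

by_activity: dict[str, list[float]] = defaultdict(list)
for inmate_id, activity, met, total in sm1_rows:
    by_activity[activity].append(met / total)

for activity, proportions in sorted(by_activity.items()):
    mean = sum(proportions) / len(proportions)
    print(f"{activity}: {len(proportions)} assessments, "
          f"mean proportion of criteria met = {mean:.0%}")
```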

In terms of paperwork, this is not a demanding task, and in capitalising on what is already done at Reporting Points (where daily logs are maintained already) it promises to be an efficient and accurate way of collecting the required data. For a Reporting Point with 15 inmates, the system would require 15 SM-1s to be completed and returned to the Head of Inmate Activities each week. The design of the system enables the forms to be processed automatically and converted to computer-storable data, making the whole system easier to manage, audit and actuarially analyse.

Fundamental to the design of the SM-1 is the fact that the Attainment Criteria are generated by staff who will be using them, each SM-1 being tied specifically to an activity. The content of the form is 'user definable'. More than one SM-1 form will be completed per inmate per week since the inmate will be assessed at more than one Reporting Point. To record behaviour in daytime activities and domestically on the wings, one SM-1 would be completed each week as a record of attainment at the allocated work/education Reporting Point, and another on the wings, the latter providing an assessment of the inmate's level of co-operation/contribution to the general running of the routines.

The focus is at a more fundamental level of the regime - the recording of attainment levels of individual inmates - with the regime data being logically compiled or deduced from those individual measures of attainment. With Attainment Areas and Attainment Criteria defined by the staff supervising each Reporting Point, the RM-1s, SM-1s and SM-2s would allow staff to define the nature and objectives of all Reporting Points, storing them within the proposed Sentence Management System to serve as the basic data for any subsequent computer profiling of the inmate's progress, as well as serving as the basic material for a local and national directory or digest of activities and their curricula.

Costs and Benefits

The major costs are those required to professionally staff units to maintain the system and to use the data to support other staff in the effective, actuarially based management of behaviour and regimes. A significant benefit is in the potential for automatic machine-generated reports of inmate progress. These could save many thousands of officer-hours. The practicality of such reports has been demonstrated over the years that the system has been piloted.

Coverage of Non-Standard Inmate Activities

The SM-1 form is designed to allow all staff to formally assess any programme of activity in a standard manner (ie, marking whether behaviour in the activity matches the attainment criteria on the Competence Checklist). This form has provision to record a Checklist Code, along with the activity and reporting point identifier. This Checklist Code will allow more than one checklist to be generated for each Reporting Point if the extent or modular nature of the activity requires multiple checklists for comprehensive assessment of the skills which the activity offers.

Similarly, the SM-2 form allows targets to be identified by staff either within an activity or from a knowledge of what the regime has on offer. The Head of Inmate Activities, in building a library of Attainment Areas and Attainment Criteria (the Regime Digest, or Directory), will be able to provide interested staff, such as Review Boards, with a digest of what activities are available and how they are broken down by attainment areas and criteria.

In this way, short duration intervention programmes can be included in the 'Sentence Management Dossier' in the same way as are the more formal activities. Formal activities are so regarded because they tend to occupy large groups of inmates in activities which are basically structured to have inmates participate for a relatively fixed period (8 weeks to several years).

Using this form of assessment, the staff wishing to run ad hoc programmes, occupying either small groups or single inmates in short modules would be tasked with defining Attainment Areas and Attainment Criteria as a sine qua non for running the proposed programme, submitting the proposal that it be considered as an element of the regime.

The fact that each SM-1 has an attendance register will permit the system to capture the extent of all activity throughout the regime, thereby contributing to a more comprehensive profile of activity within each establishment and the estate in general. Effective regime management would more clearly become one of co-ordinating Attainment Areas to bring about a balanced and appropriately monitored regime, and the data would serve as a sound information base from which staff could build Sentence Plans.

Extract 3

A System Specification For Profiling Behaviour

PROBE

An Executive Summary

D Longley, Principal Psychologist, October 1994

These 12 volumes present a computerised system for monitoring and managing inmate behaviour within the English Prison Service. Developed primarily in the Maximum Security Long Term Prison Estate between 1987 and 1994, the system is designed to provide managers with comprehensive information on the behaviour of inmates in response to the day to day demands of residential routines and day time activities. Although long-term prediction of complex dynamic systems can be extremely difficult, recording what occurs over time can provide reliable measures of differential change. The major products of the PROBE system are computer generated reports on the behaviour of individual inmates throughout sentence, showing their relative performance on a monthly level as well as trends over longer periods. The PROBE system also provides profiles of behaviour at the wing and prison level, thereby allowing managers to monitor long term changes in the average behaviour of their population by wing or even landing. Comprehensive, and standardised, behaviour profiling is therefore available from the individual inmate level over time, all the way up to comparative profiles between whole establishments, sub-establishments and wings, over time. All such profiles can be generated automatically by selecting an option from a local menu, or by downloading a file from a central computer.

'A System Specification for Profiling Behaviour', in 12 volumes, begins with a series of overviews of PROBE written by various field psychologists, along with an introduction to the academic background which lies behind the system.

Volume 1 provides a survey of some of the key areas in behaviour science bearing on the system. Although not comprehensive, what is not covered explicitly is covered implicitly. The main issues covered include the relative merits of adopting an entirely behavioural or third person approach to managing inmates rather than attempting to work to include an inmate's point of view. In other words, the PROBE system adopts a standard, scientific approach, since a basic axiom of the predicate calculus (the substitutivity of identicals) seems to be inapplicable within psychological, ie intensional, contexts. This is a central theme which is elaborated in the first three volumes. This stance is further developed by providing a review of the current state of research on the use of actuarial (statistical) judgement rather than clinical, or personal (intuitive), judgement. The case is made that the latter can only ever be an approximation of the former at best. Most managers today are too busy to be able to make optimal decisions without the support of Information Technology, and this is likely to continue to be so. The case is made that managers must therefore make greater use of such systems, and that behaviour scientists must invest more time in their production. This leads naturally on to a consideration of the appropriate technology to support the actuarial stance, and to a critical evaluation of current practices adopted by psychologists in their design and evaluation methods (largely attempts at simple factorial designs which test that differences between groups are unlikely to be explained on the basis of chance alone). The case is presented that this is possibly one of the worst things that ever happened in psychology, and that all too few psychologists (fewer than one in 20) appreciate the weakness of such an approach. The volume makes the case that a more descriptive approach to data collection and research must be adopted now that systems are available to profile entire populations, and that such data should be functionally analysed using simple regression technology with the aim of establishing and testing point predictions, and improving on those predictions as is done in the rest of physical science. The volume continues with a brief, but comprehensive, survey of the literature on 'What Works' in the way of programmes for prisoners, and concludes with a brief presentation of a computerised system for managing, monitoring, and assessing inmates' participation in activities throughout sentence in the interests of effective inmate sentence management, planning and throughcare.
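
By way of illustration of the regression-based, point-prediction approach argued for in volume 1 (the figures below are invented), a prediction can be stated numerically, audited against subsequent observation, and revised:

```python
# A minimal sketch of actuarial point prediction via simple regression.
# The data are invented; the real system would draw on distributions
# of recorded attainment and control measures.
import numpy as np

# e.g. weeks in an activity vs. number of attainment criteria met
weeks    = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
attained = np.array([1, 1, 2, 3, 3, 4, 5, 5], dtype=float)

slope, intercept = np.polyfit(weeks, attained, deg=1)  # least squares fit

def point_prediction(week: float) -> float:
    """Predicted attainment at a given week: a testable point value."""
    return slope * week + intercept

# The prediction is then audited against what is actually observed,
# and the model revised - conjecture and refutation in miniature.
print(f"predicted attainment at week 10: {point_prediction(10.0):.1f}")
```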

Volume 2 provides a report on a pilot of the PROBE Sentence Management system introduced in volume 1. Conceived in 1991 and 1992 as a flexible, user-defined behaviour assessment system, and developed between 1992 and 1993 when it ran at HMP Parkhurst and HMP Frankland, the pilot study was overseen by a DIP Steering Group commissioned by the Director of Inmate Programmes during a DIP Senior Management Seminar held at Newbold Revel in March 1992 (DIP Research Report No. 2, November 1992). The pilot was completed in January 1994, and volume 2 serves as an empirical illustration of how the prototype system ran in an applied context. The volume includes reports from the Head of Inmate Activities at HMP Parkhurst and the psychologist who oversaw the pilot at that prison. The volume illustrates how comprehensive management information can be provided in simple descriptive graphical form as box-plots which show the distribution of behaviour on the landings and within activities, readily identifying inmates with scores at the upper and lower ends of the scale. The volume also shows how individual and group based reports of inmate performance and attendance can be automatically generated for Heads of Residence and Heads of Activities. All attainment measures are functionally related to measures of control, and it is shown how the PROBE Sentence Management System can be used to facilitate the maintenance of control through effective, and positive, Sentence Planning by providing an infrastructure within which individual inmate targets can be identified, negotiated, contracted and subsequently monitored by the first-line staff who have the most contact time with inmates. It is emphasised that it is those staff who are responsible for directly training and supervising inmates within specific domains of expertise, and, for want of an adequate technology, such staff's observations and assessments often go totally unrecorded. The volume illustrates how the technology of Sentence Management can be used to make effective use of such staff's professional assessment skills in the interest of recording and shaping positive behaviour change throughout sentence, allowing decisions to be subsequently made on the basis of differential levels of attainment. The case is made that, since it is here that the Prison Service invests maximally, it is here that our technology for monitoring and recording behaviour change must be focused. Collation, standardisation and presentation of the information recorded can be undertaken by computer. Quality control lies largely in the hands of higher management.
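
The box-plot reporting described above can be sketched as follows; the scores are invented, standing in for monthly behaviour ratings grouped by wing:

```python
# Sketch of descriptive box-plot reporting; all ratings invented.
import matplotlib.pyplot as plt

ratings_by_wing = {
    "A wing": [52, 61, 58, 70, 45, 66, 73, 50],
    "B wing": [40, 48, 55, 43, 60, 39, 51, 46],
    "C wing": [65, 72, 68, 80, 59, 75, 70, 77],
}

fig, ax = plt.subplots()
ax.boxplot(list(ratings_by_wing.values()), labels=list(ratings_by_wing.keys()))
ax.set_ylabel("monthly behaviour rating")
ax.set_title("Distribution of ratings by wing (illustrative data)")
plt.show()  # outliers at either end of each distribution stand out at a glance
```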

Volume 3 provides a functional specification of the entire PROBE system. Section one covers the logic and technology of relational databases, showing how this relatively recent technology supports the application of behaviour science in an applied setting, and how such a system can serve well as a Management Information System. Section one also outlines the main elements of the PROBE system, explaining how the communications network functions to support the entire system and the staff maintaining it. Section two provides graphical illustrations of how the system has been used in support of maintaining control within the Adult Long Term estate, how routine profiles of inmate movements, disciplinary offending and segregation histories can be generated from local menus, and how profiles at the wing and establishment levels can be readily produced in support of operations. Section three shows how the system can be used as a support system for F2054 Sentence Planning, drawing on monthly Sentence Management data to identify suitable Sentence Plan targets. Thus, whilst section two outlines the technology of PROfiling BEhaviour, ie measuring and describing behaviour, section three provides a technology for PROgramming BEhaviour, ie providing a means of effectively managing inmate behaviour under the rubric of Sentence Planning. This technology provides managers for the first time with a system which enables them to effectively manage or programme inmate activities at the individual and regime level.

Volume 4 provides a detailed description, at the computational level, of the programming which comprises the system. This is the main Technical Specification of the PROBE system. Sections 4 through 9 detail how each class of computer system within PROBE is actually configured, the software which runs on each system, and how each class of system is scheduled to operate at different stages of the day and week. This includes a detailed description of the automatic screening of candidates for Special Units, the generation of comparative statistics for the dispersal prisons each week, the production of daily quality control reports, and so on.

Volumes 5 and 6 list the fixed data dictionaries for the PROBE database at Adult and Young Offender sites respectively, along with example code for the data entry system. These two volumes specify precisely the predicates which are used to classify inmates, the range of valid values for those predicates, and their labels. The adult system comprises 34 tables of predicates or relations. Development work within the Young Offender system, whilst relatively recent, illustrates that the system can be used as an effective substrate for behaviour science and technology within any convicted population which is practically concerned with Sentence Planning. The Sentence Management records illustrate how relational theory can be used to extend the data dictionary ad infinitum without making structural changes to the data dictionary per se. From a technical perspective, this may well be a unique feature of the PROBE system.
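
The extensibility referred to here is, on the author's description, a matter of relational design. A minimal sketch of one way such a design can work (using SQLite purely for illustration; the table and column names are invented and are not those of the PROBE data dictionary): new predicates are added as rows, so the schema itself never changes.

```python
# Sketch: predicates as data. Extending the dictionary is an INSERT,
# not an ALTER TABLE, so the schema needs no structural change.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE predicate (
    predicate_id INTEGER PRIMARY KEY,
    name         TEXT UNIQUE      -- e.g. an Attainment Criterion
);
CREATE TABLE assessment (
    inmate_id    TEXT,
    week         INTEGER,
    predicate_id INTEGER REFERENCES predicate(predicate_id),
    value        INTEGER          -- observed (1) / not observed (0)
);
""")

# A brand-new assessment predicate, added without touching the schema:
con.execute("INSERT INTO predicate(name) VALUES ('Completes task unsupervised')")
con.execute("INSERT INTO assessment VALUES ('A1234', 12, 1, 1)")

for row in con.execute("""
    SELECT a.inmate_id, a.week, p.name, a.value
    FROM assessment a JOIN predicate p USING (predicate_id)
"""):
    print(row)
```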

Volume 7 lists the computer on-line help script for the system. This is basically the user's on-line manual which provides explicit, context sensitive instructions as to how each field in the database must be maintained, e.g. the codes for an inmate's index offence, his previous convictions, and where to find these in the prisoner's record. As changes are made to the system over time, new help scripts can be automatically distributed over the electronic network and automatically installed.

Volume 8 provides the material for a 3 day course on PROBE. Material covered includes basic programming using the 4th Generation Programming Language (PQL) provided with the system, illustrated examples of how to use a wide range of output procedures, and how to use more advanced programming facilities such as TABFILES and MATRIX operations. In effect, in conjunction with the reference manuals and other volumes of the System Specification, this volume comprises a comprehensive self-instruction course in the PQL programming language which is the basis of all of the report writing facilities within the PROBE system.

Volume 9 provides an illustration of the weekly statistics generated automatically by the system. These show comparative figures for the dispersal prisons, and for other PROBE maintaining prisons within the Long Term Category B estate. These statistics include the distribution of security category, sentence length, and so on across PROBE sites, first for the dispersal estate, and then the other category B prisons. Additional comparative statistics show control indices by wing within a prison, and between prisons. Rudimentary data on inmates who have been in Special Units are compared with normal location inmates, illustrating the potential for detailed follow up.

Volume 10 provides a list of the essential procedures held within the PROCEDURE file of the Data Base Management System. These Procedural Query Language routines are an essential part of the PROBE system at each installed site. As new systems are developed, they are automatically distributed to all field sites to ensure that all facilities are standardised.

Finally, a General Index is provided. Each of the preceding volumes is provided with references, subject indices, name indices, file indices and, where appropriate, a list of the attributes used within the Data Base Management System. This document collates all of those indices and references into one convenient reference volume. All volumes beyond volume 3 are essentially technical material. A summary of what the system can provide as a Management Information System can be gleaned from Volume 3, sections 2 and 3. Managers' attention is drawn to the fact that the comparative graphics and tables covered in those sections are refreshed each Sunday night, and are electronically available to Psychology Units every Monday morning. For specific coverage of PROBE in support of inmate activities and Sentence Planning, the reader is referred to volume 2 and section 3 of volume 3. For the specific rationale behind the system given what is known about normal human decision making and its constraints, the reader is referred to volume 1 and to section 1 of volume 3.

The documentation describes the PROBE system as it was when managed within the Directorate of Inmate Programmes. The 1994 reorganisation of Headquarters led to the loss of the posts which developed and supported PROBE. Responsibility for day to day technical management of the system now lies with Prison Service IT Services, and policy responsibility with Custody Group. The future of the PROBE system therefore rests to a very large extent in the hands of field staff, and it is hoped that the provided documentation goes some way towards consolidating the infrastructure which has been built up over the past eight years. It is also hoped that those now contributing towards the federated system will insist on the continuation of a high level of central support, maintenance and oversight, sufficient to sustain the comparative profiling which is an essential element of overall system quality control. Quality control and feedback are the sine qua non of PROBE as a system; with the changes in system management in 1994, the integrity of the system rests far more in the hands of field staff. The future integrity and standardisation of the system will depend as much upon feedback to central support on the accuracy of the weekly statistical profiles made available on the central system as it will on local quality control.

22 November 1997 (Draft)

Fragments Of Behaviour: The Extensional Stance -
A Theoretical Background to the PROBE/Sentence Management System

'I should like to see a new conceptual apparatus of a logically and behaviourally straightforward kind by which to formulate, for scientific purposes, the sort of psychological information that is conveyed nowadays by idioms of propositional attitude.'

W.V.O. Quine (1978)

'On our view, many of the illusions cannot be dispelled by a "few moments' prompted reflection," or several months of college teaching; if dispelled, the illusions seem to return in full force the next time a similar situation comes along. Such illusions seem to be rooted very deeply in the human mind.'

The Persistence of Cognitive Illusions
P. Diaconis and D. Freedman
The Behavioral and Brain Sciences (1981), 4
'A bias towards vividness might well mean that a powerfully placed decision maker will act on the basis of unrepresentative, but highly vivid, personal experiences or anecdotes and ignore the "dull" results of large, well designed statistical surveys. It is of no practical value to consider whether such behaviour can, by some philosophical device, be deemed to be "rational." It is evident that such behaviour is undesirable, in the sense that it is likely to produce inefficient decisions and costly errors.'

J. St. B. T. Evans and P. Pollard
The Behavioral and Brain Sciences (1981), 4, p. 335

This paper reviews and analyses convergent lines of research which amount to a radical critique of cognitivism in its most problematic guise - inductive psychologism. It targets two widely accepted dogmas:

1. that past behaviour is the best predictor of future behaviour; and

2. that there is practical merit in treating 'cognition' as an independent variable when working to effect behavioural change.

The fundamental objective of this paper is to demonstrate why the above assumptions are false, and to present instead alternative foundations upon which to build a professional science and technology of behaviour, primarily in the applied field of corrections (prisons). This alternative presents the case that the body of normative principles referred to as science comprises a dynamic web of belief which is open to empirical revision through a formal process of conjecture and refutation (Popper 1965). It will be argued that such revisions occur both at the professional and individual levels, and according to the same procedures. As such, what is advocated here as a methodology for best practice in the profession of behaviour science and technology applies equally to change in an individual's observable behaviour. It is the thesis of this paper that all such developments must always be based upon the extensional stance, hence the weight given to explication of its modus operandi.

Additionally, it is fundamental to the case being made that the reader appreciates that there is, from this perspective, no means of advancing our current normative scientific standards outside that of the naturalistic process of empirical conjecture and refutation. That is to say, it is fundamental to the theme of this paper that the evolution of normative laws, including those of logical and mathematical analysis, has no foundation other than empirically demonstrable best practice (Quine 1951;1968).

Nevertheless, to appreciate why so much weight has been given to 'cognition' by contemporary psychologists, the reader is presented with a brief but relevant history of recent psychology, focusing specifically on events which gave rise to what is now widely referred to as the 'Cognitive Revolution'. The two dogmas listed above are, it will be argued, a consequence of a simple but quite radical misinterpretation of the data which inspired this revolution and which has subsequently adversely changed the direction of much of contemporary research in experimental psychology.

To anticipate the conclusion somewhat, argument and evidence are presented to suggest that the only instances where past behaviour is said to be the best predictor of future behaviour are those where the behaviour referred to has not changed. Under such conditions, the statement is not empirical but tautologous. Secondly, what is widely referred to as 'cognitive behaviour' invariably reduces to publicly observable and measurable behaviours - a fact which renders the qualifier 'cognitive' redundant except as a convenient sub-classifier. In support of this statement, the following heuristic is offered: in all cases where cognitive skills are referred to as independent or dependent variables, the question should be asked: 'how are such variables observed, measured, and reported?'. Each class of behaviour has its own merits as a focus for behavioural change, depending upon the objectives of the specific programme, but whatever the class of behaviour subject to intervention, there are constraints on what one can expect to achieve based upon what is now known about the context specificity of learning.

The following critique has practical implications for programmes such as 'Cognitive Skills Training' in corrections. The first consequence is that such programmes can only aspire to change context-free verbal behaviour. Given the evidence which has accumulated in support of the context specificity of all skill acquisition and retrieval, claims for the efficacy of the content of such programmes must be regarded with scepticism, and alternative explanations (such as selection effects) sought for any empirically recorded efficacy of such programmes (see Longley 1997a). Resources should be focused on the establishment of areas of activity which offer the greatest scope for demonstrable skill acquisition to the greatest range of individuals. If efforts are not made to enable maximum opportunity for acquisition of new skills, there can be little rational basis for expecting behavioural change except through adventitious, ie uncontrolled, and therefore unmanaged processes - or ageing. So why do some psychologists give such weight to 'cognitive' processes? The explanation lies in a misconception of the nature of psychology.

The impetus for the 'Cognitive Revolution' came in the mid 1950s, largely through the influence of the work of Jerome Bruner, who summarised his influential series of studies on judgement and reporting as follows:

'The most characteristic thing about mental life, over and beyond the fact that one apprehends the events of the world around one, is that one constantly goes beyond the information given'.

J Bruner (1957)
Going Beyond The Information Given
(in H. Gruber and others (eds), Contemporary Approaches to Cognition)

These and later studies demonstrated that people naturally have a tendency to report what they expect to be the case rather than what objectively is the case, making expectancy, or a priori heuristics, important research phenomena in their own right. In one famous study (later used by T.S. Kuhn in his influential book 'The Structure of Scientific Revolutions'), playing cards of an atypical suit/colour combination (e.g. a red 5 of spades, or black 3 of hearts) were standardly reported as convention would expect (ie as a standard 5 of spades, 3 of hearts). In other investigators' experiments, conflicts in accurately reporting name/feature combinations such as the word BLUE written in red ink (the 'Stroop Effect') were widely replicated, but conventionally studied to shed light on levels of information processing with the objective of differentially testing the relative merits of alternative models of memory and attention. In all of this research, the objective has always been to model cognitive processes - however, in the present context, the reader is asked to consider such phenomena as testament to the context dependency and general unreliability of natural human decision making when compared to formal, normative standards. Whilst there are indubitably good reasons to provide good and predictive models of the natural processes of human judgement and decision making, such research can also be cited as evidence of the extent to which a priori heuristics and processing limitations influence accurate observational reporting. Throughout the 1950s and 60s, a range of perceptual illusions were cited in support of the non-veridical or inferential nature of perception, a feature also referred to as 'perceptual readiness'. In other studies, Postman and colleagues provided experimental evidence in support of the 'Chinese Whispers' phenomenon - a classic series of experiments on serial reproduction - the propagation and distortion which occurs when messages or reports are serially repeated from one individual to another. This is the psychology of rumour, but it could also be called the psychology of reporting.

In other research programmes, psychologists studied related phenomena, first under the auspices of 'Cognitive Dissonance' (Festinger 1957). The extent to which these processes were aptly described as cognitive was challenged at the time by radical behaviourists such as Bem (1963), who urged psychologists to interpret the same phenomena in terms of the adventitious nature of self-perception and operant conditioning. In the late 1960s and 1970s such work dominated research in Personality and Social Psychology, but under the new rubric of 'Attribution Theory' (Kelley 1955;1967), which provided detailed studies of what came to be known as the fundamental attribution error. Such research provided a substantial body of evidence in support of Bruner's thesis that, in making assessments of events and their relations (including assessments of one's own behaviour), folk manifest a remarkable tendency to 'go beyond the information given', resorting to well-circumscribed heuristics. From this perspective, the entire discipline of empirical psychology can be viewed as the study of the modus operandi of 'folk psychology' or 'common-sense'. This being so, a clear distinction must be made between a descriptive account of natural judgmental heuristics on the one hand, and the development and application of normative standards of science and technology on the other. The extensional stance outlined here explicates a framework for the delivery of an effective applied behaviour science based on the latter.

Judgmental heuristics are embodied in natural language, and have been identified, largely by philosophers of language (Quine 1956,1960; Davidson 1970) as the logically anomalous idioms of 'propositional attitude'. Whilst anomalous, in the absence of suitable alternatives, they remain essential modi vivendi, and will be retained as part of each culture's natural repertoire and be disseminated via the family, media and major social institutions as normal social convention until more valid and reliable systems of prediction are identified through research and development. The natural mode of distribution of social convention is narrative (Bruner 1990), which accounts for such processes being the focus of so many psychological research programmes to date. It is because of the pragmatic daily demands for a viable folk psychology that it is so difficult to sustain support for research and development based upon the extensional stance - its whole approach differs quite dramatically from the popular conception of the role of the conventional psychologist, a point which is particularly marked with respect to the place of 'cognition' in behaviour analysis and management. From the extensional stance, 'cognition' requires explanation - it is not to be used as explanation.

In contrast to narrative, it is only in the latter half of this century that formal language systems, grounded in deductive logic, have been developed to the extent that they can now be widely deployed in the guise of relational database technology. These developments have been dependent on the widespread use of the digital computer, and, more importantly, the development of programming. Such developments now enable professionals to build formal models which are reliable enough to replace intuitive or expert opinion. With the support of such technology, descriptive cognitive psychology and neuroscience are likely to be of value only to the extent that they reveal limitations on human competence and performance, characteristics which are largely irrelevant to the practising behavioural scientist tasked with providing analyses of distributions of data. The former disciplines are more likely to provide data of interest to cultural anthropologists and biologists than to applied behavioural professionals.

In the wake of the 1879 Fregean revolution in formal logic, the scientific enterprise as the pursuit of truth became much clearer. Owing much to the work of members of the Vienna Circle in the 1930s, three affiliates in particular, R. Carnap, K.R. Popper and W.V.O. Quine, pressed logical empiricism to its limits throughout the 20th century. Quine, departing from Carnap in identifying intensional idioms as anomalous locutions within ordinary language, suggested how and why the pursuit of truth might be impeded unless a distinction is made between a merely instrumental acceptance of the intensional idioms of propositional attitude as elements of a theoretical folk psychology on the one hand, and the formal integrity of the 'web of (scientific) belief' on the other. The logically anomalous mentalistic (intensional) idioms 'thinks that' and 'believes that' (the verbs of propositional attitude) can have no formal place in a unified science which can be regimented within the language of the predicate or functional calculus, and must ultimately be replaced (in science at least) by suitable extensional alternatives. This Quinean programme promises a far more fruitful basis upon which to build a profession of behaviour analysis and management than has hitherto been possible, and provides a clear if anomalous status to the idioms of propositional attitude as elements of folk psychology's modus vivendi.

Folk psychological lore, like folk physical lore, is learned developmentally and inductively. The resulting cultural diversity, whilst at one time quite vast, is now more apparent in more subtle forms such as individual dialects and local lore, and ultimately individual 'personality'. It is shared socially within human cultures via narrative processes, many of which exert their influence early in life through parents, siblings and other significant others. The modus operandi of these fundamental social processes has been studied by psychologists of various theoretical persuasions under the rubrics of 'conditioning', 'learning', or 'attribution'. In recent decades they have also been studied by a range of academics collectively referring to themselves (oxymoronically perhaps) as 'cognitive scientists'. Yet there are professional demands which extend far beyond such descriptive accounts - a fact which renders the majority of professionals helpless when tasked with providing an accountable service in the assessment and management of behaviour. Here, evidence must play the key role. It is here that the knowledge of the professional behaviour scientist meets that of the folk psychologist, and it is here that the former has to demonstrate provision of a service which can be readily identified as providing good value for money. This paper sets out to explicate how and why, when this confrontation takes place, the psychologist is rarely able to provide an effective contribution, for want of adequate data and the skills to analyse it.

Whilst psychology strives to provide an account (ultimately with the help of neuroscience) of how folk naturally make sense of the regularities in the world (how folk come to assert and act as if one event can be predicted or expected on the basis of another), it must be understood clearly that such accounts do not aim to provide a legitimation or validation of such processes or events. Yet, in an insidious manner, the blurring of the difference between the descriptive and the normative is precisely what has occurred in much of contemporary applied psychology. What begins merely as a descriptive account of 'natural assessments' or judgements is all too readily accepted as validation of such processes, largely because professionals so rarely have access to the distributional data needed to permit them to do otherwise. This in turn is a consequence of the fact that the majority of professionals are unwilling or unable to identify quantitative analysis of observations as their primary professional responsibility. As a consequence, the professional psychologist's role has widely degenerated into one amounting to little more than serving as a folk psychological reference point - as an expert folk psychologist among many practising folk psychologists. One of the objectives below is to explicate the practical differences between the normative analysis and management of behaviour ('The Extensional Stance') and the cognitivist approach ('The Intentional Stance'). Whilst folk psychology is the natural repository of folk wisdom, shared through narrative, it is so prone to bias and distortion that it is effectively useless as a reliable data source for the professional behavioural scientist/technologist. When Dennett published 'The Intentional Stance' in 1987, he outlined it merely as an instrumentalist position:

'The intentional stance is the strategy of prediction and explanation that attributes beliefs, desires, and other "intentional states" to systems (living and nonliving) and predicts future behavior from what it would be rational for an agent to do, given those beliefs and desires; such systems are intentional systems. The strategy of treating parts of the world as intentional systems is the foundation of "folk psychology" but is also exploited in artificial intelligence and cognitive science more generally, as well as in evolutionary theory. The analysis of the intentional stance grounds a theory of the mind and its relation to the body.'

D C Dennett (1988)
Précis of The Intentional Stance.
Behavioral and Brain Sciences; Sep Vol 11(3) 495-546

Yet the demands of the applied professional are different unless the objective is to record the narrative processes in their own right. It is important that the reader understands that the viability of Dennett's 'Intentional Stance' depends on 'the rationality assumption', ie that it is valid to assume that 'beliefs' and 'desires' can be analysed logically. It is precisely this assumption that is challenged in the pages which follow. Evidence is presented below which strongly suggests that this assumption is largely responsible for the minimal progress achieved to date by 'cognitivist' approaches. Behavioural technology is, therefore, not an application of psychology per se, but, from the stance developed in this paper, an instantiation of 'Artificial Intelligence'. Applied effectively (computationally), 'Artificial Intelligence' has to be contrasted with what Kahneman, Slovic and Tversky (1982) have called 'natural assessments' or what Agnoli and Krantz (1989) called 'intensional heuristics'. The latter are the intuitive and inductive (or associative) processes studied by psychologists as conditioning. Artificial Intelligence on the other hand - ie formal intelligence as measured and technologically implemented - is always a normative measure (or instantiation) of non-psychological skills. These reduce, in any culture-fair test, to an assessment (or instantiation) of formal logical operations. Indeed, to the extent that such skills are instantiations of logical or quantitative skills, it is likely that as these are more widely implemented the services of professional psychologists will become more and more difficult to distinguish from those of other well educated staff. There are signs that this is already happening:

'There has been a long and unresolved debate over what the Prison Service employs psychologists for: only 3 of the 100 or so outside of HQ are clinical psychologists and even though many of the others do some work with inmates much psychological input goes into management services, operational research and training... It is not easy to resolve the position of psychologists within the organisational structure without answering the more basic question of what the role of psychologists should be.'

Review of Organisation & Location Above Establishment Level
HM Prison Service - PA Consultants (1989)

One finds psychologists working in HIV counselling, hostage situation training, staff selection - in fact, just about every area of staff deployment. What one does not find is their widespread deployment in the systematic collection and analysis of behavioural data in support of effective inmate behaviour management. This paper asserts that one cannot justify the deployment of psychologists in an area simply on the grounds that it is an area of human behaviour. To do so is both to mismanage psychologists and to misunderstand the nature of psychology. Whilst any area of human activity can benefit from the application of normative analysis, this is only because, under some circumstances, staff are prone to resort to the principles of folk psychological judgement. Those circumstances are when their professional skills prove technically inadequate and they find themselves forced to act outside their professional role. Where the skills of the behaviour scientist are required is in the quantitative analysis of behaviour in support of other professionals. Nowhere are the professional skills of quantitatively skilled behaviour scientists required more urgently than in such a capacity - yet it is precisely this role of the psychologist as behaviour scientist that is grossly underappreciated, probably for want of a sound knowledge of the history of psychology itself:

'History may well record that towards the middle of the twentieth century many classical problems of philosophy and psychology took on renewed interest and vigour with the emergence of a new (and not yet well understood) notion of mechanism. While the development of this notion has many roots within philosophy (especially in studies of the foundations of mathematics by Alonzo Church and others) the major milestone was probably the formalization of the idea of computation by Alan Turing in 1936. This work, in a sense, marked the beginning of the study of cognitive activity from an abstract point of view, divorced in principle from both biological and phenomenological foundations. It provided a reference point for the scientific ideal of a mechanistic process which could be understood without raising the spectre of vital forces or elusive homunculi, but which at the same time was sufficiently rich to cover every conceivable informal notion of mechanism. It would be difficult to overestimate the importance of this development for psychology... there is a growing feeling, not only among those working in AI but also among more enlightened experimental psychologists, that the study of intelligence cannot be decomposed along such traditional lines as those which, say, mark off typical elementary textbook chapter headings... As Donald Michie (1971) has put it, in speaking of recent developments in AI, '...we now have as a touchstone the realization that the central operations of the intelligence are ... transactions on a knowledge base'. ... psychologists have opted for a type and size of parcel which many people, particularly in AI, are beginning to feel is fundamentally wrong-headed.'

Z. W. Pylyshyn (1979)
Complexity and The Study of Artificial and Human Intelligence
in M D Ringle (Ed) Philosophical Perspectives in Artificial Intelligence

1: Methodological Solipsism & The Intentional Stance

'A cognitive theory with no rationality restrictions is without predictive content; using it, we can have virtually no expectations regarding a believer's behavior. There is also a further metaphysical, as opposed to epistemological, point concerning rationality as part of what it is to be a PERSON: the elements of a mind - and, in particular, a cognitive system - must FIT TOGETHER or cohere... no rationality, no agent.'

C. Cherniak (1986)
Minimal Rationality p.6

'Complexity theory raises the possibility that formally correct deductive procedures may sometimes be so slow as to yield computational paralysis; hence, the "quick but dirty" heuristics uncovered by the psychological research may not be irrational sloppiness but instead the ultimate speed-reliability trade-off to evade intractability. With a theory of nonidealized rationality, complexity theory thereby "justifies the ways of Man" to this extent.'

ibid p.75-76

Dennett's (1987) 'The Intentional Stance' focused on one of three alternative stances, the others being the physical and design stances. In advocating the intentional, he did so as an instrumentalist, accepting that its efficacy is dependent upon an important assumption - that the individual one is trying to predict or understand, using the intentional stance, behaves rationally. This is 'the rationality assumption'.

The establishment of coherence or incoherence depends on a commitment to clear and accurate recording and analysis of observations and their relations within a formal system. Biological limits on both neuron conduction velocity and storage capacity place such severe constraints on natural human information processing that we are restricted to using heuristics rather than the recursive functions which effective computer programs use. This would not be such a problem were it not for the fact that nature does not reliably present its laws in representative samples.

For many routine applications, it is now widely accepted that non-human computers offer a far more reliable set of procedures for analysing information than does natural intuitive human judgement - preference for the latter resting on little more than familiarity and an innate neophobia. The superior reliability of the computer holds at least with respect to decidable systems of the propositional calculus and first order predicate calculus with monadic predicates. Such systems allow automated execution of algorithms written to solve mathematical and logical problems. Yet it still comes as a surprise to many that this is also true of almost all areas of human expertise - all that can be clearly explicated can be computed. Whilst the main practical reason for writing this paper is to explicate the practical implications of the difference between descriptive 'folk psychology' and the scientific analysis of behaviour, it is also in part motivated by the author having been in a position for some time where he has both taught and supported applied psychologists in the use of deductive (computer based relational database 4GL programming) as well as inductive (inferential statistical) inference. These responsibilities in turn came after several years of research into the neural basis of reinforcement, incentive motivation and habit formation in the early 1980s. Reviews of the published literature on the teaching and transfer of formal skills, referenced in the following pages, shed some light on the difficulties which many otherwise well accomplished individuals clearly have in effectively applying the practical implications of such technology in the service of a professional analysis of behaviour.

Some very influential work in mathematical logic this century has suggested that certain domains of concern simply do not fall within the scope and language of science (Quine 1956). That work suggests, in fact, that psychological idioms belong to a domain resistant to the tools of scientific analysis in that they flout a basic precondition for valid inference. Whilst this thesis has been known to logicians for nearly a century (and whilst various studies in psychology in the 1940s and 50s might have alerted us earlier), clear and influential empirical evidence casting doubt on the 'rationality assumption' only began to accumulate throughout the 1970s and 1980s as a result of behavioural decision theory research in psychology and medicine (Davidson 1959; Kahneman, Slovic and Tversky 1982; Arkes and Hammond 1986). This research provided a substantial body of evidence that human judgement is not adequately modelled by the axioms of subjective probability theory (ie Bayes Theorem, cf. Savage 1954; Cooke 1991), or formal logic (Wason 1966; Johnson-Laird and Wason 1972), and that in all areas of human judgement quite severe biases are common. Research suggests that this is at least partially attributable to basic neglect of base rates (where base-rate refers to prior probabilities or relative frequencies of behaviours in the population). The evidence now strongly suggests that natural, common sense judgements are often little more than educated guesses ('heuristics' or 'rules of thumb') prone to well documented biases. These heuristics, such as 1) the ready 'availability' of information, or 2) its 'representativeness' (similarity to stereotype), are characteristic of what psychologists have long studied under the rubric of conditioning rather than what most folk in moments of reflection consider to be 'reasoning'. Yet, as will be seen from what follows, it may well be the case that much of 'reasoning' is in fact no more than the context specific application of a priori rules - intensional heuristics learned through adventitious conditioning.
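
To illustrate why neglect of base rates matters, the following minimal sketch (in Python, with purely hypothetical figures - not data from any of the studies cited) applies Bayes' Theorem to a prediction problem of the kind discussed above:

    # A minimal sketch of base-rate neglect, using hypothetical figures.
    # Suppose a judgement 'flags' likely reoffending with 90% sensitivity
    # and 90% specificity, but the base rate of reoffending is only 10%.
    base_rate = 0.10            # P(reoffend)
    sensitivity = 0.90          # P(flagged | reoffend)
    false_positive_rate = 0.10  # P(flagged | does not reoffend)

    p_flagged = (sensitivity * base_rate
                 + false_positive_rate * (1 - base_rate))
    p_reoffend_given_flagged = sensitivity * base_rate / p_flagged

    print(p_reoffend_given_flagged)  # 0.5 - not the intuitive 0.9

Ignoring the base rate invites the intuitive but mistaken answer of 0.9; the normative calculation shows that half of those flagged would not in fact reoffend.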

To some, this body of research has been taken progressively to undermine the very foundations of Cognitive Science, which takes rationality and substitutivity as axiomatic. It renders the 'Intentional Stance' (Dennett 1987) and Davidson's 'Principle of Charity' unrealistic instrumental strategies in that one cannot assume that subjects are largely rational. Only to the extent that individuals have been receptive to the majority of social rules can they be deemed rational - a factor one should be wary of assuming to any great extent when the focus is on delinquent behaviour.

The literature since Wason's experiments in the 1960s provides some consolation to the teacher who finds it difficult to teach the general use of deductive reasoning skills, for it is notoriously difficult to teach such skills with the objective of having them applied to practical problems outside of the training context. What seems to happen, despite efforts to achieve the contrary, is that the skills which are acquired are both acquired and applied as intensional, inductive heuristics tied to the training examples, rather than learned as a set of formal rules or 'cognitive skills'.

The logical analysis and review of empirical research presented in this paper set out to provide a rationale for the system of inmate management and assessment outlined elsewhere in this series as 'Sentence Management'. This rationale requires a clear understanding of Brentano's Thesis (Quine 1960), also known as 'the problem of intensionality' or 'the content-clause problem'.

'One may accept the Brentano thesis as showing the indispensability of intentional idioms and the importance of an autonomous science of intention, or as showing the baselessness of intentional idioms and the emptiness of a science of intention. My attitude, unlike Brentano's, is the second. To accept intentional usage at face value is, we saw, to postulate translation relations as somehow objectively valid though indeterminate in principle relative to the totality of speech dispositions. Such postulation promises little gain in scientific insight if there is no better ground for it than that the supposed translation relations are presupposed by the vernacular of semantics and intention.'

W. V. O. Quine
The Double Standard: Flight from Intension
Word and Object (1960), p218-221

"The keynote of the mental is not the mind; it is the content-clause syntax, the idiom 'that p'."

W. V. O. Quine (1990)
Intension
The Pursuit of Truth p.71

Quine's gloss on Brentano's Thesis is that there can be no scientific analysis (no reliable application of the laws of logic or mathematics) to the domain of intensional phenomena. Since the language of psychology is intensional, this has devastating (or enlightening) consequences for that part of psychology which is couched in intensional language. It is devastating because intensional locutions flout the very axioms which mathematical, logical and computational processes (the language of science) must assume for deductive inference to be valid. From the demonstrable fact that logical quantification is unreliable within intensional contexts it follows that within such contexts both p and not-p could be held as truth values for the same proposition, and any system which allows such inconsistency allows any conclusion whatsoever to be inferred (a point verified mechanically in the sketch below) - a form of equivocation which is fatal within any rational system/theory. This paper marshals evidence in support of the thesis that many of the difficulties vanish once one appreciates that, methodologically, behaviour science requires the extensional analysis of behaviour and not interpretation or analysis of psychological idioms of propositional attitude. The methodology of behaviour science and technology is exclusively deductive and analytical:

'If we are limning the true and ultimate structure of reality, the canonical scheme for us is the austere scheme that knows no quotation but direct quotation and no propositional attitudes but only the physical constitution and behavior of organisms.'

W.V.O Quine (1960)
Word and Object p 221
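
The fatal consequence of equivocation noted above - that a system tolerating both p and not-p licenses any conclusion whatsoever - can be checked mechanically. Here is a minimal sketch in Python, enumerating all truth assignments for the classical material conditional:

    from itertools import product

    def implies(a, b):
        # The material conditional: 'if a then b'.
        return (not a) or b

    # Ex falso quodlibet: (p AND NOT p) -> q holds under every assignment,
    # so a contradiction entails any arbitrary conclusion q.
    print(all(implies(p and not p, q)
              for p, q in product([True, False], repeat=2)))  # True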

Quine's analysis of the mental idiom renders 'psychology' and behaviour science two very different disciplines, with very different methods and ontological status. The focus is on language, and on anomalies which have evolved along with the growth of language. One class of terms is the 'extensional', the other the logically anomalous 'intensional'. Verbatim extracts from a selection of the relevant literature are presented to illustrate the practical implications which analysis of the subordinate clause has for the applied, practical work of criminological psychologists. Verbatim - because it is sentences, and not propositions, which are true or false in science, and:

'Once it is shown that a region of discourse is not extensional, then according to Quine, we have reason to doubt its claim to describe the structure of reality.'

C. Hookway (1988)
Logic: Canonical Notation and Extensionality
Quine

The dilemma for intensional (common sense or 'folk') psychology is outlined by Nelson (1992):

'The trouble is, according to Brentano's thesis, no such theory is forthcoming on strictly naturalistic, physical grounds. If you want semantics, you need a full-blown, irreducible psychology of intensions.
'There is a counterpart in modern logic of the thesis of irreducibility. The language of physical and biological science is largely extensional. It can be formulated (approximately) in the familiar predicate calculus. The language of psychology, however, is intensional. For the moment it is good enough to think of an intensional sentence as one containing words for intensional attitudes such as belief.
'Roughly what the counterpart thesis means is that important features of extensional, scientific language on which inference depends are not present in intensional sentences. In fact intensional words and sentences are precisely those expressions in which certain key forms of logical inference break down.'

R. J. Nelson (1992)
Naming and Reference p.39-42

and most explicitly by Place (1987):

'The first-order predicate calculus is an extensional logic in which Leibniz's Law is taken as an axiomatic principle. Such a logic cannot admit 'intensional' or 'referentially opaque' predicates whose defining characteristic is that they flout that principle.'

U. T. Place (1987)
Skinner Re-Skinned p.244
In B.F. Skinner Consensus and Controversy Eds. S. Modgil & C. Modgil

The intension of a word or sentence is its 'meaning', or the property it conveys. It is sometimes used almost synonymously with 'proposition' or 'content'. The extension of a term or sentence, on the other hand, is the class of designata of which the term or sentence can be said to be true. Thus, things belong to the same extension of a term or sentence if they are members of the same designated class, whilst things share the same intension (purportedly) if they share the same property. But here there is a problem. Quine (1987) explains it thus:

'If it makes sense to speak of properties, it should make clear sense to speak of sameness and differences of properties; but it does not. If a thing has this property and not that, then certainly this property and that are different properties. But what if everything that has this property has that one as well, and vice versa? Should we say that they are the same property? If so, well and good; no problem. But people do not take that line. I am told that every creature with a heart has kidneys, and vice versa; but who will say that the property of having a heart is the same as that of having kidneys?
'In short, coextensiveness of properties is not seen as sufficient for their identity. What then is? If an answer is given, it is apt to be that they are identical if they do not just happen to be coextensive, but are necessarily coextensive. But NECESSITY, q.v., is too hazy a notion to rest with.
'We have been able to go on blithely all these years without making sense of identity between properties, simply because the utility of the notion of property does not hinge on identifying or distinguishing them. That being the case, why not clean up our act by just declaring coextensive properties identical? Only because it would be a disturbing breach of usage, as seen in the case of the heart and kidneys. To ease that shock, we change the word; we speak no longer of properties, but of CLASSES.
'We must acquiesce in ordinary language for ordinary purposes, and the word 'property' is of a piece with it. But elsewhere it is the notion of class, the reasonable facsimile of property, that takes over, since those contexts never hinge on distinguishing coextensive properties. One instance among many of the use of classes in mathematics is seen under DEFINITION, in the definition of number. For science it is classes SI, properties NO.'

W. V. O. Quine (1987)
Classes versus Properties
QUIDDITIES: An Intermittently Philosophical Dictionary

Quine (1956, 1960, 1992, 1995) urges us to accept that the scope and language of science is entirely extensional, that the intensional is purely attributive, instrumental or creative, and that there can be no universal language of thought or 'mentalese', as the latter would presume determinate translation relations - a possibility which his indeterminacy of translation argument was designed to discredit. Instead, we are asked to accept that different languages are different systems of behaviour which may achieve similar ends. They do not, however, support direct, determinate translation relations. Despite Quine's (1960) indeterminacy thesis, we still (for want of education perhaps?) behave 'as if' it is legitimate to translate (substitute) directly, and we do this not only within our own language, but within our thinking. Quine's point is still fundamentally a point of mathematical logic and philosophy; perhaps in time it will become more familiar.

The intensional idioms with which we are most concerned in our day to day transactions with one another are the so called 'propositional attitudes' - 'saying that', 'remembering that', 'believing that', 'knowing that', 'hoping that' and so on. If we report that someone said that he hated his father, it is often the case that we do not report what was articulated verbatim. Instead, we frequently 'approximate' the 'meaning' of what was said and consider this legitimate so long as the 'meaning' is preserved. Unfortunately, this assumes that, in contexts of propositional attitude - the primary vehicle of psychological expression - we are free to substitute terms or phrases in the subordinate clauses, ie in what comes after 'that'. We can do this within extensional contexts: 7+3 can be substituted for 5+5, both being co-extensive with 10, and doing so is fundamental to the solution of many mathematical problems. By analogy, or simply through not making a distinction, it is naturally assumed that inference within intensional contexts is equally valid - but it is demonstrably, and to some quite surprisingly, not.
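
The contrast can be made concrete with a toy sketch in Python (an illustration only): substitution of co-extensive terms preserves truth in an extensional, arithmetical context, but fails inside a quoted context - the simplest model of a 'that'-clause:

    # Extensional context: only the designated value matters, so the
    # co-extensive expressions 5 + 5 and 7 + 3 are interchangeable.
    assert (5 + 5 == 10) and (7 + 3 == 10)

    # Quoted (intensional) context: the report fixes the very words used,
    # so substituting a co-extensive expression changes the truth value.
    report = "he said that 5 + 5 makes 10"
    assert "5 + 5" in report
    assert "7 + 3" not in report

What is true of the quoted report is not preserved under substitution, even though the substituted terms designate the same number.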

Nobody would report, were Oedipus to say that he wanted to marry Jocasta, that he said that he wanted to marry his mother. The problem with intensional idioms is that they do not support substitutivity of identicals salva veritate. Terms which might seem to be equal in meaning cannot be substituted for one another whilst still preserving the truth functionality of the contexts within which they occur. The original contexts must be preserved. In other words, they can only be directly quoted verbatim as behaviours. Substitution of co-referential identicals salva veritate is Leibniz's Law, the basic extensional axiom of first order logic. It is, therefore, a law which underpins all valid inference. One of the objectives of this paper is therefore to specify in practical detail how and why one must adopt an extensional stance with respect to inmate reporting which does not flout Leibniz's Law. This is a serious challenge to much of current practice in applied criminological psychology. Whilst the example cited above is a simple one, it is nevertheless representative of the problem facing practising psychologists who, often ignorant of the logical constraints on intensional contexts, commit serious logical fallacies in their reporting practices which render much of their report writing and expert advice little more than 'creative fiction' rather than expert scientific analysis based on evidence. Dretske (1980) put the problem as follows:

'If I know that the train is moving and you know that its wheels are turning, it does not follow that I know what you know just because the train never moves without its wheels turning. More generally, if all (and only) Fs are G, one can nonetheless know that something is F without knowing that it is G. Extensionally equivalent expressions, when applied to the same object, do not (necessarily) express the same cognitive content. Furthermore, if Tom is my uncle, one can not infer (with a possible exception to be mentioned later) that if S knows that Tom is getting married, he thereby knows that my uncle is getting married. The content of a cognitive state, and hence the cognitive state itself, depends (for its identity) on something beyond the extension or reference of the terms we use to express the content. I shall say, therefore, that a description of a cognitive state, is non-extensional.'

F. I. Dretske (1980)
The Intentionality of Cognitive States
Midwest Studies in Philosophy 5,281-294

For the discipline of psychology, the above logical analyses can be taken either as a vindication of 20th century behaviourism/physicalism (Quine 1960, 1990, 1992, 1995) or as a knockout blow to 20th century 'cognitivism' and psychologism (methodological solipsism) as viable methodological frameworks for applied behaviour scientists. In 1980, Jerry Fodor published an influential paper entitled 'Methodological Solipsism Considered as a Research Strategy for Cognitive Psychology' in which he advocated that Cognitive Psychology adopt a stance which explicated the way that subjects make sense of the world from their 'own particular point of view'. This was to be contrasted with the objectives of 'Naturalistic Psychology' or 'Evidential Behaviourism'.

Methodological solipsism, as opposed to methodological behaviourism, investigates the 'cognitive processes', mental contents (meanings/propositions) or 'propositional attitudes' of folk/common-sense psychology at face value. It proposes that there is a 'Language of Thought' (Fodor 1975), a universal 'mentalese' which natural languages map onto, and which expresses thoughts as 'propositions'. It examines the apparent causal relations and processes of 'attribution' which comprise the modus operandi of this common-sense or folk psychology. But it also accepts what is known as the 'formality condition', namely that thinking is a purely formal, syntactic, computational affair which therefore has no room for semantic notions such as truth or falsehood. Such computational processes are therefore indifferent to whether beliefs are about the world per se (can be said to have a reference), or are just the imaginings of the belief holder. Yet, as will be shown later, from a logical stance 'beliefs' are not subject to 'existential quantification'. Examples of what all this entails might be helpful here, since the implications are far-ranging, and have a bearing on 'transfer of training', 'generalisation decrement', 'inductive vs. deductive inference', and the distinction between 'heuristics' and 'algorithms'. Here is how Fodor summarized his paper:

'Explores the distinction between 2 doctrines, both of which inform theory construction in much of modern cognitive psychology: the representational theory of mind and the computational theory of mind. According to the former, propositional attitudes are viewed as relations that organisms bear to mental representations. According to the latter, mental processes have access only to formal (nonsemantic) properties of the mental representations over which they are defined. The following claims are defended: (1) The traditional dispute between rational and naturalistic psychology is plausibly viewed as an argument about the status of the computational theory of mind. (2) To accept the formality condition is to endorse a version of methodological solipsism. (3) The acceptance of some such condition is warranted, at least for that part of psychology that concerns itself with theories of the mental causation of behavior. A glossary and several commentaries are included.'

J A Fodor (1980)
Methodological solipsism considered as a research strategy in cognitive psychology.
Behavioral and Brain Sciences; 1980 Mar Vol 3(1) 63-109

Some of the commentaries, particularly those by Loar or Rey, clarify the issues:

'If psychological explanation is a matter of describing computational processes, then the references of our thoughts do not matter to psychological explanation. This is Fodor's main argument... Notice that Fodor's argument can be taken a step further. For not only are the references of our thoughts not mentioned in cognitive psychology; nothing that DETERMINES their references, like Fregean senses, is mentioned either... Neither reference nor reference-determining sense have a place in the description of computational processes.'

B. F. Loar, ibid. p.89

Not all of the commentaries were as formal, as the following makes clear:

'Fodor thinks that when we explain behaviour by mental causes, these causes would be given "opaque" descriptions "true in virtue of the way the agent represents the objects of his wants (intentions, beliefs, etc.) to HIMSELF" (his emphasis). But what an agent intends may be widely different from the way he represents the object of his intention to himself. A man cannot shuck off the responsibility for killing another man by just "directing his intention" at the firing of a gun: "I press a trigger - Well, I'm blessed! he's hit my bullet with his chest!"'

P. Geach, ibid. p.80

The Methodological Solipsist's stance is clearly at odds with what is required to function effectively as an applied criminological psychologist if 'functional effectiveness' is taken to refer to intervention in the behaviour of an inmate with reference to his environment. Here's how Fodor contrasted methodological solipsism with the naturalistic approach:

'..there's a tradition which argues that - epistemology to one side - it is at best a strategic mistake to attempt to develop a psychology which individuates mental states without reference to their environmental causes and effects... I have in mind the tradition which includes the American Naturalists (notably Peirce and Dewey), all the learning theorists, and such contemporary representatives as Quine in philosophy and Gibson in psychology. The recurrent theme here is that psychology is a branch of biology, hence that one must view the organism as embedded in a physical environment.
The psychologist's job is to trace those organism/environment interactions which constitute its behavior.'

J. Fodor (1980) ibid. p.64

That function is clearly and exclusively the professional role advocated in this paper for the Applied Behaviour Analyst.

2: The Fragmentary Nature of Behavioural Skill Acquisition and Application

'the modern ... position is that learned problem-solving skills are, in general, idiosyncratic to the task.'

A. Newell (1980).

Returning to Fodor's paper, here is how Stich (1991) reviewed Fodor's position ten years on:

'This argument was part of a larger project. Influenced by Quine, I have long been suspicious about the integrity and scientific utility of the commonsense notions of meaning and intentional content. This is not, of course, to deny that the intentional idioms of ordinary discourse have their uses, nor that the uses are important. But, like Quine, I view ordinary intentional locutions as projective, context sensitive, observer relative, and essentially dramatic. They are not the sorts of locutions we should welcome in serious scientific discourse. For those who share this Quinean scepticism, the sudden flourishing of cognitive psychology in the 1970s posed something of a problem. On the account offered by Fodor and other observers, the cognitive psychology of that period was exploiting both the ontology and the explanatory strategy of commonsense psychology. It proposed to explain cognition and certain aspects of behavior by positing beliefs, desires, and other psychological states with intentional content, and by couching generalisations about the interactions among those states in terms of their intentional content. If this was right, then those of us who would banish talk of content in scientific settings would be throwing out the cognitive psychological baby with the intentional bath water. On my view, however, this account of cognitive psychology was seriously mistaken. The cognitive psychology of the 1970s and early 1980s was not positing contentful intentional states, nor was it adverting to content in its generalisations. Rather, I maintained, the cognitive psychology of the day was "really a kind of logical syntax (only psychologized)". Moreover, it seemed to me that there were good reasons why cognitive psychology not only did not but SHOULD not traffic in intentional states. One of these reasons was provided by the Autonomy argument.'

Stephen P. Stich (1991)
Narrow Content meets Fat Syntax

in MEANING IN MIND - Fodor and His Critics

Writing with others in 1991, Stich put the point even more dramatically:

'In the psychological literature there is no dearth of models for human belief or memory that follow the lead of commonsense psychology in supposing that propositional modularity is true. Indeed, until the emergence of connectionism, just about all psychological models of propositional memory, except those urged by behaviorists, were comfortably compatible with propositional modularity. Typically, these models view a subject's store of beliefs or memories as an interconnected collection of functionally discrete, semantically interpretable states that interact in systematic ways. Some of these models represent individual beliefs as sentence like structures - strings of symbols that can be individually activated by their transfer from long-term memory to the more limited memory of a central processing unit. Other models represent beliefs as a network of labelled nodes and labelled links through which patterns of activation may spread. Still other models represent beliefs as sets of production rules. In all three sorts of models, it is generally the case that for any given cognitive episode, like performing a particular inference or answering a question, some of the memory states will be actively involved, and others will be dormant...
The thesis we have been defending in this essay is that connectionist models of a certain sort are incompatible with the propositional modularity embedded in commonsense psychology. The connectionist models in question are those that are offered as models at the COGNITIVE level, and in which the encoding of information is widely distributed and subsymbolic. In such models, we have argued, there are no DISCRETE, SEMANTICALLY INTERPRETABLE states that play a CAUSAL ROLE in some cognitive episodes but not others. Thus there is, in these models, nothing with which the propositional attitudes of commonsense psychology can plausibly be identified. If these models turn out to offer the best accounts of human belief and memory, we shall be confronting an ONTOLOGICALLY RADICAL theory change - the sort of theory change that will sustain the conclusion that propositional attitudes, like caloric and phlogiston, do not exist.'

W. Ramsey, S. Stich and J. Garon (1991) (my emphasis)
Connectionism, eliminativism, and the future of folk psychology.

The implications here are that progress in applying psychology will be impeded if psychologists persist in trying to talk about, or use, psychological (intensional) phenomena within a framework (evidential behaviourism) which inherently resists quantification into such terms. Without bound, extensional predicates we cannot reliably use the predicate calculus, and without the predicate (functional) calculus we cannot formulate lawful relationships, statistical or determinate. This methodologically solipsistic or intentional position is, surprisingly, pervasive within the applied psychological profession.

Folk psychology reflects how individuals and groups use socially conditioned (induced) intensional heuristics, and how these are at odds with what we now know to be formally optimal (valid) from the stance of the objective (extensional) sciences. Accordingly, the primary objective of the applied psychologist must be the extensional analysis of observations of behaviour (Quine 1990), with any intervention or advice being based exclusively on extensionally derived data. To attempt to understand or describe behaviour without reference to the environment within which it occurs is not only likely to result in partial accounts of behaviour at best (a point made long ago by Brunswik and Tolman (1933)); to fail to appreciate the constraints on the reporting of observations is to treat self-assessment/report as a valid and reliable source of data, in spite of the evidence to the contrary. Like 'folk physics', 'folk psychology' has been documented and its deficiencies highlighted. It can now be demonstrated how and why the intensional is unreliable.

The following pages cite some examples of research which looks at the use of intensional heuristics. The first looks at the degree to which intensional heuristics can be trained, a development of work initially undertaken by Nisbett and Krantz (1983). Whilst response, or behaviour, generalisation (the transfer of training) to new problems is the focus of this part of the paper, it should be noted, as Nisbett and Wilson (1977) clearly pointed out, that subjects' own self-perception ('awareness') should not be given undue weight when assessing the efficacy of transfer. Instead, such transfer should be extensionally assessed through differential placement in contexts which require transfer of skills.

'Ss were trained on the law of large numbers in a given domain through the use of example problems. They were then tested either on that domain or on another domain either immediately or after a 2-wk delay. Strong domain independence was found when testing was immediate. This transfer of training was not due simply to Ss' ability to draw direct analogies between problems in the trained domain and in the untrained domain. After the 2-wk delay, it was found that (1) there was no decline in performance in the trained domain and (2) although there was a significant decline in performance in the untrained domain, performance was still better than for control Ss. Memory measures suggest that the retention of training effects is due to memory for the rule system rather than to memory for the specific details of the example problems, contrary to what would be expected if Ss were using direct analogies to solve the test problems.'

Fong G. T. & Nisbett R. E. (1991)
Immediate and delayed transfer of training effects in statistical reasoning.
Journal of Experimental Psychology General; 1991 Mar Vol 120(1) 34-45

Note that the authors report a decline in performance after the delay, a point taken up and critically discussed by Ploger and Wilson (1991). In fact, upon reanalysing Fong and Nisbett's results, these authors concluded:

'The data in this study suggest the following argument: Most college students did not apply the LLN [Law of Large Numbers] to problems in everyday life. When given brief instruction on the LLN, the majority of college students were able to remember that rule. This led to some increase in performance on problems involving the LLN. Overall, most students could state the rule with a high degree of accuracy, but failed to apply it consistently. The vast majority of college students could memorize a rule; some applied it to examples, but most did not.
Fong and Nisbett (1991) concluded their article with the suggestion that "inferential rule training may be the educational gift that keeps on giving" (p.44). It is likely that their educational approach may be successful for relatively straightforward problems that are in the same general form as the training examples. We suspect, however, that for more complex problems, rule training might be less effective. Students may remember the rule, but fail to understand the relevant implications. In such cases, students may accept the gift, but it will not keep on giving.'

D. Ploger and M. Wilson
J Experimental Psychology: General, 1991,120,2,213-214 (My emphasis)

This criticism is repeated by Reeves and Weisberg (1993):

'G. T. Fong and R. E. Nisbett claimed that human problem solvers use abstract principles to accomplish transfer to novel problems, based on findings that Ss were able to apply the law of large numbers to problems from a different domain from that in which they had been trained. However, the abstract-rules position cannot account for results from other studies of analogical transfer that indicate that the content or domain of a problem is important both for retrieving previously learned analogs (e.g., K. J. Holyoak and K. Koh, 1987; M. Keane, 1985, 1987; B. H. Ross, 1989) and for mapping base analogs onto target problems (Ross, 1989). It also cannot account for Fong and Nisbett's own findings that different-domain but not same-domain transfer was impaired after a 2-wk delay. It is proposed that the content of problems is more important in problem solving than supposed by Fong and Nisbett.'

L. M. Reeves and R. W. Weisberg
Abstract versus concrete information as the basis for transfer in problem Solving:
Comment on Fong and Nisbett (1991).
Journal of Experimental Psychology General 1993 Mar Vol122(1) 125-128

The above authors concluded their paper:

'Accordingly, we urge caution in development of an abstract-rules approach in analogical problem solving at the expense of domain or exemplar-specific information. Theories in deductive reasoning have been developed that give a more prominent role to problem content (e.g. Evans, 198; Johnson-Laird, 1988; Johnson-Laird & Byrne, 1991) and thus better explain the available data; the evidence suggests that problem solving theories should follow this trend.'

Ibid p.127

The key issue is not whether students (or inmates) can learn particular rules or strategies of behaviour, since such behaviour modification is quite fundamental to training any professional; rather, the issue is how well such rules are in fact applied outside the specific training domain where they are learned, which, writ large, means the specialism within which they belong. This theme runs throughout this paper in different guises. In some places the emphasis is on 'similarity metrics', in others, 'synonymy', 'analyticity' and 'the opacity of the intensional'. Throughout, however, the emphasis is on transfer of skills or training, and how its failure highlights the fragmentary nature of all skill learning.

This principle is fundamental to the rationale for the system of Sentence Management outlined elsewhere in this series (Longley 1991, 1992, 1997), as the system is designed to profile behavioural changes which neither the inmates themselves, nor even those reporting their local observations of behavioural attainment, are likely to be able to report upon accurately.

Fong et al. (1990), having reviewed the general neglect of base rate information and overemphasis on case-specific information in parole decision making, went on to train probation officers in the use of the law of large numbers. This training increased probation officers' use of base-rates when making predictions about recidivism, but this is a specialist, context specific skill.

'Consider a probation officer who is reviewing an offender's case and has two primary sources of information at his disposal: The first is a report by another officer who has known the offender for three years; and the second is his own impressions of the offender based on a half-hour interview. According to the law of large numbers, the report would be considered more important than the officer's own report owing to its greater sample size. But research suggests that people will tend to underemphasize the large sample report and overemphasize the interview. Indeed, research on probation decisions suggests that probation officers are subject to exactly such a bias (Gottfredson and Gottfredson, 1988; Lurigio, 1981)'

G. T. Fong, A. J. Lurigio & L. J. Stalans (1990)
Improving Probation Decisions Through Statistical Training
Criminal Justice and Behavior 17, 3, 1990, 370-388
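
A small simulation (illustrative parameters only, not data from the cited studies) shows why the law of large numbers favours the three-year report over the half-hour interview - larger samples of behaviour estimate an underlying rate with far less error:

    import random

    random.seed(1)
    TRUE_RATE = 0.3  # hypothetical underlying rate of some target behaviour

    def estimate(n):
        # Proportion of n independent observations showing the behaviour.
        return sum(random.random() < TRUE_RATE for _ in range(n)) / n

    # Average error of the estimate from a small sample (an interview)
    # versus a large one (years of recorded observations).
    for n in (10, 1000):
        errors = [abs(estimate(n) - TRUE_RATE) for _ in range(200)]
        print(n, sum(errors) / len(errors))
    # The mean error for n = 1000 is roughly a tenth of that for n = 10,
    # as the law of large numbers predicts.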

However, it is important to evaluate the work of Nisbett and colleagues in the context of their early work, which explicates both the fallibility of 'intuitive' human judgement and the general finding of limited applicability of formal reasoning skills. Their work illustrates the conditions under which formal discipline, or 'cognitive skills', can be effectively inculcated, and which classes of skills are relatively resistant to training. Such training generalises most effectively to apposite situations, many of which will be professional contexts. A major thesis of this paper is that for skills to be put into effective practice, explicit applications must be made salient to elicit and reinforce those skills. Formal, logical skills are most likely to be applied within contexts such as actuarial analysis or the application of professional skills within the domain of information technology.

More recently, Nisbett and colleagues (1992), in defending their stance against the conventional view that there may in fact be little in the way of formal rule learning, have suggested criteria for resolving the question as to whether or not explicit rule following is fundamental to reasoning, and if so, under what circumstances:

'A number of theoretical positions in psychology - including variants of case-based reasoning, instance-based analogy, and connectionist models - maintain that abstract rules are not involved in human reasoning, or at best play a minor role. Other views hold that the use of abstract rules is a core aspect of human reasoning. We propose eight criteria for determining whether or not people use abstract rules in reasoning, and examine evidence relevant to each criterion for several rule systems. We argue that there is substantial evidence that several different inferential rules, including modus ponens, contractual rules, causal rules, and the law of large numbers, are used in solving everyday problems. We discuss the implications for various theoretical positions and consider hybrid mechanisms that combine aspects of instance and rule models.'

E. Smith, C. Langston and R. Nisbett (1992)
The Case for Rules in Reasoning, Cognitive Science 16, 1-40

This 'teaching for transfer' applies to the use of deductive and actuarial technology (computing and statistics), as with any 'cognitive skill', whether part of inmate programmes or staff training. For instance, in some of the published studies (e.g. Porporino et al 1991), pre to post course changes (difference scores) in cognitive skills have been presented as evidence of efficacy. Clearly one must ask whether one is primarily concerned to bring about a change in cognitive skills per se, or a change in other behaviours (such as offending behaviour or employability). In the transfer of training and reasoning studies by Nisbett and colleagues, the issues are acknowledged to be highly dependent on the types of heuristics being induced. The problem is one of generalisation of skills to novel tasks or situations - situations or examples other than the training tasks themselves. To what extent does generalisation in practice occur, if at all? These issues, and the research in experimental psychology (outside the relatively small area of criminological psychology), are cited below as empirical illustrations of the opacity of the intensional. The conventional view, as Fong and Nisbett (1991) state, is that:

'A great many scholars today are solidly in the concrete, empirical, domain-specific camp established by Thorndike and Woodworth (1901), arguing that people reason without the aid of abstract inferential rules that are independent of the content domain.'

Thus, whilst Nisbett and colleagues have provided evidence on the induction of (statistical) heuristics, they acknowledge that there is a problem attempting to teach formal rules (such as those of the predicate calculus) which are not 'intuitively obvious'. This issue is therefore at the heart of the resourcing of specific educational programmes, which are 'cognitively' based, and which adhere to the conventional 'formal discipline' notion. Such investment must be compared with investment in the other programmes which can be used to monitor and shape behaviour under relatively natural or generalizable conditions. There, the natural demands of the activity are focal, and the 'programme' element rests in apposite allocation along with clear accounts of what the activity area requires/offers in terms of requisite behavioural skills.

There is a logical possibility that in restricting the subject matter of psychology, and thereby the deployment of psychologists, to what can only be analysed and managed from a methodological solipsistic (cognitive) perspective, one will render some very significant results of research in psychology irrelevant to applied behaviour science and technology, unless taken as a vindication of the stance that behaviour is essentially context specific. As explicated above, intensions are not, in principle, amenable to quantitative analysis. They are, in all likelihood, only domain or context specific fragments of behaviour. Since this is the title of this paper, a more concrete illustration may be apposite at this point:

When one begins operant conditioning work with rats, one has first to get the animals to notice where the food pellets they are going to bar-press for will be delivered from. This is often referred to as 'magazine training', because the little food pellets are dispensed one at a time from a magazine. After a few deliveries, the rat quite happily munches away after each pellet pops down the food chute. The next task is to get it to go near the lever, so one watches for the rat to move towards the lever, and as soon as it moves in the right direction, one presses a button to deliver a food pellet. As the rat moves closer and closer, one ceases to deliver pellets when it is at a relatively remote site, and only reinforces behaviour which brings the rat almost on to the lever. Finally, the rat brushes against, or falls upon, the lever, and the mechanism of lever press - pellet delivery takes over. Then, the rat learns to 'press the lever'. What is often not fully appreciated is the range of behavioural fragments which are emitted and learned. Each approximation that is learned is a contingency, or production rule, IF p THEN f. The rat is not repeating the 'same' behaviour and having the 'same' response reinforced, since each operant is a unique piece of behaviour in space and time. The set of responses must be treated as a class of fragments of behaviour, out of which emerges a configured set of rules:

IF such_and_such_behaviour THEN food_pops_out_at_X
IF so_and_so_behaviour THEN NOT food_pops_out_at_X.

'Pressing the lever' per se is an abstraction which the trainer makes. What good behaviour analysis will record is the contingent events, and from such records it is very easy to see how the cognitive attributions which comprise the intentional stance are generated. However, the rat learns many fragments of behaviour in such contexts, and progressively some are selectively reinforced and others not (they are extinguished). In fact, once one gets the animal to repeat the required behaviour often enough, it becomes stereotyped (mechanical). The longer the animal is trained, the better it is able to stop when food is no longer contingent (the overtraining extinction effect): the amount of lever pressing in extinction (when the food is no longer contingent upon lever press) can be shown to be a function of how much training the animal receives during acquisition. One could say that the rat progressively 'homes in' on the required invariant class of behaviours as irrelevant behaviours drop out.
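
A crude computational sketch of this shaping process (a toy illustration in Python, not a model of any particular experiment) treats the repertoire as a set of behavioural fragments whose strengths are raised or lowered by differential reinforcement:

    import random

    random.seed(0)
    # Fragments of behaviour start with equal strength; reinforcement is
    # contingent only on fragments sufficiently close to the lever.
    strengths = {"rear": 1.0, "groom": 1.0, "approach": 1.0, "press": 1.0}
    reinforced = {"approach", "press"}  # IF fragment THEN food

    for _ in range(500):
        # Emit a fragment with probability proportional to its strength.
        r = random.uniform(0, sum(strengths.values()))
        for fragment, s in strengths.items():
            r -= s
            if r <= 0:
                break
        if fragment in reinforced:
            strengths[fragment] *= 1.05  # reinforced fragments strengthen
        else:
            strengths[fragment] *= 0.95  # unreinforced fragments extinguish

    print({f: round(s, 2) for f, s in strengths.items()})
    # The reinforced fragments come to dominate the repertoire: 'pressing
    # the lever' is an abstraction over a selected class of fragments.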

In standard classical conditioning paradigms, this is referred to as 'configuring', and in a slightly different guise, 'blocking' (in either case some unpredictive, ie irrelevant, elements of the behavioural repertoire drop out). Stripped of the 'Rattus-Norvegicus-falsificationist' talk, what the animal does is emit a class of behaviours which can be 'construed' cognitively, albeit from the trainer's point of view, but which are probably best treated theoretically as a class of behaviours which can be shaped up to the required behaviour through differential feedback. What is important is practice, so that the 'effective' strategies can be configured. To talk about 'understanding' being necessary, apart from this configuring of context specific fragments of behaviour, may well be just an intrusion of folk psychological language into science, representing little more than a failure of observers to appreciate the subtlety of detailed behaviour analysis.

Cognitive psychologists have studied 'Deductive Inference' from the perspective of 'psychologism', a doctrine which, loosely put, equates the principles of logic with those of thinking. Whilst the work of Church (1936), Post (1936) and Turing (1937) clearly established that the principles of 'effective' computation are not psychological, and can be mechanically implemented, researchers such as Johnson-Laird and Byrne (1992) have still considered it a worthwhile objective to develop 'mental models' providing a descriptive and predictive account of the natural processes which seem characteristic of human reasoning, including the difficulties and errors observed in human deduction (Wason 1966). Prior to the 1970s, formal logical models were de rigueur for the majority of models of memory, attention and other cognitive processes. However, throughout the 1970s, substantial empirical evidence began to accumulate to refute this functionalist (Putnam 1967) thesis that human cognitive processes are formal and computational. Even well educated subjects, it seems, have considerable difficulty with relatively simple deductive Wason Selection tasks such as the following:

[card figure omitted] The task is to test the rule 'if a card has a vowel on one side it has an even number on the other'. Or in the following:

[card figure omitted] Here subjects are asked to test the rule: 'each card that has an A on one side will have a 3 on the other'. In both problems they can only turn over a maximum of two cards to ascertain the truth of the rule. Similarly, the majority have difficulty with the following problem, where the task is to reveal up to two hidden halves of the cards to ascertain the truth or falsehood of the rule 'whenever there is a O on the left there is a O on the right':

[card figure omitted]

Yet conventional, von Neumann based computer technology has no difficulty with instantiations of deductive inference rules (e.g. modus tollens, as in the Resolution Method). The solutions to each problem require the valid application of the material conditional. Problem one is falsified by turning cards A and 9, problem two is solved by turning cards A and 7, and problem three is solved by turning cards (A) and (D). Even logicians, and others trained in the formal rules of deductive logic, often fail to solve such problems:
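
The second problem, for instance, yields to exhaustive search. Here is a minimal sketch in Python (assuming, as in the standard version of the task, that the visible faces are A, D, 3 and 7, with a letter on one side of each card and a number on the other):

    # Which cards must be turned to test the rule
    # 'each card that has an A on one side will have a 3 on the other'?
    LETTERS, NUMBERS = "AD", "37"

    def rule_holds(letter, number):
        return letter != "A" or number == "3"  # the material conditional

    def must_turn(visible):
        # A card must be turned iff some possible hidden face
        # would falsify the rule.
        if visible in LETTERS:
            return any(not rule_holds(visible, n) for n in NUMBERS)
        return any(not rule_holds(le, visible) for le in LETTERS)

    print([card for card in "AD37" if must_turn(card)])  # ['A', '7']

Only the antecedent card (A) and the card showing the negation of the consequent (7) can falsify the conditional; the seductive '3' card is logically irrelevant, which is precisely where intuition errs.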

'Time after time our subjects fall into error. Even some professional logicians have been known to err in an embarrassing fashion, and only the rare individual takes us by surprise and gets it right. It is impossible to predict who he will be. This is all very puzzling....'

P. C. Wason and P. N. Johnson-Laird (1972)
Psychology of Reasoning

Whilst there is evidence that some subjects have less difficulty with such problems if they are presented in more familiar contexts (Johnson-Laird, Legrenzi & Sonino Legrenzi 1972), the replicability of such findings is in doubt (Griggs 1981). Furthermore, there is impressive empirical evidence that formal training in logic does help the solution of such problems (Nisbett et al. 1987). Why should this be so if human reasoning is, as the cognitivists have claimed, essentially logical and computational? The answer, perhaps, is that those who make such claims have failed to appreciate the full implications of the nature of normative models of rationality in contrast to descriptive records of behaviour. Prima facie evidence that under some conditions people can behave rationally should not be taken ipso facto as evidence that they naturally behave rationally, any more than the demonstration of any other context specific skill should be taken a priori as evidence that such skills are natural and generally transferable. Such assumptions are often made only because of the absence of sufficient observations.

Wason (1966) also presented subjects with numbers which increased in series, asking them to identify the rule. In most cases, subjects overlooked the simple fact that the examples shared nothing more than simple progression, and held on to whatever hypotheses they had formed even when the actual rule was subsequently made clear. This persistence of belief, and rationalisation of errors despite debriefing and exposure to contrary evidence, is well documented in psychology, and is a phenomenon which, methodologically, is, as Popper makes clear, at odds with the formal, normative advancement of knowledge. Here is what Sir Karl Popper (1963) had to say:

'My study of the CONTENT of a theory (or of any statement whatsoever) was based on the simple and obvious idea that the informative content of the CONJUNCTION, ab, of any two statements, a, and b, will always be greater than, or at least equal to, that of its components.
'Let a be the statement 'It will rain on Friday'; b the statement 'It will be fine on Saturday'; and ab the statement 'It will rain on Friday and it will be fine on Saturday': it is then obvious that the informative content of this last statement, the conjunction ab, will exceed that of its component a and also that of its component b. And it will also be obvious that the probability of ab (or, what is the same, the probability that ab will be true) will be smaller than that of either of its components. Writing Ct(a) for 'the content of the statement a', and Ct(ab) for 'the content of the conjunction a and b', we have
(1) Ct(a) ≤ Ct(ab) ≥ Ct(b)
This contrasts with the corresponding law of the calculus of probability,
(2) p(a) ≥ p(ab) ≤ p(b)
where the inequality signs of (1) are inverted. Together these two laws, (1) and (2), state that with increasing content, probability decreases, and VICE VERSA; or in other words, that content increases with increasing improbability. (This analysis is of course in full agreement with the general idea of the logical CONTENT of a statement as the class of ALL THOSE STATEMENTS WHICH ARE LOGICALLY ENTAILED by it. We may also say that a statement a is logically stronger than a statement b if its content is greater than that of b - that is to say, if it entails more than b.)
'This trivial fact has the following inescapable consequences: if growth of knowledge means that we operate with theories of increasing content, it must also mean that we operate with theories of decreasing probability (in the sense of the calculus of probability). Thus if our aim is the advancement or growth of knowledge, then a high probability (in the sense of the calculus of probability) cannot possibly be our aim as well: THESE TWO AIMS ARE INCOMPATIBLE.
'I found this trivial though fundamental result about thirty years ago, and I have been preaching it ever since. Yet the prejudice that a high probability must be something highly desirable is so deeply ingrained that my trivial result is still held by many to be 'paradoxical'.

K. Popper (1963)
Truth, Rationality, and the Growth of Knowledge
Ch. 10, p 217-8 CONJECTURES AND REFUTATIONS
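Popper's two laws are easy to verify numerically. The sketch below assumes two independent statements with illustrative probabilities, and takes 1 - p as a simple content measure (an assumption of the illustration; Popper discusses several such measures):

    # A numerical check of Popper's laws (1) and (2). The probabilities are
    # illustrative; independence is assumed so that p(ab) = p(a)p(b).
    p_a, p_b = 0.5, 0.4
    p_ab = p_a * p_b                       # 0.2: the conjunction is least probable

    ct = lambda p: 1 - p                   # crude content measure (assumption)

    assert p_a >= p_ab <= p_b              # law (2): p(a) >= p(ab) <= p(b)
    assert ct(p_a) <= ct(p_ab) >= ct(p_b)  # law (1): the content ordering inverts

    print(p_a, p_b, p_ab, ct(p_ab))        # 0.5 0.4 0.2 0.8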

Modus tollens, and the extensional principle that a compound event can only be less probable than (or equally probable to) its component events taken independently, are fundamental to the logic of scientific discovery, and yet these, along with other principles of extensionality, seem to be in considerable conflict with natural intuitive judgment, as Tversky and Kahneman (1983) demonstrated with their illustration of the 'Linda Problem'. Twenty years after Wason's experiments on deductive reasoning and Popper's (1963) 'Conjectures and Refutations', the same authors wrote in conclusion:

'In contrast to formal theories of belief, intuitive judgments of probability are generally not extensional. People do not normally analyse daily events into exhaustive lists of possibilities or evaluate compound probabilities by aggregating elementary ones. Instead, they use a limited number of heuristics, such as representativeness and availability (Kahneman et al. 1982). Our conception of judgmental heuristics is based on NATURAL ASSESSMENTS that are routinely carried out as part of the perception of events and the comprehension of messages. Such natural assessments include computations of similarity and representativeness, attributions of causality, and evaluations of the availability of associations and exemplars. These assessments, we propose, are performed even in the absence of a specific task set, although their results are used to meet task demands as they arise. For example, the mere mention of "horror movies" activates instances of horror movies and evokes an assessment of their availability. Similarly, the statement that Woody Allen's aunt had hoped that he would be a dentist elicits a comparison of the character to the stereotype and an assessment of representativeness. It is presumably the mismatch between Woody Allen's personality and our stereotype of a dentist that makes the thought mildly amusing. Although these assessments are not tied to the estimation of frequency or probability, they are likely to play a dominant role when such judgments are required. The availability of horror movies may be used to answer the question "What proportion of the movies produced last year were horror movies?", and representativeness may control the judgement that a particular boy is more likely to be an actor than a dentist.
The term JUDGMENTAL HEURISTIC refers to a strategy - whether deliberate or not - that relies on a natural assessment to produce an estimation or a prediction.
Previous discussions of errors of judgement have focused on deliberate strategies and on misinterpretations of tasks. The present treatment calls special attention to the processes of anchoring and assimilation, which are often neither deliberate nor conscious. An example from perception may be instructive: If two objects in a picture of a three-dimensional scene have the same picture size, the one that appears more distant is not only seen as "really" larger but also larger in the picture. The natural computation of real size evidently influences the (less natural) judgement of picture size, although observers are unlikely to confuse the two values or to use the former to estimate the latter.
The natural assessments of representativeness and availability do not conform to the extensional logic of probability theory.'

A. Tversky and D. Kahneman
Extensional Versus Intuitive Reasoning:
The Conjunction Fallacy in Probability Judgment.
Psychological Review Vol 90(4) 1983 p.294

The study of Natural Deduction (Gentzen 1935; Prawitz 1971; Tennant 1990) as a psychological process is just the study of the performance of a skill (like riding a bicycle), endeavouring to provide an account of the empirically observed difficulties in the practice of deduction. This is the task of psychology in general: to describe and account for natural behaviour - an enterprise which generally makes advances through identifying the natural constraints which operate on behaviour. The best models here may turn out to be connectionist, where each individual's model ends up being almost unique in its fine detail (see also Quine 1960).

There is a revealing problem for performance theories, as Johnson-Laird and Byrne (1991) point out:

'A major difficulty for performance theories based on formal logic is that people are affected by the content of a deductive system..yet formal rules ought to apply regardless of content. That is what they are: rules that apply to the logical form of assertions, once it has been abstracted from their content.'

P. N. Johnson-Laird and R. M. J. Byrne (1991)
Deduction p.31

The theme of the paper up to this point has been that methodological solipsism is unlikely to reveal much more than the shortcomings and diversity of social and personal judgment and the context specificity of behaviour - an exercise which has value anthropologically, but which is of dubious utility in an applied behavioural technology. It took until 1879 for Frege to discover the Predicate Calculus (Quantification Theory), and a further half century before Church (1936), Turing (1937) and others laid the foundations for computer science through their collective work on recursive function theory. From empirical evidence, and from developments in technology, it looks as though natural human and other animal reasoning is primarily inductive and heuristic, not deductive and algorithmic. Human beings have considerable difficulty with the latter, and this must be acknowledged to be normal (in whatever population). In fact, it has taken considerable research to discover formal, abstract, extensional principles, often only with the support of logic, mathematics and computer technology itself. The empirical evidence reviewed in this paper is that extensional principles are not widely applied except in specific professional capacities which are domain-specific. Indeed, the simple fact that the discovery of such principles required considerable effort should make us more ready to accept that they are unlikely to be spontaneously applied in everyday reasoning and problem solving.

For further coverage of the 'counter-intuitive' nature of deductive reasoning (and therefore its low frequency in everyday practice) see Sutherland's 1992 popular survey 'Irrationality', or Plous (1993) for a recent review of the psychology of judgment and decision making. For a thorough survey of the rise (and possibly the fall) of Cognitive Science, see Putnam 1986, or Gardner 1987. The latter concluded his survey of the Cognitive Revolution within psychology with a short statement which he referred to as the 'computational paradox'. One thing that Cognitive Psychology has shown us is that the computer or Turing Machine is not a good model of how people reason, at least not in the von Neumann serial-processing sense. Similarly, people do not seem to think naturally in accordance with the axioms of formal, extensional logic. Instead, they learn rough and ready heuristics which they try to apply to problems in an approximate way. Accordingly, Cognitive Science may well turn to the work of Church, Turing and other mathematical logicians who, in the wake of Frege, have worked to elaborate what effective processing is. We will then be faced with the strange situation of human psychology being of little practical interest, except as a historical curiosity - an example of pre-Fregean logic and pre-Church (1936) computation. Behaviour science will pay as little attention to the 'thoughts and feelings' of 'folk psychology' as contemporary physics does to quaint notions of 'folk physics'. For some time, experimental psychologists working within the information processing (computational) tradition have striven to replace concepts such as 'general reasoning capacity' with more mechanistic notions such as 'Working Memory' (Baddeley 1986):

'This series of studies was concerned with determining the relationship between general reasoning ability (R) and general working-memory capacity (WM). In four studies, with over 2000 subjects, using a variety of tests to measure reasoning ability and working-memory capacity, we have demonstrated a consistent and remarkably high correlation between the two factors. Our best estimates of the correlation between WM and R were .82, .88, .80 and .82 for studies 1 through 4 respectively.
...
The finding of such a high correlation between these two factors may surprise some. Reasoning and working-memory capacity are thought of differently and they arise from quite different traditions. Since Spearman (1923), reasoning has been described as an abstract, high level process, eluding precise definition. Development of good tests of reasoning ability has been almost an art form, owing more to empirical trial-and-error than to a systematic delineation of the requirements such tests must satisfy. In contrast, working memory has its roots in the mechanistic, buffer-storage model of information processing. Compared to reasoning, short-term storage has been thought to be a more tractable, demarcated process.'

P. C. Kyllonen & R. E. Christal (1990)
Reasoning Ability Is (Little More Than) Working-Memory Capacity
Intelligence 14, 389-433

Such evidence accords well with the logical arguments of Cherniak which were introduced in Section A, and which are implicit in the following introductory remarks of Shinghal (1992) on automated reasoning:

'Suppose we are given the following four statements:
1. John awakens;
2. John brings a mop;
3. Mother is delighted, if John awakens and cleans his room;
4. If John brings a mop, then he cleans his room.
The statements being true, we can reason intuitively to conclude that Mother is delighted. Thus we have deduced a fact that was not explicitly given in the four statements. But if we were given many statements, say a hundred, then intuitive reasoning would be difficult. Hence we wish to automate reasoning by formalizing it and implementing it on a computer. It is then usually called automated theorem proving. To understand computer-implementable procedures for theorem proving, one should first understand propositional and predicate logics, for those logics form the basis of the theorem proving procedures. It is assumed that you are familiar with these logics.'

R. Shinghal (1992)
Formal Concepts in Artificial Intelligence: Fundamentals
Ch.2 Automated Reasoning with Propositional Logic p.8
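Shinghal's toy deduction can be mechanised in a few lines. The sketch below uses simple forward chaining over Horn clauses rather than the Resolution Method proper; the proposition names are assumptions of the illustration:

    # Forward chaining over Shinghal's four statements, represented as facts
    # and (premises, conclusion) rules. Proposition names are illustrative.
    facts = {"john_awakens", "john_brings_mop"}           # statements 1 and 2
    rules = [
        ({"john_awakens", "john_cleans_room"}, "mother_is_delighted"),  # statement 3
        ({"john_brings_mop"}, "john_cleans_room"),                      # statement 4
    ]

    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print("mother_is_delighted" in facts)    # True: the fact is deduced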

In contrast to the automated report writing and automated deduction operating on actuarial data which is fundamental to the PROBE/Sentence Management project presented in "A System Specification for PROfiling Behaviour" (Longley 1994), Gluck and Bower (1988) have modelled human inductive reasoning using artificial neural network technology (heuristics operating on constraint satisfaction, approximation, or 'best fit' principles rather than 'production rules'). It is unlikely that many humans spontaneously 'reason' using truth-tables or the Resolution Rule (Robinson 1965).
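Gluck and Bower's model was, at its core, an error-driven least-mean-squares ('delta rule') network. A minimal sketch of that kind of learning is given below; the stimulus patterns, learning rate and epoch count are all illustrative assumptions:

    # Delta-rule (least-mean-squares) learning of the kind used to model
    # human category learning. Patterns map hypothetical binary features
    # to a 0/1 category label.
    def train(patterns, epochs=200, rate=0.1):
        w = [0.0] * len(patterns[0][0])                   # one weight per feature
        for _ in range(epochs):
            for x, target in patterns:
                y = sum(wi * xi for wi, xi in zip(w, x))  # linear output
                error = target - y                        # feedback signal
                w = [wi + rate * error * xi for wi, xi in zip(w, x)]
        return w

    # hypothetical feature patterns -> category (1) or not (0)
    patterns = [([1, 0, 1], 1), ([0, 1, 1], 0), ([1, 1, 0], 1), ([0, 0, 1], 0)]
    print(train(patterns))    # weights converge near [1, 0, 0] for these data

The point of the contrast is that the trained weights embody the classification without any explicit production rule being stated anywhere in the system.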

Rescorla (1988), perhaps the dominant US spokesman for research in Pavlovian Conditioning, has argued that Classical Conditioning should perhaps be seen as the experimental modelling of inductive inferential 'cognitive' heuristic processes. Throughout this paper, it is being argued that such inductive inferences are in fact best modelled using artificial neural network technology, and that such processing is intensional, with all of the documented problems of intensionality:

'Connectionist networks are well suited to everyday common sense reasoning. Their ability to simultaneously satisfy soft constraints allows them to select from conflicting information in finding a plausible interpretation of a situation. However, these networks are poor at reasoning using the standard semantics of classical logic, based on truth in all possible models.'

M. Derthick (1990)
Mundane Reasoning by Settling on a Plausible Model
Artificial Intelligence 46,1990,107-157

and perhaps even more familiarly:

'Induction should come with a government health warning.
A baby girl of sixteen months hears the word 'snow' used to refer to snow. Over the next months, as Melissa Bowerman has observed, the infant uses the word to refer to: snow, the white tail of a horse, the white part of a toy boat, a white flannel bed pad, and a puddle of milk on the floor. She is forming the impression that 'snow' refers to things that are white or to horizontal areas of whiteness, and she will gradually refine her concept so that it tallies with the adult one. The underlying procedure is again inductive.'

P. N. Johnson-Laird (1988)
Induction, Concepts and Probability p.238: The Computer and The Mind

3. Connectionism, Parallel Distributed Processing & Conditioning: The Modus Vivendi of Folk Psychology and 'Going Beyond The Information Given'

As briefly mentioned at the end of the last section, there has recently been a resurgence of interest in models of inductive inference (or conditioning), in the guise of 'connectionist systems' or 'Artificial Neural Networks'. These are modelled on how neurones or binary switches operate when their individual properties are assembled into ensembles or nets.

Early work on the integrative functions of the nervous system by Herrick and Sherrington at the end of the nineteenth century paved the way towards the physiological study of the adaptive plasticity of segmented species, culminating in the classic studies of Pavlov in the early decades of the twentieth century. Simple models in the late 1940s (Hebb 1949), in conjunction with early work in computing, spawned a number of papers in the late 1940s and 1950s (Rosenblatt 1959, Widrow and Hoff) modelling the logical and mathematical properties of connected neurones in 'layers'. The notion of a layer here is a topological abstraction, but can profitably be conceived as instantiated in the basic afferent-efferent components of the spinal CNS: the dorsal and ventral horns serve as input-output layers, with the interneurones between the afferent and efferent representing 'hidden' layers.

Neurones do not quite function as formal logical switching elements, nor do their ensembles function as logical well-formed formulae instantiated in the effective procedures of the predicate calculus; rather, they function as weighted connections between activated predicates or vectors in a layered, parallel distributed network:

'Lawful behavior and judgments may be produced by a mechanism in which there is no explicit representation of the rule. Instead, we suggest that the mechanisms that process language and make judgments of grammaticality are constructed in such a way that their performance is characterizable by rules, but that the rules themselves are not written in explicit form anywhere in the mechanism.'

D E Rumelhart and D McClelland (1986)
Parallel Distributed Processing Ch. 18

These are function-approximation systems, mathematical developments of Kolmogorov's Mapping Neural Network Existence Theorem (1957). They generally consist of three layers of processing elements. An input or bottom layer distributes the input vector (a pattern of 1s and 0s representing features of the environment) to the processing elements of the second or 'hidden' layer. This in turn implements a 'transfer function' to the top layer comprising output or classification units. An important feature of Kolmogorov's Theorem is that it is not constructive (it is not algorithmic, or 'effective'). Since the proof of the theorem is not constructive, we do not know how to determine the key quantities of the transfer functions. The theorem simply tells us that such a three layer mapping network must exist. As Hecht-Nielsen (1990) remarks:

'Unfortunately, there does not appear to be too much hope that a method of finding the Kolmogorov network will be developed soon. Thus, the value of this result is its intellectual assurance that continuous vector mappings of a vector variable on the unit cube (actually, the theorem can be extended to apply to any COMPACT, ie, closed and bounded, set) can be implemented EXACTLY with a three-layer neural network.'

R. Hecht-Nielsen (1990)
Kolmogorov's Theorem
Neurocomputing

That is, we may well be able to find weight-matrices which capture or embody certain functions, but we may not be able to say 'effectively' what the precise equations are which algorithmically compute such functions. This is often summarised by statements to the effect that neural networks can model or fit solutions to sample problems, and generalise to new cases, but they cannot provide a rule as to how they make such classifications or inferences. Their ability to do so is distributed across the weightings of the whole weight matrix of connections between the three layers of the network. This is to be contrasted with the formal fitting of linear discriminant functions to partition or classify an N-dimensional space - where N is a direct function of the number of classes or predicates. Fisher's discriminant analysis (and the closely related linear multiple regression technology) arrives at the discriminant function coefficients through the Gaussian method of least squares or a similar algorithmic step. Each b value, or regression weight, is arrived at deductively via the solution of simple simultaneous equations.
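The contrast can be made concrete: the same linear function can be obtained in one deductive, algorithmic step (solving the normal equations) or approximated by iterated error-driven adjustment, where the answer emerges from feedback rather than from an explicit solution. The data below are synthetic and purely illustrative:

    # Closed-form least squares versus iterative approximation of the same
    # regression weights. Data and coefficients are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    true_b = np.array([1.0, -2.0, 0.5])
    y = X @ true_b + rng.normal(scale=0.1, size=100)

    # Deductive route: the normal equations give the weights exactly.
    b_closed = np.linalg.solve(X.T @ X, X.T @ y)

    # Inductive route: gradient descent converges on similar weights, but
    # the answer emerges from repeated feedback, not an explicit solution.
    b_iter = np.zeros(3)
    for _ in range(2000):
        grad = X.T @ (X @ b_iter - y) / len(y)
        b_iter -= 0.1 * grad

    print(np.round(b_closed, 3), np.round(b_iter, 3))    # nearly identical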

Function approximation, or the determination of hidden layer weights or connections, is on the other hand, like operant conditioning, based on recursive feedback - known elsewhere within behaviour science as 'reinforcement' - the differential strengthening or weakening of connections depending on feedback or knowledge of results. It is likely that such processes are basic to the building of all adaptive behaviour during ontogenetic development, a factor which confers flexibility and fault tolerance and permits some degree of recovery of function when the system is damaged. Kohonen (1988), commenting on "Connectionist Models" in contrast to conventional, extensionalist relational databases, writes:

'Let me make it completely clear that one of the most central functions coveted by the "connectionist" models is the ability to solve *implicitly defined relational structures*. The latter, as explained in Sect. 1.4.5, are defined by *partial relations*, from which the structures are determined in very much the same way as solutions to systems of algebraic equations are formed; all the values in the universe of variables which satisfy the conditions expressed as the equations comprise, by definition, the possible solutions. In the relational structures, the knowledge (partial statements, partial relations) stored in memory constitutes the universe of variables, from which the solutions must be sought; and the conditions expressed by (eventually incomplete) relations, ie, the "control structure" [9.20], correspond to the equations. Contrary to the conventional database machines which have also been designed to handle such relational structures, the "connectionist" models are said to take the relations, or actually their strengths, into account statistically. In so doing, however, they only apply the Euclidean metric, or the least square loss function, to optimize the solution. This is not a very good assumption for natural data.'

T. Kohonen (1988)
Ch. 9 Notes on Neural Computing
In Self-Organisation and Associative Memory

That is, the neural network enthusiast should be wary of the fact that such classifiers may not produce normatively defensible, ie 'good' fits to real world data - though as we will see, from a modelling perspective, this may be all that the empirical psychologist should be seeking - a model of what animals actually do.

Throughout the 1970s, working primarily in Social Psychology, Nisbett and colleagues studied the use of probabilistic heuristics in real world human problem solving, primarily in the context of Attribution Theory (H. Kelley 1967, 1972). Such inductive, as opposed to deductive, heuristics of inference do indeed seem to be influenced by training (Nisbett and Krantz 1983, Nisbett et al. 1987). Statistical heuristics are naturally applied in everyday reasoning if subjects are trained in the 'Law of Large Numbers'. This is not surprising, since the application of such heuristics is an example of response generalisation - which is how psychologists have traditionally studied the vicissitudes of inductive inference within Learning Theory. However, we are perfectly at liberty to use the language of Attribution Theory as an alternative. This exchangeability of reference systems may well be an instance of Quinean Ontological Relativity, where what matters is not so much the names in argument positions, or even the predicates themselves, but the relations which emerge from such systems.

Under most natural circumstances, inductive inference is rationally unjustifiable (cf. Popper 1936, Kahneman et al. 1982, Dawes, Faust and Meehl 1989, Sutherland 1992). This is not only because it is generally characterised by unrepresentative sampling (drawing on the 'availability' and 'representativeness' heuristics), but because there just is no rational basis for concluding that because something has happened in the past, it will continue to happen in the future. In later sections research will be reviewed which has demonstrated that human inference is seriously at odds with formal deductive reasoning, and with the algorithmic implementation of such procedures in computers (Church 1936, Post 1936, Turing 1936). [unedited from here on]

One of the objectives of this paper is to alert the reader to the extent to which we generally turn to the formal deductive technology of mathematico-logical method (science) to compensate for the heuristics and biases which typically characterise natural inductive inference. It is not only that we turn to relational databases and analytical software wherever possible to provide descriptive, and deductively reliable, pictures of individuals and collectives; we also turn to more familiar and ostensibly simple features of the extensional stance to establish order in our lives (clocks being but one simple example).

A large and unexpected body of empirical evidence from decision theory, cognitive experimental social psychology and Learning Theory began accumulating in the mid to late 1970s (cf. Kahneman, Tversky and Slovic 1982, Putnam 1986, Stich 1990), and began to cast serious doubt on the viability of the 'computational theory' of mind (Fodor 1975, 1980) which was basic to functionalism (Putnam 1986). That is, the substantial body of empirical evidence which accumulated within Cognitive Psychology itself suggested that, contrary to the doctrine of functionalism, there exists a system of independent, objective knowledge and reasoning against which we can judge human and other animal cognitive processing. However, it gradually became appreciated that the digital computer is not a good model of human information processing, at least not unless this is conceived in terms of 'neural computing' (also known as 'connectionism' or 'Parallel Distributed Processing'). The application of the formal rules of logic and mathematics to the analysis of behaviour is the professional business of applied behaviour scientists. Outside the practice of those professional skills, the scientist himself is as prone to the irrationality of intensional heuristics as are laymen (Wason 1966). Within the domain of formal logic applied to the analysis of behaviour, the work undertaken by applied scientists is impersonal: the scientists' professional views are dictated by the laws of logic and mathematics rather than by personal opinion.

Applied psychologists, particularly those working in the area of Criminological Psychology, are therefore faced with a dilemma. Whilst many of their academic colleagues are *studying* the heuristics and biases of human cognitive processing, the applied psychologist is generally called upon to do something quite different, yet is largely prevented from doing so for lack of relational systems to provide the requisite distributional data upon which to use the technology of algorithmic decision making. In the main, the applied criminological psychologist, as behaviour scientist, is called upon to bring about behaviour change, rather than to better understand or explicate the natural heuristics of cognitive (clinical) judgement. To the applied psychologist, the low correlation between self-report and actual behaviour, the low consistency of behaviour across situations, the low efficacy of prediction of behaviours such as 'dangerousness' on the basis of clinical judgment, and the fallibility of assessments based on interviews are all testament to the now well documented unreliability of intensional heuristics (cognitive processes) as data sources, and we have already pointed to why this is so. Yet generally, psychologists can rely on no other sources, since existing Inmate Information Systems are inadequate. Thus, whilst applied psychologists know from research that they must rely on distributional data to establish their professional knowledge base, and that they must base their work with individuals (whether prisoners, governors or managers) on extensional analysis of such knowledge bases, they have neither the systems available nor the influence to have such systems established, despite powerful scientific evidence (Dawes, Faust and Meehl 1989) that their professional services in many areas depend on the existence and use of such systems. What applied psychologists have learned, therefore, is to eschew intensional heuristics and look instead to the formal technology of extensional analysis of observations of behaviour. The fact that training in formal statistics and deductive logic is difficult, particularly the latter, makes this a challenge, since most of the required skills are only likely to be applicable when sitting in front of a computer keyboard (Holland et al. 1986). It is particularly challenging in that the information systems are generally inadequate to allow professionals to do what they are trained to do.

Over seven years (1987-1994), a programme was developed which was explicitly naturalistic in that it sought to record inmate/environment (regime) interactions. This system was the PROBE/Sentence Management system, detailed in this series as the 12-volume 'A System Specification for PROfiling Behaviour' (Longley 1994). It breaks out of solipsism by making all assessments of behaviour, and all inmate targets, relative to the predetermined requirements of the routines and structured activities defined under function 17 of the annual Governors Contract. It is by design a 'formative profiling system' which is criterion referenced.

The alternative - the intensional heuristics which are the mark of natural human judgement (hence our rich folk-psychological vocabulary of metaphor and narrative) - has to be contrasted with extensional analysis and judgement using technology based on the deductive algorithms of the First Order Predicate Calculus (Relational Database Technology). This is not only coextensive with the 'scope and language of science' (Quine 1954) but is also, to the best of our knowledge from research in Cognitive Psychology, an effective compensatory system for the biases of natural intensional, inductive heuristics (Agnoli and Krantz 1989). Whilst a considerable amount of evidence suggests that training in formal logic and statistics is not in itself sufficient to suppress usage of intensional heuristics in any enduring sense, ie that generalisation to extra-training contexts is limited, there is evidence that judgement can be rendered more rational by training in the use of extensional technology. The demonstration by Tversky and Kahneman (1983) that subjects generally fail to apply the extensional conjunction rule of probability - that a conjunction is always equally or less probable than its elements - and that this failure too is generally resistant to counter-training, is another example, this time within probability theory (a deductive system), of the failure of extensional rules in applied contexts. Careful use of I.T. and principles of deductive inference (e.g. semantic tableaux, Herbrand models, and Resolution methods) promise, within the limits imposed by Gödel's Theorem, to keep us on track if we restrict our technology to the extensional.

4. Methodological (Evidential) Behaviourism: The Perspective From 'The Extensional Stance'

'Suppose that each line of the truth table for the conjunction of all [of a person's] beliefs could be checked in the time a light ray takes to traverse the diameter of a proton, an approximate "supercycle" time, and suppose that the computer was permitted to run for twenty billion years, the estimated time from the "big-bang" dawn of the universe to the present. A belief system containing only 138 logically independent propositions would overwhelm the time resources of this supermachine.'

C. Cherniak (1986)
Minimal Rationality p.93
'Cherniak goes on to note that, while it is not easy to estimate the number of atomic propositions in a typical human belief system, the number must be vastly in excess of 138. It follows that, whatever its practical benefits might be, the proposed consistency-checking algorithm is not something a human brain could even approach. Thus, it would seem perverse, to put it mildly, to insist that a person's cognitive system is doing a bad job of reasoning because it fails to periodically execute the algorithm and check on the consistency of the person's beliefs.'

S. Stich (1990)
The Fragmentation of Reason p.152
'I should like to see a new conceptual apparatus of a logically and behaviourally straightforward kind by which to formulate, for scientific purposes, the sort of psychological information that is conveyed nowadays by idioms of propositional attitude.'

W V O Quine (1978)

In the extract from Cherniak, the point being made is that as the number of discrete propositions increases, the number of possible combinations increases dramatically, or, as Shafir and Tversky (1992) say:

'Uncertain situations may be thought of as disjunctions of possible states: either one state will obtain, or another....Shortcomings in reasoning have typically been attributed to quantitative limitations of human beings as processors of information. "Hard problems" are typically characterized by reference to the "amount of knowledge required," the "memory load," or the "size of the search space"....Such limitations, however, are not sufficient to account for all that is difficult about thinking. In contrast to many complicated tasks that people perform with relative ease, the problems investigated in this paper are computationally very simple, involving a single disjunction of two well defined states. The present studies highlight the discrepancy between logical complexity on the one hand and psychological difficulty on the other. In contrast to the "frame problem" for example, which is trivial for people but exceedingly difficult for AI, the task of thinking through disjunctions is trivial for AI (which routinely implements "tree search" and "path finding" algorithms) but very difficult for people. The failure to reason consequentially may constitute a fundamental difference between natural and artificial intelligence.'

E. Shafir and A. Tversky (1992)
Thinking through Uncertainty: Nonconsequential Reasoning and Choice
Cognitive Psychology 24,449-474
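Cherniak's arithmetic above is easily checked. Taking rough values for the proton diameter and the speed of light (assumptions of the illustration), the available 'supercycles' fall short of the 2^138 truth-table lines required:

    # Checking Cherniak's arithmetic with assumed physical constants.
    proton_diameter = 1.7e-15                      # metres (approximate)
    light_speed = 3.0e8                            # metres per second
    supercycle = proton_diameter / light_speed     # ~5.7e-24 seconds

    seconds = 20e9 * 365.25 * 24 * 3600            # twenty billion years
    supercycles_available = seconds / supercycle   # ~1.1e41 cycles

    print(2 ** 138 > supercycles_available)        # True: 2^138 is ~3.5e41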

From a pattern recognition or classification stance, it is known that as the number of predicates increases, the proportion of functions which are linearly separable becomes smaller, as is made clear by the following extract from Wasserman (1989) discussing the concept of linear separability:

'We have seen that there is no way to draw a straight line subdividing the x-y plane so that the exclusive-or function is represented. Unfortunately, this is not an isolated example; there exists a large class of functions that cannot be represented by a single-layer network. These functions are said to be linearly inseparable, and they set definite bounds on the capabilities of single-layer networks.
Linear separability limits single-layer networks to classification problems in which the sets of points (corresponding to input values) can be separated geometrically. For our two-input case, the separator is a straight line. For three inputs, the separation is performed by a flat plane cutting through the resulting three-dimensional space. For four or more inputs, visualisation breaks down and we must mentally generalise to a space of n dimensions divided by a "hyperplane", a geometrical object that subdivides a space of four or more dimensions.... A neuron with n binary inputs can have 2 exp n different input patterns, consisting of ones and zeros. Because each input pattern can produce two different binary outputs, one and zero, there are 2 exp 2 exp n different functions of n variables.
As shown [below], the probability of any randomly selected function being linearly separable becomes vanishingly small with even a modest number of variables. For this reason single-layer perceptrons are, in practice, limited to simple problems.
n    2 exp 2 exp n       Number of Linearly Separable Functions
1    4                   4
2    16                  14
3    256                 104
4    65,536              1,882
5    4.3 x 10 exp 9      94,572
6    1.8 x 10 exp 19     5,028,134

P. D. Wasserman (1989)
Linear Separability: Ch2. Neural Computing Theory and Practice
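Wasserman's first rows can be confirmed by brute force. The sketch below enumerates all 16 Boolean functions of two inputs and tests each against a small grid of integer weights (sufficient for the two-input case); only XOR and its negation fail:

    # Counting the linearly separable Boolean functions of two inputs.
    from itertools import product

    inputs = list(product([0, 1], repeat=2))
    weights = list(product(range(-2, 3), repeat=3))    # (w1, w2, bias) grid

    def separable(outputs):
        """True if some linear threshold unit realises this truth table."""
        for w1, w2, b in weights:
            if all((w1*x1 + w2*x2 + b > 0) == bool(o)
                   for (x1, x2), o in zip(inputs, outputs)):
                return True
        return False

    count = sum(separable(outs) for outs in product([0, 1], repeat=4))
    print(count)    # 14 of the 16 functions; XOR and XNOR are the exceptions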

In later sections evidence is presented, in the context of clinical vs. actuarial judgment, that human judgement is severely limited, processing only a few variables. Beyond that, non-linear fits become more frequent. This is discussed later in the context of connectionist 'intuitive', inductive inference and constraints on short-term or working memory span (cf. Kyllonen & Christal 1990 - "Reasoning Ability Is (Little More Than) Working-Memory Capacity"), but it is worth mentioning here that in the epilogue to 'Perceptrons - An Introduction to Computational Geometry', the expanded reprint of their 1969 review of neural nets, and after reiterating their original criticism that neural networks had only been shown to be capable of solving 'toy problems' (ie problems with a small number of dimensions) using 'hill climbing' algorithms, Minsky and Papert (1988) effectively performed a volte-face and said:

'But now we propose a somewhat shocking alternative: Perhaps the scale of the toy problem is that on which, in physiological actuality, much of the functioning of intelligence operates. Accepting this thesis leads into a way of thinking very different from that of the connectionist movement. We have used the phrase "society of mind" to refer to the idea that mind is made up of a large number of components, or "agents," each of which would operate on the scale of what, if taken in isolation, would be little more than a toy problem.'

M Minsky and S Papert (1988) p266-7

and a little later, in a passage which is very germane to the fragmentation of behaviour which in turn demands our adoption of the extensional stance:

'On the darker side, they [parallel distributed networks] can limit large-scale growth because what any distributed network learns is likely to be quite opaque to other networks connected to it.'

ibid p.274

This opacity of aspects, or elements, of our own behaviour to ourselves is central to the theme being developed in this series of papers, namely that a science of behaviour must remain entirely extensional and that there can not therefore be a science or technology of psychology per se, to the extent that this remains intensional (Quine 1960, 1992). The discrepancy between experts' reports of the information they use when making diagnoses (judgments) and the information they actually use is reviewed in more detail in a later section; however, research reviewed in Goldberg (1968) suggests that even where diagnosticians are convinced that they use more than additive models (ie use interactions between variables - which statistically may account for some of the non-linearities), empirical evidence shows that in fact they only use a few linear combinations of variables (cf. Nisbett and Wilson 1977, in this context). As an illustration of methodological solipsism (intensionalism) in practice, consider the following, which neatly contrasts the subtle difference between the methodological solipsist approach and that of the methodological or 'evidential' behaviourist.

Several years ago, a prison psychologist sought the views of prison officers and governors as to who they considered to be 'subversives'. Those considered 'subversive' were flagged 1, those not considered subversive were flagged 0. The psychologist then used multiple regression to predict this classification from a number of other behavioural variables. From this he was able to produce an equation which predicted subversiveness as a function of 4 variables: whether or not the inmate had a firearms offence history, the number of reports up to arrival at the current prison, the number of moves up to arrival where the inmate had stayed more than 28 days, and the number of inmate assaults up to arrival.

Note that the dependent variable was binary, the inmate being classified as 'subversive' or 'not subversive'. The prediction equation, which differentially weighted the 4 variables, therefore predicted the dependent variable as a value between 0 and 1. Now the important thing to notice here is that the behavioural variables were being used to predict something which is essentially a propositional attitude, ie the degree of certainty of the officers' belief that certain inmates were subversive. The methodological solipsist may well hold that the officers' beliefs are what are important; the methodological behaviourist, however, would hold that what the officers thought was just an approximation of what the actual measures of inmate behaviour represented, ie their thoughts were just vague, descriptive terms for inmates who had many reports, had assaulted inmates, had been moved through many prisons, and were probably in prison for violent offences. What the officers thought was not, perhaps, all that important, since we could just go to the records and identify behaviours which are characteristic of troublesome behaviour and then identify inmates as a function of those measures (cf. Williams and Longley 1986).
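A hedged reconstruction of that exercise is sketched below. The variable names, distributions and data are hypothetical; only the method - ordinary least squares regression of a 0/1 'subversive' flag on four behavioural measures - follows the account above:

    # Regressing a binary staff judgement on behavioural measures.
    # All data here are synthetic and purely illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    firearms = rng.integers(0, 2, n)     # firearms offence history (0/1)
    reports = rng.poisson(3, n)          # disciplinary reports to date
    moves = rng.poisson(2, n)            # moves of >28 days to date
    assaults = rng.poisson(1, n)         # assaults on inmates to date

    # hypothetical staff classification, simulated for illustration only
    flag = (0.8*firearms + 0.3*reports + 0.2*moves + 0.5*assaults
            + rng.normal(0, 1, n)) > 2.5

    X = np.column_stack([np.ones(n), firearms, reports, moves, assaults])
    b, *_ = np.linalg.lstsq(X, flag.astype(float), rcond=None)
    print(np.round(b, 2))   # fitted weights predict the 0-1 judgement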

In the one case the concern is likely to be with developing better and better predictors of what staff THINK; in the other, it becomes a matter of simply recording better measures of classes of behaviour and empirically establishing functional relations between those classes. In the case of the former, intensional stance, one becomes interested in the psychology of those exposed to such factors (ie those exposed to the behaviour of inmates, and what they vaguely or intuitively describe it as). From the extensional stance (methodological behaviourism) defended in these volumes, such judgments can only be a function of the data that staff have had access to; one is simply interested in recording behaviour itself and deducing implicit relations. Ryle (1949) and many influential behaviourists since (Quine 1960) have, along with Hahn (1933), suggested that this is our intellectual limit anyway:

'It is being maintained throughout this book that when we characterize people by mental predicates, we are not making untestable inferences to any ghostly processes occurring in streams of consciousness which we are debarred from visiting; we are describing the ways in which those people conduct parts of their predominantly public behaviour.'

G. Ryle
The Concept of Mind (1949)

Using regression technology as outlined above is essentially how artificial neural network software is used to make classifications; in fact, there is now substantial evidence to suggest that the two technologies are basically one and the same (Stone 1986), except that in neural network technology the regression variable weights are opaque to the judge, and are arrived at by function approximation; cf. Kosko (1992):

'These properties reduce to the single abstract property of adaptive model-free function estimation: Intelligent systems adaptively estimate continuous functions from data without specifying mathematically how outputs depend on inputs...A function f, denoted f: X → Y, maps an input domain X to an output range Y. For every element x in the input domain X, the function f uniquely assigns the element y to the output range Y. Functions define causal hypotheses. Science and engineering paint our pictures of the universe with functions.'

B. Kosko (1992)
Neural Networks and Fuzzy Systems: A Dynamical Systems
Approach to Machine Intelligence p 19.

The rationale behind Sentence Management, as outlined in the paper 'What are Regimes?' (Longley 1992), is that the most effective way to bring about sustained behaviour change is not through specific, formal training programmes, but through a careful strategy of apposite allocation to activities which naturally require behavioural skills in which an inmate may be deficient. This depends on standardised recording of activity and programme behaviour throughout sentence, which will provide a historical and actuarial record of attainment. This will provide differential information to guide management's decisions as to how best to help inmates lead a constructive life whilst in custody, and, hopefully, after release. Initially, it will serve to support actuarial analysis of behaviour as a practical working inmate and management information system. In time, it should provide data to enable managers to focus resources where they are most required (ie provide comprehensive regime profiles which highlight strong and weak elements). Such a system is only interested in what inmates 'think' or 'believe' to the extent that what they 'think' and 'believe' are specific skills which the particular activities and programmes require, and which can therefore be systematically assessed as criteria of formative behaviour profiling. What is required for effective decision making and behaviour management is a history of behavioural performance in activities and programmes, much like the USA system of Grade Point Averages and attendance records. All such behaviours are the natural skills required by the activities and programmes, and all such assessment is criterion referenced.
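A minimal sketch of the kind of record such a system might keep is given below; the field names, the example activities and the aggregation rule (a GPA-like attainment rate per skill) are all assumptions of the illustration, not the PROBE specification itself:

    # Hypothetical criterion-referenced activity records and their
    # aggregation into a per-skill attainment profile.
    from dataclasses import dataclass

    @dataclass
    class ActivityAssessment:
        activity: str      # a routine or structured activity
        skill: str         # skill demanded by that activity
        attained: bool     # did performance meet the predefined criterion?

    def attainment_profile(records):
        """Aggregate assessments into per-skill attainment rates."""
        totals = {}
        for r in records:
            met, n = totals.get(r.skill, (0, 0))
            totals[r.skill] = (met + r.attained, n + 1)
        return {skill: met / n for skill, (met, n) in totals.items()}

    records = [ActivityAssessment("workshop", "punctuality", True),
               ActivityAssessment("education", "punctuality", False),
               ActivityAssessment("education", "literacy", True)]
    print(attainment_profile(records))   # {'punctuality': 0.5, 'literacy': 1.0}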

The alternative, intensional approach - asking staff to identify risk factors from the documented account of the offence, and subsequently asking staff to look out for them in the inmate's prison behaviour - may well only serve to shape inmates to inhibit (conditionally suppress) such behaviour, especially if their progression through the prison system is contingent on this. However, from animal studies of acquisition-extinction-reacquisition, there is no evidence that such behaviour inhibition is likely to produce a permanent change in the inmate's behaviour in the absence of the inmate learning new behaviours. Such an approach is also blind to base rates of behaviours. Only through a system which encourages the acquisition of new behaviours can we expect there to be a change in risk, and even this would have to be actuarially determined. For a proper estimate of risk, one requires a system in which inmates can be assessed with respect to standard demands of the regime. The standard way to determine risk factors is to derive them from statistical analysis, not from clinical (intensional) judgement.

Much of the rationale for this stance can be deduced from the following. Throughout the 20th century, psychologists' evaluation of the extent to which reasoning can be formally taught has been pessimistic. From Thorndike (1913) through Piaget (see Brainerd 1978) to Newell (1980) it has been maintained that:

'the modern.....position is that learned problem-solving skills are, in general, idiosyncratic to the task.'

A. Newell 1980.

Furthermore, it has been argued that whilst people may in fact use abstract inferential rules, these rules can not be formally taught to any significant degree. They are learned instead under natural conditions of development and cannot be improved by formal instruction. This is essentially Piaget's position.

The above is, in fact, how Nisbett et al. (1987) opened their Science paper 'Teaching Reasoning'. Reviewing the history of the concept of formal discipline, which looked to the use of Latin and the classics to train the 'muscles of the mind', Nisbett et al. provided some empirical evidence on the degree to which one class of inferential rules can be taught. They describe these rules as 'a family of pragmatic inferential rule systems that people induce in the context of solving recurrent everyday problems'. These include "causal schemas", "contractual schemas" and "statistical heuristics". The latter are clearly instances of inductive rather than deductive inference. Nisbett et al. pointed out that the same cannot be said for the teaching of deductive inference (i.e. formal instruction in deductive logic or other syntactic rule systems). With respect to the teaching of logical reasoning, Nisbett et al. had the following to say:

'Since highly abstract statistical rules can be taught in such a way that they can be applied to a great range of everyday life events, is the same true of the even more abstract rules of deductive logic? We can report no evidence indicating that this is true, and we can provide some evidence indicating that it is not.....In our view, when people reason in accordance with the rules of formal logic, they normally do so by using pragmatic reasoning schemas that happen to map onto the solutions provided by logic.' ibid. p.628

Such 'causal schemas' are in fact 'intensional heuristics' (Agnoli and Krantz 1989) and have been widely studied in psychology since the early 1970s, primarily by research psychologists such as Tversky and Kahneman (1974), Nisbett and Ross (1980), Kahneman, Slovic and Tversky (1982), Holland et. al (1986) and Ross and Nisbett (1991).

A longitudinal study by Lehman and Nisbett (1990) looked at differential improvements in the use of such heuristics in college students classified by subject group. They found improvements in the use of statistical heuristics in social science students, but no improvement in conditional logic (such as the Wason selection task). Conversely, the natural science and humanities students showed significant improvements in conditional logic. Interestingly, there were no changes in students studying chemistry. Whilst the authors took the findings to provide some support for their thesis that reasoning can be taught, it must be appreciated that the findings at the same time lend considerable support to the view that each subject area inculcates its own particular type of reasoning, even in highly educated individuals. That is, the data lend support to the thesis that training in particular skills must look to training for transfer and application within particular skill areas. This is elaborated below in the context of the system of Sentence Management. Today, formal modelling of such intensional processes is researched using a technology known as 'Neural Computing', which uses inferential statistical technologies closely related to regression analysis. However, such technologies are inherently inductive: they take samples and generalise to populations. They are at best pattern recognition systems.

Such technologies must be contrasted with formal deductive logical systems which are algorithmic rather than heuristic (extensional rather than intensional). The algorithmic, or computational, approach is central to classic Artificial Intelligence and is represented today by the technology of relational databases, along with rule-based and Knowledge Information Based Systems (KIBS), which are based on the First Order Predicate Calculus, the Robinson Resolution Principle (Robinson 1965, 1979) and the long term objectives of automated reasoning (e.g. Wos et al. 1992 and the Japanese Fifth Generation computing project). The degree to which intensional heuristics can be suppressed by training is now controversial (Kahneman and Tversky 1983; Nisbett and Ross 1980; Holland et al. 1986; Nisbett et al. 1987; Agnoli and Krantz 1989; Gladstone 1989; Fong and Nisbett 1991; Ploger and Wilson 1991; Smith et al. 1992). In fact, the degree to which they are or are not may be orthogonal to the main theme of this paper, since the main thrust of the argument is that behaviour science should look to deductive, not inductive, inferential technology. Central to the controversy, however, is the degree to which the suppression is sustained, and the degree of generalisation and practical application of even 'statistical heuristics'. For example, Ploger and Wilson (1991) said in commentary on the 1991 Fong and Nisbett paper:

'G. T. Fong and R. E. Nisbett argued that, within the domain of statistics, people possess abstract rules; that the use of these rules can be improved by training; and that these training effects are largely independent of the training domain. Although their results indicate that there is a statistically significant improvement in performance due to training, they also indicate that, even after training, most college students do not apply that training to example problems.'

D. Ploger & M. Wilson
Statistical reasoning: What is the role of inferential rule training? Comment on Fong and Nisbett.
Journal of Experimental Psychology General; 1991 Jun Vol 120(2) 213-214

Furthermore, Gladstone (1989) criticises the stance adopted by the same group in an article in American Psychologist (1988):

'[This paper] criticizes the assertion by D. R. Lehman et al. that their experiments support the doctrine of formal discipline. The present author contends that the work of Lehman et al. provides evidence that one must teach for transfer, not that transfer occurs automatically. The problems of creating a curriculum and teaching it must be addressed if teachers are to help students apply a rule across fields. Support is given to E. L. Thorndike's (1906, 1913) assessment of the general method of teaching for transfer.'

R. Gladstone (1989)
Teaching for transfer versus formal discipline.
American Psychologist; 1989 Aug Vol 44(8) 1159

What this research suggests is that whilst improvements can be made by training in formal principles (such as teaching the 'Law of Large Numbers'), this does not in fact contradict the stance of Piaget and others that most of these inductive skills are learned under natural conditions of lived experience ('Erlebnis' and 'Lebenswelt', Husserl 1952; 'Being-in-the-world', Heidegger 1928). Furthermore, there is evidence from short term longitudinal studies of training in such skills that not only is there a decline in such skills after even a short time, but there is little evidence of application of the heuristics to novel problem situations outside the training domain. This is the standard and conventional criticism of 'formal education'. Throughout this work, the basic message seems to be to focus training on specific skills acquisition which will not so much generalise to novel contexts as find application in other, similar if not identical, contexts. Recently, Nisbett and colleagues have looked further at the criteria for assessing the efficacy of cognitive skills training:

'A number of theoretical positions in psychology (including variants of case-based reasoning, instance-based analogy, and connectionist models) maintain that abstract rules are not involved in human reasoning, or at best play a minor role. Other views hold that the use of abstract rules is a core aspect of human reasoning. The authors propose 8 criteria for determining whether or not people use abstract rules in reasoning. They examine evidence relevant to each criterion for several rule systems. There is substantial evidence that several inferential rules, including modus ponens, contractual rules, causal rules, and the law of large numbers, are used in solving everyday problems. Hybrid mechanisms that combine aspects of instance and rule models are considered.'

E. E. Smith, C. Langston and R. E. Nisbett: The case for rules in reasoning.
Cognitive Science; 1992 Jan-Mar Vol 16(1) 1-40

We use rules, it can be argued, when we apply extensionalist strategies which are of course, by design, domain specific. Note that in the history of logic it took until 1879 to discover Quantification Theory. Furthermore, research on deductive reasoning itself suggests strongly that the view developed in this volume is sound:

'Reviews 3 types of computer program designed to make deductive inferences: resolution theorem-provers and goal-directed inferential programs, implemented primarily as exercises in artificial intelligence; and natural deduction systems, which have also been used as psychological models. It is argued that none of these methods resembles the way in which human beings usually reason. They [humans] appear instead to depend, not on formal rules of inference, but on using the meaning of the premises to construct a mental model of the relevant situation and on searching for alternative models of the premises that falsify putative conclusions.'

P. N. Johnson-Laird
Human and computer reasoning.
Trends in Neurosciences; 1985 Feb Vol 8(2) 54-57
'Contends that the orthodox view in psychology is that people use formal rules of inference like those of a natural deduction system. It is argued that logical competence depends on mental models rather than formal rules. Models are constructed using linguistic and general knowledge; a conclusion is formulated based on the model that maintains semantic information, expresses it parsimoniously, and makes explicit something not directly stated by the premise. The validity of the conclusion is tested by searching for alternative models that might refute the conclusion. The article summarizes a theory developed in a 1991 book by P. N. Johnson-Laird and R. M. Byrne.'

P. N. Johnson-Laird & R. M. Byrne
Precis of Deduction.
Behavioral and Brain Sciences; 1993 Jun Vol 16(2) 323-380

That is, human reasoning tends to focus on content or intension. As has been argued elsewhere, such heuristic strategies invariably suffer as a consequence of their context specificity and constraints on working memory capacity.

5. The Methodological Plight of Contemporary Experimental Psychology

Inductive inferential technology par excellence, i.e. Neyman-Pearson hypothesis testing, or more accurately, conclusions drawn using that technology, has not been without its critics. This section is a critique of standard inferential statistics, with a recommendation for more emphasis on basic descriptive statistics and linear modelling. The most valuable contribution of specialists is their skill in deductive rather than inductive logic. Rather than training staff in the use of heuristics, we should perhaps be providing them with specific formal roles, i.e. functions, which require the practice of formal deductive skills. Here is how Meehl (1978) reviewed the standard (inductive) methodological approach adopted by most psychologists:

'I suggest to you that Sir Ronald has befuddled us, mesmerised us, and led us down the primrose path. I believe that the almost universal reliance on merely refuting the null hypothesis as the standard method for corroborating substantive theories in the soft areas is a terrible mistake, is basically unsound, poor scientific strategy, and one of the worst things that ever happened in the history of psychology'.

P. E. Meehl
Theoretical Risks and Tabular Asterisks:
Sir Karl and Sir Ronald and The Slow Progress of Soft Psychology.
Journal of Consulting and Clinical Psychology, 1978, 46(4), 806-834

The contrasting approach, point-prediction refutation, is the falsificationism of Sir Karl (Popper). In 1967, Meehl made the point very clearly:

'I conclude that the effect of increased precision, whether achieved by improved instrumentation and control, greater sensitivity in the logical structure of the experiment, or increasing the number of observations, is to yield a probability approaching 1/2 of corroborating our substantive theory by a significance test, even if the theory is totally without merit. That is to say, the ordinary result of improving our experimental methods and increasing our sample size, proceeding in accordance with the traditionally accepted method of theory-testing by refuting a directional null hypothesis, yields a prior probability approaching 1/2, and very likely somewhat above that value by an unknown amount. It goes without saying that successfully negotiating an experimental hurdle of this sort can constitute only an extremely weak corroboration of any substantive theory, quite apart from currently disputed issues of the Bayesian type regarding the assignment of prior probabilities to the theory itself. So far as I am able to discern, this methodological truth is either unknown or systematically ignored by most behaviour scientists. I do not know to what extent this is attributable to confusion between the substantive theory T and the statistical hypothesis H1, with the resulting mis-assignment of the probability (1-p), complementary to the significance level p attained, to the "probability" of the substantive theory; or to what extent it arises from insufficient attention to the truism that the point-null hypothesis H0 is [quasi] always false. It seems unlikely that most social science investigators would think in their usual way about a theory in meteorology which "successfully predicted" that it would rain on the 17th of April, given the antecedent information that it rains (on the average) during half the days in the month of April.
But this is not the worst of the story. Inadequate appreciation of the extreme weakness of the test to which a substantive theory T is subjected by merely predicting a directional statistical difference d ≠ 0 is then compounded by a truly remarkable failure to recognize the logical asymmetry between, on the one hand, (formally invalid) "confirmation" of a theory via affirming the consequent in an argument of form [T → H1, H1, infer T], and on the other hand the deductively tight REFUTATION of the theory modus tollens by a falsified prediction, the logical form being: [T → H1, ~H1, infer ~T].
While my own philosophical predilections are somewhat Popperian, I dare say any reader will agree that no full-fledged Popperian philosophy of science is presupposed in what I have just said. The destruction of a theory *modus tollens* is, after all, a matter of deductive logic; whereas the "confirmation" of a theory by its making successful predictions involves a much weaker kind of inference. This much would be conceded by even the most anti-Popperian "inductivist".
The writing of behavior scientists often reads as though they assumed - what it is hard to believe anyone would explicitly assert if challenged - that successful and unsuccessful predictions are practically on all fours in arguing for and against a substantive theory.'

P. E. Meehl (1967)
Theory Testing in Psychology and Physics: A Methodological Paradox.
Philosophy of Science, p.111-2 June 1967
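
Meehl's arithmetic is easily checked by simulation. In the sketch below (illustrative Python; the distributions, the sample size and the 'worthless theory' that guesses a direction at random are all assumptions of the example, not anything Meehl reports), a theory with no merit whatever is 'corroborated' by a directional significance test at close to the rate he derives:

    # A worthless theory guesses the direction of a difference at random.
    # Because the point-null is quasi-always false, large samples make the
    # one-tailed .05 test 'corroborate' the theory at a rate approaching 1/2.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, trials = 2000, 1000
    corroborated = 0
    for _ in range(trials):
        true_diff = rng.normal(0.0, 0.2)      # the null is never exactly true
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(true_diff, 1.0, n)
        predicted_sign = rng.choice([-1, 1])  # theory without merit: a coin toss
        res = stats.ttest_ind(b, a)
        if np.sign(res.statistic) == predicted_sign and res.pvalue / 2 < 0.05:
            corroborated += 1
    print(f"'Corroboration' rate: {corroborated / trials:.2f}")  # close to 1/2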

Rozeboom (1960), Bolles (1962), Bakan (1966) and Lykken (1968) made similar points throughout the 1960s. Cohen (1990), in a remarkably well-written paper, reviewed the dire situation as follows:

'Over the years, I have learned not to make errors of the following kinds:
When a Fisherian null hypothesis is rejected with an associated probability of, for example, .026, it is not the case that the probability that the null hypothesis is true is .026 (or less than .05, or any other value we can specify). Given our framework of probability as long-run relative frequency -as much as we might wish it to be otherwise - this result does not tell us about the truth of the null hypothesis, given the data. (For this we have to go to Bayesian or likelihood statistics, in which probability is not relative frequency but degree of belief.) What it tells us is the probability of the data, given the truth of the null hypothesis - which is not the same thing, as much as it may sound like it.
If the p value with which we reject the Fisherian null hypothesis does not tell us the probability that the null hypothesis is true, it certainly cannot tell us anything about the probability that the research or alternative hypothesis is true. In fact, there is no alternate hypothesis in Fisher's scheme: Indeed, he violently opposed its inclusion by Neyman and Pearson.
Despite widespread misconceptions to the contrary, the rejection of a given null hypothesis gives us no basis for estimating the probability that a replication of the research will again result in rejecting that null hypothesis.
Of course, everyone knows that failure to reject the Fisherian null hypothesis does not warrant the conclusion that it is true. Fisher certainly knew and emphasized it, and our textbooks duly so instruct us. Yet how often do we read in the discussion and conclusions of articles now appearing in our most prestigious journals that "there is no difference" or "no relationship"?
The other side of this coin is the interpretation that accompanies results that surmount the .05 barrier and achieve the state of grace of "statistical significance". "Everyone" knows that all this means is that the effect is not nil, and nothing more. Yet how often do we see such a result to be taken to mean, at least implicitly, that the effect is significant, that is, important, large. If a result is highly significant, say p<0.001, the temptation to make this misinterpretation becomes all but irresistible.
Let's take a close look at this null hypothesis - the fulcrum of the Fisherian scheme - that we so earnestly seek to negate. A null hypothesis is any precise statement about a state of affairs in a population, usually the value of a parameter, frequently 0. It is called a "null" hypothesis because it means "nothing doing". Thus, "The difference in the mean score of U.S. men and women on an Attitude Toward the U.N. scale is zero" is a null hypothesis. "The product-moment r between height and IQ in high school students is zero" is another. "The proportion of men in a population of adult dyslexics is .50" is yet another. Each is a precise statement - for example, if the population r between height and IQ is in fact .03, the null hypothesis that it is zero is false. It is also false if the r is .01, .001, or .000001!
A little thought reveals a fact widely understood by statisticians: The null hypothesis, taken literally (and that's the only way you can take it in formal hypothesis testing), is always false in the real world. It can only be true in the bowels of a computer processor running a Monte Carlo study (and even then a stray electron may make it false). If it is false, even to a tiny degree, it must be the case that a large enough sample will produce a significant result and lead to its rejection. So if the null hypothesis is always false, what's the big deal about rejecting it?'

J. Cohen (1990)
What I Have Learned (So Far)
American Psychologist, Dec 1990 p.1307-1308
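
The first of Cohen's points can be made concrete with a small simulation (an illustrative sketch: the 80% base rate of true nulls and the half-standard-deviation effect are assumptions chosen for the example). Among results 'significant' at the .05 level, the proportion of true null hypotheses is fixed by the base rate and the power of the studies, not by the p value:

    # p < .05 does not mean P(H0 true) < .05: condition on 'significance'
    # and the share of true nulls depends on base rate and power.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    studies, n = 5000, 30
    null_true = rng.random(studies) < 0.8   # assume 80% of tested nulls are true
    pvals = np.empty(studies)
    for i in range(studies):
        mu = 0.0 if null_true[i] else 0.5   # 0.5 SD effect when H0 is false
        pvals[i] = stats.ttest_1samp(rng.normal(mu, 1.0, n), 0.0).pvalue

    sig = pvals < 0.05
    print(f"P(H0 true | p < .05) = {null_true[sig].mean():.2f}")  # far above .05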

Lykken (1968) had simply pointed out:

'Most theories in the areas of personality, clinical, and social psychology predict no more than the direction of a correlation, group difference, or treatment effect. Since the null hypothesis is never strictly true, such predictions have about a 50-50 chance of being confirmed by experiment when the theory in question is false, since the statistical significance of the result is a function of the sample size.'

It is this contrast between testing, i.e. falsifying, a theory or hypothesis by such a weak criterion as the above, and making point predictions (testing conjunctions of statements by modus tollens) as Popper urges, that led Meehl to write his 1978 paper on 'Theoretical Risks and Tabular Asterisks..', lamenting the slow progress in soft psychology which is the consequence of not appreciating how weak a test the Neyman-Pearson procedure actually is. But perhaps the worst of it is this: Cohen (1962) undertook a power survey (power = 1 - beta, where beta is the probability of a Type II error) of the articles in the 1960 volume of the Journal of Abnormal and Social Psychology and found that the median power to detect a medium effect size under representative conditions was only .46 (i.e. worse than chance); Sedlmeier and Gigerenzer (1989) then published a paper entitled "Do Studies of Statistical Power Have an Effect on the Power of Studies?" in which they replicated the study on the 1984 volume of the Journal of Abnormal Psychology and found that the median power under the same conditions was .44, a little worse than the original .46. Apart from showing no improvement over the years, and providing substantial empirical evidence for what Lakatos has to say below, what does this mean? Cohen had this to say:

'When I finally stumbled onto power analysis, and managed to overcome the handicap of a background with no working math beyond high school algebra (to say nothing of mathematical statistics), it was as if I had died and gone to heaven. After I learned what noncentral distributions were and figured out that it was important to decompose noncentrality parameters into their constituents of effect size and sample size, I realized that I had a framework for hypothesis testing that had four parameters: the alpha significance criterion, the sample size, the population effect size, and the power of the test. For any statistical test, any one of these was a function of the other three. This meant, for example, that for a significance test of a product-moment correlation, using a two-sided .05 alpha criterion and a sample size of 50 cases, if the population correlation is .30, my long-run probability of rejecting the null hypothesis and finding the sample correlation to be significant was .57, a coin flip. As another example, for the same alpha = .05 and population r = .30, if I want to have .80 power, I could determine that I needed a sample size of 85.'

J. Cohen (1990) p.1308
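
Cohen's four-parameter relation is easy to verify. The short sketch below (an illustration using the standard Fisher z approximation to the significance test of a correlation; it is not Cohen's own computation) reproduces both of his figures:

    # Cohen's (1990) example checked with the Fisher z approximation:
    # fix any three of {alpha, n, effect size, power} and the fourth follows.
    import numpy as np
    from scipy.stats import norm

    def power_r(r, n, alpha=0.05):
        """Approximate power of a two-tailed test that a correlation is zero."""
        z_eff = np.arctanh(r) * np.sqrt(n - 3)   # Fisher z; SE = 1/sqrt(n-3)
        z_crit = norm.ppf(1 - alpha / 2)
        return norm.sf(z_crit - z_eff) + norm.cdf(-z_crit - z_eff)

    print(f"power at r = .30, n = 50: {power_r(0.30, 50):.2f}")  # approx. .57
    n80 = next(n for n in range(4, 1000) if power_r(0.30, n) >= 0.80)
    print(f"n needed for .80 power:   {n80}")                    # approx. 85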

And this was Lakatos' earlier conclusion:

'The requirement of continuous growth...hits patched-up, unimaginative series of pedestrian "empirical" adjustments which are so frequent, for instance in modern social psychology. Such adjustments may, with the help of so-called "statistical techniques" make some "novel" predictions and may even conjure up some irrelevant grains of truth in them. But this theorising has no unifying idea, no heuristic power, no continuity. They do not add up to a genuine research programme and are on the whole, worthless..
After reading Meehl (1967) and Lykken (1968) one wonders whether the function of statistical techniques in the social sciences is not primarily to provide a machinery for producing phoney corroborations and thereby a semblance of "scientific progress" where, in fact, there is nothing but an increase in pseudo-intellectual garbage.'

I. Lakatos (1978) pp. 88-9
'Falsification and the Methodology of Scientific Research Programmes'
in The Methodology of Scientific Research Programmes: Philosophical Papers Vol 1 (pp. 139-67), Eds. J. Worrall & G. Currie

Guttman (1976;1985) has made similar remarks within the professional statistical literature:

'Many practitioners have become disillusioned with declarative inference, especially that of hypothesis testing. For example, according to Carver 'statistical significance testing has involved more fantasy than fact. The emphasis on statistical significance over scientific significance in education and research represents a corrupt form of the scientific method. Educational research would be better off if it stopped testing its results for statistical significance'. The 'significance' testing referred to here is largely according to Neyman-Pearson theory. We shall marshall arguments against such testing, leading to the conclusion that it be abandoned by all substantive science and not just by educational research and other social sciences which have begun to raise voices against the virtual tyranny of this branch of inference in the academic world.'

L. Guttman (1985) (my emphasis)
The Illogic of Statistical Inference for Cumulative Science
Applied Stochastic Models and Data Analysis Vol 1, 3-10

Things had not changed much by the early 1990s:

'It is not at all clear why researchers continue to ignore power analysis. The passive acceptance of this state of affairs by editors and reviewers is even more of a mystery. At least part of the problem may be the low level of consciousness about effect size: It is as if the only concern about magnitude in much psychological research is with regard to the statistical test result and its accompanying p value, not with regard to the psychological phenomenon under study.'

J. Cohen (1992)
A Power Primer: Quantitative Methods in Psychology:
Psychological Bulletin 112,1,155-159

If not via the classic, albeit 'hybrid' (Gigerenzer 1993), methodology of inductive inferential hypothesis testing, what practical form can a naturalistic science and technology of behaviour take? The solution being urged in the PROBE project is a) historical, b) descriptive and c) deductive. It requires psychologists simply to record and extensionally analyse histories of behaviour categorised according to finite reference classes (which take on specific valid values) in conjunction with dates, times and locations. In effect it requires them to learn a lesson from Quine (1960, 1992, 1995), who advocates testing via observation statements and observation categoricals, and a relational approach to the analysis of behaviour as physically observable events. In terms of a popular cliché, 'Why look into a crystal ball when you can read the book?' What is required is sound records of what inmates achieve whilst in custody, i.e. what they do. It is through differential analysis of these records of attainment that decisions can be made about differential management. In ensuring that good records are kept, it is also important that those records do not go beyond what is actually done, leaving statistical, or actuarial, analysis to determine how the individual case is conceived.
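
What such recording might look like in relational terms can be sketched briefly. The schema, values and query below are purely illustrative (they are not PROBE's actual design): dated, located, criterion-referenced records of performance, queried extensionally, with no psychological attribution anywhere in the data:

    # Illustrative only -- not PROBE's actual schema. Records of observable
    # attainment against a priori reference classes, queried extensionally.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE attainment (
        inmate_id INTEGER, activity TEXT, skill TEXT,
        level INTEGER,                      -- valid values fixed in advance
        observed_on TEXT, location TEXT)""")
    con.executemany("INSERT INTO attainment VALUES (?,?,?,?,?,?)", [
        (1001, "education", "numeracy", 2, "1993-02-01", "B wing"),
        (1001, "education", "numeracy", 3, "1993-05-14", "B wing"),
        (1001, "workshops", "machining", 1, "1993-06-02", "workshop 2"),
    ])

    # The query returns the recorded history itself, not an interpretation.
    for row in con.execute("""SELECT skill, MAX(level), COUNT(*)
                              FROM attainment WHERE inmate_id = 1001
                              GROUP BY skill"""):
        print(row)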

The majority of staff employed by the Prison Service are already performing tasks which could be classed as work in behaviour management, but all too many confuse behavioural measures with psychological factors, or, alternatively, equate the word 'behaviour' with a limited class of actions, usually with a social consequence. In reality, skills with vocabulary, grammar, counting, and all other skills taught by instructors and teachers are behaviours, and can be recorded as skills. Behaviour, in this sense, is no more or less than observable, recordable action or performance. What is required is a professional service in analysis of such performance using the same quantitative technology brought to bear in other areas of physical science and technology. If psychologists limited themselves to helping other staff to record and analyse measures of behaviour as functions of the regime in which they occur, the Prison Service would have an effective science and technology of behaviour along with a clear framework for both recruitment and staff training of such professionals.

In recent years a good number of academics have made recommendations consistent with such a role. Cohen (1990), for instance, had the following to say:

'Despite my career-long identification with statistical inference, I believe, together with such luminaries as Meehl (1978), Tukey (1977), and Gigerenzer (Gigerenzer and Murray 1987), that hypothesis testing has been greatly overemphasized in psychology and in the other disciplines that use it. It has diverted our attention from crucial issues. Mesmerized by a single all-purpose, mechanized, "objective" ritual in which we convert numbers into other numbers and get a yes-no answer, we have come to neglect close scrutiny of where the numbers came from....So, how should I use statistics in psychological research? First of all, descriptively. John Tukey's (1977) Exploratory Data Analysis is an inspiring account of how to effect graphic and numerical analyses of the data at hand so as to understand them. The techniques, although subtle in conception, are simple in application, requiring no more than paper and pencil (Tukey says if you have a hand-held calculator, fine).......he manages to fill 700 pages with techniques of "mere" description, pointing out in the preface that the emphasis on inference in modern statistics has resulted in a loss of flexibility in data analysis.'

J. Cohen (1990)
American Psychologist Dec p.1310

As Gigerenzer (1987, 1988, 1993) has pointed out, some of the bewilderment one encounters in teaching statistics, mentioned at the beginning of this volume, can be accounted for by latent inhibition: students have largely been inadequately prepared as undergraduates. In 1986, Meehl proposed a thesis which he urges us to take literally:

'Thesis: Owing to the abusive reliance upon significance testing - rather than point or interval estimation, curve shape, or ordination - in the social sciences, the usual article summarizing the state of the evidence on a theory (such as appears in the Psychological Bulletin) is nearly useless .... I think it is scandalous that editors still accept manuscripts in which the author presents tables of significance tests without giving measures of overlap or such basic descriptive statistics as might enable the reader to do rough computations, from means and standard deviations presented, as to what the overlap is.'

P. E. Meehl (1986)
What Social Scientists Don't Understand
in Metatheory in Social Science: Eds D. W. Fiske & R. A. Shweder p.325

The main difficulty lies perhaps in the context specificity of all learning: the failure of Leibniz's Law within epistemic contexts. In the Sentence Management system, such intensionalist opacity is averted by making all observations of behaviour relative to demands of the environment specified a priori, under Function 17 of the Governors' Contract, on RM-1s. The analytical and management technology is declarative, criterion referenced, deductive, and extensional. Its detailed presentation is available in Volume 2 of 'A System Specification For PROfiling Behaviour', which almost exclusively presents its findings as Tukey box plots and other descriptive statistics.
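
In practice the 'descriptive' machinery is modest. The sketch below (the activities and attainment scores are invented for illustration and bear no relation to PROBE data) computes the Tukey five-number summaries which underlie a box plot:

    # Tukey five-number summaries of attainment by activity: the kind of
    # 'mere description' on which volume 2's presentation relies.
    import numpy as np

    rng = np.random.default_rng(2)
    activities = {                        # hypothetical attainment scores
        "education": rng.normal(60, 12, 200),
        "workshops": rng.normal(55, 15, 200),
        "PE":        rng.normal(70, 8, 200),
    }
    for name, scores in activities.items():
        q1, med, q3 = np.percentile(scores, [25, 50, 75])
        print(f"{name:10s} min={scores.min():5.1f} Q1={q1:5.1f} "
              f"median={med:5.1f} Q3={q3:5.1f} max={scores.max():5.1f}")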

6. LOGICAL (Extensional) VS. INTUITIVE (Intensional) JUDGMENT

It will help to provide an idea of what we mean by 'clinical' and 'actuarial' judgement. The following is taken from an early review (Meehl 1954) and from a relatively recent review, 'Clinical vs. Actuarial Judgement', by Dawes, Faust and Meehl (1989):

'One of the major methodological problems of clinical psychology concerns the relation between the"clinical" and "statistical" (or "actuarial") methods of prediction. Without prejudging the question as to whether these methods are fundamentally different, we can at least set forth the main difference between them as it appears superficially. The problem is to predict how a person is going to behave. In what manner should we go about this prediction?
We may order the individual to a class or set of classes on the basis of objective facts concerning his life history, his scores on psychometric tests, behavior ratings or check lists, or subjective judgements gained from interviews. The combination of all these data enables us to CLASSIFY the subject; and once having made such a classification, we enter a statistical or actuarial table which gives the statistical frequencies of behaviors of various sorts for persons belonging to the class. The mechanical combining of information for classification purposes, and the resultant probability figure which is an empirically determined relative frequency, are the characteristics that define the actuarial or statistical type of prediction.
Alternatively, we may proceed on what seems, at least, to be a very different path. On the basis of interview impressions, other data from the history, and possibly also psychometric information of the same type as in the first sort of prediction, we formulate, as a psychiatric staff conference, some psychological hypothesis regarding the structure and the dynamics of this particular individual. On the basis of this hypothesis and certain reasonable expectations as to the course of other events, we arrive at a prediction of what is going to happen. This type of procedure has been loosely called the clinical or case-study method of prediction'.

P. E. Meehl (1954)
The Problem: Clinical vs. Statistical Prediction

'In the clinical method the decision-maker combines or processes information in his or her head. In the actuarial or statistical method the human judge is eliminated and conclusions rest solely on empirically established relations between data and the condition or event of interest. A life insurance agent uses the clinical method if data on risk factors are combined through personal judgement. The agent uses the actuarial method if data are entered into a formula, or tables and charts that contain empirical information relating these background data to life expectancy.
Clinical judgement should not be equated with a clinical setting or a clinical practitioner. A clinician in psychiatry or medicine may use the clinical or actuarial method. Conversely, the actuarial method should not be equated with automated decision rules alone. For example, computers can automate clinical judgements. The computer can be programmed to yield the description "dependency traits", just as the clinical judge would, whenever a certain response appears on a psychological test. To be truly actuarial, interpretations must be both automatic (that is, prespecified or routinized) and based on empirically established relations.'

R. Dawes, D. Faust & P. Meehl (1989)
Clinical Versus Actuarial Judgement Science v243, pp 1668-1674 (1989)
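
Meehl's description is mechanical enough to be written down directly. In the sketch below (the reference classes, cases and outcome variable are hypothetical), a new case is classified on objective facts and the empirically determined relative frequency for that class is simply read off:

    # The actuarial method in miniature: classify, then look up the
    # empirical relative frequency for the reference class.
    from collections import defaultdict

    past_cases = [   # (age_band, priors_band, reoffended) -- hypothetical
        ("under25", "3+", True), ("under25", "3+", True), ("under25", "0-2", False),
        ("25plus", "0-2", False), ("25plus", "3+", True), ("25plus", "0-2", False),
        # ...in practice, thousands of recorded outcomes
    ]

    table = defaultdict(lambda: [0, 0])        # class -> [reoffended, total]
    for age, priors, outcome in past_cases:
        table[(age, priors)][0] += outcome
        table[(age, priors)][1] += 1

    def actuarial_estimate(age, priors):
        """Relative frequency of the outcome in the case's reference class."""
        hits, total = table[(age, priors)]
        return hits / total if total else None

    print(actuarial_estimate("under25", "3+"))   # 1.0 on this toy table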

As long ago as 1941, Lundberg made it clear that any argument between those committed to the 'clinical' (intuitive) stance and those arguing for the 'actuarial' (statistical) was a pseudo-argument, since all the clinician could possibly be making his or her decision on was his or her limited experience (database) of past cases and outcomes.

'I have no objection to Stouffer's statement that "if the case-method were not effective, life insurance companies hardly would use it as they do in supplementing their actuarial tables by a medical examination of the applicant in order to narrow their risks." I do not see, however, that this constitutes a "supplementing" of actuarial tables. It is rather the essential task of creating specific actuarial tables. To be sure, we usually think of actuarial tables as being based on age alone. But on the basis of what except actuarial study has it been decided to charge a higher premium (and how much) for a "case" twenty pounds overweight, alcoholic, with a certain family history, etc.? These case-studies have been classified and the experience for each class noted until we have arrived at a body of actuarial knowledge on the basis of which we "predict" for each new case. The examination of the new case is for the purpose of classifying him as one of a certain class for which prediction is possible.'

G. Lundberg (1941)
Case Studies vs. Statistical Methods - An Issue Based on Misunderstanding.
Sociometry v4 pp379-83 (1941)

A few years later, Meehl (1954), drawing on the work of Lundberg (1941) and Sarbin (1941) in reviewing the relative merits of clinical vs. statistical prediction (judgement), reiterated the point that all judgements about an individual are referenced to a class; they are, therefore, always probability judgements.

'No predictions made about a single case in clinical work are ever certain, but are always probable. The notion of probability is inherently a frequency notion, hence statements about the probability of a given event are statements about frequencies, although they may not seem to be so. Frequencies refer to the occurrence of events in a class; therefore all predictions; even those that from their appearance seem to be predictions about individual concrete events or persons, have actually an implicit reference to a class....it is only if we have a reference class to which the event in question can be ordered that the possibility of determining or estimating a relative frequency exists.. the clinician, if he is doing anything that is empirically meaningful, is doing a second-rate job of actuarial prediction. There is fundamentally no logical difference between the clinical or case-study method and the actuarial method. The only difference is on two quantitative continua, namely that the actuarial method is more EXPLICIT and more PRECISE.'

P. Meehl (1954)
Clinical vs. Statistical Prediction:
A Theoretical Analysis and a Review of the Evidence

There has, unfortunately, over the years, been a strong degree of resistance to the actuarial approach. It must be appreciated, however, that the technology to support comprehensive actuarial analysis and judgement has only been physically available since the 1940s with the invention of the computer. Practically speaking, it has only been available on the scale we are now discussing since the late 1970s, with the development of sophisticated DBMSs (databases with query languages based on the Predicate Calculus; Codd 1970; Gray 1984; Gardarin and Valduriez 1989; Date 1992), and the development and mass production of powerful and cheap microcomputers. Minsky and Papert (1988), in their expanded edition of 'Perceptrons' (basic pattern recognition systems), wrote:

'The goal of this study is to reach a deeper understanding of some concepts we believe are crucial to the general theory of computation. We will study in great detail a class of computations that make decisions by weighting evidence.....The people we want most to speak to are interested in that general theory of computation.'

M. L. Minsky & S. A. Papert (1969, 1988)
Perceptrons p.1

The 'general theory of computation' is, as elaborated elsewhere, 'Recursive Function Theory' (Church 1936, Kleene 1936, Turing 1937), and is essentially the approach being advocated here as evidential behaviourism, or eliminative materialism which eschews psychologism and intensionalism. Nevertheless, as late as 1972, Meehl still found he had to say:

'I think it is time for those who resist drawing any generalisation from the published research, by fantasising about what WOULD happen if studies of a different sort WERE conducted, to do them. I claim that this crude, pragmatic box score IS important, and that those who deny its importance do so because they just don't like the way it comes out. There are few issues in clinical, personality, or social psychology (or, for that matter, even in such fields as animal learning) in which the research trends are as uniform as this one. Amazingly, this strong trend seems to exert almost no influence upon clinical practice, even, you may be surprised to learn, in Minnesota!...It would be ironic indeed (but not in the least surprising to one acquainted with the sociology of our profession) if physicians in nonpsychiatric medicine should learn the actuarial lesson from biometricians and engineers, whilst the psychiatrist continues to muddle through with inefficient combinations of unreliable judgements because he has not been properly instructed by his colleagues in clinical psychology, who might have been expected to take the lead in this development.
I understand (anecdotally) that there are two other domains, unrelated to either personality assessment or the healing arts, in which actuarial methods of data combination seem to do at least as good a job as the traditional impressionistic methods: namely, meteorology and the forecasting of security prices. From my limited experience I have the impression that in these fields also there is a strong emotional resistance to substituting formalised techniques for human judgement. Personally, I look upon the "formal-versus-judgmental" issue as one of great generality, not confined to the clinical context. I do not see why clinical psychologists should persist in using inefficient means of combining data just because investment brokers, physicians, and weathermen do so. Meanwhile, I urge those who find the box score "35:0" distasteful to publish empirical studies filling in the score board with numbers more to their liking.'

P. E. Meehl (1972)
When Shall We Use Our Heads Instead of the Formula?
PSYCHODIAGNOSIS: Collected Papers (1971)

In 1982, Kahneman, Slovic and Tversky, in their collection of papers on (clinical) judgement under conditions of uncertainty, prefaced the book with the following:

'Meehl's classic book, published in 1954, summarised evidence for the conclusion that simple linear combinations of cues outdo the intuitive judgements of experts in predicting significant behavioural criteria. The lasting intellectual legacy of this work, and of the furious controversy that followed it, was probably not the demonstration that clinicians performed poorly in tasks that, as Meehl noted, they should not have undertaken. Rather, it was the demonstration of a substantial discrepancy between the objective record of people's success in prediction tasks and the sincere beliefs of these people about the quality of their performance. This conclusion was not restricted to clinicians or to clinical prediction: People's impressions of how they reason, and how well they reason, could not be taken at face value.'

D. Kahneman, P. Slovic & A. Tversky (1982)
Judgment under Uncertainty: Heuristics and Biases

Earlier in 1977, reviewing the Attribution Theory literature evidence on individuals' access to the reasons for their behaviours, Nisbett and Wilson (1977) summarised the work as follows:

'....there may be little or no direct introspective access to higher order cognitive processes. Ss are sometimes (a) unaware of the existence of a stimulus that importantly influenced a response, (b) unaware of the existence of the response, and (c) unaware that the stimulus has affected the response. It is proposed that when people attempt to report on their cognitive processes, that is, on the processes mediating the effects of a stimulus on a response, they do not do so on the basis of any true introspection. Instead, their reports are based on a priori, implicit causal theories, or judgments about the extent to which a particular stimulus is a plausible cause of a given response. This suggests that though people may not be able to observe directly their cognitive processes, they will sometimes be able to report accurately about them. Accurate reports will occur when influential stimuli are salient and are plausible causes of the responses they produce, and will not occur when stimuli are not salient or are not plausible causes.'

R. Nisbett & T. Wilson (1977)
Telling More Than We Can Know: Public Reports on Private Processes

Such rules of thumb, or attributions, are of course the intensional heuristics studied by Tversky and Kahneman (1973), or the 'function approximations' encoded in connection weights (in both artificial and real neural networks).

Mathematical logicians such as Putnam (1975, 1988), Elgin (1990) and Devitt (1990) have long been arguing that psychologists may, as Skinner (1971, 1974) argued consistently, be looking for their data in the wrong place. Despite the empirical evidence from research in psychology on the problems of self-report, and a good deal more drawn from decision making in medical diagnosis, the standard means of obtaining information for 'reports' on inmates for purposes of review, and the standard means of assessing inmates for counselling, is the clinical interview. In closed environments such as prisons this makes little sense, since it is possible to observe behaviour directly under the relatively natural conditions of everyday activities.

Yet the clinical interview remains the basis of much of the work of the prison psychologist, despite the extensive literature on the deficiencies of self-report, the inevitable constraints upon behaviour imposed by the interview setting, and the availability of a controlled environment. The deficiencies of clinical judgment have been widely documented:

'The previous review of this field (Slovic, Fischhoff & Lichtenstein 1977) described a long list of human judgmental biases, deficiencies, and cognitive illusions. In the intervening period this list has both increased in size and influenced other areas of psychology (Bettman 1979, Mischel 1979, Nisbett & Ross 1980).'

H. Einhorn and R. Hogarth (1981)
Behavioral Decision Theory: Processes of Judgment and Choice
Annual Review of Psychology, 32, 53-88

A decade earlier, Goldberg (1968) wrote:

'If one considers the rather typical findings that clinical judgments tend to be (a) rather unreliable (in at least two of the three senses of that term), (b) only minimally related to the confidence and amount of experience of the judge, (c) relatively unaffected by the amount of information available to the judge, and (d) rather low in validity on an absolute basis, it should come as no great surprise that such judgments are increasingly under attack by those who wish to substitute actuarial prediction systems for the human judge in many applied settings....I can summarize this ever-growing body of literature by pointing out that over a very large array of clinical judgment tasks (including by now some which were specifically selected to show the clinician at his best and the actuary at his worst), rather simple actuarial formulae typically can be constructed to perform at a level no lower than that of the clinical expert.'

L. R. Goldberg (1968)
Simple models or simple processes? Some research on clinical judgments
American Psychologist, 1968, 23(7) p.483-496

and Dawes, Faust and Meehl (1989) summarised the evidence as follows:

'The various studies can thus be viewed as repeated sampling from a uniform universe of judgement tasks involving the diagnosis and prediction of human behavior. Lacking complete knowledge of the elements that constitute this universe, representativeness cannot be determined precisely. However, with a sample of about 100 studies and the same outcome obtained in almost every case, it is reasonable to conclude that the actuarial advantage is not exceptional but general and likely to encompass many of the unstudied judgement tasks. Stated differently, if one poses the query: "Would an actuarial procedure developed for a particular judgement task (say, predicting academic success at my institution) equal or exceed the clinical method?", the available research places the odds solidly in favour of an affirmative reply. "There is no controversy in social science that shows such a large body of qualitatively diverse studies coming out so uniformly....as this one" (Meehl, J. Person. Assess., 50, 370 (1986)).'

The distinction between collecting observations and integrating them for analysis was brought out vividly by Meehl (1986):

'Surely we all know that the human brain is poor at weighting and computing. When you check out at a supermarket you don't eyeball the heap of purchases and say to the clerk, "well it looks to me as if it's about $17.00 worth; what do you think?" The clerk adds it up. There are no strong arguments....from empirical studies.....for believing that human beings can assign optimal weights in equations subjectively or that they apply their own weights consistently.'

P. Meehl (1986)
Causes and effects of my disturbing little book
J Person. Assess. 50,370-5,1986
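
The force of the point is easily demonstrated. In the sketch below (synthetic data: the four cues and their weights are invented for the example), even unit weights, which simply add up standardised cues in the manner of Dawes's 'improper linear models', track the least-squares solution closely; neither requires a judge to weight cues in his or her head:

    # Mechanical combination: optimal (least-squares) weights vs. unit
    # weights that just add the standardised cues together.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 500
    X = rng.normal(0.0, 1.0, (n, 4))                 # four standardised cues
    y = X @ np.array([0.5, 0.4, 0.3, 0.2]) + rng.normal(0.0, 1.0, n)

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # 'proper' weights
    unit = X.sum(axis=1)                             # 'improper' unit weights

    def validity(pred):
        return np.corrcoef(pred, y)[0, 1]

    print(f"least-squares weights: r = {validity(X @ beta):.2f}")
    print(f"unit weights:          r = {validity(unit):.2f}")   # nearly as good

Tversky and Kahneman press the complementary point about distributional, i.e. base-rate, information: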
'Distributional information, or base-rate data, consist of knowledge about the distribution of outcomes in similar situations. In predicting the sales of a new novel, for example, what one knows about the author, the style, and the plot is singular information, whereas what one knows about the sales of novels is distributional information. Similarly, in predicting the longevity of a patient, the singular information includes his age, state of health, and past medical history, whereas the distributional information consists of the relevant population statistics. The singular information consists of the relevant features of the problem that distinguish it from others, while the distributional information characterises the outcomes that have been observed in cases of the same general class. The present concept of distributional data does not coincide with the Bayesian concept of a prior probability distribution. The former is defined by the nature of the data, whereas the latter is defined in terms of the sequence of information acquisition.
The tendency to neglect distributional information and to rely mainly on singular information is enhanced by any factor that increases the perceived uniqueness of the problem. The relevance of distributional data can be masked by detailed acquaintance with the specific case or by intense involvement with it........
The prevalent tendency to underweigh or ignore distributional information is perhaps the major error of intuitive prediction. The consideration of distributional information, of course, does not guarantee the accuracy of forecasts. It does, however, provide some protection against completely unrealistic predictions. The analyst should therefore make every effort to frame the forecasting problem so as to facilitate utilising all the distributional information that is available to the expert.'

A. Tversky & D. Kahneman (1983)
Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment
Psychological Review v90(4) 1983

The implications are that the professional has little option but to trust measurement and analysis as the basis for his or her professional advice, and more importantly, to eschew all else.

'The possession of unique observational capacities clearly implies that human input or interaction is often needed to achieve maximal predictive accuracy (or to uncover potentially useful variables) but tempts us to draw an additional, dubious inference. A unique capacity to observe is not the same as a unique capacity to predict on the basis of integration of observations. As noted earlier, virtually any observation can be coded quantitatively and thus subjected to actuarial analysis. As Einhorn's study with pathologists and other research shows, greater accuracy may be achieved if the skilled observer performs this function and then steps aside, leaving the interpretation of observational and other data to the actuarial method.'

R. Dawes, D. Faust and P. Meehl (1989)
ibid.

7. Generalisation of Skills: Behaviour Modification or Effective Management and Planning?

'No predictions made about a single case in clinical work are ever certain, but are always probable. The notion of probability is inherently a frequency notion, hence statements about the probability of a given event are statements about frequencies, although they may not seem to be so. Frequencies refer to the occurrence of events in a class; therefore all predictions; even those that from their appearance seem to be predictions about individual concrete events or persons, have actually an implicit reference to a class....it is only if we have a reference class to which the event in question can be ordered that the possibility of determining or estimating a relative frequency exists..... the clinician, if he is doing anything that is empirically meaningful, is doing a second-rate job of actuarial prediction. There is fundamentally no logical difference between the clinical or case-study method and the actuarial method. The only difference is on two quantitative continua, namely that the actuarial method is more explicit and more precise.'

P. E. Meehl (1954)
Clinical versus Statistical Prediction
A Theoretical Analysis and a Review of the Evidence

Monitoring behaviour is one essential function of the PROBE system, which supported the work of psychologists and managers in the area of inmate control within the English Prison Service between 1986 and 1994. Effective control of behaviour requires staff and inmates to make use of that information in the interests of programming or shaping behaviour in a pro-social (non-delinquent) direction. How this is done is detailed in the Sentence Management and Planning system, covered at length in volume 2 of the series 'A System Specification for PROfiling Behaviour' (Longley 1994).

If there is to be any change in an inmate's behaviour after release, there will need to be a change in behaviour from the time he was convicted, either through acquisition of new behaviours or simple maturation (as in the age-report rate function). In ascertaining the characteristic behaviour of classes, we do not so much make predictions of future behaviour as describe behaviour characteristic of classes. This is clearly seen in discriminant analysis, and in regression in general. We analyse the relationship between one class and others, and, provided that an individual can be allocated to one class or another, we can say, as a consequence of his class membership, what other characteristics are likely to be the case as a function of that class membership. Temporality, i.e. pre-diction, has little to do with the objective of identifying such relations.
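
The point can be put concretely. In the sketch below (synthetic data: the two recorded measures, the classes and the completion rates are invented for illustration), a simple linear discriminant allocates an individual to a class, after which what we 'say' about him is nothing more than the observed frequency of other characteristics within that class:

    # Allocation to a class via Fisher's linear discriminant; the 'prediction'
    # is just the empirical rate recorded for the class allocated.
    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.normal([2.0, 1.0], 1.0, (300, 2))    # class A on two measures
    B = rng.normal([0.0, 0.0], 1.0, (300, 2))    # class B on the same measures
    completion_rate = {"A": 0.70, "B": 0.35}     # observed per class (hypothetical)

    pooled = (np.cov(A.T) + np.cov(B.T)) / 2     # pooled covariance
    w = np.linalg.solve(pooled, A.mean(0) - B.mean(0))
    threshold = w @ (A.mean(0) + B.mean(0)) / 2

    new_case = np.array([1.8, 0.7])
    cls = "A" if w @ new_case > threshold else "B"
    print(f"class {cls}: observed completion rate {completion_rate[cls]:.0%}")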

Any system which provides a record of skill acquisition during sentence must therefore be an asset in the long-term management of inmates towards this objective. However, research in education and training, perhaps the most practical areas of application of Learning Theory, clearly endorses the thesis of the context specificity of the intensional. Some of the most influential models of cognitive processing in the early to mid 1970s took context as critical for the encoding and recall of memory (Tulving and Thomson 1973). Generalisation Theory, i.e. that area of research which looks at transfer of training, has almost unequivocally concluded that learning is context specific. Empirical research supports the logical conclusion that skill acquisition does not readily transfer from one task to another. This is another illustration of the failure of substitutivity in psychological contexts. In fact, many of the attractive notions of intensionalism, so characteristic of cognitivism, may reveal themselves to be vacuous on closer analysis:

'Generalizability theory (Cronbach, Gleser, Nanda & Rajaratnam 1972; see also, Brennan, 1983; Shavelson, Webb, & Rowley, 1989) provides a natural framework for investigating the degree to which performance assessment results can be generalised. At a minimum, information is needed on the magnitude of variability due to raters and to the sampling of tasks. Experience with performance assessments in other contexts such as the military (e.g. Shavelson, Mayberry, Li & Webb, 1990) or medical licensure testing (e.g. Swanson, Norcini, & Grosso, 1987) suggests that there is likely substantial variability due to task. Similarly, generalizability studies of direct writing assessments that manipulate tasks also indicate that the variance component for the sampling of tasks tends to be greater than for the sampling of raters (Breland, Camp, Jones, Morris, & Rock, 1987; Hieronymous & Hoover 1986).
Shavelson, Baxter & Pine (1990) recently investigated the generalizability of performance across different hands-on performance tasks such as experiments to determine the absorbency of paper towels and experiments to discover the reactions of sowbugs to light and dark and to wet and dry conditions. Consistent with the results of other contexts, Shavelson et al. found that performance was highly task dependent. The limited generalizability from task to task is consistent with research in learning and cognition that emphasizes the situation and context-specific nature of thinking (Greeno, 1989).'

R. L. Linn, E. L. Baker & S. B. Dunbar (1991)
Complex, Performance-Based Assessment: Expectations and Validation Criteria
Educational Researcher, vol 20, 8, pp15-21

Intensionalists, holding that what happens inside the head matters, i.e. that intension determines extension, appeal to our common, folk-psychological intuitions to support arguments for the merits of abstract cognitive skills. Such strategies, however, are not justified on the basis of educational research.

'Critics of standardized tests are quick to argue that such instruments place too much emphasis on factual knowledge and on the application of procedures to solve well-structured decontextualized problems (see e.g. Frederiksen 1984). Pleas for higher order thinking skills are plentiful. One of the promises of performance-based assessments is that they will place greater emphasis on problem solving, comprehension, critical thinking, reasoning, and metacognitive processes. These are worthwhile goals, but they will require that criteria for judging all forms of assessment include attention to the processes that students are required to exercise.
It should not simply be assumed, for example, that a hands-on scientific task encourages the development of problem solving skills, reasoning ability, or more sophisticated mental models of the scientific phenomenon. Nor should it be assumed that apparently more complex, open-ended mathematics problems will require the use of more complex cognitive processes by students. The report of the National Academy of Education's Committee that reviewed the Alexander-James (1987) study group report on the Nation's Report Card (National Academy of Education, 1987) provided the following important caution in that regard:
It is all too easy to think of higher-order skills as involving only difficult subject matter as, for example, learning calculus. Yet one can memorize the formulas for derivatives just as easily as those for computing areas of various geometric shapes, while remaining equally confused about the overall goals of both activities. (p.54)
The construction of an open-ended proof of a theorem in geometry can be a cognitively complex task or simply the display of a memorized sequence of responses to a particular problem, depending on the novelty of the task and the prior experience of the learner. Judgments regarding the cognitive complexity of an assessment need to start with an analysis of the task; they also need to take into account student familiarity with the problems and the ways in which students attempt to solve them.'

ibid p. 19

Skills do not seem to generalise well. Dretske (1980) put the issue as follows:

'If I know that the train is moving and you know that its wheels are turning, it does not follow that I know what you know just because the train never moves without its wheels turning. More generally, if all (and only) Fs are G, one can nonetheless know that something is F without knowing that it is G. Extensionally equivalent expressions, when applied to the same object, do not (necessarily) express the same cognitive content. Furthermore, if Tom is my uncle, one can not infer (with a possible exception to be mentioned later) that if S knows that Tom is getting married, he thereby knows that my uncle is getting married. The content of a cognitive state, and hence the cognitive state itself, depends (for its identity) on something beyond the extension or reference of the terms we use to express the content. I shall say, therefore, that a description of a cognitive state is non-extensional.'

F. I. Dretske (1980)
The Intentionality of Cognitive States
Midwest Studies in Philosophy 5,281-294

As noted above, this is corroborated by transfer of training research:

'Common descriptions of skills are not, it is concluded, an adequate basis for predicting transfer. Results support J. Fotheringhame's finding that core skills do not automatically transfer from one context to another.'

C. Myers
Core skills and transfer in the youth training schemes: A field study of trainee motor mechanics.
Journal of Organizational Behavior; 1992 Nov Vol 13(6) 625-632

'G. T. Fong and R. E. Nisbett (1991) claimed that human problem solvers use abstract principles to accomplish transfer to novel problems, based on findings that Ss were able to apply the law of large numbers to problems from a different domain from that in which they had been trained. However, the abstract-rules position cannot account for results from other studies of analogical transfer that indicate that the content or domain of a problem is important both for retrieving previously learned analogs (e.g., K. J. Holyoak and K. Koh, 1987; M. Keane, 1985, 1987; B. H. Ross, 1989) and for mapping base analogs onto target problems (Ross, 1989). It also cannot account for Fong and Nisbett's own findings that different-domain but not same-domain transfer was impaired after a 2-wk delay. It is proposed that the content of problems is more important in problem solving than supposed by Fong and Nisbett.'

L. M. Reeves & R. W. Weisberg
Abstract versus concrete information as the basis for transfer in problem solving: Comment on Fong and Nisbett (1991).
Journal of Experimental Psychology: General; 1993 Mar Vol 122(1) 125-128

'Content', recall, is a cognate of 'intension' or 'meaning'. A major argument for the system of Sentence Management is that if one wishes to expand the range of an individual's skills (behaviours), one can do no better than to adopt effective (i.e. algorithmic) practices to guide placements of inmates into activities, based on actuarial models of the useful relations which exist between skills, both positive and negative. One is unlikely to identify these relations other than through empirical analyses, which should identify where such skills will be naturally acquired and practised. There is now overwhelming evidence that behaviour is context specific. Given that conclusion, which is supported by work on social role expectations (see reviews of Attribution Theory), one is well advised to approach all attempts at behaviour engineering via correctional programmes and activities with this fully understood. Within the PROBE project at least, one has no alternative but to eschew psychological, i.e. intensional (cognitive), processes, because valid inference is logically unreliable within such non-extensional contexts. The work on Sentence Planning and Management represents the second phase of PROBE's development, between 1990 and 1994. The work on Sentence Planning was a direct development of the original CRC recommendations. Sentence Management was designed as an essential substrate, or support structure, for Sentence Planning.
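
The kind of empirical analysis meant is illustrated below (hypothetical attainment data; the skills and the built-in associations are invented for the example): a simple matrix of relations between recorded skills, from which placements likely to support further skill acquisition can be read off:

    # Empirical relations between recorded skill attainments: the raw
    # material for actuarially guided placement decisions.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 400
    numeracy = rng.normal(0, 1, n)
    literacy = 0.6 * numeracy + rng.normal(0, 0.8, n)    # built-in association
    machining = 0.4 * numeracy + rng.normal(0, 0.9, n)
    pe = rng.normal(0, 1, n)                             # unrelated skill

    names = ["numeracy", "literacy", "machining", "PE"]
    corr = np.corrcoef(np.vstack([numeracy, literacy, machining, pe]))
    for name, row in zip(names, corr):
        print(name.ljust(10), " ".join(f"{v:+.2f}" for v in row))

It is such extensional analysis of records, rather than introspective report, that must carry the weight; Nisbett and Wilson's conclusion bears repeating here: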

'..there may be little or no direct introspective access to higher order cognitive processes. Ss are sometimes (a) unaware of the existence of a stimulus that importantly influenced a response, (b) unaware of the existence of the response, and (c) unaware that the stimulus has affected the response. It is proposed that when people attempt to report on their cognitive processes, that is, on the processes mediating the effects of a stimulus on a response, they do not do so on the basis of any true introspection. Instead, their reports are based on a priori, implicit causal theories, or judgments about the extent to which a particular stimulus is a plausible cause of a given response. This suggests that though people may not be able to observe directly their cognitive processes, they will sometimes be able to report accurately about them. Accurate reports will occur when influential stimuli are salient and are plausible causes of the responses they produce, and will not occur when stimuli are not salient or are not plausible causes.'

R. Nisbett & T. Wilson (1977)
Telling More Than We Can Know: Public Reports on Private Processes
Psychological Review; 1977 Mar Vol 84(3) 231-259

Offering confidentiality may of course be an effective strategy to limit the indeterminacy of intensional idioms (see Postman 1959 on 'Serial Reproduction'). However, given recent work in logical analysis on Direct Reference (Kripke 1971, 1972; Donnellan 1966; Putnam 1973; Schwartz 1977), which suggests the social determination of extension, i.e. that what is inside the head has nothing to do with the establishment of meaning, confidentiality in a correctional context may prove to be self-defeating. This is not to say that individuals should not be handled tactfully and with respect, just that an uncritical promise of confidentiality is likely to limit therapeutic efficacy if social reinforcement of behaviour is, as the above and Skinner's work suggest, paramount.

'Linguistic competence is not the ability to articulate antecedently determinate ideas, intensions, or meanings; nor is it the ability to reproduce the world in words. We have no such abilities. It consists, rather, in mastery of a complex social practice, an acquired capacity to conform to the mores of a linguistic community. It is neither more nor less than good linguistic behavior.'

Catherine Z. Elgin (1990)
Facts That Don't Matter
Meaning and method - Essays in Honor of Hilary Putnam

For an example of the futility of reliance on propositional attitudes within empirical psychological research, one can do no better than cite Wicker's (1969) review of attitude-behaviour consistency:

'Insko and Schopler (1967) have suggested the possibility that much evidence showing a close relationship between verbal and overt behavioral responses has been obtained but never published because investigators and journal editors considered such findings 'unexciting' and 'not worthy of publication'. If such data exist, their publication is needed to correct the impression suggested by the present review that attitude-behavior inconsistency is the more common phenomenon.

The presently available evidence on attitude-behavior relationships does not seem to contradict conclusions by two early researchers in the area: LaPiere wrote in 1934:

'The questionnaire is cheap, easy, and mechanical. The study of human behavior is time consuming, intellectually fatiguing, and depends for its success upon the ability of the investigator. The former method gives quantitative results, the latter mainly qualitative. Quantitative measurements are quantitatively accurate; qualitative evaluations are always subject to the errors of human judgment. Yet it would seem far more worth while to make a shrewd guess regarding that which is essential than to accurately measure that which is likely to prove quite irrelevant' (LaPiere, 1934, p.237).

Corey, in 1937 wrote:

'It is impossible to say in advance of investigation whether the lack of relationship reported here between attitude questionnaire scores and overt behavior is generally true for measures of verbal opinion. Were that the case, the value of attitude scales and questionnaires would for most practical purposes be extremely slight. It would avail a teacher very little, for example, so to teach as to cause a change in scores on a questionnaire measuring attitude toward communism if these scores were in no way indicative of the behavior of his pupils.
It is difficult to devise techniques whereby certain types of behavior can be rather objectively estimated for the purpose of comparison with verbal opinions. Such studies despite their difficulty, would seem to be very much worthwhile. It is conceivable that our attitude testing program has gone far in the wrong direction. The available scales and techniques are almost too neat. The ease with which so-called attitudinal studies can be conducted is attractive but the implications are equivocal' (Corey, 1937, p.279).

Wicker concluded his paper 'Attitudes v. Actions' (1969) with the following:

'The present review provides little evidence to support the postulated existence of stable, underlying attitudes within the individual which influence both his verbal expressions and his actions. This suggests several implications for social science researchers.
First, caution must be exercised to avoid making the claim that a given study or set of studies of verbal attitudes, however well done, is socially significant merely because the attitude objects employed are socially significant. Most socially significant questions involve overt behavior, rather than people's feelings, and the assumption that feelings are directly translated into actions has not been demonstrated. Casual examination of recent numbers of this and other like journals suggests that such caution has rarely been shown.
Second, research is needed on various postulated sources of influence on overt behavior. Once these variables are operationalised, their contribution and the contribution of attitudes to the variance of overt behavior can be determined. Such research may lead to the identification of factors or kinds of factors which are consistently better predictors of overt behavior than attitudes.
Finally, it is essential that researchers specify their conceptions of attitudes. Some may be interested only in verbal responses to attitude scales, in which case the question of attitude-behavior relationships is not particularly relevant or important. However, researchers who believe that assessing attitudes is an easy way to study overt social behaviors must provide evidence that their verbal measures correspond to relevant behaviors. Should consistency not be demonstrated, the alternatives would seem to be to acknowledge that one's research deals only with verbal behavior or to abandon the attitude concept in favour of directly studying overt behavior.'

Allan W Wicker (1969)
Attitudes v. Actions: The relationship between Verbal
and Overt Responses to Attitude Objects.

The only study to shed further light on Wicker's conclusions was that of Ajzen & Fishbein (1977), which basically adds the caveat that the correspondence between what people say they do and what they actually do is improved if the questions become so specific and so constrained that to expect otherwise would be absurd. Alas, most of the questions we ask of inmates are not of such a nature:

'Examines research on the relation between attitude and behavior in light of the correspondence between attitudinal and behavioral entities. Such entities are defined by their target, action, context, and time elements. A review of available empirical research supports the contention that strong attitude-behavior relations are obtained only under high correspondence between at least the target and action elements of the attitudinal and behavioral entities. This conclusion is compared with the rather pessimistic assessment of the utility of the attitude concept found in much contemporary social psychological literature.'

I. Ajzen & M. Fishbein
Attitude-behavior relations: A theoretical analysis & review of empirical research.
Psychological Bulletin 1977 Sep Vol 84(5) 888-918

Finally, a relatively recent study:

'Assessed the effects of 2 kinds of introspection (focusing on attitudes and analyzing reasons for feelings) in 2 experiments with 191 undergraduates. In Exp I, Ss at a college dining hall analyzed the reasons for their attitudes toward different types of beverages, focused on their attitudes, or received no instructions to introspect. The attitude measure was reported liking for the beverages, while the behavioral measure was the amount of each beverage Ss drank. Exp II included the same conditions in a laboratory study in which the attitude object was a set of 5 puzzles. Reported interest in the puzzles and the proportion of puzzles Ss attempted were assessed. In both studies, analyzing reasons reduced attitude-behavior consistency relative to the correlations in the focusing and control conditions.'

Wilson T D. & Dunn D S.
Effects of introspection on attitude-behavior consistency:
Analyzing reasons versus focusing on feelings.
Journal of Experimental Social Psychology 1986 May Vol 22(3) 249-263

Recall Nisbett and Wilson's (1977) review of the relation between self-report and the actual controlling contingencies. Throughout all of these studies, it is well to bear in mind the austere logician's statement that:

'..the meanings of words are abstractions from the truth conditions of sentences that contain them.'

W.V.O. Quine (1981)
The Five Milestones of Empiricism: Theories and Things p.69

If such a line is accepted, intensionalist practices can serve no practical scientific purpose other than to distract from more fruitful processes of measuring, recording and contracting behaviour. That is, intensional practices may serve no more than to limit, through poor professional practice, what could be learned via extensional analysis of relations between classes of behaviours (e.g. frequencies of problem behaviour, and the joint frequencies of these classes with other classes of actions and events such as age and index offence).

Intensional contexts can be identified as follows:

'Chisholm proposes three independently operating criteria for Intentional sentences.
(1) A simple declarative sentence is Intentional if it uses a substantival expression - a name or a description - in such a way that neither the sentence nor its contradictory implies either that there is or that there isn't anything to which the substantival expression truly applies.
(2) Any noncompound sentence which contains a propositional clause...is Intentional provided that neither the sentence nor its contradictory implies either that the propositional clause is true or that it is false.
(3) If A and B are two names or descriptions designating the same thing or things, and sentence P differs from sentence Q only in having A where Q has B, then sentences P and Q are Intentional if the truth of one together with the truth that A and B are co-designative does not imply the truth of the other'
The going scheme of logic, the logic that both works and is generally supposed to suffice for all scientific discourse (and, some hold, all SIGNIFICANT discourse), is extensional. That is, the logic is blind to intensional distinctions; the intersubstitution of coextensive terms, regardless of their intensions, does not affect the truth value (truth or falsity) of the enclosing sentence. Moreover, the truth value of a complex sentence is always a function of the truth values of its component sentences.
The Intentionalist thesis of irreducibility is widely accepted, in one form or another, and there are two main reactions to the impasse: Behaviourism and Phenomenology. The behaviourist argues that since the Intentional idioms cannot be made to fit into the going framework of science, they must be abandoned, and the phenomena they are purported to describe are claimed to be chimerical.'

D. C. Dennett (1969)
Content and Consciousness p32.

The choice was clearly spelled out by Quine in 1960, but remains poorly appreciated:

'One may accept the Brentano thesis as showing the indispensability of intentional idioms and the importance of an autonomous science of intention, or as showing the baselessness of intentional idioms and the emptiness of a science of intention. My attitude, unlike Brentano's, is the second. To accept intentional usage at face value is, we saw, to postulate translation relations as somehow objectively valid though indeterminate in principle relative to the totality of speech dispositions. Such postulation promises little gain in scientific insight if there is no better ground for it than that the supposed translation relations are presupposed by the vernacular of semantics and intention.'

W. V. O. Quine
The Double Standard: Flight from Intension
Word and Object (1960), p218-221

The alternative, methodologically incompatible approach of evidential behaviourism is restricted to extensional, normative analysis and management of behaviour, drawing on natural inmate-environment interactions, i.e., behaviour with respect to day-to-day activities. This is eliminativist with respect to intensions (properties, meanings, senses or thoughts; Quine, 1960; 1990; 1992), not on the grounds that they comprise a body of pre-scientific 'folk' theoretical idioms (Stich 1983; Churchland 1989), but because such idioms violate the basic axiom of valid inference, namely Leibniz's Law:

for any objects x and y, if x is identical to y, then if x has a certain property F, so does y.

Symbolically:

(x)(y)[(x = y) → (Fx → Fy)]

This is the indiscernibility of identicals upon which all inference is premised ("Things are the same as each other, of which one can be substituted for the other without loss of truth" - [Eadem sunt, quorum unum potest substitui alteri salva veritate]).
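The practical force of this axiom is easy to demonstrate programmatically. The following toy sketch (the present author's illustration, not part of the Sentence Management system; the names and sets are purely hypothetical) shows substitution of co-designative terms preserving truth in an extensional context, and failing in a quoted, intensional one:

    # Toy illustration of Leibniz's Law (indiscernibility of identicals).
    # All names and data are hypothetical.
    planets = {"Venus", "Mars"}

    hesperus = "Venus"      # 'Hesperus' designates Venus
    phosphorus = "Venus"    # 'Phosphorus' designates the same object

    # Extensional context: co-designative terms substitute salva veritate.
    assert hesperus == phosphorus
    assert (hesperus in planets) == (phosphorus in planets)

    # Intensional (quoted) context: truth tracks the words used, not the
    # object designated, so substitution fails.
    assents_to = {"Hesperus is a planet"}
    assert ("Hesperus is a planet" in assents_to) != ("Phosphorus is a planet" in assents_to)

The final assertion holds despite the identity of the designata: the belief context is referentially opaque, which is precisely why such idioms resist regimentation within an extensional calculus.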

'...it is useless to suggest, as some logicians have done, that the variable x may take as its values intensions of some sort. For if we admit intensions as possible values of our variables, we must abandon the principle of the indiscernibility of identicals, and then, because we have no clear criterion of identity, we shall be unable to say what we want to say about extensions.'

W. Kneale and M. Kneale (1962)
Problems of Intensionality
The Development of Logic p.617

'The first-order predicate calculus is an extensional logic in which Leibniz's Law is taken as an axiomatic principle. Such a logic cannot admit 'intensional' or 'referentially opaque' predicates whose defining characteristic is that they flout that principle.'

U. T. Place (1987)
Skinner Re-Skinned P. 244
In B.F. Skinner Consensus and Controversy
Eds. S. Modgil & C. Modgil

'There is a counterpart in modern logic of the thesis of irreducibility. The language of physical and biological science is largely extensional. It can be formulated (approximately) in the familiar predicate calculus. The language of psychology, however, is intensional. For the moment it is good enough to think of an intensional sentence as one containing words for intentional attitudes such as belief.
Roughly what the counterpart thesis means is that important features of extensional, scientific language on which inference depends are not present in intensional sentences. In fact intensional words and sentences are precisely those expressions in which certain key forms of logical inference break down.'

R. J. Nelson (1992)
Naming and Reference p.40

Note, '..intensional words and sentences are precisely those expressions in which certain key forms of logical inference break down' and '..the language of psychology, however, is intensional'. Whilst it is clearly the case that folk psychology is largely concerned with properties, characteristics or qualities of individuals, their beliefs, desires, thoughts, feelings etc., it is also the case that this is now true of much of contemporary professional psychology (Fodor 1980). However, many contemporary psychologists may not be aware of the full implications and quandaries of this stance (Stich 1980), even though it has been persuasively argued (Quine 1951, 1956) that quantification into intensional contexts is indeterminate, leading inevitably to the 'indeterminacy of translation' (Quine 1960). Nelson (1992), a one-time IBM senior mathematician, goes on to point out:

'It is widely claimed today by philosophers of logic that intensional sentences cannot be equivalently rephrased or replaced by extensional sentences. Thus Brentano's thesis reflected in linguistic terms asserts that psychology cannot be framed in the extensional terminology of mathematics, physics or biology'.

ibid p.42.

This point has not only been made by logicians. In fact it has been a major, perhaps the major finding of research within Personality and Social Psychology since the 1950s. Here is how Ross and Nisbett (1991) put the matter:

'Finally, it should be noted that some commonplace statistical failings help sustain the dispositional bias. First, people are rather poor at detecting correlations of the modest size that underlie traits (Chapman and Chapman 1967, 1969; Kunda and Nisbett 1986; Nisbett and Ross 1980). Second, people have little appreciation of the relationship of sample size to evidence quality. In particular, they have little conception of the value of aggregated observations in making accurate predictions about trait-related behavior (Kahneman & Tversky 1973; Kunda & Nisbett 1986). The gaps in people's statistical abilities create a vacuum that the perceptual and cognitive biases rush in to fill.'

L. Ross and R. E. Nisbett (1991)
The Person and the Situation: Perspectives of Social Psychology

And within Cognitive Psychology, Agnoli & Krantz (1989):
'A basic principle of probability is the conjunction rule, p(B) ≥ p(A&B). People violate this rule often, particularly when judgements of probability are based on intensional heuristics such as representativeness and availability. Though other probabilistic rules are obeyed with increasing frequency as people's levels of mathematical talent and training increase, the conjunction rule generally does not show such a correlation. We argue that this recalcitrance is not due to inescapable "natural assessments"; rather, it stems from the absence of generally useful problem-solving designs that bring extensional principles to bear on this class of problem. We predict that when helpful extensional strategies are made available, they should compete well with intensional heuristics. Two experiments were conducted, using as subjects adult women with little mathematical background. In Experiment I, brief training on concepts of algebra of sets, with examples of their use in solving problems, reduced conjunction-rule violations substantially, compared to a control group. Evidence from similarity judgements suggested that use of the representativeness heuristic was reduced by the training....
...We conclude that such intensional heuristics can be suppressed when alternative strategies are taught.
The development of formal thought does not culminate in adolescence as Piaget (1928) held; rather, it depends on education (Fong, Krantz, & Nisbett, 1986, Nisbett, Fong, Lehmann & Cheng 1987) and may continue throughout adulthood. Probabilistic reasoning has been an especially useful domain in which to study the impact of training in adulthood on formal thought. Probabilistic principles are cultural inventions at most a few centuries old (Hacking 1975).....
Tversky and Kahneman (1983) focused on processes in which people substitute intensional for extensional thinking. In the latter mode, concepts are represented mentally in the same way as sets, hence, rules of logic and probability are followed in the main. By contrast, intensional thinking represents concepts by prototypes, exemplars, or relations to other concepts (Rosch, 1978, Smith & Medin 1981). Processing is affected strongly by imaginability of prototypes, availability of exemplars, etc., and its results are not constrained as strongly by logical relations. A prime example is the representativeness heuristic (Kahneman & Tversky 1972), in which probability of an outcome is judged in terms of the similarity of that outcome to a prototype.
Tversky and Kahneman (1983) drew far reaching conclusions from the fact that, in most of their tests, the prevalence of conjunction errors was not affected by statistical education. They developed the concept of "natural assessment", a computation that is 'routinely carried out as part of the perception of events and the comprehension of messages......even in the absence of a specific task set.' They defined a "judgmental heuristic" as a 'strategy that relies on a natural assessment to produce an estimation or a prediction.' They compared such mechanisms to perceptual computations, and cognitive errors to perceptual illusions. In their view, people well trained in mathematics nonetheless perform natural assessments automatically. The results of these mental computations strongly influence probability judgement. Therefore, statistics courses presumably affect probability judgements, in problems such as "Linda," no more than geometry courses affect geometric visual illusions, i.e., scarcely at all.'

Agnoli & Krantz (1989)
Suppressing Natural Heuristics by Formal Instruction:
The Case of the Conjunction Fallacy [my emphasis]
Cognitive Psychology 21, 515-550 (1989)
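The conjunction rule at issue is itself a purely extensional matter, checkable mechanically over any recorded set of class memberships. A minimal sketch follows (the behaviour classes and records below are hypothetical, chosen only to illustrate the computation):

    # The conjunction rule p(A & B) <= p(B), checked extensionally over
    # recorded class memberships. All records are hypothetical.
    records = [
        {"violent", "drug_related"},
        {"drug_related"},
        {"acquisitive"},
        {"violent"},
    ]

    def p(*classes):
        """Relative frequency of records belonging to all the given classes."""
        return sum(set(classes) <= r for r in records) / len(records)

    assert p("violent", "drug_related") <= p("drug_related")  # conjunction rule
    print(p("violent", "drug_related"), p("drug_related"))    # 0.25 0.5

Judged intensionally, by similarity to a prototype, a conjunction can seem more probable than one of its conjuncts; computed extensionally, as above, it never is.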
 

8. Relational Technology and Behaviour Profiling

The basic unit of analysis is an observation sentence or statement which, to have any scientific utility, must be true or false of something, i.e. it must have a truth value. What it is that is held to be true or false is the statement itself. Logically, the statement may be conceived as a relation of varying arity or order. A relation of order zero has just one place, and is often referred to as a property. A relation proper is always a predicate with at least two places (for example, 'greater than', or 'father of'), but it could be any ordered pair, such as ('N12345', '18/07/60'), where the first place of the relation is an identifier of some individual and the second is a birth date. All that is important is that the relation, of whatever order, can be said to have a truth value: ('N12345', '18/07/60') is true if that is indeed the number allocated to the individual whose birth date is 18 July 1960; it is false if his birth date is not that date, or if it is not his number.
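In programmatic terms (a sketch only; the identifier and date are the hypothetical examples used above), a relation is no more than a set of ordered tuples, and an observation statement is true just in case its tuple is a member of the relation:

    # A relation as a set of ordered tuples; truth is set membership.
    # The identifier and birth date are the hypothetical examples above.
    born_on = {
        ("N12345", "18/07/60"),
        ("N67890", "02/03/59"),
    }

    # The statement "N12345 was born on 18/07/60" has a truth value:
    print(("N12345", "18/07/60") in born_on)  # True
    print(("N12345", "01/01/70") in born_on)  # False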

Complex relational database management systems support extensive analysis and management using essentially no more than the simple truth functions AND, OR, NOT, IF-THEN, EQUALS, THERE IS and FOR ALL (all of which can be reduced to combinations of a single truth function such as NAND), and such technology is now widely available in commercial packages. What is fundamentally important in making use of such systems is the choice of classes one uses as the basis for one's analysis.
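As a sketch of how little machinery is involved (the class memberships are again hypothetical), a query is just a truth-functional combination of membership tests, and the familiar connectives can indeed be rebuilt from NAND alone:

    # Truth-functional queries over relations; NOT and AND built from NAND.
    # Class memberships are hypothetical.
    attended_education = {"N12345", "N67890"}
    adjudicated = {"N67890"}

    def nand(p, q):
        return not (p and q)

    def not_(p):
        return nand(p, p)                     # NOT from NAND

    def and_(p, q):
        return nand(nand(p, q), nand(p, q))   # AND from NAND

    # "Attended education AND NOT adjudicated" for a given inmate:
    inmate = "N12345"
    print(and_(inmate in attended_education, not_(inmate in adjudicated)))  # True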

This simply comes down to one rule: in science, one does not use properties, or one-place predicates. The minimum is relations. Only with relations can one identify lawful relations which can be expressed mathematically, because relations between classes are the building blocks of science. Properties are intensions, and as we have seen throughout this paper, intensions cannot be reliably regimented within the predicate calculus as they 1) are resistant to substitutivity of identity salva veritate, and 2) resist 'quantifying in', conditions which would appear to be a sine qua non for valid inference.

Unfortunately, and quite devastatingly for those who would have one believe that a science of psychology which includes or makes use of the idioms of propositional attitude is possible, these constraints would seem, on the basis of analyses provided by Quine (1956) and Davidson (1970; 1973; 1974), to render such an enterprise impossible: the very language of the 'psychological' is intensional, and its indeterminacy may, it is suggested, account for the low correlations generally reported in the psychological literature, which are more traditionally attributed to measurement error.

The logical model for the PROBE/Sentence Management system is outlined in the file regimes.pdf. Technically it takes the form of a relational database management system supported by standard descriptive statistics. Within that system, 4th Generation procedural query language (4-GL) routines, written by those maintaining the system, allow data to be ordered by classes and analysed by their frequencies, joint frequencies, contingencies and correlations.
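The statistical core of such routines is correspondingly modest. A minimal sketch (class memberships invented purely for illustration) of the frequency, joint frequency and 2x2 correlation of two behaviour classes:

    # Frequencies, joint frequencies and the phi coefficient for two
    # behaviour classes. All memberships are hypothetical.
    from math import sqrt

    inmates = ["N11111", "N22222", "N33333", "N44444", "N55555"]
    completed_course = {"N11111", "N22222", "N33333"}
    re_adjudicated = {"N33333", "N44444"}

    n = len(inmates)
    a = sum(i in completed_course and i in re_adjudicated for i in inmates)      # both
    b = sum(i in completed_course and i not in re_adjudicated for i in inmates)  # course only
    c = sum(i not in completed_course and i in re_adjudicated for i in inmates)  # adjudication only
    d = n - a - b - c                                                            # neither

    # Phi coefficient of the 2x2 contingency table.
    phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
    print(f"joint frequency: {a}/{n}, phi = {phi:.2f}")  # joint frequency: 1/5, phi = -0.17

Nothing here goes beyond counting class memberships and their co-occurrences; that is the whole of the extensional stance in practice.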

It is the thesis of this paper, and the objective of the Sentence Management system, that an individual's membership of classes of behaviour is fundamental to all effective behaviour management. Decisions about behaviour should be explicitly guided by recorded class memberships and their relations. To the extent that the idioms of propositional attitude intrude into the above system, effective management will be impaired, largely because of the error variance they contribute.

There must be a professionally managed system specifically designed to measure and record behaviour throughout the extent of the regime, and throughout an inmate's time in custody. On the basis of the research and logical analyses reviewed in this paper, that data needs to record co-operation with the demands of routines and activities, analysed from the extensional stance.

The PROBE/Sentence Management system is designed as such a professional service in assessment and reporting, and can be made available to all staff tasked with the supervision of inmate behaviour whilst they are in custody and under supervision after release.

http://www.longley.demon.co.uk/

 

Bibliography:

A. On The Nature of Deductive Inference & Algorithms

1. Abraham F D A Visual Introduction to Dynamical Systems Theory for Psychology Aerial Press 1990

2. Church A A note on the Entscheidungsproblem J. Symb. Log. 1 (1936) 40-41

3. Cherniak C. Minimal Rationality MIT Press. Bradford Books 1986

4. Codd E F A relational model of data for large shared data banks Comm ACM 13, 1970, 377-387

5. Frege G Begriffsschrift (1879) In van Heijenoort (Ed) From Frege to Gödel: A Source Book in Mathematical Logic Harvard University Press 1966

6. Gardarin G & Valduriez P Relational Databases and Knowledge Bases Addison Wesley 1989

7. Gentzen G Investigations into logical deduction. In M. E. Szabo (Ed. & Trans.) The Collected Papers of Gerhard Gentzen Amsterdam: North-Holland 1969

8. Gray P Logic, Algebra and Databases Ellis Horwood Limited 1984

9. Genesereth M R & Nilsson N J Logical Foundations of Artificial Intelligence Morgan Kaufmann Publishers Inc. 1988

10. Hilbert D & Ackermann W. Principles of Mathematical Logic Chelsea Publishing Company 1950

11. Hodges W LOGIC: An Introduction to Elementary Logic Penguin Books, London 1991

12. Johnson-Laird P N Human and Computer Reasoning. Trends in Neurosciences; 1985 Feb Vol 8(2) 54-57

13. Johnson-Laird P N & Byrne R M Precis of Deduction Behavioral and Brain Sciences; 1993 Jun Vol 16(2) 323-380

14. Kleene S C Introduction to Metamathematics Amsterdam:North-Holland 1952

15. Kneale W & Kneale M The Development of Logic Cambridge University Press 1962

16. Post E L Finite combinatory processes - Formulation 1 J. Symb. Log. 1 (1936) 103-105

17. Prawitz D Gentzen's Analysis of First-Order Proofs in R. I. G. Hughes (ed) A Philosophical Companion to First-Order Logic Hackett Publishing Co. 1993

18. Rips L J Cognitive Processes in Propositional Reasoning Psych. Rev. 1983, 90, 1, 38-71

19. Robinson J A Logic: Form and Function, the Mechanisation of Deductive Reasoning Edinburgh: Edinburgh University Press 1979

20. Shinghal R Formal Concepts in Artificial Intelligence: Fundamentals Chapman & Hall Computing, London 1992

21. Tennant N W Natural Logic Edinburgh University Press 1990

22. Turing A M On Computable numbers, with an application to the Entscheidungsproblem P. Lond. Math. Soc. (2) 42 (1936-7) 230-265

23. Wos L, Overbeek R, Lusk E & Boyle J Automated Reasoning: Introduction and Applications McGraw-Hill, London, 1993

B. On The Nature of Inductive Inference & Heuristics

1. Methodology

24. Andrews D A, Zinger I, Hoge R D, Bonta J, Gendreau P & Cullen F T Does Correctional Treatment Work? Criminology 28,3 1990

25. Andrews D A, Zinger I, Hoge R D, Bonta J, Gendreau P & Cullen F T . A Human Science Approach or More Punishments and Pessimism: A Rejoinder to Lab and Whitehead Criminology, 28,3 1990 419-429

26. Bakan D The Test of Significance in Psychological Research Psychological Bulletin,1966,66,6,423-437

27. Bolles R C The Difference Between Statistical Hypotheses and Scientific Hypotheses Psychological Reports,1962,11,639-645

28. Cohen J Things I Have Learned (So Far) American Psychologist December 1990, pp 1304-1312

29. Cohen J A Power Primer Quantitative Methods in Psychology: Psychological Bulletin 1992,112,1,155-159

30. Dar R Another Look at Meehl, Lakatos, and the Scientific Practices of Psychologists American Psychologist, 1987, February

31. Guttman L What is Not What in Statistics. The Statistician, Vol 26 No 2, 1977 p. 81-107

32. Guttman L The Illogic of Statistical Inference for Cumulative Science. Applied Stochastic Models and Data Analysis Vol 1, 3-10, 1985

33. Lykken D T Statistical Significance in Psychological Research Psychological Bulletin 1968,70,3,151-159

34. McDougall C, Barnett R M, Ashurst B and Willis B Cognitive Control of Anger in McGurk B J, Thornton D M, and Williams M (eds) Applying Psychology to Imprisonment, HMSO 1987

35. Martinson R What Works ? - Questions and Answers about Prison Reform, The Public Interest 35,22-54 1974

36. Martinson R California Research at the Crossroads Crime & Delinquency, April 1976, 63-73

37. Meehl P E Theory Testing in Psychology and Physics: A Methodological Paradox. P 111-112. Philosophy of Science, June 1967

38. Meehl P E Theoretical Risks and Tabular Asterisks: Sir Karl and Sir Ronald and The Slow Progress of Soft Psychology. J Consulting and Clinical Psychology 1978,45,4,p806-34

39. Meehl P E What Social Scientists Don't Understand - in Metatheory in Social Science Eds D. W. Fiske & R. A. Shweder The University of Chicago Press, London 1986

40. Porporino F, Fabiano L and Robinson Focusing on Successful Reintegration: Cognitive Skills Training for Offenders submitted for publication to the Scandinavian Criminal Law Review July 1991

41. Rozeboom W M The Fallacy of The Null Hypothesis Significance Test Psychological Bulletin 1960, 57,5,416-428

42. Wainer H Estimating Coefficients in Linear Models: It Don't Make No Nevermind Psychological Bulletin, 1976, 63,2 213-217

43. Wainer H On the Sensitivity of Regression and Regressors Psychological Bulletin 85,2, 267-273

2. Inductive Reasoning as Heuristics

44. Agnoli F & Krantz D. H. Suppressing Natural Heuristics by Formal Instruction: The Case of the Conjunction Fallacy Cognitive Psychology 21, 515-550, 1989

45. Cooke R M Experts In Uncertainty Opinion and Subjective Probability in Science Oxford University Press 1991

46. Derthick M Mundane Reasoning by Settling on a Plausible Model Artificial Intelligence 46,1990,107-157

47. Eddy D M Probabilistic Reasoning in Clinical Medicine: Problems and Opportunities In Kahneman, Tversky and Solvic (Eds) Judgment Under Uncertainty: Heuristics and Biases Cambridge University Press 1982

48. Fong G T, Lurigio A J & Stalans L J Improving Probation Decisions Through Statistical Training Criminal Justice and Behavior,17,3,1990,370-388

49. Fong G T & Nisbett R E Immediate and delayed transfer of training effects in Statistical reasoning. Journal of Experimental Psychology General; 1991,120(1) 34-45

50. Fotheringhame J Transfer of training: A field investigation of youth training. Journal of Occupational Psychology 1984 Sep Vol 57(3) 239-248

51. Fotheringhame J Transfer of training: A field study of some training methods. Journal of Occupational Psychology 1986 Mar Vol 59(1) 59-71

52. Gardner H The Mind's New Science: A History of the Cognitive Revolution Basic Books 1987

53. Gigerenzer G & Murray D J Cognition as Intuitive Statistics Hillsdale, NJ: Erlbaum, 1987

54. Gigerenzer G, Swijtink Z, Porter T, Datson L, Beatty J & Kruger L The Empire of Chance Cambridge University Press, 1989

55. Gigerenzer G The Superego, the Ego, and the Id in Statistical Reasoning Ch. 11 A Handbook for Data Analysis in the Behaviour Sciences Methodological Issues: Eds G. Keren & C Lewis Lawrence Erlbaum Associates 1993

56. Gladstone R Teaching for transfer versus formal discipline American Psychologist; 1989 Aug Vol 44(8) 1159

57. Gluck M A & Bower G. H. From conditioning to category learning: An adaptive network model. Journal of Experimental Psychology General (1988) Sep Vol 117(3) 227-247

58. Gluck M A & Bower G H Component and pattern information in adaptive networks. Journal of Experimental Psychology General; (1990) Mar Vol 119 (1) 105-9

59. Hecht-Nielsen R NEUROCOMPUTING Addison Wesley 1990

60. Holland J H, Holyoak K J, Nisbett R E & Thagard P R Induction:Processes of Inference, Learning, and Discovery Bradford Books: MIT Press 1986

61. Johnson-Laird P N THE COMPUTER AND THE MIND: An Introduction to Cognitive Science Fontana 1988

62. Johnson-Laird P N & Byrne R M Deduction Lawrence Erlbaum Associates 1991

63. Kagan J The meanings of personality predicates American Psychologist, 1988 Aug Vol 43(8) 614-620

64. Kohonen T Self-Organisation and Associative Memory Springer-Verlag 1988

65. Kosko B Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine Intelligence Prentice-Hall 1992

66. Kruger L, Gigerenzer G & Morgan M S The Probabilistic Revolution Vol 1,2 Bradford Books 1987

67. Lehman D R & Nisbett R E A Longitudinal Study of the Effects of Undergraduate Training on Reasoning. Developmental Psychology, 1990, 26, 6, 952-960

68. Myers C Core skills and transfer in the youth training schemes: A field study of trainee motor mechanics. Journal of Organizational Behavior 1992 Nov Vol 13(6) 625-632

69. Minsky M L & Papert S A Perceptrons: An Introduction to Computational Geometry MIT Press 1990

70. Nisbett R E & Wilson T D Telling more than we can know: Verbal reports on mental processes Psychological Review; 1977 Mar Vol 84(3) 231-259

71. Nisbett R E & Ross L Human Inference: Strategies and Shortcomings of Social Judgment Century Psychology Series, Prentice-Hall (1980)

72. Nisbett R E & Krantz D H The Use of Statistical Heuristics in Everyday Inductive Reasoning Psychological Review, 1983, 90, 4, 339-363

73. Nisbett R E, Fong G T, Lehman D R & Cheng P W Teaching Reasoning Science v238, 1987 pp.625-631

74. Ploger D & Wilson M Statistical reasoning: What is the role of inferential rule training? Comment on Fong and Nisbett. Journal of Experimental Psychology General; 1991 Jun Vol 120(2) 213-214

75. Popper K R The Logic of Scientific Discovery Routledge, Kegan Paul 1959

76. Popper K R Truth, Rationality, and the Growth of Knowledge Ch. 10, p 217-8 Conjectures and Refutations RKP London 1965

77. Reeves L M & Weisberg R W Abstract versus concrete information as the basis for transfer in problem solving: Comment on Fong and Nisbett (1991). Journal of Experimental Psychology General; 1993 Mar Vol 122(1) 125-128

78. Rescorla R A & Wagner A R A Theory of Classical Conditioning: variations in the effectiveness of reinforcement and Nonreinforcement. In Classical Conditioning II: Current Theory and Research (Black & Prokasy) pp. 64-99. Appleton Century Crofts 1971.

79. Rescorla R A Pavlovian Conditioning: It's Not What You Think It Is. American Psychologist, March 1988.

80. Ross L & Nisbett R E The Person and The Situation: Perspectives of Social Psychology McGraw Hill 1991

81. Rumelhart D E & McClelland J L Parallel Distributed Processing: Explorations in the Microstructure of Cognition Vol 1: Foundations MIT Press 1986

82. Savage L The Foundations of Statistics John Wiley & Sons 1954

83. Shafir E & Tversky A Thinking Through Uncertainty: Nonconsequential Reasoning and Choice Cognitive Psychology 24, 449-474, 1992

84. Smith E E, Langston C & Nisbett R E The case for rules in reasoning. Cognitive Science; 1992 Jan-Mar Vol 16(1) 1-40

85. Stich S The Fragmentation of Reason Bradford Books 1990

86. Sutherland S IRRATIONALITY: The Enemy Within Constable: London 1992

87. Tversky A & Kahneman D Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment Psychological Review v90(4) 1983

88. Wason P C Reasoning in New Horizons in Psychology, Penguin Books 1966

89. Wason P C & Johnson-Laird P Psychology of Reasoning London: Batsford 1972

C. On Clinical vs. Actuarial Judgment

90. Arkes H R & Hammond K R Judgment and Decision Making: An interdisciplinary reader Cambridge University Press 1986

91. Dawes R.M The Robust beauty of improper linear models in decision making American Psychologist, 1979 34,571-582

92. Dawes R M Rational Choice in an Uncertain World Orlando: Harcourt, Brace, Jovanovich 1988

93. Dawes R M, Faust D & Meehl P E Clinical Versus Actuarial Judgement Science v243, pp 1668-1674 1989

94. Elstein A S Clinical judgment: Psychological research and medical practice. Science; 1976 Nov Vol 194(4266) 696-700

95. Einhorn H J & Hogarth R M Behavioral decision theory: Processes of judgment and choice Annual Review of Psychology (1981), 32, 53-88

96. Faust D Data integration in legal evaluations: Can clinicians deliver on their premises? Behavioral Sciences and the Law; 1989 Fal Vol 7(4) 469-483

97. Gigerenzer G How to Make Cognitive Illusions Disappear: Beyond "Heuristics and Biases" in European Review of Social Psychology, Volume 2 eds W Stroebe & M Hewstone 1991, Ch 4 pp. 83-115

98. Goldberg L R Simple models or simple processes? Some research on clinical judgments American Psychologist,1968,23(7) p.483-496

99. Kahneman D, Slovic P & Tversky A Judgment Under Uncertainty: Heuristics and Biases Cambridge University Press 1982

100. Kyllonen P C & Christal R E Reasoning Ability Is (Little More Than) Working-Memory Capacity?! Intelligence 14, 389-433 1990

101. Lundberg G A Case Studies vs. Statistical Methods - An Issue Based on Misunderstanding. Sociometry v4 pp379-83 1941

102. Meehl P E Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence University of Minnesota Press, Minneapolis. 1954

103. Meehl P E When Shall We Use Our Heads Instead of the Formula? PSYCHODIAGNOSIS: Collected Papers 1971

104. Meehl P E Causes and effects of my disturbing little book J Person. Assess. 50,370-5,1986

105. Oskamp S Overconfidence in case-study judgments J. Consult. Psychol. (1965), 29, 261-265

106. Sarbin T R Clinical Psychology - Art or Science? Psychometrika v6 pp391-400 (1941)

D. On the contrast Between the Extensional and Intensional Analysis

107. Church A Intensional Isomorphism and Identity of Belief in Salmon N & Soames S (eds) Propositions and Attitudes Oxford University Press 1980

108. Churchland P M A Neurocomputational Perspective The Nature of Mind and The Structure of Science Bradford Books 1989

109. Dahlbom B Dennett and His Critics Blackwell, Oxford 1993

110. Davidson D Actions and Events Oxford University Press 1980

111. Deakin J F W & Longley D Naloxone Enhances Neophobia British Journal of Pharmacology, April 1981

112. Devitt M. Meanings just ain't in the head In Meaning and Method:Essays in Honor of Hilary Putnam Ed G. Boolos Cambridge University Press 1990

113. Dretske F I The Intentionality of Cognitive States Midwest Studies in Philosophy 5,281-294 1980

114. Elgin C Z Facts That Don't Matter In Meaning and Method: Essays in Honor of Hilary Putnam Ed G. Boolos Cambridge University Press 1990

115. Eysenck H J & Eysenck S Introduction to: Using Personality to Individualize Instruction - J A Wakefield Jr (Ed), San Diego, 1978

116. Fodor J A Methodological Solipsism Considered as a Research Strategy for Cognitive Psychology Behavioral and Brain Sciences; 1980 Mar Vol 3(1) 63-109

117. Frege G Function and Concept in Geach P and Black M (eds) Translations from the Philosophical Writings of Gottlob Frege 3rd edition : Basil Blackwell Oxford 1980

118. Frege G On Concept and Object in Geach P and Black M (eds) Translations from the Philosophical Writings of Gottlob Frege 3rd edition : Basil Blackwell Oxford 1980

119. Frege G On Sense and Meaning in Geach P and Black M (eds) Translations from the Philosophical Writings of Gottlob Frege 3rd edition : Basil Blackwell Oxford 1980

120. Frege G What is a Function? in Geach P and Black M (eds) Translations from the Philosophical Writings of Gottlob Frege 3rd edition : Basil Blackwell Oxford 1980

121. Kripke S A A Puzzle About Belief in Salmon N & Soames S (eds) Propositions and Attitudes Oxford University Press 1980

122. Longley D The Interaction of Brain Monoamines and Neuropeptides In the Control of Operant Behaviour. Unsubmitted PhD Thesis: National Institute for Medical Research London/Institute of Neurology University of London 1983

123. Nelson R J Naming and Reference Routledge, London 1992

124. Putnam H The Meaning of Meaning In Mind, Language and Reality: Philosophical Papers Vol 2 Cambridge University Press 1975

125. Putnam H Representation and Reality Bradford Books 1988

126. Quine W V O Quantifiers and Propositional Attitudes (1956) In The Ways of Paradox and Other Essays Harvard University Press 1966, 1972

127. Quine W V O The Scope and Language of Science (1954) The Ways of Paradox and Other Essays Harvard University Press 1966, 1972

128. Quine W V O Word and Object MIT Press 1960

129. Quine W V O What Is It All About ? American Scholar, (1980) 50,43-54

130. Quine W V O The Pursuit of Truth Harvard University Press 1990,1992

131. Ramsey W, Stich S & Garon J Connectionism, eliminativism, and the future of Folk Psychology in Greenwood J D (Ed) The Future of Folk Psychology Cambridge University Press 1991

132. Ryle G The Concept of Mind Penguin 1949

133. Schnaitter R Knowledge as Action: The Epistemology of Radical Behaviorism in B.F. Skinner Consensus and Controversy Eds. S. Modgil & C. Modgil Falmer Press 1987

134. Skinner B F The Operational Analysis of Psychological Terms Psychological Review, (1945) 52, 270-77

135. Skinner B F Beyond Freedom and Dignity New York, Knopf 1971

136. Skinner B F About Behaviourism New York, Knopf 1974

137. Stich S From Folk Psychology to Cognitive Science: The Case Against Belief Bradford Books 1983

138. Teasdale J D & Russell L M Differential effects of induced mood on the recall of positive, negative and neutral words. British Journal of Clinical Psychology, 1983 Sep Vol 22(3) 163-171

Footnote:

Extract from an Open University Third Level Course: Professional Judgment and Decision Making:

'There is no controversy in social science that shows such a large body of qualitatively diverse studies coming out so uniformly in the same direction as this one. When you are pushing 90 investigations, predicting everything from the outcome of football games to the diagnosis of liver disease and when you can hardly come up with a half dozen studies showing even a weak tendency in favour of the clinician, it is time to draw a practical conclusion, whatever theoretical differences may still be disputed.'

Meehl 1986, pp373-4

COURSE TEXT:

JACK: Meehl's study set off, or at least much inflamed, the 'statistical versus clinical judgment' controversy, which has rumbled on ever since, though it's somewhat less fashionable than it was.

PENELOPE: Why?

JACK: Cynically, because the human judges didn't like the results and made sure that they or their authors didn't get the funding, circulation or promotion they deserved. Closed shops (as most professions are to some extent) are not likely to vote for what they see as de-skilling, and alternative approaches that showed more respect for the human judge became fashionable and fund worthy (especially the expert systems we shall meet in the session after next). Uncynically, the methodological problems in policy-capturing research are real: it IS difficult to establish the external validity of the results.'

Page 63 Volume 1 Introductory Text 2

E-Mail: David@longley.demon.co.uk