The gist of my recent post entitled “Assumptions” was that many of the views we hold of others are based on conjecture until we actually obtain a fuller picture grounded in knowledge. In the meantime, polarisation occurs on the basis of what people are thought to believe, or what is assumed to lie behind their actions. We make assumptions on the basis of our own cultural norms, received wisdom and previous experience. Consider that recurrent scenario in my post last month on my stay in Tanzania: I felt unwelcome in certain towns because people would stare at me and fail to greet me. Only later did I learn that in Tanzanian culture the visitor is expected to introduce himself to his hosts, and not vice versa. Consider too the case of someone familiar with the evangelist Paul’s words about a woman covering her head and the general status of women; without any further information, it is likely that this person will assume that a Muslim woman covers her head for precisely the same reason.
So here I am thinking about the ways in which individuals could counter this tendency to assume. How could that commentator in The Independent, Deborah Orr, move on from her current stance, which appears to be based on assumption? She is by no means alone in her approach. I had a friend who was once sitting in the park with his wife, who happened to be wearing nikab. After a short while a man approached them and started shouting abuse at my friend, chastising him for making his wife dress like that and for being a misogynist. This friend of mine was startled, because it was his wife who insisted on dressing in that manner. The assumption that he had forced his wife to veil her face was in fact completely false. Another friend of mine had to take his young daughter to the doctor’s a couple of years ago. Just as he was about to walk out of his flat, this little girl of five insisted on pulling on a headscarf, because she wanted to behave like her mother, as daughters often do. My friend tried to explain that it was not necessary, but his daughter burst into tears, and so he gave in. Half an hour later at the surgery, my friend was subjected to unpleasant stares and words, because the other patients in the waiting room assumed that he had forced that poor little girl to shroud her infant locks.
See, nobody is talking; nobody is delving and asking questions. It is easier to make assumptions, but the result is often misunderstanding. Given the common spectre of English reserve which prevents us from engaging in conversation with total strangers, how are we going to overcome misconceptions? Perhaps a good place to start would be with those social commentators, those authors of the daily press, actually doing some research prior to publishing their thoughts in public. In other words: speaking to those they are writing about, doing some background reading and testing their assumptions. We expect this kind of behaviour from social scientists, so why should we expect anything less from the media?
When I was studying Development a decade ago, it was taken for granted that certain analytical methods and techniques were used to support both research and policy development. We would look at the Social Survey, at Participant Observation, at Direct Observation and at Participatory Rural Appraisal, and we would consider the potential for bias in each method. While recognising that potential, there was never any suggestion that these techniques should be abandoned altogether in favour of acting on the basis of assumption. So why would a writer consider it acceptable to call upon Muslims to behave in a certain manner without ever undertaking any sort of engagement with the subject of their opinions? The same goes for Muslims who hold very polarised and exaggerated views about non-Muslims. If the opinions of any of us – as social commentators, however casual this role may be – are to be of any worth, something of substance has to lie behind them. We cannot achieve that without exerting effort in some way. We have to be intelligent and interact with one another on the basis of reality – and certainly not on the basis of amplified caricatures. Justifications offered for wearing nikab sometimes rest on exaggerations of the lot of the non-Muslim “western” woman every bit as wild as the offensive representations of the veiled woman.
So where now? We could start by looking at the means of undertaking research. Those I am most familiar with obviously have a Development slant, but even this is a fair place to start if we are to move away from the dialogue of assumption. Take the Social Survey: this is a systematic collection of data that is extractive, rather than participatory. It tends to be expensive and is also widely thought of as inefficient. Surveys are useful, according to Pratt and Loizos, for “obtaining factual or attitudinal information about large populations…” Yet the same authors admit that the information collected “could be obtained more easily and more cheaply” by other means. Questionnaires are used in surveys to gain information on the subject from an informant, and are affected by various types of bias. Not only is there the surveyor’s attitudinal bias, but there are also more subtle inaccuracies and mistakes in questionnaire planning which may lead to bias: for example, when choosing the language in which to prepare the survey, or when considering the translator’s role, words with double meanings may cause difficulties, as may two words that sound the same when spoken.
In Choosing research methods: Data Collection for Development Workers, Pratt and Loizos find that a survey is of limited use when it comes to dealing with sensitive or private information, and state that “The survey is the best method for certain specific tasks, but not for others.” Results may be reliable at times, but not when questions are asked about political loyalties or debt, for example. As a result, the potential for bias in Social Surveys is quite great. Casley and Lury, writing in Data Collection in Developing Countries, note that there is a danger that the researcher, “partly because of his involvement with the problem, thinks that certain aspects of the subject are self-evident”. The potential for researcher bias is compounded by the constraints of time and money, and by the consequent fact that many surveyors may look for short cuts in getting their work done. An inaccuracy in recording data may at first seem insignificant, but could bias the findings, because “An individual response error is the difference between the recorded observation and the individual true value.” As to the role played by the participant, Casley and Lury say that surveys often make the mistake of asking questions that cannot be answered accurately: “This may be because either the answer was never known or too much is expected of the respondent’s memory.” A further point, and one from which all research methods may suffer initially, is the reluctance of a respondent to take part, answer honestly or simply reveal an answer to a certain question, especially where the respondent is suspicious of the purpose of the survey. We will all have noted, I am sure, that a number of recent headlines have been based on surveys and polls of small numbers of Muslims, and many of us would be wary of seeing more research conducted in that manner. Even so, I do think there is some benefit in using surveys in conjunction with other methods.
Participant observation (PO) and ethnography are essentially one concept, in which information is collected slowly and more casually than under the directed approach described above. According to Pratt and Loizos, PO “entails the researcher becoming resident in a community for a period of many months and observing the normal daily lives of its members.” Consider the position of relative advantage enjoyed by the convert to Islam, who is able to immerse himself within his adopted community for years, gaining an insight inaccessible to his non-Muslim countrymen. In the case of PO, observation takes the place of formal questions, whilst indirect questions replace structured questionnaires. Because it is a slow method which demands constant attention on the researchers’ part, it provides an in-depth picture of the community, avoiding the “seasonal biases of so many research methods, as well as fragmentary insights produced by short visits when visitors may be given special treatment.”
Writing in What’s wrong with Ethnography?, Hammersley notes that the adoption of an ethnographic or participant approach often rests on the belief “that by entering into close relatively long-term contact with people in their everyday lives we can come to understand their beliefs and behaviour more accurately, in a way that would not be possible by means of any other approach.” As a critique of survey and experimental research, ethnography serves to produce a less biased research approach. Since the rigid structure has been removed, there is less room for researchers to impose their own assumptions upon the social world they study, and the artificial settings researchers create through the structuring of their questions are avoided. Hammersley notes that to rely only upon what people say they do, “without also observing what they do, is to neglect the complex relationship between attitudes and behaviour”. Of course, it is self-evident that direct observation would suffer from the opposite problem: observing without asking.
The residency element of PO has two sides which may affect bias. On the one hand, it allows contact with groups, such as the poor, with whom contact might be difficult during a short research visit. Yet, whilst rapport-building may counter bias, over-rapport may bias the research in other ways. Objectivity may be lost over time as researchers settle into their environment and, as Hammersley points out, some become so involved in the study that they lose interest in more general questions. A further problem is that it becomes difficult to cross-check findings due to the length of time spent in the field. In “Principles and problems of participant observation”, Jackson points out that PO is regarded as an approach, not a technique, which suggests that the potential for bias depends on the way it is carried out. Pratt and Loizos conclude that PO is only as good as the observer. A counter-balance to bias, therefore, may be to keep the research’s relevance constantly in view. Hammersley notes that most researchers are aware of the importance of relevance, but, he says, “the question should be addressed: relevance to what and for whom?”
Direct observation is the most basic of all the main field research methodologies, involving, as its name suggests, researchers simply using their eyes and ears to take in the activities of the environment around them. In this way it is obviously very easy to carry out (which says nothing of the interpretation of the collected data), it is relatively inexpensive and it can be an extremely useful tool. Used in isolation, however, its use is limited, for researchers are bound by a number of constraints. To start with, observation without contact can lead to misinterpretations on the researchers’ part. Facial expressions, for example, tend to mean different things across cultures, as I learned from my experience in Tanzania. Another problem is that researchers may be looking for certain things, neglecting or missing more mundane aspects in the process. More obviously, people’s behaviour changes when they know that they are being observed. As Mosse points out in the context of PRA, in a remark equally valid here, “Many communities do not habitually demonstrate poverty to outsiders…” Due to these factors, direct observation is more often used as a secondary resource to confirm or critique the findings of primary or published research. As well as only presenting a snapshot of the area studied, the method may be criticised because it relies almost wholly on the researchers’ objectivity; objectivity which is difficult to master unless the researchers understand the culture in which they work. As a result, direct observation faces the often unintentional bias of researchers as well as a bias created by local people who may unconsciously change their activities in the presence of an observer. One response to the latter is to carry out discreet or secret observation, but the ethics of such activities are highly controversial. The potential for bias makes the use of this method in isolation rather unviable.
Developed in the late 1980s from the popular concept of sustainable development and the emphasis on indigenous knowledge, Participatory Rural Appraisal (PRA) became common as a starting point for field research and development planning. Writing in Authority, Gender and knowledge: Theoretical Reflections on the Practice of Participatory Rural Appraisal, Mosse attributes the popularity of PRA methods significantly to “their use in generating information at the community level directly with members of the community.” Emphasis was put on the idea of information sharing, which would counter the idea that social scientists using other techniques have a predatory relationship with their objects of study. Here, rather than relying on a researcher asking a set of directed questions, researchers merely stimulate discussion and encourage communities to communicate using techniques such as modelling and mapping. With PRA, the research agenda is normally determined by the communities themselves.
Its use for collecting information quickly may be one of its appeals, but Okali et al. point out in Farmer Participatory Research: Rhetoric and Reality that the methods employed by PRA appeal to people with little knowledge of the local language. Because it is a participatory approach which relies upon indigenous knowledge, it is often seen as an easy method of research, but it may actually “require much more expertise than is commonly acknowledged.” By adopting an attitude of humility, the gap between the researcher and the community is supposed to be lessened, and a kind of rapport established. In this way the potential for bias is, in theory, reduced. Yet in reality Mosse sees that this actually represents “the set of assumptions that outsiders have about the ‘accessibility’ of villagers and the likelihood of effective communication with them.” Mosse is on the whole very critical of PRA, since its methods may not attract participation at all but instead cause confusion, for what researchers perceive as informal comes from their own cultural perspective.
Beyond these factors, the main areas of bias converge on the question of community involvement. Both Mosse and Okali et al. point out that communities show “significant levels of differentiation in regard to education, wealth and access to and control of resources”. Bias stems from the fact that some members of a community will not want to take part, while others may be unable to do so due to the pressures of time and work. Mosse notes that other factors influence participation beyond the limitations of available time: “Gender, age, education and kinship all influence participation in PRAs.” As with the other research methods, the potential for bias is ever present where local people are suspicious of the purpose of the research.
While the idea of participant involvement aims to offset this, Mosse argues that community events centre on the general and not on the particular, which is the opposite of what methods such as the social survey achieve. This suggests that the interest is not in absolute detail, which in turn allows a bias of generalisation to creep in. In collective events, it is the ‘official view’ of the community that is presented. He reiterates Cohen’s 1989 view that “These ‘rhetorical expressions of integrity of the community’ are not to be mistaken for the absence of distinct and perhaps conflicting interests.” It is the specific, according to Mosse, that is of greatest importance in development research, and I would attach the same significance to the area of inter-faith dialogue. Chambers, however, argues that “Specialisation prevents the case study which sees life from the point of view of the rural poor themselves”. The last main source of bias in PRA is the researchers themselves. Whilst their ability to amplify quieter voices may be an advantage to the research, Mosse says that “the very presence of development workers alters the balance of power.” Therefore, he says, “not all potential biases in PRA are attributable to the community and the way it projects itself; many also come from the investigating team itself.”
If we bear in mind all of the shortcomings and potential for bias in the various research techniques, we would surely still be in a much better position having utilised such techniques than we would be marching forward on the basis of mere assumption. Let us not delude ourselves into thinking that we know when we do not know, or that we know about one thing because we know about something else which seems similar to it. I assume you know what I’m talking about. Ha, ha! (Only joking.) The use of words carries with it responsibilities, as I have written on many previous occasions. As Muslims we are ordered to speak the truth at all times, and so it goes without saying that when we opine on a matter there should be some substance to it. Naturally we cannot expect others to write with the same ethics, but let us hold onto the hope that some of our conduct might rub off on others. The first step is ours.
- Casley and Lury (1988) Data Collection in Developing Countries
- Chambers (1981) Rapid Rural Appraisal: Rationale and Repertoire
- Chambers (1983) Rural Poverty Unobserved: The Six Biases
- Hammersley (1992) What’s wrong with Ethnography?
- Jackson (1983) “Principles and problems of participant observation”
- Mosse (1994) Authority, Gender and knowledge: Theoretical Reflections on the Practice of Participatory Rural Appraisal
- Okali, Sumberg and Farrington (1994) Farmer Participatory Research: Rhetoric and Reality
- Pratt and Loizos (1992) Choosing research methods: Data Collection for Development Workers. Oxfam