Saturday, February 13, 2016

The Allure of the Dark Side: "Objectivity" and Science

1) Becker writes that one of the main differences between quantitative and qualitative methods is in how they collect and use data. Quantitative methods look more for key variables that explain or prove a concept, while qualitative methods collect highly detailed data in order to completely describe an event. This often leads to “thick” data sets that include even the smallest pieces of information. In your view, does more data clarify or cloud an objective?

2) In Becker’s The Epistemology of Qualitative Research, Herbert Blumer is cited as holding the belief that: all social scientists, implicitly or explicitly, attribute a point of view and interpretations to the people whose actions they analyze. What is Becker’s view on Blumer and how we should represent the viewpoint of those we analyze? Do you agree or disagree with his assertion?

3) According to Charmaz, what is the hallmark of grounded theory? What is the grounded researcher's approach to the data they obtain, and how do they study meaning?

4) According to Katz’s “Ethnography’s Warrants,” what is a ‘normalizing’ view with regard to ethnographies warranted by deviant social reputations? Have you read any studies or seen any media in which a ‘normalizing’ view was expressed? Give an example.

5) Among the many shortcomings of the Failed States Index, the article also criticizes the FSI for its lack of practical utility and its inability to predict events like the Arab Spring. If the Failed States Index fails to accurately reflect the reality (just like other existing indexes like Democracy Index, Corruption-Perception Index, etc.) why do researchers often use these indicators in their studies? Do you think all quantitative analysis should be applicable in predicting future events?


6) Do you think that quantitative and ethnographic research have the same level of influence on policy? What are some real life examples of when quantitative or ethnographic research findings were used in policy discussions? 

15 comments:

  1. 1) Becker writes that one of the main differences between quantitative and qualitative methods is in how they collect and use data. Quantitative methods look more for key variables that explain or prove a concept, while qualitative methods collect highly detailed data in order to completely describe an event. This often leads to “thick” data sets that include even the smallest pieces of information. In your view, does more data clarify or cloud an objective?

    I agree with Becker on the point that a “thick” data set can be problematic, but disagree as to the reason why. Becker argues that a large data set is problematic because it forces the researcher to “become aware of things they had not anticipated which may have a bearing on their subject.” I argue that an experienced researcher will be able to distinguish, within the data he or she has collected, the relevant data points from those that can be disregarded. The real problem with large data sets is the burden they place on the researcher of making those separations.

    I also disagree with Becker when he says that qualitative research is the only kind of research with the potential for this problem. While the data may look different when collected by quantitative means, it is just as easy for a quantitative researcher to end up with a “thick” set of data as a qualitative researcher. I feel it is unfair to attribute this risk only to information collected by ethnographic means as it is just as relevant to statistical data.


    2) In Becker’s The Epistemology of Qualitative Research, Herbert Blumer is cited as holding the belief that: all social scientists, implicitly or explicitly, attribute a point of view and interpretations to the people whose actions they analyze. What is Becker’s view on Blumer and how we should represent the viewpoint of those we analyze? Do you agree or disagree with his assertion?

    Becker is troubled by the risk that the ethnographer will replace observation with speculation. He argues that because the subject of research is often unfamiliar to the ethnographic researcher, that researcher is tempted to make assumptions and interpretations in order to fill in the information he or she is unable to collect firsthand. These assumptions and interpretations convert opinion into fact, resulting in the “error of attribution” often found in ethnographic research.

    Neorealist ethnographers would agree that social scientists bring their own frameworks and biases to the table whenever they perform qualitative research, and that the presence of these biases alters the outcomes of the research. This is why neorealist ethnographers aim to address issues of bias and researcher impact on the environment. Interpretive ethnographers would also agree that social scientists bring their own biases to the table whenever they perform qualitative research, but because they do not believe in an objective reality, they would disagree that the presence of bias is a problem in research.

    As a neorealist, I do believe in mind-world dualism and in striving for an objective reality in qualitative research, and I will adjust the presentation of my research outcomes to account for my potential biases and assumptions.

  2. 1. Does excessive data cloud or clarify an objective?

    According to Becker, qualitative data can get “thick,” containing descriptions of social life so detailed that researchers unintentionally “become aware of things they had not anticipated which may have a bearing on their subject” (Becker, 320). While an adequate, rather than excessive, amount of data does make it easier for the researcher to answer the intended questions, there is a possibility of missing some small detail that could go on to change the results or substantially affect the outcome of a study. Also, by relying only on data gathered to answer a specific question, we might lose our ability to make sense of the world through careful observation. There is something about observing and collecting data on each and every aspect of the contexts in which people behave the way they do: while it may lead to excessive and sometimes unnecessary detail, at other times it can surface a detail that answers many more questions.

    This may be a poor analogy, but think of what we miss when navigating a new city with the help of a GPS, when in fact we can enjoy the feel of the city much more without it. While GPS gets us where we want to go faster and more conveniently, it also takes away the pleasure of exploring the streets and by-lanes of a new city. Apparently Lego, too, collected hours and hours of video footage, pictures, and journal entries of the play experiences of children across the world to find a winning pattern of what children actually want. In fact, to me, there is no such thing as excessive data. Every minute observation of an individual's social life answers a host of questions, considering how different he or she is compared to others.

    2. What is Becker’s view on Blumer? Do you agree or disagree with his assertion?

    According to Herbert Blumer, all social scientists, implicitly or explicitly, attribute a point of view and interpretations to the people whose actions they analyze (Becker, 321). Becker thinks this does happen, but that the important question is not whether it should happen but how it happens, which makes it all the more problematic. Becker thinks that if we do not find out from people firsthand why they give certain meanings to things, we will tend to invent data and make assumptions about a given behavior based on guesswork. He says these guesses will inevitably be erroneous, because what “looks reasonable to us will not be what looked reasonable to them” (Becker, 322). He calls such findings “concocted out of a kind of willful ignorance.” To counter this, Becker thinks researchers should adopt the mindset of the actors and understand why they give certain meanings to things. Instead of inventing their viewpoints, researchers should “attribute to actors ideas about the world they actually hold.” I completely agree with Becker's assertion. Every researcher inevitably has a frame of reference regarding the world and how things function. Because this frame of reference is difficult to alter, there is a risk of misinterpreting an actor's behavior as well as of projecting the researcher's own ideas onto the actor. Researchers may confuse the actor's point of view with their own, treating it as “the normal reaction” to an event or occurrence, or expecting the actor's actions to fit the researcher's prescribed model. This is very risky and can invalidate all the data collected. Beyond internalizing the actor's frame of reference, researchers also need to hold no expectations about how an actor will respond.
The researcher should ask the actor directly why he or she responded in a certain way, observe the actor's behavior to see how consistent or inconsistent the reactions to events are, and, as Becker says, be “as undecided as the actors we study.”

    1. 1. In Becker’s The Epistemology of Qualitative Research, Herbert Blumer is cited as holding the belief that: all social scientists, implicitly or explicitly, attribute a point of view and interpretations to the people whose actions they analyze. What is Becker’s view on Blumer and how we should represent the viewpoint of those we analyze?
      According to Becker, all social scientists have a tendency to invent meaning whenever they fail to find the meanings that the people they study give to things. This kind of argument seems to suggest an absence of ethics in ethnographic research, which is not the case. One of the reasons ethnographers immerse themselves in the culture of the society they study is that they can explain human actions effectively once they understand that society's perspectives, which differ from the “mechanical” causality typical of physical phenomena. It is known among ethnographers that one cannot assume we already know others' perspectives or the meanings they give to things, for particular groups and individuals develop distinctive worldviews. This is especially true in diverse and mixed societies. Of course, ethnographic research takes place among real human beings, there are a number of ethical concerns to be aware of, and such epistemological errors have happened and may happen again in the future. This, however, does not warrant such a generalization. Hammersley argues that it is necessary to learn the culture of the group one is studying before one can produce valid explanations for the behavior of its members. What is more, though all social scientists should strive for accuracy, it should be noted that interpretation involves attaching meaning and significance to the analysis, explaining descriptive patterns, and looking for relationships and linkages among descriptive dimensions, which makes ethnography not far removed from the sort of approach we all use in everyday life to make sense of our surroundings.

      2. Among the many shortcomings of the Failed States Index, the article also criticizes the FSI for its lack of practical utility and its inability to predict events like the Arab Spring. If the Failed States Index fails to accurately reflect the reality (just like other existing indexes like Democracy Index, Corruption-Perception Index, etc.) why do researchers often use these indicators in their studies? Do you think all quantitative analysis should be applicable in predicting future events?
      Despite the criticism the Failed States Index receives regarding its underlying assumption that economic underdevelopment results in vulnerability, and its inability to accommodate other essential indicators like the Human Development Index, it still provides a measure of assessment that tries to address the issues that cause threats both domestically and internationally. As a result, researchers still find it useful to apply its indicators in their studies. Besides, we cannot abolish such indexes just because they did not predict events like the Arab Spring accurately. Such incidents are subject to other variables that may influence subsequent variables, leading to a different end result. Moreover, the very notion of the state in the international arena is a fluid concept, let alone the analysis of failed states and the prediction of such events. It is difficult to predict accurately when the action is taken by human beings rather than predictable chemical elements, which makes developing a perfect matrix for predicting social movements very difficult. Such analysis requires not only quantitative but also qualitative analysis. It is not, however, necessary for all quantitative analysis to be applicable to predicting future events. Both quantitative and qualitative analysis show some irregularities in predicting future events as a result of the evaluation process involved, the approaches used depending on the variable, and the continuity or patterns of variables, which may be diverse.


  3. 1) I do not think “thick,” highly detailed data sets cloud an objective. Instead, having this much data might aid researchers in clarifying their objective by allowing them to make linkages they might not otherwise have made as they sort through the data. The goal throughout is exploration and thoroughness when investigating a social phenomenon. This is ethnography's epistemology according to Becker, and much like that of other social scientists, it insists on investigating the viewpoint of those studied, only more rigorously and completely.
    Setting aside the time constraints a researcher is confronted with when conducting studies, a collection of highly detailed data benefits social scientists of differing interests. As Becker writes, for example, some social scientists are interested in very general descriptions in the form of laws about whole classes of phenomena, while others are more interested in understanding specific cases and how those general statements are worked out in each case. A large, detailed pool of data facilitates both kinds of social scientists in achieving their objectives by possibly leading to more accurate descriptions of a phenomenon or a better understanding of specific cases. It leaves less room for the social scientist's interpretation of events and observations and mitigates the risk of misreading the meanings an actor attributes to things. Although I think interpretation is inevitable and not necessarily a hindrance to research, referencing Becker, providing a dense, detailed description of social life allows one to talk with more assurance about things than if we have to make them up. For him, an observation that requires fewer inferences and assumptions is more likely to be accurate. If we do not find out from people what meanings they are giving to things, we will still discuss those meanings and thus, out of necessity, invent them. The danger here is that the guess will most likely be wrong, and what appears reasonable to one may not be reasonable to someone else. The point being, “thick” data sets can lead to a better understanding of the conditions under which a participant attributes meanings to objects and events, which in turn leads to more accurate descriptions of what those meanings are likely to be.
    4) Katz writes that ethnographies warranted by the deviant social reputations of their subjects may undermine or promote a sense of social distance. Bohemian studies, those that view the moral fabric of their subjects' lives as more deviant than conventional opinion imagined, tend to be “normalizing,” since they document those subjects as being like “us” but living in troubled circumstances with which we do not need to struggle. The ethnographer might be tempted to remove embarrassing content, or content that reinforces negative stereotypes of the subjects, after realizing it might not be so easy to present field notes that “will warrant the study as simply a demonstration that outsiders have failed to recognize the extent of which the subjects are abused by inaccurate stereotypes.” The ethnographer then makes up for this omission with rhetorical argumentation about repression elsewhere in the social system that explains the deviant behavior. It can be argued that this normalizing view is prominent within the literature on “jihadism” and Islamic extremism, in which scholars cite the history and conditions that may have led to this form of violence, and soundly so. They omit any tenets of violence within Islam, or might not reference the religion at all in certain media outlets, in order to present it as a religion of peace. I think the objective of this is to counter the negative stereotypes that affect Muslims while attempting to explain a phenomenon. Personally, I think any religion could be used as a tool to promote any agenda, whether a peaceful or a violent one.

  4. This comment has been removed by the author.

  5. This comment has been removed by the author.

  6. 3) Kathy Charmaz defines grounded theory as a logically consistent set of data collection and analytic procedures aimed at developing theory (p. 335). With that said, according to her, the hallmark of grounded theory is that the researcher derives his or her analytic categories directly from the data, NOT from preconceived concepts or hypotheses (p. 337). In other words, researchers must construct their categories from the data in a way that reflects the people they are researching. Researchers do not force preconceived ideas or theories upon their data; instead, the key is to allow issues to emerge on their own. Charmaz stresses the collection of rich and detailed data, as it gives researchers thorough knowledge of the empirical world or problem they are working with by tracing events, delineating processes, and making comparisons (p. 338). Grounded theorists approach their research with attention to multiple layers of meaning: Glaser assumes that meaning can readily be discovered in the research setting, whereas Charmaz assumes that the interaction between the researcher and the researched produces the data, and therefore the meanings that the researcher observes and defines (p. 339).

    5) I believe that researchers use the FSI because it can serve as a policy tool for governments and NGOs, as it takes into account data from digitized news articles, magazines, speeches, essays, and government and non-government reports, as well as blogs and other social media, along with quantitative data from institutions such as WHO, the World Factbook, UNDP, UNHCR, the World Bank, and other reputable sources. The index comprises twelve indicators of a country's stability and vulnerability, split into three subcategories: social indicators, economic indicators, and political and military indicators. With that said, it is not surprising that the FSI is an alluring source for a researcher, even though it has many shortcomings and could use improvements, as Beehner and Young point out.

  7. 1) If even the smallest pieces of information are used, they can cloud the objective. I think it is better to use a sufficient amount of relevant information to clarify an objective. If pertinent information is used, no matter how small, it provides support for the objective. However, if large pieces of information that are not relevant to the objective are included, I believe the data could potentially cloud the objective. I think it is important for the researcher to gather a sufficient amount of data in order to meet his or her objective.


    5) These studies are used by researchers because they are a way to categorize countries, no matter how flawed the method. The FSI is a way to put countries into a box. The article made some very good points, especially with the Iran example – the country is rated poorly, yet it’s not at risk of failing or falling. I also believe the indicators are used to develop policy and other governing structures. These indexes show where a country is weak in a particular area. Therefore, I don’t think the index score tells the whole story. Researchers should focus on what’s helping or hurting the score to determine where to focus analysis.

    I think the indicators selected for the index/ranking are valid; it is the applicability of those indicators that is flawed. There should be some type of scaling or prioritization to show an accurate picture of a country. As Becker noted, not all indicators should have equal weight.

    I don’t think all quantitative analysis should be applicable for predicting future events. However, I believe quantitative data should be used to show trends and put things in a social context. By doing this, researchers would be able to identify patterns and make connections. This would be helpful in providing indicators of future events.



  8. #5 Among the many shortcomings of the Failed States Index, the article also criticizes the FSI for its lack of practical utility and its inability to predict events like the Arab Spring. If the Failed States Index fails to accurately reflect the reality (just like other existing indexes like Democracy Index, Corruption-Perception Index, etc.) why do researchers often use these indicators in their studies? Do you think all quantitative analysis should be applicable in predicting future events?

    The Failed States, Democracy, and Global Peace Indexes are over-simplifications of reality. Though I argue they are a necessary moral hazard, they are not immune to criticism, and their producers should thus work toward a more responsive and fluid methodology that addresses these criticisms.

    The Failed States Index in particular claims a three-step process of methodological rigor. First, approximately 42 million documents are downloaded from more than 100,000 English-language and translated sources, to be analyzed by the CAST framework against Boolean phrases and expressed indicators. Second, time-series data from international organizations (e.g., the World Bank, Freedom House, UNHCR, WHO, UNDP) are incorporated. Third, the results are reviewed against qualitative insights in each country. While the FSI claims to uphold the highest standard of rigor, and its three-step methodological process appears appropriate, the output, in the form of the annual report, boasts less-than-relevant data. When the entirety of Africa appears “fire engine red,” questions of both applicability and myopia emerge.

    Perhaps it would be better to rely upon indexes that use a few very specific indicators to analyze one phenomenon, instead of these thick data sets that prove nearly impenetrable to critique under the veil of statistical software (e.g., CAST). One alternative is to steer toward expert-opinion-based models, as seen in the field of atrocity prevention: specifically, the United States Holocaust Memorial Museum's Early Warning Project and the Council on Foreign Relations' Preventive Priorities Survey. Both consult experts in atrocity prevention, international affairs, and policymaking about the potential for conflict escalation within the coming year, thus highlighting high-risk countries.

    #3 According to Charmaz, what is the hallmark of grounded theory? What is the grounded researcher's approach to the data they obtain, and how do they study meaning?

    Grounded theory means remaining open to theoretical understandings of the data and redefining major categories as fits the initial findings in the field; the hallmark, however, is deriving these categories directly from what the data illustrate, not from an attempt to confirm initial hypotheses. Basically, Charmaz advocates remaining close to the data and creating a theory through analysis of that data. The first step is data collection, then the coding of field notes, which are grouped into concepts and then into categories that form the basis of a theory. Meaning is derived from the interpretation of reality, arrived at through analyzing the data gained from observation; the researcher and the participant or phenomenon together define the meaning.

  9. This comment has been removed by the author.

  10. 2) Becker agrees with Blumer's statement that social scientists interpret the actions of their subjects. Becker goes on to say that we should use the information a subject tells and provides us to understand his or her meaning. While I agree that social scientists do interpret and attribute a point of view to data (i.e., the subject's actions), to me that is the role of science.
    In reading this chapter, and specifically the section on “The Actor’s Point of View,” it seemed that Blumer and Becker suggest a literalist understanding of what people say; they advocate using the meanings that people give them through interviews and other conversations. That, to me, seems more like journalism than social science. I think there are often forces, meanings, and social norms that are difficult for insiders to understand and articulate, a sort of forest-for-the-trees problem.

    For example, the surprising popular support for Donald Trump and growing right-wing movements have been interpreted by many social scientists as fear and backlash against changing economic structures and a sense of lost status in new political landscapes. Perhaps this reflects my own university-educated and privileged biases, but I do not think many Trump supporters would be able to articulate the underlying motivations for their support. Many of them say he will protect their freedom or that he does not care about being politically correct. These are the surface-level meanings given to explain support for Trump.

    I think it is important to collect and report on the meaning that people give for their action, but that should be the notes or “raw data.” The job of the researcher is to find and analyze trends in those data points. This act of looking for trends is where bias and interpretation happen, but it is also where journalism and social science diverge. Social scientists must go beyond the often-surface level meaning that people assign to their actions.

    4) The first thing I thought of as I was reading Katz's chapter was Vice News. Many of Vice's human-interest stories focus on people or groups that Katz would define as “deviant.” These often include extremists, drug lords, rebels, and others who do not conform to mainly Western, mainstream, and privileged cultural norms. Katz notes that the “middle class university-educated” population determines who or what is considered socially deviant.

    While Vice reporters are clearly not ethnographers, they provide a glimpse into the essence of people who are often far removed from the audience. In doing so, Vice's reporting helps normalize and demystify its subjects. Katz defined normalizing as the process by which a person or population is documented to seem like “us” but under another situation or circumstance. I recently watched an interview with Martin Shkreli, the man who bought a pharmaceutical company and then dramatically increased the price of one of its drugs. His business practice was considered predatory and, to a certain extent, deviant. Mr. Shkreli was heavily criticized and ostracized by the media and many people; he was thought of as a sociopath. I use this example to illustrate how even someone who conforms to many of the stereotypes of the white, middle-to-upper-class, university-educated, and privileged may still be considered deviant because of his behavior. The public has a genuine curiosity about why someone behaves in such a manner. At different points in the interview, Mr. Shkreli seems more relatable, and at other points sad and lonely.

    At the end of many of these reports and ethnographies of “deviant social reputation,” readers may not agree with the actions or behavior of the subject, but they can see a more human side of the subject.

  11. Question 1
    Thick data sets and their collection are an essential part of an ethnographer's research. The reason is that ethnographers have certain themes they would like to study and certain broad questions they might ask about the subject of their inquiry. As a result, they might choose to immerse themselves in the social environment of what they are studying. For example, a researcher might immerse himself or herself in a small village or tribe in the Amazon to inquire about the nature of political authority in that village, under what social contexts and norms it is exercised, or what relational forms or appearances it takes. In this research, the task ethnographers try to achieve requires them to observe this political authority in as many social settings as they possibly can. Political authority might represent itself in the form of elders punishing someone, a father disciplining his son, or the whole village banishing someone as an undesirable. As such, ethnographers try to observe these phenomena as they unfold and to understand the significance or meanings ascribed to these events by the people actively involved in them, as well as by uninvolved third parties. Gathering as much information as possible in this context is crucial, since we might find significant details later on. Gathering thick data sets is also important in the sense that while we are observing an event or participating in it, we might miss certain aspects or angles because we are concerned with another aspect at the time; thick data sets allow us to ‘rediscover’ these afterwards. We must perhaps lastly note the distinction Becker makes between thickness and breadth, the latter implying that the goal should be “trying to find out about every topic the research touches on, even tangentially” (p. 325). This will surely help the researcher better understand the topic of research, as it grants a fuller picture.

    Question 5
    There are various reasons why researchers might use these indicators in their studies even though they have many serious structural shortcomings. One reason might be that using these indicators adds a certain level of legitimacy to the research and the results presented. These indicators are popular tools that many researchers, journalists, and even state officials and politicians use in their work. Therefore, when they are used in a piece of research, people might recognize the indicator, and because of the popularity generated by its use across various spheres of influence, it might grant the research a certain level of legitimacy and perceived accuracy and validity. Another reason might be that the indicators are usually presented as the result of rigorous quantitative research done over a number of years. Even though Beehner and Young point out many shortcomings of the methods used and the calculations made, unless the indicator is critically studied to draw out its shortcomings, it might at first glance appear solidly scientific, which builds up the legitimacy of the research. Another reason these indicators might be used is the overall simplicity of how they present the data. Yes, this is often an oversimplification, with generalizations that might not hold true for a significant number of the cases studied. Yet this simplicity might draw researchers to use them anyway. One last reason might be that these indicators, in the specific ways they choose to use certain words, also serve political objectives. When used by politicians and journalists to make a point, they become effective ideological tools to shape or guide public perceptions in deliberate ways toward certain intended or desired outcomes. Quantitative analysis, under the application and guidance of a theory, might be applicable to predicting future events, and theories can be strengthened in various ways.
As a result, by forming strong and reliable theories and using quantitative analysis, one can try to predict future events.

  12. 1) Becker writes that one of the main differences between quantitative and qualitative methods is in how they collect and use data. Quantitative methods look more for key variables that explain or prove a concept, while qualitative methods collect highly detailed data in order to completely describe an event. This often leads to “thick” data sets that include even the smallest pieces of information. In your view, does more data clarify or cloud an objective?

    I think more data can help to clarify an objective. Becker argues that a thick data set can be problematic because it brings awareness to issues that 1) the researcher didn’t anticipate and 2) may have no bearing on the subject. Personally, I find that a thick data set, especially in qualitative research, brings out nuances and social cues that aren’t readily recognizable through sheer observation. As noted by Emerson in Writing Ethnographic Fieldnotes, the “process of inscribing, of writing fieldnotes, helps the researcher to understand what he has been observing in the first place and, thus, enables him to participate in new ways, to hear with greater acuteness, and to observe with a new lens”. So over time, the data set, with its descriptive notes and additions, is bound to become “thick”, but this thickness can be interpreted as richness, something that brings more depth to one’s research through a greater understanding of what is being observed. A well-developed researcher will be able to distinguish between data that adds to their study and data that detracts from it; however, determining what adds to a study is subjective. Depending on your original question and outcome, it is important to have a data set that is inclusive of every influence on the observed outcome.



    5) Among the many shortcomings of the Failed States Index, the article also criticizes the FSI for its lack of practical utility and its inability to predict events like the Arab Spring. If the Failed States Index fails to accurately reflect the reality (just like other existing indexes like Democracy Index, Corruption-Perception Index, etc.) why do researchers often use these indicators in their studies? Do you think all quantitative analysis should be applicable in predicting future events?


    The indicators that the FSI uses are relevant to determining the stability of a state; however, weighting them would add validity. The FSI haphazardly puts data together without considering which components may be more important, or hold more validity, for measuring fragility or stability. By assigning a weight to each indicator and accounting for spatial and temporal variability, these indicators could live up to their original intentions. I think researchers still use these indicators in their studies because they are readily available and updated annually, they give a good sense of the factors that could influence stability, and they still hold relevance in the research world. Even though these indicators have been scrutinized over their validity and actual utility in predicting future events, they are still widely accepted as valid. I think they can still be used in social science research with the caveat that they may not be the most accurate predictors of future events. I think quantitative analysis can be applicable to predicting future events, but not all of it; every event is the result of an initial situation (represented by indicators), but other factors will also influence future events. So I don’t think a purely mathematical equation can predict future events, though pairing contextual data with the quantitative could be beneficial. But then again, no one can truly predict the future, regardless of quantitative or qualitative data; we can only make generalized assumptions.
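    The weighting idea above can be made concrete with a minimal sketch. The indicator names, scores, and weights below are purely illustrative, not the FSI’s actual components or methodology; the point is only the difference between an unweighted sum and a weighted one:

```python
# Hypothetical sketch: aggregating fragility indicators with explicit weights
# instead of counting every indicator equally. All names and values are
# illustrative assumptions, not real FSI data.

def fragility_score(indicators, weights):
    """Weighted average of indicator scores (each on a 0-10 scale)."""
    if set(indicators) != set(weights):
        raise ValueError("indicators and weights must cover the same keys")
    total_weight = sum(weights.values())
    return sum(indicators[k] * weights[k] for k in indicators) / total_weight

scores = {"economic_decline": 8.0, "group_grievance": 6.0, "state_legitimacy": 4.0}

# Unweighted (every indicator counts equally, as the comment says the FSI does):
equal = {k: 1.0 for k in scores}
# Weighted (a researcher's judgment about which components matter more):
weighted = {"economic_decline": 0.5, "group_grievance": 0.3, "state_legitimacy": 0.2}

print(fragility_score(scores, equal))     # -> 6.0
print(fragility_score(scores, weighted))  # -> 6.6
```

    The two calls return different rankings of the same raw data, which is exactly the critique: without an argued weighting scheme, the aggregate number hides a methodological choice.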

  13. 1. In Becker’s The Epistemology of Qualitative Research, Herbert Blumer is cited as holding the belief that all social scientists, implicitly or explicitly, attribute a point of view and interpretations to the people whose actions they analyze. What is Becker’s view on Blumer, and how should we represent the viewpoint of those we analyze?
    According to Becker, all social scientists have a tendency to invent meaning whenever they fail to find the meanings that the people they study give to things. This kind of argument seems to suggest an absence of ethics in ethnographic research, which is not the case. One of the reasons ethnographers immerse themselves in the culture of the society they study is that they can explain human actions effectively only once they understand that society’s perspectives, an approach that differs from the “mechanical” causality typical of physical phenomena. It is known among ethnographers that one cannot assume we already know others’ perspectives or the meanings they give to things, for particular groups and individuals develop distinctive worldviews. This is especially true in diverse and mixed societies. Of course, ethnographic research takes place among real human beings, there are a number of ethical concerns to be aware of, and such epistemological errors have happened and may happen again in the future. This, however, does not license such generalization. Hammersley argues that it is necessary to learn the culture of the group one is studying before one can produce valid explanations for the behavior of its members. What is more, though all social scientists should strive for accuracy, it should be noted that interpretation involves attaching meaning and significance to the analysis, explaining descriptive patterns, and looking for relationships and linkages among descriptive dimensions, which makes ethnography not far removed from the sort of approach we all use in everyday life to make sense of our surroundings.

    2. Among the many shortcomings of the Failed States Index, the article also criticizes the FSI for its lack of practical utility and its inability to predict events like the Arab Spring. If the Failed States Index fails to accurately reflect the reality (just like other existing indexes like Democracy Index, Corruption-Perception Index, etc.) why do researchers often use these indicators in their studies? Do you think all quantitative analysis should be applicable in predicting future events?
    Despite the criticism the Failed States Index receives regarding its underlying assumption that economic underdevelopment results in vulnerability, and its failure to accommodate other essential measures like the Human Development Index, it still provides a form of assessment that tries to address the issues that cause threats both domestically and internationally. As a result, researchers still find it useful to apply its indicators in their studies. Besides, we cannot abolish such indexes just because they did not predict events like the Arab Spring accurately. Such incidents are subject to other variables that may influence subsequent variables, leading to a different end result. Moreover, the very notion of the state in the international arena is a fluid concept, let alone the analysis of failed states and the prediction of such events. It is difficult to predict accurately when the action is taken by human beings rather than predictable chemical elements, which makes developing a perfect matrix for predicting social movements very difficult. Such analysis requires not only quantitative but also qualitative analysis. Nor need all quantitative analysis be applicable to predicting future events. Both quantitative and qualitative analysis show some irregularities in predicting future events, as a result of the evaluation process involved, the approaches used depending on the variable, and the continuity or patterns of variables, which may be diverse.


  14. 2) Becker essentially agrees with Blumer’s statement and elaborates upon it in his writings. Becker says that we can take on the point of view of those we study, “…not with perfect accuracy, but better than zero…”, and that our goal should be to get as near as we can to having our subjects actually tell us how they attribute meanings to objects and events (321). We can guess at how our subjects attribute meaning, but we are in greater danger of erring if we do so. Because people doing ethnographic research have to verify speculations, Becker says that they are even more rigorous than survey methodologies that refrain from making such speculations. He gives the example of Latour’s study of scientists to explicate this. While many social scientists attribute a special status to the knowledge of scientists, Latour finds that scientists are actually rather less certain of the work they do. Latour follows in the vein of methodical anthropologists who immerse themselves with the subjects of study to better understand their point of view. In this way we can attribute to them a worldview that they actually hold. He argues that if a scientist is uncertain about an observation or conclusion, we should allow that uncertainty to stand rather than romanticizing the process.
    I agree with Becker’s arguments. His line of thought parallels the feminist and post-structuralist approaches to epistemology that attract me. Indeed, I have read, though I cannot remember where, that epistemology as a field in philosophy has been replaced by the philosophy of science in the analytic tradition, and by a situated understanding of epistemology in the post-structuralist continental tradition. Similarly, a lot of my current work focuses on traditional ecological and agricultural knowledge and the importance of taking seriously the way in which groups see their practices. I follow the likes of Fikret Berkes, who argues that such knowledge is inseparable from the cosmology, place, and community in which it is embedded. Trying to extract traditional knowledge from its context not only dilutes its wisdom but also reduces its applicability.
    3) Charmaz builds on the famous grounded theory as originally laid out by Glaser and Strauss. Grounded theory is an ethnographic approach to building theory in the field from data as it is collected. It challenges the assumption that qualitative research can provide only description and not theory.
    Grounded theory is an extremely methodical approach to research that requires constant self-analysis and reworking of the original research goals. An ethnographer looking to produce grounded theory is constantly analyzing the data as it is collected. Charmaz describes several different ways this can be done, but places particular emphasis on coding and memo writing. Disciplined coding allows a researcher to start abstracting general concepts from raw data, which then guide further fieldwork or even rework the original question. Systematic memo writing provides another disciplining tool that keeps researchers from being overwhelmed by data, through a process of writing up explanations of the concepts they are developing from it. The final theoretical implications are drawn at the end of the process, when the researcher returns to the literature review, fits in the new concepts and data, and draws conclusions. Charmaz argues for this process because she believes that meaning is not simply found in the subjects being studied; rather, data is created through the interactions between researchers and their subjects, and meaning is drawn from that data. For this reason she argues that the researcher should be the one going through the data collection process, rather than delegating it to someone else, so that the researcher can feel how meaning emerges from the data and can shape further data collection and coding as necessary.
