Ethics – Reflections Post #8

Ethics.  Ethics in qualitative research.  What can one say?  Since everyone’s ideas as to what is and is not ethical can differ – and perhaps more than a little bit – when considering “ethics” in the context of a particular profession or field (e.g., qualitative research), I strongly believe there should be some uniform guidance (dare I say, “directive”) and specified “norms” as to “appropriate” conduct for those within that particular profession or field.  We probably all agree on that.

The problem, however, is what happens when someone in that area or profession behaves in a way inconsistent with those guidelines.  In this regard, I think one of the most interesting things that came out of the discussion in class on this subject involved our various “takes” on how we would handle a particular ethical dilemma.  It was pointed out that there are layers of impact and considerations involved – that is, our determination as to how we would handle the dilemma may be fine at one level of impact (individually, for example, it might be fine for me to blow the proverbial whistle on an ethical breach by a colleague) but cause significant problems – and change the analysis involved in arriving at a decision – at another level of impact (e.g., blowing the whistle on the colleague may negatively impact my entire institution or even my entire profession).  I’m suggesting here that even if there are established “norms” of ethical conduct or behavior, the analysis of how to handle a breach of those norms is far from simple – and simply having “established” ethical norms may not be sufficient to handle these situations.  In this thinking, I am actually reflecting on my experience in law school (I know, I know… lawyers, ethics…).  We were required to complete a course in “ethics” – which, in point of fact, dealt primarily with personal conduct as opposed to professional dealings.  But we were also required to take a course in Professional Responsibility – which is different.  PR dealt with what a licensed attorney’s obligations are when s/he discovers breaches in professional conduct – including, and particularly, ethical breaches (e.g., when you must decline to represent a client because of a conflict of interest, or upon discovering mishandling of client funds) – and when you are actually required (as one of the profs so eloquently put it) to “rat out” a colleague.
In essence, the MRPC (Model Rules of Professional Conduct) provide the type of guidance and structure I was alluding to above (and if there isn’t a rule on point for your specific situation, there is a governing body to whom you can apply for direction).  They impose sanctions not only on the putative bad actor, but also on those who don’t act in accordance with the Rules’ requirements upon discovering a bad actor.  In short – the internal debate we might have had concerning the unethical researcher would be considerably shortened were it instead to involve an attorney not behaving in accordance with the MRPC.  Indeed, every licensed attorney must pass the MPRE (Multistate Professional Responsibility Examination) BEFORE they can even sit for the Bar (licensing) Exam itself.  Attorneys found not to have acted in accordance with the MRPC can be disbarred (i.e., their license to practice law is revoked).  Did I mention this is a bad thing??  As a matter of fact, there was a certain US President about 40 years ago who was involved in some questionable conduct around his reelection – and while he was never in fact charged with the criminal conduct of which he was accused (nonetheless receiving a “preemptive” pardon from his successor to preclude possible indictment), he was a licensed attorney in NY, and the NY Bar investigated his actions and ultimately disbarred him for violating these professional “ethics” or “conduct” canons.  That was effectively the only punishment he received (other than, of course, a certain amount of ongoing public censure…).  (Lesson there: don’t mess with the State Bar!!)

Mind, I do not necessarily subscribe to any concept of “deterrence”: I do not for a moment believe that having these rules in place will stop unethical behavior – on the part of attorneys, or of researchers for that matter.  Still, however, I do believe that having “codified” professional ethics or responsibility rules provides the members of a particular profession with: 1) a common framework within which to analyze their own behavior and that of their colleagues in their professional dealings, informed by the needs/requirements/special considerations of their particular profession; 2) guidance as to their specific obligations in the event they discover a breach by a colleague – essentially taking the decision out of their hands for the most part, and thereby avoiding the type of quandary we discussed in class (again – for the most part); and (ideally) 3) a body monitoring compliance, to which concerns can be raised for uniform review and advice.  In addition, I feel that even if it doesn’t in fact deter “bad actors” (or, rather, eliminate unethical behavior), this (or indeed any legal schema) provides at least a sense that they will themselves suffer negative consequences for their bad (read, “unethical”) acts, and that there is some recognition that the “harm” done is not only to the individual(s) involved but to the broader group (whether to fellow attorneys and their firms, or researchers and their institutions).

Off my soap-box now. ;>)  As regards IS and research, I am sure that, for example, IRBs are generally intended to provide this function – to a degree.  However, my sense at this point in the proceedings (and I could be wrong…) is that there is not a great deal of uniformity between and among IRBs, and that different “rules” apply depending on the specific institutions or foundations through which your research is being conducted.  While I understand and appreciate the sentiment behind the expression “a foolish consistency is the hobgoblin of little minds,” I do believe that consistency is hugely important when attempting to regulate behaviors and conduct within groups (particularly in the application and implementation/enforcement of the regulations/laws/rules).  Hence, to the extent my sense is correct concerning the lack of consistency across research institutions etc. in terms of ethics or professional conduct/responsibility requirements – I would like to see that change in the future.

Content and Discourse Analysis – Reflection Post #7

Once again, I am at a bit of a disadvantage in that I missed class and hence the related exercise.  Nonetheless, I will attempt to offer a few thoughts concerning content and discourse analysis and how they might be applicable in my continued adventures in IS…

Based on the work I have been doing this semester in the iSensor Lab concerning linguistic analysis and language-action cues, I firmly believe that “information” and cues that are available in a F2F setting (such as, for example, an in-person interview) are in large part lacking when we have only the raw “text” itself.  Just as an example, the appearance, body language and even tone of voice of the speaker are lost (at least for the most part – the cool thing about linguistic analysis is that it attempts to supply some of these cues from within the text itself by way of examining word choice, phraseology, length of communication, wordiness (ahem…) and syntax… but I digress).  That is to say, the information gained in F2F interactions includes both verbal and non-verbal components, and it is difficult at best to “replace” the information – especially the non-verbal information – that is missing when only the “written” evidence of the interaction is available (even if the interaction originally occurred in a F2F environment).  So, how much can we really extract from a writing?  It seems it would depend upon what we are attempting to look at in reviewing the writing.

The example of being able to observe court testimony “live”, or even watch a video recording of it, versus reading the transcript comes to mind.  In reading the transcript, we can read the (supposedly) exact words of the parties – but, unless the judge or counsel asks for a specific notation to be made in the transcript concerning the facial expressions or gestures of the witness, these are lost when only the text remains.  Inflection and tone are also lost.  For us Rumpole of the Bailey fans, this is specifically dealt with in “Rumpole and the Show Folk,” where Rumpole gives several different “readings” of the same “line” from a statement by his client (an actress) concerning the shooting of her husband.  As you can probably imagine, her words themselves – “I shot him.  What could I do with him.  Help me.” – were quite incriminating on the page (you will note, there isn’t even any punctuation to supply tone or emphasis).  However, with his customary style, Rumpole deftly illustrates that those words could be imbued with considerably different meaning(s), and considerably different information (or data?) is imparted, depending on tone and emphasis.  The same applies to the “Delicious Death” quotation in this week’s lecture notes – when Miss Murgatroyd tries to tell Miss Hinchcliffe what she saw the night of the shooting at Little Paddocks, meaning (or at least interpretation) follows inflection/emphasis: SHE wasn’t there; she WASN’T there; she wasn’t THERE.  All this to say, if the objective is just to verify “what” was said (or written…) – without interpretation or looking for meaning – then content analysis is great.  It’s a fairly straightforward, unobtrusive means of collecting data and deriving inferences from writings through the process of coding various aspects of the text.  Taking a holistic view of the writing (or body of writings) being explored can even provide a certain degree of context as well.  If, on the other hand, the intent is to interpret or derive “meaning” in an objective sense, it’s a bit more problematic.  Bottom line, I think, is that it is important to keep in mind the kinds of “information” that get lost in looking only at text or writing.  Provided these are not what you are looking for, then CA is fine.
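To make the “micro” flavor of content analysis a bit more concrete, here is a minimal, purely illustrative sketch of descriptive coding in Python: tallying how often keywords from a small, invented codebook appear in a text.  The code labels and keywords here are my own toy assumptions, not any standard CA instrument.

```python
# A minimal, hypothetical sketch of descriptive content analysis:
# counting occurrences of pre-defined "codes" (keyword groups) in a text.
# The codebook below is invented purely for illustration.
from collections import Counter
import re

codebook = {
    "violence": {"shot", "shooting", "killed"},
    "appeal":   {"help", "please"},
}

def code_text(text: str, codebook: dict) -> Counter:
    """Tally how often each code's keywords appear in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for word in words:
        for code, keywords in codebook.items():
            if word in keywords:
                counts[code] += 1
    return counts

statement = "I shot him. What could I do with him. Help me."
print(code_text(statement, codebook))  # Counter({'violence': 1, 'appeal': 1})
```

Which, of course, captures only the “what was said” layer – exactly the point above: the tallies come out identical no matter how Rumpole inflects the line.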

Discourse analysis is a bit more challenging – but it does seem to get at some of the linguistic analysis I mention above, which is of interest to me.  As I understand it, it seeks to take a very holistic view of the communications (i.e., writings) in question – and considers both the internal context of the writings (i.e., the context or sense in which certain words or phrases were used) and the external context of the writing (i.e., the historical/socio-political factors shaping who was writing, what they wrote, and why).  Not sure this is a good example, but I have in mind how the socio-political circumstances in 17th century England (i.e., the English Civil War) informed the writings of Hobbes and Locke – and influenced not only what they wrote about but how they approached/treated it.  Just as meaning and information can be lost in the absence of the facial expressions, body language, etc. available in F2F interactions, when we have only the written account to go on, I firmly believe also that a good deal of meaning (and information) can be lost if we are not mindful of the broader “context” of the communication in this sense.  I guess, then, I see CA and DA as potentially being complementary to each other – particularly depending upon the topic of investigation – with CA looking at the “micro” level of the writing and DA looking at it from a “macro” level.

Coding in Grounded Theory – Reflection Post #6

Although I missed class, and hence the in-class activity, I will nonetheless offer up a few general thoughts concerning this topic.  I very much appreciated the way in which Strauss & Corbin (1994) described it: a general methodological approach that explicitly involves “generating theory and doing social research [as] two parts of the same process,” and its emphasis on what I believe would fairly be called “organic” theory development, wherein the data drive the development and conceptualization of the theory (i.e., a more inductive rather than deductive approach).  As someone who has been firmly entrenched in the Socratic method of argument and reasoning, and trained to apply deductive approaches to reasoning, the inductive approach of Grounded Theory is clearly a completely different way than I am used to of conceptualizing, analyzing and discoursing on questions and topics of interest.  I can also understand why Charmaz (2006a) contends (in “Invitation to Grounded Theory”) that the grounded theory approach and process of doing research will “bring surprises, spark ideas and hone your analytical skills” (p. 2), and “foster seeing your data in fresh ways and exploring your ideas about the data,” because data are collected from the beginning of the project with an eye towards theoretical analysis and development.  The researcher is thereby constantly forced to evaluate new data in light of existing data, and the synthesis (or, as Charmaz puts it, “sense making”) of these data drives new constructs, which in turn raise additional questions of interest concerning the phenomenon being studied, leading again to the collection of new data to be incorporated into the developing construct.
For myself, while I will probably continue to reflexively revert to my deductive roots, I will nonetheless be mindful not only that deductive approaches are not the only approaches but that, particularly in terms of brainstorming and “thinking outside the box” about a problem, there is much to be said for taking an inductive approach – and particularly the approach outlined in Grounded Theory.

With all that said about grounded theory as a general research/theory development approach, I will now offer a few thoughts specifically concerning coding as a data analysis tool vis-à-vis grounded theory.  Given the inherently iterative nature of grounded theory, I can certainly see where coding of data could be a particular challenge!  Since the data are, essentially, a moving target (constantly being updated and reinterpreted) – even at the time of “initial coding” – assigning codes to each piece of data and putting them into nice neat buckets or categories based on these codes (i.e., “focused coding”) would presumably be a constant exercise – whether because reevaluation of the data collectively leads to the conclusion that the original code no longer “fits”, or that a datum doesn’t belong where it was initially “placed”, or because the original buckets themselves no longer “work”.  This, of course, recalls what Charmaz (2006a) indicated concerning the importance of “flexibility” in grounded theory in general – so it is rather difficult to imagine how any other analytical approach would work within the framework of grounded theory.  Indeed, as Charmaz (2006b) also says (in “Coding in Grounded Theory Practice”) – “We play with the ideas we gain from the data… Coding gives us a focused way of viewing (it)” and “Through coding we make discoveries and gain a deeper understanding of the empirical world” (p. 71).  Moreover, coding “gives us a preliminary set of ideas that we can explore and examine analytically… (and) if we wish, we can return to the data and make a fresh coding” (or, implicitly, revise the existing).
In short, coding in grounded theory in particular “is more than a way of sifting, sorting and synthesizing data… [i]nstead [it] begins to unify ideas analytically because [the researcher keeps] in mind what the possible theoretical meanings of [the] data and codes might be.”  Having characterized Grounded Theory above as an organic research and theoretical methodology, I would submit it’s also fair to say that coding – particularly in connection with employing grounded theory methodology – is a fairly organic approach to organizing and analyzing the data collected.  I consider, in fact, that this is an example of a research/theoretical methodological approach and an analytical/organizational approach being mutually supportive.
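As a purely hypothetical sketch of the iterative coding cycle described above – initial codes assigned per data fragment, “focused” grouping into buckets, and re-coding when a code no longer “fits” – the following toy example (all interview fragments and code labels are invented) shows how the buckets get rebuilt as codes are revised:

```python
# A hypothetical sketch of the grounded-theory coding cycle:
# initial coding, focused coding into categories, then re-coding.
# All fragments and code labels are invented for illustration only.

# Initial coding: one tentative code per fragment of interview text.
initial_codes = {
    "I double-check every result before sharing it": "verifying work",
    "I worry colleagues will find an error":         "fearing exposure",
    "I rerun the analysis when numbers look odd":    "verifying work",
}

def focused_coding(initial: dict) -> dict:
    """Group fragments under their codes -- the 'buckets' stay revisable."""
    categories = {}
    for fragment, code in initial.items():
        categories.setdefault(code, []).append(fragment)
    return categories

categories = focused_coding(initial_codes)

# Re-coding: reevaluation suggests a better-fitting code for one fragment,
# so the buckets are simply rebuilt from the revised initial codes.
initial_codes["I worry colleagues will find an error"] = "managing credibility"
categories = focused_coding(initial_codes)
```

The point of the sketch is simply that the categories are regenerated from the (revised) codes rather than fixed up front – a small mechanical echo of the “flexibility” Charmaz describes.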