Ethics – Reflections Post #8

Ethics.  Ethics in qualitative research.  What can one say?  Since everyone’s ideas as to what is and is not ethical can differ – and perhaps more than a little bit – when considering “ethics” in the context of a particular profession or area (e.g. qualitative research), I strongly believe there should be some uniform guidance (dare I say, “directive”) and specified “norms” as to “appropriate” conduct for those within that particular profession or area.  We probably all agree on that.

The problem, however, is what happens when someone in that area or profession behaves in a way inconsistent with those guidelines.  In this regard, I think one of the most interesting things that came out of the discussion in class on this subject involved our various “takes” on how we would handle a particular ethical dilemma.  It was pointed out that there are layers of impact and considerations involved – that is, our determination as to how we would handle the dilemma may be fine at one level of impact (individually, for example, it might be fine for me to blow the proverbial whistle on an ethical breach on the part of a colleague) but cause significant problems – and change the analysis involved in arriving at a decision – at another level of impact (e.g. blowing the whistle on the colleague may negatively impact my entire institution or even my entire profession).  I’m suggesting here that even if there are established “norms” of ethical conduct or behavior, the analysis of how to handle a breach of those norms is far from simple – and simply having “established” ethical norms may not be sufficient to handle these situations.  In this thinking, I am actually reflecting on my experience in law school (I know, I know… lawyers, ethics…).  We were required to complete a course in “ethics” – which, in point of fact, dealt primarily with personal conduct as opposed to professional dealings.  But we were also required to take a course in Professional Responsibility – which is different.  PR dealt with what a licensed attorney’s obligations are when s/he discovers breaches in professional conduct – including, and particularly, ethical breaches (e.g. when you must decline to represent a client for conflict of interest, or mishandling of client funds) – and when you are actually required (as one of the profs so eloquently put it) to “rat out” a colleague.
In essence, the MRPC (Model Rules of Professional Conduct) provide the type of guidance and structure I was alluding to above (and if there isn’t a rule on point for your specific situation, there is a governing body to whom you can apply for direction).  They impose sanctions not only on the putative bad actor, but also on those who fail to act in accordance with the Rules’ requirements upon discovering a bad actor.  In short – the internal debate we might have had concerning the unethical researcher would be considerably shortened were it instead to involve an attorney not behaving in accordance with the MRPC.  Indeed, every licensed attorney must pass the MPRE (Multistate Professional Responsibility Exam) before they can even be admitted to practice.  Attorneys found not to have acted in accordance with the MRPC can be disbarred (i.e. their license to practice law is revoked).  Did I mention this is a bad thing??  As a matter of fact, there was a certain US President about 40 years ago who was involved in some questionable conduct around his reelection.  While he was never in fact charged with the criminal conduct of which he was accused (nonetheless receiving a “preemptive” pardon from his successor to preclude possible indictment), he was a licensed attorney in NY, and the NY Bar investigated his actions and ultimately disbarred him for violating these professional “ethics” or “conduct” canons.  That was effectively the only punishment he received (other than, of course, a certain amount of ongoing public censure…).  (Lesson there: don’t mess with the State Bar!!)

Mind, I do not necessarily subscribe to any concept of “deterrence”: I do not for a moment believe that having these rules in place will stop unethical behavior – on the part of attorneys, or of researchers for that matter.  Still, I do believe that having “codified” professional ethics or responsibility rules provides the members of a particular profession with: 1) a common framework within which to analyze their own behavior and that of their colleagues in their professional dealings, informed by the needs/ requirements/ special considerations of their particular profession; 2) guidance as to their specific obligations in the event they discover a breach by a colleague – essentially taking the decision out of their hands for the most part, and thereby avoiding the type of quandary we discussed in class (again – for the most part); and (ideally) 3) a body monitoring compliance, to which concerns can be raised for uniform review and advice.  In addition, I feel that even if such a schema doesn’t in fact deter “bad actors” (or, rather, eliminate unethical behavior), it provides at least a sense that bad actors will themselves suffer negative consequences for their bad (read, “unethical”) acts, and some recognition that the “harm” done is not only to the individual(s) involved but to the broader group (whether to fellow attorneys and their firms, or researchers and their institutions).

Off my soap-box now. ;>)  As regards IS and research, I am sure that, for example, IRBs are generally intended to provide this function – to a degree.  However, my sense at this point in the proceedings (and I could be wrong…) is that there is not a great deal of uniformity between and among IRBs, and that different “rules” apply depending on the specific institutions or foundations through which your research is being conducted.  While I understand and appreciate the sentiment behind the expression “a foolish consistency is the hobgoblin of little minds…” (or however it goes…), I do believe that consistency is hugely important when attempting to regulate behaviors and conduct within groups (particularly in the application and implementation/ enforcement of the regulations/ laws/ rules).  Hence, to the extent my sense is correct concerning the lack of consistency across research institutions etc. in terms of ethics or professional conduct/ responsibility requirements – I would like to see that change in the future.

Content and Discourse Analysis – Reflection Post #7

Once again, I am at a bit of a disadvantage in that I missed class and hence the related exercise.  Nonetheless, I will attempt to offer a few thoughts concerning content and discourse analysis and how they might be applicable in my continued adventures in IS…

Based on the work I have been doing this semester in the iSensor Lab concerning linguistic analysis and language-action cues, I firmly believe that “information” and cues that are available in a F2F setting (such as, for example, an in-person interview) are in large part lacking when we have only the raw “text” itself.  Just as an example, the appearance, body language and even tone of voice of the speaker are lost (at least for the most part – the cool thing about linguistic analysis is that it attempts to supply some of these cues from within the text itself by way of examining word choice, phraseology, length of communication, wordiness (ahem…) and syntax… but I digress).  That is to say, the information gained in F2F interactions includes both verbal and non-verbal components, and it is difficult at best to “replace” the information – especially the non-verbal information – that is missing when only the “written” evidence of the interaction is available (even if it originally took place in a F2F environment).  So, how much can we really extract from a writing?  It seems it would depend upon what we are attempting to look at in reviewing the writing.  The example of being able to observe court testimony “live”, or even watch a video recording of it, versus reading the transcript comes to mind.  In reading the transcript, we can read the (supposedly) exact words of the parties – but, unless the judge or counsel asks for a specific notation to be made in the transcript concerning the facial expressions or gestures of the witness being questioned, these are lost when only the text remains.  Inflection and tone are also lost.

For us Rumpole of the Bailey fans, this is specifically dealt with in “Rumpole and the Show Folk”, where Rumpole gives several different “readings” of the same “line” from the statement of his client (an actress) concerning the shooting of her husband.  As you can probably imagine, her words themselves – “I shot him.  What could I do with him.  Help me.” – were quite incriminating on the page (you will note, there isn’t even any punctuation to supply tone or emphasis).  However, with his customary style, Rumpole deftly illustrates that those words could be imbued with considerably different meaning(s), and that considerably different information (or data?) is imparted, depending on tone and emphasis.  The same applies to the “Delicious Death” quotation in this week’s lecture notes – when Miss Murgatroyd tries to tell Miss Hinchcliffe what she saw the night of the shooting at Little Paddocks, meaning (or at least interpretation) follows inflection/ emphasis:  She wasn’t there, she wasn’t there, she wasn’t there.  All this to say, if the objective is just to verify “what” was said (or written…) – without interpretation or looking for meaning – then content analysis is great.  It’s a fairly straightforward, unobtrusive means of collecting data and deriving inferences from writings through the process of coding various aspects of the text.  Taking a holistic view of the writing (or body of writings) being explored can even provide a certain degree of context as well.  If, on the other hand, the intent is to interpret or derive “meaning” in an objective sense, it’s a bit more problematic.  Bottom line, I think, is that it is important to keep in mind the kinds of “information” that get lost in looking only at text or writing.  Provided these are not what you are looking for, then CA is fine.

Discourse analysis is a bit more challenging – but it does seem to get at some of the linguistic analysis I mention above, which is of interest to me.  As I understand it, it seeks to take a very holistic view of the communications (i.e. writings) in question – and considers both the internal context of the writings (i.e. the context or sense in which certain words or phrases were used) and the external context of the writing (i.e. the historical/ socio-political factors shaping who was writing, what they wrote, and why).  Not sure this is a good example, but I have in mind how the socio-political circumstances in 17th century England (i.e. the English Civil War) informed the writings of Hobbes and Locke – and influenced not only what they wrote about but how they approached/ treated it.  Just as meaning and information can be lost in the absence of the facial expressions, body language etc. available in F2F interactions, when we have only the written account to go on, I firmly believe that a good deal of meaning (and information) can also be lost if we are not mindful of the broader “context” of the communication in this sense.  I guess, in this sense, I see CA and DA as potentially being complementary to each other – particularly depending upon the topic of investigation – with CA looking at the “micro” level of the writing and DA looking at it from a “macro” level.

Coding in Grounded Theory – Reflection Post #6

Although I missed class, and hence the in-class activity, I will nonetheless offer up a few general thoughts concerning this topic.  I very much appreciated the way in which Strauss & Corbin (1994) described it: a general methodological approach that “explicitly involves ‘generating theory and doing social research [as] two parts of the same process…,'” and its emphasis on what I believe would fairly be called “organic” theory development, wherein the data drive the development and conceptualization of the theory (i.e. a more inductive rather than deductive approach).  As someone who has been firmly entrenched in the Socratic method of argument and reasoning, and trained to apply deductive approaches to reasoning, the inductive approach of Grounded Theory is clearly a completely different way from what I am used to in terms of conceptualizing, analyzing and discoursing on questions and topics of interest.  I can also understand why Charmaz (2006a) contends (in “Invitation to Grounded Theory”) that the grounded theory approach and process to doing research will “bring surprises, spark ideas and hone your analytical skills” (p. 2), and “foster seeing your data in fresh ways and exploring your ideas about the data…”, because data are collected from the beginning of the project with an eye towards theoretical analysis and development.  The researcher is thereby constantly forced to evaluate new data in light of existing data, and the synthesis (or, as Charmaz puts it, ‘sense making’) of these data drives new constructs, which in turn raise additional questions of interest concerning the phenomenon being studied, and lead again to the collection of new data to be incorporated into the developing construct.
For myself, while I will probably continue to reflexively revert to my deductive roots, I will nonetheless be mindful not only that deductive approaches are not the only approaches but that, particularly in terms of brainstorming and “thinking outside the box” about a problem, there is much to be said for taking an inductive approach – and particularly the approach outlined in Grounded Theory.

With all that said about grounded theory as a general research/ theory development approach, I will now offer a few thoughts specifically concerning coding as a data analysis tool vis-a-vis grounded theory.  Given the inherently iterative nature of grounded theory, I can certainly see where coding of data could be a particular challenge!  Since the data are, essentially, a moving target (constantly being updated and reinterpreted) – even at the time of “initial coding” – assigning codes to each piece of data and putting them into nice neat buckets or categories based on these codes (i.e. “focused coding”) would presumably be a constant exercise – whether because reevaluation of the data collectively leads to the conclusion that the original code no longer “fits”, or that a datum doesn’t belong where it was initially “placed”, or because the original buckets themselves no longer “work”.  This, of course, recalls what Charmaz (2006a) indicated concerning the importance of “flexibility” in grounded theory in general – so it is rather difficult to imagine how any other analytical approach would work within the framework of grounded theory.  Indeed, as Charmaz (2006b) also says (in “Coding in Grounded Theory Practice”) – “We play with the ideas we gain from the data…Coding gives us a focused way of viewing (it)” and “Through coding we make discoveries and gain a deeper understanding of the empirical world” (p. 71).  Moreover, coding “gives us a preliminary set of ideas that we can explore and examine analytically… (and) if we wish, we can return to the data and make a fresh coding” (or, implicitly, revise the existing).
In short, coding in grounded theory in particular “is more than a way of sifting, sorting and synthesizing data…[i]nstead [it] begins to unify ideas analytically because [the researcher keeps] in mind what the possible theoretical meanings of [the] data and codes might be.”  Having characterized Grounded Theory above as an organic research and theoretical methodology, I would submit it’s also fair to say that coding – particularly in connection with employing grounded theory methodology – seems to be a fairly organic approach to organizing and analyzing the data collected.  I consider, in fact, that this is an example of where a research/ theoretical methodological approach and an analytical/ organizational approach seem to be mutually supportive.

Booth – Project Update 2 – “Laser Focused” (yeah, actually I did have to go there)


My second update is simply “MTS” (more of the same): I have added another 12 entries to my annotated bibliography. Given formatting and other issues people have had in posting their previous update(s), I’m attempting to insert the actual file if you care to look it over.  The attachment is the full bib – The new entries are in bold.

In any case – as with last time – I’ll just recap and give the Reader’s Digest version. As mentioned, I did find a number of additional journal articles on my chosen topic, focus groups. The highlights: I found one article that compares on-line and off-line focus groups in terms of depth and breadth of content as well as efficiency. I also found an article discussing the use of focus groups in a post-intervention situation, to get a consensus of participants as to whether they feel the results of the intervention are sustainable. Finally, I found a couple of pieces discussing the analysis of the data collected from focus groups – which, according to the authors at least :>), had been a fairly little-treated issue (I can certainly confirm that the majority of articles I’ve pulled have had more to do with planning and running focus groups than with “what comes next”). These should all be interesting additions to my literature collection on this topic, and will hopefully enrich my final project: a lit. review on focus groups.

Reflection 5: Case Study.

Ok – Not at all sure this is right, but:

1. Ask a question: How effective are privacy regulations in protecting personal data?
This question can be examined through a few different lenses: the consumer/ private citizen, the government and business.

2. Pick a case: The EU Directive (Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data) could be a case in point. So – the research question would be “How effective has the EU Directive been in protecting personal data of citizens of the EU Member States?”

3. Describe what kind of case it would be, and discuss the types of data that should/ could be collected for analysis. One country within the EU – for example, the UK – could be examined with respect to pre- and post-implementation data concerning such issues as the prevalence of theft of personal information (including identity theft, and credit card and bank account hacking), by looking at claims and cases filed (including where filed – as a claim against the business/ operation itself, or through legal process), and at public sentiment concerning how safe citizens feel their personal information is, through, for example, polls and media (news and social media). Impact or effect on behaviors could also be examined in this way (e.g. have data sharing behaviors changed since the implementation of the EU Directive? What changes in business operations have occurred since then?)

4. Types of analysis you should complete in order to assemble such a case study. As suggested above, there are a number of sources of information (both qualitative and quantitative) that should be analyzed when putting together a holistic picture of data protection effectiveness – if for no other reason than because there are a number of different stakeholder groups, and efficacy is measured relative to each of them (both subjectively, in terms of their own specific feelings, and objectively, in terms of any specific goals achieved or milestones reached in addressing specific concerns) and their specific interests, as well as “overall”. These would consist primarily of written evidence: document review of transcripts/ reports of interviews and/or focus groups that might have been conducted previously, as well as writings evidencing what might broadly be called “legislative intent”, which can provide insight into what specific issues and concerns were discussed during the drafting of (in this case) the Directive and whether (and how) these were in fact incorporated or addressed in the final Directive.

5. Think briefly through how this would result in findings that are “different” from what they would be if you did a study on the same question but with more participants/sites/data sources and fewer data types.

a. More participants/sites. More sites/ participants in this context would mean looking at more countries in the EU (“Member States”). Since this particular Directive is applicable across all EU Member States (28 at the time of writing), we would be better able to answer the research question set forth in #2 above the more Member States we analyze. The data sources would largely remain the same. That is to say, this would provide a very broad perspective on how effective the Directive has been across the entire body of EU Member States.
b. Fewer/ Different Data Types – In the examination of the UK above, for example, if we only looked at claims/ cases (and ignored media and other indicia of changes in behavior – whether personal or operational) – we would lose out on a segment of data that would highlight those concerns and/or incidents that perhaps didn’t rise to the level of the “victim” filing a claim or case – which are likely to be more prevalent than those that actually do rise to that level. Conversely, if we did away with claim/ case analysis and proceeded strictly based on polls and media, we’d lose evidence of how these arguably more serious incidents/ concerns are addressed in practice and what the thought processes and arguments are on both sides.

Interview Reflection post (#4)

My research question was “What types of information behaviors do people engage in when selecting a restaurant when they go out for dinner?”  Yeah, I know – very creative and inspired.  Not even sure why I picked it – since I never really go out myself.  Oh well – I guess it was the best I could come up with in the middle of a brain-freeze.

Anyway – despite the less-than-inspired subject, I did glean a few insights into the various interview techniques we discussed.  Perhaps the most interesting (at least to me) of these was that, although I came up with questions mirroring each “type” (unstructured/ semi-structured/ critical incident), in the end I found myself asking for essentially the same information.  So, for example, my “semi-structured” lead-in question was “Please describe for me how you chose the restaurant you went to the last time you went out to dinner” – and I asked about typical problems in choosing, and the sources of info used for selection.  My unstructured question was “Tell me how you select the restaurant you go to when you go out to dinner.”  In speaking with my interviewee, I found myself almost reflexively asking the same sub-questions I had for my semi-structured and even my critical incident question.  While I certainly didn’t want to “lead the witness” in that instance – and I found myself suddenly being very conscious of that problem as I started speaking with her and trying to interact with her as she answered the question – I nonetheless found it at times to be necessary to “prompt” her a bit (just by way of interacting with her).  As you can imagine, the semi-structured and critical-incident interview questions did not require this sort of prompting.

Speaking of the critical incident and semi-structured interview questions – I think as between the two I got basically the same “types” of response – that is, I don’t think I really got anything significant from the one that I didn’t get from the other.  This could be a function of the fact that, as it turned out, at least 2 of the 3 people I interviewed are like me in that they don’t go out to dinner very often (mostly special occasions).  From that perspective, I’d have to say that the unstructured interview was probably the most lively and interesting – and, although I did have to remind myself to “not lead”, I did find myself feeling freer to interact and discuss the topic with her, as opposed to sort of mechanically asking a series of questions.  I guess I’d have to say I thought that the “critical incident” method was next after that in terms of feeling more “interactive” with the interviewee.  I think that when you ask someone to recall a specific personal experience that something does click to help you connect with them.  As you might imagine from this discussion, I found the “semi-structured” approach to be probably the most mechanical and “rote” of these three techniques.

I suppose the conclusion I arrived at is that I can see how a blend of these approaches could be useful, even in one interview.  For example, you might start with some fairly unstructured questions, but then follow up later on with more specific critical-incident questions or semi-structured questions around the same subject to expound on the earlier discussion and, hopefully, serve as opportunities to ensure consistency of response.  I also would note that it became clear that, just because you have an unstructured interview question, it doesn’t mean you as the interviewer shouldn’t still have some prompt questions in your back-pocket!

Just for completeness – I’ll say a few words about the “flip side” in conclusion.  From the perspective of an interviewee, I think I also found the unstructured interview method a bit easier to “work with” – Sometimes when I was asked more specific/ direct questions I really couldn’t think of an example or a response right off and I could feel myself sort of tightening up a bit trying to think of “something” responsive.  In the “unstructured” question format (um – we can discuss that oxymoron later…) I just feel I could sort of get myself started and warm up to the subject a bit more easily.

Project Update #1

For my project, I chose to do a literature review on the subject of focus groups, specifically looking for “dos and don’ts” and suggestions for types of research questions to which they might be most effectively employed.

My first update is an annotated bibliography of 12 articles I have pulled thus far.  I have been particularly interested in several articles I found (some of which are included on the deliverable I submitted, and some of which I haven’t finished yet and will be on my next update) on the differences in face-to-face (F2F) focus groups and “virtual” focus groups (i.e. focus groups conducted by means of computer-mediated communications (CMC) modes).  Apparently some research has indicated that there is actually better interaction/ more questions asked and answers discovered during CMC focus groups than F2F… Hmmm!

Another subject I found quite a bit on is the importance of group interaction/ group dynamics in focus groups – particularly the management thereof.  Three of the articles I have pulled discuss this at length, and provide suggestions as to techniques that might encourage more participation.

Another thing that rather interested me is how many of the sort of “first tier” articles came from medical and nursing journals (not business journals, which is what I would have guessed…);  Also interesting to note that so far I’ve only turned up two different citations to articles discussing the application of focus groups in LIS (in point of fact, although there were two listings, in two different journals, they were effectively the same article by the same author… so I didn’t count them both.)  Another thing I noticed was the great variety of “types” of article I found.  They were, of course, all from peer-reviewed journals, BUT some of them were only a few pages – while some of them were more the length we would expect (20-30 pages or more) from a journal article.

Reflections Post 3 and Ethnography summary

I’ll start by pasting my account of my ethnographic field trip to Strozier:

I spent 20 minutes observing the check-out desk in Strozier Library, from approximately 2:40 – 3:00 on Thursday 2/5/15. I positioned myself on a couch across from the area. At this time, the desk was staffed by three individuals – 2 males and 1 female. M1 – who was stationed at the far-left desk – seemed to be the most senior. I surmise this because other staff asked him for help. Questions aside, though, M1 spent about 15 of the 20 minutes I observed him working with a single patron.  The female (F1), although occasionally appearing to be a bit bored, actually checked out more patrons than the other two (M1 and M2). F1 and M2 meanwhile busied themselves with pulling reserved textbooks (about 3 or 4 from what I could tell) for patrons, and putting things away (a ruler, a highlighter a patron had borrowed…).  They seemed to be fairly friendly to each other; they didn’t seem to chat with M1 as much, again adding to my suspicion that he is sort of the “boss”. All in all, there were approximately 10 patrons total, with approximately the same number of male and female patrons during this period.

*Addendum:  As mentioned, although one would assume that I was fairly “invisible” as someone sitting quietly in a library writing in a notebook, at the end of my 20 minutes one of the library check-out staff (the “boss”) came over to me and said he’d noticed me sitting on the couch for “a while” and wondered if I needed anything or if I was just waiting for someone.  Thinking quickly (fortunately, it was right at the hour), I just said “I’m waiting for someone – it’s 3, so they should be here very shortly.”  So, I was very nearly busted.

 

REFLECTION

Now for the “reflection” portion of this posting.  Thinking about the different experiences we all had in doing this exercise, it seems fairly apparent that you cannot take anything for granted in attempting an “unobtrusive observation” method of ethnographic study.  That is to say, one would have thought that I would have blended right in at the library (although perhaps I look too old to pass for a student studying in the library???) – and yet the “boss” seems to have picked up on *something* that he felt was “not right”, so he asked.  Perhaps folks are a bit sensitive after the incident last November… or perhaps, being a “library type”, he really was just trying to be helpful.  On the other hand, witness Ana’s experience:  one would have thought that people would give at least a sideways glance at someone standing across from a construction site recording notes about her observations on her cell, and yet no one did!  So I guess the lesson there is that you really can never tell how “unobtrusive” you are (or are not)….

Having said that (which was probably my biggest “ah ha” in this exercise) – I did find it an interesting experience in general to have to mindfully study a group of people and/or a location for a period of time.  I have volunteered in libraries (I even worked in one during law school), so I am obviously somewhat familiar with what I’ll call the “culture” of libraries; to the extent I made any assumptions about what was going on with a given patron, I will credit them as being somewhat educated, or at least informed.  However, in order to be – if not unobtrusive, at least “less obtrusive” – I did have to distance myself somewhat from the area and the people I was observing, and so was unable to hear exactly what was going on.  All I am left with, then, are “informed” assumptions and guesses.

One final point, then I’ll shut up:  concerning our discussion in class about giving physical descriptions of the subjects we’re observing.  My notes actually did include brief physical descriptions of each of the workers – but when I went to construct the narrative of my observations I intentionally left that out because I didn’t think it really added anything in this particular case…  Just wanted to throw that in for what it’s worth.

Reflection 2: Qualitative Methods

In thinking about “qualitative methods” as I understand them at this point, I suppose the first thing that comes to mind is how “squishy” and subjective they are – and I suppose this is true both in concept and in operation.  Coming from a field that enjoys putting as concrete a face on things as possible, and that looks for bright-line rules on which one may hang one’s hat, analyzing an issue or question subjectively is not exactly my comfort zone.  Not that I’m a whole-hog positivist by any stretch – but neither am I a gung-ho constructivist or naturalist.  I like to both explore AND explain.  Put simply, I am seeing more and more clearly that both quantitative methods and qualitative methods are somewhat incomplete on their own – and I find the case for at least considering the applicability of BOTH methods to a research problem or question more and more compelling. This is, I feel, especially true in my area of interest – which I will shorthand as “information policy concerning information rights.” I want both to explore and understand the dynamics behind policy by collecting information on how people experience it, and to explain the “output” or “upshot” of these policies by measuring their real effects – for example, their impact on information seeking and sharing behaviors – in the interest of trying to propose some new ways to rethink and recast policy to minimize its impact on our intellectual freedoms (specifically as expressed through our information seeking and sharing behaviors…).

Introduction and First Week Reflection Post

When I had to introduce myself during my MLIS program (at USF), I often referred to myself as a “recovering attorney”. I still rather think of myself that way. I had been a contract attorney for Marriott and for Citigroup for, collectively, about 12 years. With the financial industry tanking as it did after ’08, it was clearly not a stable field to be in – so I decided to pursue other interests “just in case” – and that led me to the MLIS program. This is not to say that I’ve abandoned my legal background – indeed, I view my studies in (L)IS as enriching it and allowing me to fuse the two disciplines, particularly in the areas of data privacy, information security/ cyber security, and intellectual property (copyright was one of my areas of focus in law school) – which, per an article I came across last semester, I have decided to refer to collectively as “information rights” (anything that might tend to either inhibit or increase one’s propensity to seek or share information). I am especially focusing on the policies/ legislation around these areas and how they encourage – or discourage – information sharing and seeking.  You’ll probably hear me blathering about this more than once this semester as we discuss and present…

Other than the foregoing – as my first-year cohort knows all too well, I am a very proud dog-mom (I have a 5-year-old Whippet named Robin who is my sweetheart.  We have the same birthday, too…).  I love Monty Python and pretty much Brit-coms in general. Huge mystery lover (Agatha Christie in particular, but also Ellis Peters… and of course, Arthur Conan Doyle).  I’ve recently gotten into “fractured fairy tales” as well – including some of the TV shows such as Once Upon a Time and Grimm.  Longtime Star Trek/ The Next Generation fan as well (Data is the man!!!)  I collect music of all types and have a fairly sizable collection – but I am most proud of my extensive soundtrack collection and some of the rare Broadway “concept albums” and other obscure pressings I have procured over the years.   I love history – Russian/Soviet and Roman in particular.  And I am sort of a self-taught expert on Tudor England – which is my favorite period.  Elizabeth I is my personal hero and has been since I was about 12.

Brief biog to close:  I was born in CA, lived in TX and MA, and went to school in WI (UW!!! U RAH RAH!) and at VU School of Law in Indiana. Then, as I mentioned, I went back to school after a number of years working (never mind how many…) to get my MLIS at USF (mainly because I was working in the Tampa area, and the campus was only about 20 mins from my office). When my department at Citi folded at the end of 2013 (we were “offshored”), I took the opportunity to take this new direction in both my personal and professional life and pursue my PhD!!

-Cheers
Cheryl