This blog has been written as the first part of the assessment of the DITA module for CityLIS.
I’m two weeks into my new master’s now and already so much information has been thrown our way! One of the modules we’re doing this term is Data Information Technologies and Applications (DITA). A bit of a mouthful, and from the first two sessions it seems the module will be just as jam-packed with topics as its title.
Once the introductory part had been dealt with last week, we dived straight into what is going to be a recurring theme throughout this master’s: what are the ethical implications of our increasingly digitised world, and how, as LIS professionals, should we engage with them? A post by David Beer acted as a jumping-off point for a wide-ranging discussion of the political implications of a digital world. A major point that came up (which may or may not be related to the fact that a number of the cohort are or have been teachers) is the issue of data/digital/information literacy. Are these the same thing? Who should be responsible for teaching it? Are any of them the same as statistical literacy? These are not new questions, but no wider consensus seems to have been reached about them. An interesting recent blog post from the ACRL suggested that in defining information literacy we need to teach people to ask:
“Can we meaningfully discern the human purpose (and, frequently, the human negligence) lying behind the information artifacts that occupy so much of our lives? How do our information choices make us more (or less) fully human?”
This seems to me a reasonable question, but one that does not touch on the numerical discomfort that can put people off engaging with information when it is presented as data (e.g. as a statistic). If you’re interested in the importance of statistical literacy, a book I’d heartily recommend is “The Tiger That Isn’t” by Andrew Dilnot and Michael Blastland, even to those of you who think of yourselves as ‘not a mathsy person’.
Given the increasing prominence of climate change in the news, it was fitting that this week we moved on to the potential environmental footprint of data creation, collection, and processing. It seems unlikely that as a society we’ll step back from digitisation and increased technological infrastructure (after all, the Luddites aren’t remembered for their success), but it does behove us to think more deeply about what is being digitised and stored, what is being done with this data, why, and by whom. Again the question of education was raised: a number of the class confessed they hadn’t really thought about what is being collected in the background (all that user metadata described so well in Jeffrey Pomerantz’s Metadata) or where everything in ‘the cloud’ actually is. And even now that it’s been brought to our attention, what can we do about it? One group pointed out that when you download a new mobile app it doesn’t tell you what the carbon footprint of using it is. Maybe what we need is a digital version of the white goods energy ratings system?
We were then given a brief history of computation and the internet. A small thing that stuck in my head was Lyn talking about FTP as though it were an obsolete protocol, as I was using it just a couple of months ago in my job! For the average internet user these older technologies may have been superseded by HTTP, but for those who need to transfer huge data files over the internet FTP is still very much part of the standard toolkit. Depending on what roles we go on to after this course, it’s important to remember that the users we may end up supporting will all have different information and technological needs and toolkits, and not all of them will be using the most common or current tools (shout out to the wide range of researchers still having to learn Fortran!).