Thoughts on the British Library Digital Labs Symposium

This is a joint blog post written with three other students from City, University of London’s MSc Library and Information Science course.

The British Library Labs (BL Labs) Symposium showcases innovative projects that use the British Library’s digital content and data, and provides a platform for development, networking and debate in the Digital Scholarship field. (Programme for the Sixth British Library Labs Symposium, 2018)

Stephanie McMullan:
All too often we think of data as lifeless and uncreative: reams of numbers, words and images, collected and flashing in front of us quickly and momentarily, used to generate statistics and graphs that reflect on what has happened in the past.
However, as I left the BL Digital Labs Symposium three weeks ago I was struck by the creative purposes data is being put to; data is bringing new things to life. I found this most striking in the “Imaginary Cities” project by Michael Takeo Magruder. Magruder is using the British Library’s collection of historic urban maps on Flickr to create artistic, fictional cityscapes for modern audiences. The images will change constantly over time and, using 3D technology, these cities will become accessible to the public through VR headsets for audiences to explore.
As we discussed in CityLIS week 6, information is needed for creativity and innovation. By making digitised images from its collections available, the BL has shown how the information libraries provide can be disseminated in new ways and used to inspire new generations of artists, researchers and scholars. The BL Labs Symposium showed us this again and again.

Sarah Feehan:
The symposium showcased the seemingly limitless uses for digitised content from the British Library Labs. The four awards – comprising the categories Research, Artistic, Commercial and Teaching & Learning – gave the audience a fascinating insight into what I’m sure is just the tip of the iceberg.
The winner of the Teaching & Learning award was Jonah Coman with their Pocket Miscellanies: a collection of miniature zines, each of which provides a short lesson on a different aspect of medieval visual culture, primarily featuring the marginalised bodies so often missing from mainstream historical media. The zines use images from the British Library’s medieval digital collection, which Jonah has permission to reproduce and make publicly available online as part of a free resource. The images’ copyright does not, however, allow them to be sold – so Jonah, an artist, has instead set up a Patreon through which the work can be supported without infringing copyright.
This is an area we touched upon in our morning session with Dr. Jane Secker in week 8, and which we will no doubt revisit soon, as it is imperative that information scientists and librarians understand it fully.

Susanne Trokhymenko:
I believe the purpose of knowledge is to create meaning in our lives, and true meaning can only be achieved if something is passed on to others to enrich and benefit their lives. A product, in this case the digital content and data of the British Library, is truly meaningful when it keeps on giving.
At the British Library Symposium I was amazed not only by the winners of the awards but by the whole range of finalists. The digital content available has been a source of inspiration for many. From research awards demonstrating the development of new knowledge to artistic awards, and from commercial to teaching and learning awards exhibiting the creation of quality learning experiences, it was truly remarkable to see the innovative use of the BL’s content for sharing knowledge and cascading it further afield.
From collections being used for a fashion show – designer Nabil Nayal drew on his PhD research into Elizabethan dress – to Jonah Coman’s zines (see above), the awards celebrate and recognise creativity, and encourage international collaboration (Pocahontas and after) with effective and exciting research and activities that enhance the Library’s digital content. All in all: Go BL! Can’t wait for next year…

Tim Darach:
During Daniel Pett’s talk at the BL symposium I was struck by how unreal the 3D objects looked up on the screen, floating phantasmal as if waiting to be picked up in a computer game.
I was expecting something more realistic – after all, they were made from photographs! But at that point I didn’t understand what these 3D objects were or how they were made.

During the break I put on a VR headset and tried to walk around a lion – nearly knocking over an expensive looking camera in the process.

What I’d experienced was actually a model – based on measurements derived from photographs (photogrammetry) – with colours and textures copied from the photos stretched over the top of it, bulging ever so slightly like a stuffed animal in a museum. These 3D models are an interpretation – like archaeological line drawings – except they are generated algorithmically rather than with a human eye. (The person producing the drawing has to decide where the boundaries are – where one feature ends and another begins – whereas for the 3D model this is done mathematically.)
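The measurement step can be sketched in its simplest case, a rectified stereo pair: a point’s depth falls out of how far its image shifts between two photographs. This is only a toy illustration of the principle – real photogrammetry software matches thousands of features across many photos and solves for the camera positions too – and the focal length, baseline and pixel positions below are invented:

```python
# Toy sketch of the measurement behind photogrammetry: for a rectified
# stereo pair, depth follows from disparity - the shift of a feature
# between the two photographs. All numbers here are invented.

focal_px = 800.0    # focal length, in pixels
baseline_m = 0.5    # distance between the two camera positions, in metres

def depth_from_disparity(x_left, x_right):
    """Z = f * B / d for a rectified stereo pair."""
    disparity = x_left - x_right
    return focal_px * baseline_m / disparity

# A feature seen at pixel column 420 in the left photo and 380 in the
# right gives a disparity of 40 pixels, so Z = 800 * 0.5 / 40 metres.
z = depth_from_disparity(420.0, 380.0)
```

Triangulating many such features yields the point cloud over which the textured mesh is then stretched.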

Detail of Roman bust in 3D modelling software
Wireframe detail from a PhotoScanPro 3D model of Roman bust in the Fitzwilliam Museum, Cambridge.

The idea of copies allowing an enhanced experience is not new – see, for example, the plaster casts that populate the Victoria and Albert Museum’s sculpture gallery and other nineteenth-century institutions:

“In the mid nineteenth century new casting techniques allowed for the production of huge architectural fragments. Well-selected collections could ideally display perfect series in galleries in which the visitor could wander among monuments and experience architecture history on full scale” (Lending, 2015).

In that case, though, the physical properties of the materials used to make the cast and the model mostly defined the outcome.

Viewing these 3D models on a screen or through a headset enhances the isolation of the artefact – it can be objectified, seen on its own, out of narrative context. We have been trained to accept the surreal juxtapositions of objects in museums and other cultural institutions, so in a way 3D modelling is another facet of that process.
The interposition of explanatory text and curatorial order is supposed to mitigate this surreality – or even disguise or rationalise how the artefacts came to be on display in a grand building. (Further complicated by our tendency to think we are looking at something authentic, whereas the provenance of many artefacts is in tension with this notion: such as restorations; reconstructions; fakes; copies made by the original craftsmen for collectors; authorised copies; versions made specifically for museums but left incomplete for cultural reasons…)

I wanted to learn more about the modelling process, so last week I went to the Fitzwilliam Museum photogrammetry workshop, where Daniel Pett taught us how to use modelling software to make 3D models from artefacts we photographed in the museum:

3D Model of theatre head
Click on the image above to view a 3D model of a stone head from the Fitzwilliam Museum.

If you swivel the model around you will see there is a hole in the top – you can look inside at the back of its face – I like this flaw; it is a reminder that you aren’t looking at reality.


Programme for the Sixth British Library Labs Symposium. 2018. [ebook] p.1. Available at: [Accessed 2 Dec. 2018].

Lending, M., 2015. Promenade Among Words and Things: The Gallery as Catalogue, the Catalogue as Gallery. Architectural Histories 3, np-np.



Autumn leaves, disordered in drifts – ethics?

In order to prepare for the DITA essay I’ve been reading around the subject of ethics, loosely centred on information ethics. In this post I’m going to focus on Luciano Floridi’s Information Ethics (IE).
One way that you can come to understand a theory is through discussing it: getting things wrong and being corrected, arguing your point again, and so on. When you are having that discussion in your own head it is hard to be caught being wrong, and the difficulties are compounded when you don’t initially understand the field: the meaning of terms within a text is defined by the whole, and the whole by its parts – the hermeneutic circle (Mantzavinos, 2016).
When nothing clicks it is difficult to know whether you just haven’t understood or whether you don’t share the author’s worldview.

Cherry leaves, autumn 2018. © Tim Darach Cherry leaves, autumn 2018. © Tim Darach

Floridi’s IE

“What is good for an information entity and the infosphere [information environment] in general? This is the ethical question asked by IE (Floridi, 1999).”

Floridi envisions IE as a “macroethics” (universal ethics), because by lowering “the condition that needs to be satisfied, in order [for an entity] to qualify as a centre of a moral concern, to [the entity’s] information state” IE can encompass all entities (Floridi, 1999).

And he means all entities: “not just all persons, their cultivation, well-being and social interactions, not just animals, plants and their proper natural life, but also anything that exists, from paintings and books to stars and stones; anything that may or will exist, like future generations; and anything that was but is no more, like our ancestors (Floridi, 1999).”

What is the purpose of this universal ethics?
I find Floridi unclear on this. Terrell W. Bynum says that IE is needed because traditional ethics does not account for the feeling of respect that humans have for nature and, because of its human focus, is not suitable for analysing the actions of artificial agents, e.g. computer algorithms (in Floridi, 2010).

Now for IE to be any use it has to be able to pluck an information entity out of the infosphere, to make it finite and coherent.

To do this Floridi grounds IE in an informational structural realist ontology and, using techniques adapted from computer science and applied to information systems, tries to formalise the relationship between objects and theory (Floridi, 2006, 2011; Yao, 2016). In other words, to use IE you have to buy into Floridi’s whole philosophy-of-information package – IE doesn’t work without his “Method of Abstraction”.

How do we decide what is beneficial for an information entity?
For Floridi “there is something even more elemental than life, namely being – that is, the existence and flourishing of all entities and their global environment – and something more fundamental than suffering, namely entropy (Floridi, 2006).”

But this is not entropy as we know it – neither the entropy of thermodynamics nor of information theory – it is “a meta-physical term and means Non-Being, or Nothingness […]. Metaphysical entropy is increased when Being, interpreted informationally, is annihilated or degraded (Floridi, 2008).”

(So, for example, the giant meteorite that slammed into Earth 66 million years ago and caused the mass extinction of the dinosaurs vastly diminished the Cretaceous infosphere, and would consequently be an “instance of evil” in IE. But no one would describe that event as evil. In fact, according to Paul Taylor of the Natural History Museum, “extinction has been just as important as the origination of new species in shaping life’s history” (Taylor, 2004).)

At this point I think it’s a good idea to look at one of the examples Floridi uses to elucidate IE:

“Imagine a boy playing in a dumping-ground. Nobody ever comes to the place. Nobody ever uses anything in it, nor will anyone ever wish to do so. There are many old cars, abandoned there. The boy entertains himself by breaking their windscreens and lights, skilfully throwing stones at them. He enjoys himself enormously, yet most of us would be inclined to suggest that he should entertain himself differently, that he ought not to play such a destructive game, and that his behaviour is not just morally neutral, but is positively deprecable, though perhaps very mildly so when compared to more serious mischiefs. In fact, we express our contempt by defining his course of action as a case of ‘vandalism’, a word loaded with an explicitly negative moral judgement (Floridi, 1999).”

After discussing how other ethical theories fail to capture the immorality in this example Floridi says:

“We come then to IE, and we know immediately why the boy’s behaviour is a case of blameworthy vandalism: he is not respecting the objects for what they are, and his game is only increasing the level of entropy in the dumping-ground, pointlessly. It is his lack of care, the absence of consideration of the objects’ sake, that we find morally blameable. He ought to stop destroying bits of the infosphere and show more respect for what is naturally different from himself and yet similar, as an information entity, to himself. He ought to employ his time more ‘constructively’ (Floridi, 1999).”

Commenting on this example Richard Volkman says it is “a profound error to suppose that these things “really are” windshields from the point of view of the universe. The boy treats them as targets. In fact, the story posits that no one will ever treat them as windshields ever again. In that case, why should that description be privileged? […] But in that case, there can be no way to respect this entity for what it is and promote its flourishing, since its flourishing under one ontology excludes its flourishing under another (Volkman, 2010).”
(Interestingly, Floridi replies to Volkman by saying that he is playing “rhetorical games” with Floridi’s simple example, and then rather disingenuously quotes from his 1999 paper, leaving out the sentence that starts “We come then to IE, and we know immediately why the boy’s behaviour is a case of blameworthy vandalism”!)

One could also use Floridi’s “entropy” argument against his own conclusion: the informational entities that the boy makes by throwing stones – such as new muscle memory, neural pathways and especially memories – easily counterbalance the loss of a few windscreens; the elements of which are actually just being returned to their natural state.
And Floridi does envision this kind of inverse relationship between information and entropy (between being and nothingness) similar to that in information theory:

“In IE, we still treat the two concepts of information and entropy as having the same inverted relation, but we are concerned with their semantic value: for example, as the infosphere becomes increasingly meaningful and rich in content, the amount of information increases and entropy decreases, or as entities wear out, entropy increases and the amount of information decreases (Floridi, 1999).”

(Though frankly I cannot conceive of how one would weigh up semantic content in this way – which contains more meaning: autumn leaves lying disordered in drifts along a street or the same leaves swept up and neatly packed in bin bags?)
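(For contrast, information theory’s entropy is at least computable for the two arrangements. This is Shannon entropy, not Floridi’s metaphysical entropy, and the toy encoding of “leaves at positions” is my own invention; it illustrates precisely why the measure says nothing about meaning.)

```python
import math
from collections import Counter

def shannon_entropy(observations):
    """Shannon entropy, in bits, of the empirical distribution of a sequence."""
    counts = Counter(observations)
    n = len(observations)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Eight leaves scattered over eight distinct spots along the street,
# versus the same eight leaves swept into a single bin bag:
scattered = [f"spot_{i}" for i in range(8)]   # maximally disordered
bagged = ["bag"] * 8                          # perfectly ordered

# Disorder scores 3 bits, order scores 0 bits - a statement about
# predictability, not about which arrangement is more *meaningful*.
h_scattered = shannon_entropy(scattered)
h_bagged = shannon_entropy(bagged)
```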

But Floridi doesn’t choose to examine the situation in this way, and despite decrying the anthropocentrism of other ethical theories, he opts to blame the human agent rather than the broader network of human action. It is only when you broaden the network of actions under consideration that the ethical exercise becomes useful – one in which you ask real questions about how the situation arose.
Such as: isn’t the greater act of vandalism dumping these cars there in the first place?
Or to question the process by which the windscreens and the cars came to be: the pollution created during their production vastly outweighs the boy’s destructive act.
Or question the oppressive infrastructure of car culture, which using IE could be described as a vast act of vandalism.

Floridi could argue that IE has actually done its job, because the questions I’ve asked can exist under the umbrella of IE, but what lies at the root of these questions is a challenge to the status quo. And I think the “information properties” that Floridi uses to define the nature of the infosphere reveal its static nature: they tend towards homogeneity. For example, #4 – persistency – is a good quality, but without change persistency leads to stagnation, and its countervailing “evil” entropic properties – volatility, transitoriness, ephemerality – are necessary for creativity (Floridi, 1999). None of the “good” properties of the infosphere are dynamic – except perhaps fertility and richness – and is order really “good” and disorder “evil”? Floridi would have his knowledgeable gardeners in “creative stewardship” over an orderly infosphere – but this is not how life proliferated on Earth.

Richard Posner – Chief Judge of the U.S. Court of Appeals for the Seventh Circuit and a lecturer at the University of Chicago Law School – writing about moral theory, says:

“Something in the nature of the academic enterprise causes the values of variety and heterogeneity too often to be overlooked. The oversight is particularly serious in the domain of morality. A uniform judiciary would not be a national disaster; moral uniformity might well be. A society of goody-goodies, the sort of society implicitly envisioned by academic moralists, would not only be boring; it would lack resilience, adaptability, and innovation. A society of Jewish or Islamic fundamentalists, Nietzschean Ubermenschen, or Japanese samurai would not be dull, but it would be brittle, frightening, and perilous (Posner, 1998).”

Finally, I want to briefly consider Floridi’s inclusion of artificial agents – such as computer algorithms – as moral patients and agents, able to perform good and evil acts (Floridi and Sanders, 2004). Expanding what constitutes a moral agent isn’t necessarily a bad move – indeed, notions such as joint enterprise and corporate manslaughter exist in UK law – but I think there are some issues with Floridi and Sanders’ approach.

Without a human presence – as part of an aggregate agent – there is no locus for intent, premeditation, choice or responsibility; the result is a dilution of the concept of morality (Capurro, 2008).
An artificial agent does not understand what it does: the WannaCry ransomware that infected NHS computers did not choose to do so, did not understand the consequences, and had neither a moral code nor a conception of evil – but its human programmers did. To say that artificial agents can be evil devalues the term.

By focusing on the morality of artificial agents alone, Floridi ignores the human choices that were made in their development, even if a negative outcome was unintentional (Franssen et al., 2009). I think a much more productive approach is to see that “technology enables (or even invites) and constrains (or even inhibits) certain human actions and the attainment of certain human goals and therefore is to some extent value-laden, without claiming moral agency for technological artifacts (Franssen et al., 2009).”

Will holding artificial agents accountable encourage or deter people/corporations/governments who knowingly design algorithms to commit immoral acts? Or is it better to call to account those involved in the creation of such algorithms? Is a universal ethics the best way to achieve this?


Capurro, R., 2008. On Floridi’s metaphysical foundation of information ecology. Ethics and Information Technology 10, 167–173.

Floridi, L., 1999. Information ethics: On the philosophical foundation of computer ethics. Ethics and Information Technology 1, 33–52.

Floridi, L., 2006. Information ethics, its nature and scope. ACM SIGCAS Computers and Society 36, 21–36.

Floridi, L., 2008. Information ethics: a reappraisal. Ethics and Information Technology 10, 189–204.

Floridi, L., 2010. The Cambridge Handbook of Information and Computer Ethics. Cambridge University Press, Cambridge.

Floridi, L., 2011. The philosophy of information (Online). Oxford University Press, Oxford.

Floridi, L., Sanders, J.W., 2004. On the Morality of Artificial Agents. Minds and Machines 14, 349–379.

Franssen, M., Lokhorst, G.-J., van de Poel, I., 2009. Philosophy of Technology. The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.).

Mantzavinos, C., 2016. Hermeneutics. The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.).

Posner, R.A., 1998. The Problematics of Moral and Legal Theory. Harvard Law Review 111, 1637–1717.

Taylor, P.D., 2004. Extinctions in the History of Life. Cambridge University Press, Cambridge.


Yao, Y., 2016. A triarchic theory of granular computing. Granular Computing 1, 145–157.

Follow the data

The title for this post comes from my attempt to gain an insight into the history of computing: I looked at how people used the technology of their time to process large volumes of information. I started out by seeking historical precedents for our current over-abundance of data:

“From Domesday Book and the beginnings of the English public records to the new forms of direct and indirect taxation of the fourteenth century, the pursuit of information had been integral to medieval government, and it was spectacularly energetic, closer to the seventeenth century, in the ‘age of plunder’ under the early Tudors.” (Slack, 2004)

So John Graunt’s Natural and Political Observations upon the Bills of Mortality (1662) was by no means the first example of information gathering in England, but it was “the seminal work which effectively founded not only political arithmetic [statistics], but also social statistics and historical demography” (Slack, 2004). Graunt collected and tabulated the historical records of births and deaths, the Bills of Mortality, from each London parish.

A page from the Yearly Bills of Mortality
An example from 1715 of the tabulation of Diseases and Casualties – the process of collection and publication continued after Graunt’s death in 1674.
Note that one person was “planet struck”, killed by malign celestial influence; probably attributable to the total solar eclipse that took place that year.
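Graunt’s basic operation – collapsing the weekly parish returns into city-wide totals per disease or casualty – is, in modern terms, a group-by-and-sum. The parishes, causes and figures below are invented for illustration, not taken from the actual Bills:

```python
from collections import Counter

# Invented records in the style of the Bills of Mortality:
# (parish, cause of death, count reported that week).
returns = [
    ("St Giles", "Consumption", 12),
    ("St Giles", "Feaver", 7),
    ("Cripplegate", "Consumption", 9),
    ("Cripplegate", "Aged", 4),
]

# Collapse the parish-level records into city-wide totals per cause.
totals = Counter()
for parish, cause, n in returns:
    totals[cause] += n

# totals["Consumption"] sums to 21 across both parishes.
```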

Graunt’s inclinations are interesting. In answer to the question he poses himself:
“It may now be asked, to what purpose tends all this laborious bustling and groping to know…”
He gives this answer:

“…I might answer, that there is much pleasure in deducing so many abstruse and unexpected inferences out of these poor despised bills of mortality…” That is, for the love of knowledge, but then more seriously: “Now, the foundation or elements of this honest harmless policy is to understand the land, and the hands of the territory, to be governed according to all their intrinsic and accidental differences[…] I conclude that clear knowledge of all these particulars, and many more, whereat I have shot but at rovers, is necessary, in order to good, certain, and easy government […] But whether the knowledge thereof be necessary to many, or fit for others than the sovereign and his chief ministers, I leave to consideration.” (Graunt, 1662)

I now turn to Carl Linnaeus, the eighteenth-century Swedish naturalist, because of his use of paper-based technologies – in particular paper slips resembling index cards – to organise his botanical and zoological specimens (Charmantier and Müller-Wille, 2014).
Linnaeus built up a large network of correspondents and former students – “apostles” – around the world, who sent him botanical samples. He was an “entrepreneurial genius in organising complex information networks in a peripheral European power” (Sörlin, 2006).
His networking and information gathering were dependent on, and part of, the increasingly globalised systems of the eighteenth century: postal communication, the ships and offices of overseas trading companies, and government-sponsored exploration (Charmantier and Müller-Wille, 2014).

He was so successful that he eventually became overloaded. One of the strategies Linnaeus used to deal with the large amount of information he received in the last years of his working life – 1767 to 1773 – was to record information about the samples on slips of paper, similar to modern index cards (Charmantier and Müller-Wille, 2014).

Linnaeus' paper slip for Cocoa plant (theobroma)
An example of one of Linnaeus’ paper slips (1767–1773), for Theobroma cacao, the cocoa tree, the seeds of which are processed to make chocolate &c.
(Apologies to The Linnean Society for not using their website’s embed feature; the WordPress editor truncates the code.)

Actually, one of Linnaeus’ students, Daniel Solander, preceded him in realising the practicality of using slips of paper to organise information. Solander moved to England and in 1763 was employed to catalogue the British Museum collection, where he used paper slips. After Solander died, subsequent curators continued to use the cards; they were no longer the work of one individual but represented the work of the institution (Charmantier and Müller-Wille, 2014).
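The affordance that made slips win out over the bound ledger is mundane but decisive: discrete records can be inserted and reordered at will. A minimal sketch (the species names other than Theobroma cacao are arbitrary examples, not taken from Linnaeus’ actual slips):

```python
# A drawer of slips modelled as a mutable list of records. Unlike entries
# written into a bound ledger, slips can be added at any time and the
# whole drawer rearranged under a new scheme.
slips = ["Theobroma cacao", "Quercus robur", "Pinus sylvestris"]

# A new sample arrives: write one slip and drop it into the drawer...
slips.append("Linnaea borealis")

# ...then reorder everything, here alphabetically by name.
slips.sort()

# slips[0] is now "Linnaea borealis" - the newcomer has been filed
# into place without rewriting the other records.
```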

Right – we are a long way from the computer, but one of the major computer companies, IBM, originates in the late nineteenth century, tabulating census data using punch cards (that is, index cards with holes in them), and the initial idea came from a librarian.

The young engineer Herman Hollerith went to work for the U.S. Census Office in 1879, where as a “distraction” from his other work he computed life tables for Dr. John S. Billings. While doing this “his attention was drawn to the need for mechanical aids in census tabulation” (Hollerith, 1971). For the 1880 census, 1,495 clerks produced 21,000 pages of reports from the census data using the tally system, whereby each census form was examined individually and check marks put in the appropriate box on a tally sheet, again and again, &c. (Campbell-Kelly and Aspray, 1996).
Hollerith’s daughters recall that their father got the idea for how to mechanise the tabulation process from Billings (a librarian of renown) who “suggested using cards with the description of the individual shown by notches in the edge of the card” (Hollerith, 1971).
His idea was to “record the census return for each individual as a pattern of holes […] on a set of punched cards […] It would then be possible to use a machine to count the holes and produce the tabulations” (Campbell-Kelly and Aspray, 1996).
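The scheme can be sketched as bit-counting: each card is the set of hole positions punched for one return, and the tabulator counts, on its dials, how many cards have a hole at a chosen position. The field layout and records below are invented for illustration, not Hollerith’s actual card format:

```python
# Invented field layout: which hole position encodes which attribute.
FIELDS = {"male": 0, "female": 1, "married": 2, "farmer": 3}

def punch(*attributes):
    """One census return becomes one card: a set of hole positions."""
    return {FIELDS[a] for a in attributes}

cards = [
    punch("male", "married", "farmer"),
    punch("female", "married"),
    punch("male"),
]

def tabulate(cards, attribute):
    """Count the cards with a hole at the attribute's position,
    as the machine's dials did."""
    pos = FIELDS[attribute]
    return sum(1 for card in cards if pos in card)

# tabulate(cards, "married") counts 2 of the 3 cards above.
```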

U.S. Census Bureau machine and operator
U.S. Census Bureau machine and operator, circa 1908, photo by Waldon Fawcett.
(The dials record the totals.)

“The 1890 census was processed in two and a half years compared to seven for the previous census” (Campbell-Kelly and Aspray, 1996).
So although initially developed to deal with a specific problem, punch-card tabulators went on to be “used for processing large amounts of data in many business firms during the first half of the twentieth century” (Yates, 1993).


Birch, T., 1759. A Collection of the yearly bills of mortality, from 1657 to 1758 inclusive. Together with several other bills of an earlier date.

Campbell-Kelly, M., Aspray, W., 1996. Computer: A History of the Information Machine, 1st ed. Basic Books, New York.

Charmantier, I., Müller-Wille, S., 2014. Carl Linnaeus’s botanical paper slips (1767–1773). Intellectual History Review 24, 215–238.

Graunt, J., 1662. Natural and political observations mentioned in a following index and made upon the bills of mortality.

Hollerith, V., Hollerith, H., 1971. Biographical Sketch of Herman Hollerith. Isis 62, 69–78.

Slack, P., 2004. Government and Information in Seventeenth-Century England. Past & Present 184, 33–68.

Sörlin, S., 2006. Science, Empire, and Enlightenment: Geographies of Northern Field Science. European Review of History: Revue europeenne d’histoire 13, 455–472.

Yates, J., 1993. Co-evolution of Information-Processing Technology and Use: Interaction between the Life Insurance and Tabulating Industries. Business History Review 67, 1–51.

Blistered fingers

I had wanted to write a post to get my eye in – I have it sketched out – but I have pre-course nerves, so I decided to do something simple and relaxing instead: picking out, with a LIS gaze, some of the websites I’ve saved to Pocket over the years.

(I’m doing this after spending a couple of hours in the rain at the allotment, picking runner beans and pick-axing a 4′ x 16′ bed for over-wintering broad beans. Hands thoroughly blistered.)

I did consider whether I should be using an official citing format, but opted for a simple title link followed by descriptive text.
And I wanted to try out the blogging software: I have had this blog for many years but never took to blogging. (Originally I’d planned to use it to show work in progress on sculptures I was making.)
I may need to be more curatorial if the list gets too long, or perhaps just stop! It’s interesting (for me) to see how the sites I’ve saved have changed over time.

The Rise and Demise of RSS

There are two stories here. The first is a story about a vision of the web’s future that never quite came to fruition. The second is a story about how a collaborative effort to improve a popular standard devolved into one of the most contentious forks in the history of open-source software development.

Identifying People by Metadata

You are your Metadata: Identification and Obfuscation of Social Media Users using Metadata Information, by Beatrice Perez, Mirco Musolesi, and Gianluca Stringhini.

Quartz: Google and Amazon’s move to block domain fronting will hurt activists under repressive regimes

Google and Amazon have blocked “domain fronting,” a method that allows developers to mask their traffic online and get around state-level internet blocks.

Stanford Encyclopedia of Philosophy: Phenomenological Approaches to Ethics and Information Technology

It seems obvious that a world with information technology is somehow different from a world without information technology. But what is the difference? Is it a difference of degree (faster, closer, clearer, etc.) or is it a difference of kind? Does technology shape society or society shape technology, or both shape each other? What is the nature of this shaping? Is it in practices, in ways of thinking, or is it more fundamental?

The Atlantic: The Era of Fake Video Begins

The digital manipulation of video may make the current era of “fake news” seem quaint.

Atlas Obscura: The Early 20th-Century ID Cards That Kept Trans People Safe From Harassment

Katharina T., a resident of Berlin in the early 20th century, had a deep voice and masculine appearance, and preferred to wear men’s clothing at home and in public. In 1908, they—there’s no record of which pronoun Katharina preferred—went to visit the sexual reformer and “sexologist” Magnus Hirschfeld, to apply for official documentation that would allow them to wear men’s clothing in public: a “transvestite pass.”

Atlas Obscura: Alphonse Bertillon’s Synoptic Table of Physiognomic Traits (ca. 1909)

Oreille droite (right ear).

Alphonse Bertillon’s Tableau synoptique des traits physionomiques was essentially a cheat sheet to help police clerks put into practice his pioneering method for classifying and archiving the images (and accompanying details) of repeat offenders, a system known as bertillonage.

Biohackers Encoded Malware in a Strand of DNA

…a group of researchers from the University of Washington has shown for the first time that it’s possible to encode malicious software into physical strands of DNA, so that when a gene sequencer analyzes it the resulting data becomes a program that corrupts gene-sequencing software and takes control of the underlying computer.

Atlas Obscura: Mapping Ireland’s Sheela-na-gigs

Sheela-na-gigs can be found all over the country, but it’s only recently that academics have been brave enough to study them.

Atlas Obscura: This Bot Generates a Fantasy World Every Hour

The Uncharted Atlas is a collection of maps of imaginary lands.

NGA Advisory Notice on “Web Mercator”

The NGA Geomatics Office has assessed the use of Web Mercator and other non-WGS 84 spatial reference systems may cause geo-location / geo-coordinate errors up to 40,000 meters. This erroneous geospatial positioning information poses an unacceptable risk to global safety of navigation activities… Many popular commercial visualization, mapping and mobile web device applications use a Web Mercator spherical earth reference.

feuilleton: Science Fiction Monthly

Recent uploads at the Internet Archive include an incomplete run of British magazine Science Fiction Monthly, a large-format collection of SF art and original fiction that ran for 28 issues from 1974 to 1976.

Introducing Open Access at The Met

As of today, all images of public-domain works in The Met collection are available under Creative Commons Zero (CC0). So whether you’re an artist or a designer, an educator or a student, a professional or a hobbyist, you now have more than 375,000 images of artworks from our collection to use, share, and remix—without restriction. This policy change to Open Access is an exciting milestone in The Met’s digital evolution, and a strong statement about increasing access to the collection and how to best fulfill the Museum’s mission in a digital age.

Atlas Obscura: The Stunning Early Infographics and Maps of the 1800s

Educational diagrams of scientific discoveries, from the moon’s surface to the longest rivers.


A new tool for demonstrating the qualitative effects of nuclear weapons.