There are things I wish students knew before they emailed me or dropped by my office to ask me, awkwardly (and I get the awkwardness), to write them a letter of recommendation. This comes from having had a lot of these encounters and having written hundreds of letters.
First, an email inquiry is fine. It’s awkward to ask, and doing it in writing is easier than doing it in person. In fact, it’s often better, because you can organize your thoughts and the whole exchange can be briefer for the recommender than a face-to-face meeting: I can read your email, and reply that I am happy to write a recommendation for you, in less than fifteen minutes.
Here are some things that make it easier for me, and other faculty, to say yes:
Be polite, even a bit formal. You are asking someone to spend an hour thinking about and writing a letter which will, with luck, convince some other person to say “yeah, we should admit this person to our program.” (It all runs on people, and so be nice, and respectful, to the people involved!) Part of that respect is respecting the time of the person you are asking to write a letter of recommendation and that means giving them as much information as you can upfront:
Make sure to mention the class(es) you took with them and what made the class memorable. Details matter here. An anecdote brings things to life. Write a one-sentence summary of the class from your point of view. And then write a sentence that summarizes a big or final project that was your contribution to the class. (I always like remembering student essays, but I do need to be prompted.)
Make sure to explain which programs, or kinds of programs, you are applying to and why. Just a sentence or two to tell the letter writer why you have chosen these programs and not others. Again, details matter. If you have a statement of purpose already drafted, feel free to copy and paste from that. (We will talk about the whole document in a moment.)
That’s it. You’re done. You’ve given your possible recommender sufficient information to make an initial decision with greater comfort and awareness than simply asking “would you write me a letter of recommendation?” (I’ve gotten a lot of emails just that short, leading me to develop the list above!)
Finally, offer to send a follow-up email with the following documents attached: draft statement of purpose, a transcript (unofficial is fine), a resume (if you have one) as well as a copy of that big project. This lets your potential recommender know that you’ve done your homework and are ready to make their task as easy as possible: writing a letter of recommendation when you can talk about a student’s overall course work (transcript and, possibly, resume) as well as the particular course(s) they took with you (copy of project[s]) and their future hopes and prospects (statement of purpose) makes for a much stronger recommendation. Readers of these documents are used to superlatives in the first paragraph of a recommendation letter. What they are looking for is how a letter writer substantiates those superlatives (or doesn’t).
I hope this helps! Good luck in your search, and I’m glad you are thinking about asking me to write a letter of recommendation for you.
Someone asked me about audiobook options recently. I have been a paid subscriber to Audible in the past, both before and after its acquisition by Amazon, and I have also listened to audiobooks on Spotify and via Libby. (And there have been more than a few podcasts that ended up being audiobooks via long-form audio theater — is that how to describe what used to be radio theater?)
Doing a bit of spelunking on the web, and polling a few people, I came up with the following:
Free Options
Your local public library remains the best way to listen to audiobooks at little to no cost. I am listing Libby here, but there is also CloudLibrary. I have used Libby and really liked it. I have used CloudLibrary and really not liked it: the app seems to need to refresh itself every time you open it and the selection available, at least through my public library in the south, is very limited. There is also LibriVox / Project Gutenberg which offer free audio recordings of public-domain classics (like Pride and Prejudice or Sherlock Holmes), but I have not checked them for production quality.
Low-Cost Subscription & Rental Models
If your library doesn’t have what you want, the lower-cost paid routes include Audible Plus, Spotify Premium, and Everand. As I noted at the top of these listings, I have been an Audible subscriber and I built up a fairly good catalog during that time. I am no longer a premium subscriber, but I keep my eyes open for intermittent sales, where I often pick up a book or two for $5–$8 each. I have had Spotify Premium, and its audiobook listening model is fundamentally broken: it only allows the account owner access to audiobooks, and it has an arbitrary cut-off point of 15 hours. (It was 10 when I downgraded my subscription to remove the “free” audiobook option.) A sensible unit would be one book. Maybe that book is 8 hours; maybe it’s 16 hours. I’m pretty sure it would average out. Either way, you would not find yourself in the middle of a long drive when the novel you started comes to a sudden stop.
Low-Cost Purchases
If you prefer to keep your audiobooks forever but don’t want to pay retail prices:
Chirp: This site specializes in massive “flash sales” on audiobooks (often $1.99–$4.99) with no monthly subscription required.
Audible Credits: If you sign up for the annual plan (12 or 24 credits at once), the cost per book drops to roughly $9.50–$11.00, which is far cheaper than the $30–$40 retail price.
Libro.fm: Matches Audible’s pricing ($14.99/mo for one credit) but gives the profits to an independent bookstore of your choice.
And I don’t think this note would be complete if I didn’t mention that if you are shopping on the 800-pound gorilla’s website and you see a book you want, check if the Kindle version is cheap or on sale. Often, buying the Kindle book for $0.99 or $1.99 allows you to “Add Narration” (Whispersync) for a discounted price, which is frequently cheaper than an Audible credit.
Some time ago I sat down to try to think of a way to visualize how choices work. I sketched out a branching network, much like the image below whose origin I have long forgotten. (My apologies to that creator. And my thanks.) The point I was trying to make, in my head, to my daughter was that, yes, of course certain choices preclude certain possibilities, but none of the branches necessarily are closed in the future.
Beyond that simple idea, what I was really trying to find a way to articulate, and to visualize, is that certain places offer you more choices, more branches, more paths, than others. One of the things you can aspire to do, if you value having more choices, is to identify these rich nodes in your life path network and to find ways to get there. Is there a school you need to attend? An organization for whom you need to work or volunteer? A book you need to read?
It goes without saying that few of us can ever know for certain, but it struck me that, with some guidance from elders (like me, like me!), younger people could perhaps position themselves in such a way as to maximize open paths. (I hesitate to call them opportunities.)
I never did have this conversation with my daughter, but I have thought about the idea off and on for five or six years now. I keep hoping I will have some great insight, and maybe I will.
The place of English departments has already changed to a large degree, with many faculty simply not aware of it. Undergraduate enrollments are down across the country, perhaps due in part to what has happened here in Louisiana with the outsourcing of freshman composition to high schools. There has yet to be a complete assessment of the enrollment drop, but the rising focus on STEM education and on “bank-able” careers has certainly not helped. So far, the humanities in general and English departments in particular have not done well in making an argument for pursuing degrees, or at least significant coursework, in their disciplines. The argument that we offer critical thinking skills or advanced writing skills has not lessened the drop in enrollment. (It does not help that we leave “critical thinking” and “advanced writing” largely undefined, such that neither faculty nor students understand what it is students will take away.)
Few students I have ever known, especially in a place like south Louisiana (where I myself grew up and went to university), entered college planning to major in English. Rather, we were lucky, usually post-composition, to take a general elective early in our college career where we were amazed at the opportunity to think about compelling, complex documents in ways that made us feel our own thinking made a contribution. This, I would argue, is significantly different than many of the lower-level STEM courses where the focus is often on entrainment. One immediate response to the enrollment crunch is to get those faculty who want to attract students into the field into the courses which fulfill general education requirements.
That’s an initial response, but it doesn’t address the larger question of what an English department does in general and what in particular a degree in English is good for. There seems to be a certain bankability in the human, especially the adolescent human, need to express oneself. Socially and culturally, college is often imagined as the last stage in the process of formalized individuation that seems to be one of the missions of educational institutions. It’s where students go to “discover who they are” and to “choose who they are going to be” or “what they are going to do.” (The American cultural machine has slowly ground down the higher ideals of universities from providing generalized intelligence and abilities to specialized ones — no wonder we fear AI; we fear something will take over now that we’ve crippled ourselves.)
Given this socio-cultural focus on individuation, creative writing has risen over the past few decades to have a significant place in English departments. But refining students’ abilities to write poems, short stories, novels, or plays doesn’t prepare them any more for the workplace than does studying or writing about them. Both engage in a lot of hand waving, like an illusionist getting us to pay attention to her left hand while she palms an object in her right hand.
To address this gap between a classical education, which is what an English major remains to a large degree, and a professional education, departments like the one I am in have introduced things like technical writing and/or professional writing, which we sadly promptly ghettoize, much like we do freshman composition and lower-level gen ed courses. We are literally out of touch: we are not in the classrooms where the majority of non-English majors are. That doesn’t have to be a bad thing in as much as there is no argument being made here that the English major should survive, though it is not clear that any department could survive purely as a service department to other disciplines. What I have seen is that it doesn’t take much for business schools to hire business writing faculty or their own business ethics faculty. The same is probably true for the sciences and social sciences.
So we come back to the question of what an English degree could be good for. (Hat tip to Wendell Berry for his inestimable What Are People For?) Like much of the rest of the humanities at all levels of university study, and like advanced courses in the human sciences and sciences, it is good for thinking. Like the rest of the humanities, it’s good for thinking about complex topics that do not arrive with a manifest structure, unlike a chemical compound or an economic formation. Rather, the humanities in general, and English in particular, seem to be at their best when they attempt to tackle complex objects whose underlying structure must be teased out. This is what I do when I tackle a collection of conspiracy theories in particular or the idea of conspiracy theories in general. It’s what my colleagues do when they examine a novel or a cluster of novels—or poems or plays or sentences (oh, linguists!). We examine writing in and through more writing. Other disciplines face similar conundrums or limitless halls of mirrors—look no further than mathematics—but they don’t do it with, in, and through writing.
And the ability to write, especially to write on complex subjects, remains a gold standard for advanced democracies. And that’s why English departments, and degrees, remain important to me.
To my mind, there are clear echoes of Mikhail Bakhtin’s chronotopes in text world theory (see Ernestine Lahey’s terrific encapsulation linked below), and so it was good to see an article using chronotopes in the most recent issue of Signs and Society.
I am old enough to remember helping my father adjust the directions of the television antenna in order to achieve the clearest picture when watching broadcast television. I remember the large brown box with the giant dial on which he carefully marked the direction the antenna should be facing to receive a particular station’s signal. At the time, there were three broadcasters – CBS, NBC, and ABC – in the VHF spectrum and just one in the UHF, the nascent PBS. As some people who continue to receive broadcast television “over the air” know, reception was free. It was the advertising that paid for what you watched.
At some point in the seventies, most homes changed over to cable reception. While you paid for the service, in return you got a constant, clear signal and you also got additional channels – I think this is when the idea of “channels” emerged? – which became known as “super stations” and later became the basis for broadcasters like TBS. (There was a station out of Chicago, but I don’t remember what it was called and I don’t know what became of it.) In the era of cable television, we both paid for television and still got served advertising. We understood that cable was a middle man, a broker if you will, and some of what we were paying for was not only reception but the additional channels. (This is when ESPN got its start?)
And then along came the internet revolution and a lot of us became “cord cutters,” with the idea that we would subscribe directly to broadcasters, who were now reborn as “content providers.” This new model promised an era of better television because the audience would pay for it and we would have fewer “suits” mucking about on the content creation side. These new providers, which now included Netflix and Amazon, were going to act like studios, which are themselves a kind of broker, acting as a hinge point between actual content creators and their audiences.
It was a splendid moment. But the suits were not going to stay away for long, and soon they overbought and overbuilt and their “costs” went up and the subscriptions most of us had agreed to were no longer enough and, sigh, that ended in the return of advertising. So, like in the era of cable, we pay to have access to content which is additionally monetized through advertising. And this is true except for Apple TV+ and Disney+, which may be one reason to subscribe to one or the other. (A little pricey to subscribe to both.)
Each service offers a way to “opt out” of receiving ads by paying for a Plus or Premium version of the service, which in the case of PeacockTV, for example, is a 50% increase.
So what? That’s how business works. True. And what that means is that our little household now does what other households do: we sign up, watch what we know we want to watch (and purposeful watching rather than grazing is probably better for us) and then we cancel the subscription, perhaps changing to another subscription in the process.
One of the best responses to what AI brings to the table, amidst all the other hand waving or wringing, is Ted Underwood’s observation that we have an opportunity to use the geometries encoded within the models to map culture. Underwood uses a broad brush here, but implicit in his account, I think, is that the kinds of maps we are talking about are localized: perhaps to a given microculture (or speech community) or perhaps to a given genre.
Take for example the conspiracy theory as a genre: what if we took a baseline transformer, or perhaps a more fully kitted-out model, and gave it a corpus of conspiracy theories to digest? Could we then lift the hood of the transformer and see some of the workings of CTs for ourselves? I am particularly curious how such an effort might reveal patterns we had not noticed or even offer us a synopsis which at first glance doesn’t make sense. As Farrell et al. note: “the probability distributions of long word sequences are … imperfect representation[s] of language but contain a surprisingly large amount of information about the patterns [they] summarize” (2025:1154). (Hat tip to Underwood for the reference!)
To be fair, Farrell et al. are more interested in large-scale analytical possibilities. They note, for example, that “such technologies could be used to find patterns in texts and images that crisscross the space of human knowledge and culture” (1155). As someone with more than a passing interest in recent developments in cultural evolution studies, I find the ability of LLMs to discern such patterns very exciting, but starting small and focused is perhaps the best way to begin. The opportunities for a productive tension between algorithmic results and conventional ways of understanding are terrific here.
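To make the intuition concrete without a transformer, here is a toy Python sketch: even raw co-occurrence counts give every word a vector, and the geometry of those vectors already separates topical vocabularies. The corpus, window size, and function names here are invented for illustration; a real study would inspect a trained model’s embeddings rather than bag-of-words counts.

```python
import math
import re
from collections import Counter, defaultdict

def cooccurrence_vectors(texts, window=2):
    """Map each word to a Counter of nearby words -- a crude 'geometry' of the corpus."""
    vecs = defaultdict(Counter)
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        for i, w in enumerate(words):
            # count every word within `window` positions of w
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    vecs[w][words[j]] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0
```

Feed it a handful of cover-up sentences and a couple of cooking sentences, and “government” lands measurably nearer “agency” than “flour”: the patterns are in the distribution, just as the Farrell et al. quotation suggests.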
Farrell, Henry, et al. 2025. Large AI models are cultural and social technologies. Science 387: 1153–1156. DOI: 10.1126/science.adt9819.
If, like me, you read a lot of manuscripts in Word and then meet with their authors to discuss the work, and if, like me, you prefer to make compelling, constructive comments, then you are left flipping or scrolling through a manuscript to remind yourself of what you said. To avoid that, you might want to print those comments. Here’s how to do it:
While in the Word document for which you wish to see comments, go to print the document.
In the Print dialog box, go to the Microsoft Word option — you may have to click the > next to it to unfold it.
Click on the dropdown menu next to Print What: and select List of markup.
Click Print and it will print your comments as well as, sadly, all the changes you’ve made. I tend not to make a lot of editorial changes in a document but, rather, focus on higher-level matters in hopes that, as the topical and methodological kinks get worked out, so too will the grammatical ones.
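If you would rather pull the comments out without the tracked changes tagging along, it helps to know that a .docx file is just a zip archive and that comments live in word/comments.xml. Here is a small Python sketch, standard library only, that lists each comment’s author and text; treat it as a starting point rather than a robust tool.

```python
import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace used throughout a .docx file
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def extract_comments(docx_path):
    """Return (author, text) pairs for every comment in a .docx file."""
    with zipfile.ZipFile(docx_path) as z:
        try:
            xml_bytes = z.read("word/comments.xml")
        except KeyError:
            return []  # the document has no comments part
    root = ET.fromstring(xml_bytes)
    comments = []
    for c in root.iter(f"{W}comment"):
        author = c.get(f"{W}author", "")
        # a comment body is one or more paragraphs of runs of w:t text
        text = " ".join(t.text or "" for t in c.iter(f"{W}t"))
        comments.append((author, text))
    return comments
```

Print the pairs, or write them to a text file, and you have a comment list you can carry into the meeting without the sea of redline edits.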
There are a few ways to customize an LLM to allow you to bring your own library of sources to bear on a problem. This could be anything from that collection of manuals for all your home appliances to scholarship on conspiracy theories. What matters is that you have the documents available in machine-readable form. For most people, this will be PDFs with text layers embedded, either because they were produced that way or because they were later OCRed. (Note to fellow scholars: early OCR was pretty ragged. I often re-OCR documents from JSTOR that are from its early days.)
From hardest to easiest, your customization options are: retraining an LLM, retrieval-augmented generation (RAG), and adding context documents. For most people, the first option is not really an option, for a number of reasons. First, a lot of the LLMs, like ChatGPT, are proprietary. Second, retraining requires hardware that is expensive, and, third, it also requires the ability to work in Python. (Python is a very friendly language, but no matter how friendly it is, creating or customizing something as complex as a transformer is a bit mind-boggling.)
A simple example might help: last year as part of an introduction to text analytics class, I thought I would see how do-able it would be for students to train a GPT on a small data set. I downloaded a collection of around a thousand jokes from Reddit and started the training process. Following the recommendation that I should have at least 30 epochs, I hit return and initiated the process: each epoch used all 8 of my M1 Macbook’s CPUs and took an hour. By the end of the second hour, the thing was getting so hot that I called it quits. So, yeah, you at least need dedicated hardware.
With retraining an LLM off the table, the easiest option is to provide context for your session. You can do this by uploading documents to a particular session or, with some LLMs like ChatGPT, you can create a custom GPT that references a collection of documents you upload. (These can even be shared, either with the public or via a link.) The advantage of providing context documents is that it is straightforward and relatively easy. But the information lives in the model’s context window, so it works best with relatively small collections: the model rereads the entire context each time it answers a question.
The somewhat more involved option is to set up retrieval-augmented generation. RAG lets an LLM be more selective, retrieving only the passages relevant to a given question, which makes it better suited to larger, and more dynamic, document collections.
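To see what “more selective” means in practice, here is a deliberately tiny Python sketch of the retrieval step. It uses bag-of-words counts and cosine similarity where a real RAG pipeline would use learned embeddings and a vector store, but the shape of the process is the same: score the library against the question, keep the top passages, and hand only those to the model. All names here are illustrative.

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words vector; real RAG systems use learned embeddings instead."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    return sorted(documents, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

def build_prompt(query, documents):
    """Prepend only the retrieved passages, not the whole library, to the question."""
    context = "\n\n".join(retrieve(query, documents))
    return f"Use the following passages to answer.\n\n{context}\n\nQuestion: {query}"
```

The payoff is the last function: instead of stuffing your entire appliance-manual collection into the context window, only the two most relevant passages ride along with each question.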
I spent a portion of today following Cody Wabiszewski’s guide on how to download and run the Dolphin Llama 3 model. His directions are quite good, but they miss a few things, especially when it comes to setting things up on a Mac. So for those Mac users who want to expand the scope of running an LLM locally beyond Apple Intelligence, I suggest the following:
Head over to the Ollama website to download one of the available models. Wabiszewski recommends dolphin-llama3 and that’s what I went with.
You start with the Ollama.app. (I’m not crazy about this, but I wanted the friendliest possible way of doing this because I wanted to make the process reproducible for others … a notion that is somewhat contradicted by the next step.)
Once the Ollama app is running, you can then use the terminal to run, in two different tabs, ollama serve and ollama run dolphin-llama3. (The latter command downloads the dolphin-llama3 model, which is about 5GB. Complete information on the model is available on its Ollama web page.)
Now, there is a slight complication: when I downloaded the Dolphin Llama 3 model, it was not located where any of the posts I read suggested. Instead, I needed to use locate to find the directory. (If you have not run locate before, you will need to let it build its database, but that does not take long – less than 10 minutes on my M1 MacBook Air with 1TB storage.) Eventually I found it in a hidden directory in my home folder: ~/.ollama. (I would eventually like to move that to an unhidden directory dedicated to models, but I left it there for the time being.)
Download the AnythingLLM app, install it, and then run it. Create a workspace as directed, then click on the setting gear and then the Chat Settings. If ollama serve is running, you should be able to choose it as an option. There are several clickthroughs for this.
Once this is done, you are able to run an LLM locally. (BTW, this one has no guard rails, so be careful what you ask!)
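Once ollama serve is running, the model also answers over a local HTTP API (by default at http://localhost:11434), so you can script against it instead of, or alongside, AnythingLLM. A minimal Python sketch, assuming the default port and the dolphin-llama3 model from above:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a locally served Ollama model."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

def ask(model: str, prompt: str) -> str:
    """Send the prompt to the local model and return its response text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Requires `ollama serve` running with the model pulled, e.g.:
# print(ask("dolphin-llama3", "Summarize text world theory in one sentence."))
```

Because everything stays on localhost, nothing you ask leaves your machine, which is most of the appeal of running a model locally in the first place.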
Now that the spring semester is wrapping up, along with the cold that hammered me in the last half of the last week of classes, I am turning my attention to the many, many, many uses of AI. I have not explored this space for a while, and while I knew that there had been an explosion in the commercial space, I was not aware of the growth of AI in support of research.
I am currently reviewing:
Sapien, by AcademicID, whose chief claim rests upon “Messages [being] backed by actual research, significantly reducing the risk of hallucination.”
SciSpace seems more focused on an interactive query structure where the AI asks you to narrow your focus more.
Finally, there is NotebookLM which has already been discussed quite a bit, but I am only just now catching up to that discussion.
I was asked to be a part of a faculty roundtable for this year’s Global Souths Conference, a conference hosted by graduate students in UL-Lafayette’s Department of English. This year’s effort was very well done, and I look forward to seeing what they do next year. Here are the questions they gave us in advance that might be discussed (in italics) along with my initial written responses, some of which I said during the roundtable and much of which I did not. (It was a lot of people and not a lot of time.)
How can scholars, educators, and practitioners work collaboratively to address the challenges facing Global Souths communities? What kinds of interdisciplinary approaches do you find promising?
The largest problem facing Global Souths communities is an economic world order where capital and, to some degree, information flow freely but people cannot. (In those places where immigration policies were developed ostensibly to undo this problem, we have seen disastrous outcomes when nativism emerged and brought about things like Brexit, the closing of borders in Europe, and the election of the current U.S. president.)
I am not an economist, nor a sociologist, so I can make no contribution to the study of capital and people flows, but I can contribute to information flows. The work that excites me the most right now ranges from micro-scale studies of the intertwined rise of colonialist and orientalist discourses, such as we heard about this morning from Yazdan Mahmoud, to macro-scale studies that track conspiracy theories and how they transit social networks, localizing as they do so.
In the face of the re-emergence of colonialism as we move away from what historian Sarah Paine has described as the maritime-mercantile order back to the imperial, zero-sum order, I fear that the role of the humanities scholar is primarily to document (and analyze) both the mechanisms of power and the machinations it works upon its victims. What we can hope for, I think, are those moments where we can protect or save someone from greater harm—but, make no mistake, harm will be done to many, if not all.
In what ways do you see the Global Souths as a conceptual framework rather than a geographic location, and how does this shape your scholarly or pedagogical approach?
The original goal of folklore studies, in conjunction with its cousin anthropology, has always been to find wisdom and beauty in the everyday existence of all the people who don’t normally make it into history. The field has made mistakes, often not crediting individuals who stood in for a larger group (a kind of hidden iconography) and because it often deals with actual people living actual lives in the current moment, it has done damage. It has also contributed real value to the academy and local communities.
I don’t know that I use Global Souths as a conceptual framework for scholarship and pedagogy so much as a way to survive institutionally: I’m a field research oriented folklorist in a department overwhelmingly focused on books and adjacent literary concerns. Majoritarian rule is real for me, and it impacts the evaluation of my work, my ability to win funding, and the seriousness, or lack thereof, with which my approach is taken. Intellectual classism is real.
What contemporary crises—whether political, economic, environmental, or cultural—do you see as defining the Global Souths today? How do these crises intersect with your own area of research?
See my response to the first question for the “nature of the problem.” As to how these things intersect, sometimes in a devastating way. When the 2005 hurricanes hit, I was halfway through a book on gumbo whose subtext was how Africanized so much of all Louisiana folk cultures are. Within a month, half the people I hoped to interview were uprooted, displaced, and relocated in ways that made them difficult to find again.
How can universities better support research and teaching on the Global Souths? What institutional challenges exist in foregrounding these perspectives in curricula? Colonialism has left a lasting mark on the Global Souths. How can scholars avoid colonial research practices in their work? How do colonial legacies continue to shape education?
We can stop doing things that replicate the very structures that reinforce colonialism and capitalism, like venerating celebrity academics and authors or engaging in intellectual classism. And, for goodness sake, stop imagining that the answer we know is either the only answer or the better answer. The whole point of folklore studies, of anthropology, and of decolonization in general was to argue that there is no one answer, that we all must live with partial answers because each of us is always already a partial self, and nothing capitalism or colonialism has to offer is going to fix that part of being human. (Well, to be clear, it works for the capitalist and the colonialist, but it depends upon a one-to-many ratio to work.)
With a number of Jewish holidays coming up, I thought it would be useful to look up other holidays which might also affect students’ ability to work. (In some cases they may be prohibited from working, and in others it may simply be that working is difficult.)
Someone recently asked about folklore podcasts. Following my folkloristic ideals, as well as allowing for the fact that I can only carry so many podcasts on my listening list, I outsourced the problem to fellow folklorists. (Some may assume that this was a function of laziness, but I can assure you that, yawn, what was the question?)
Here is the list that emerged:
Digital Folklore focuses on how thinking about digital culture through the lens of folklore studies forces us to expand the scope of both culture and its study. It’s presented in a narrative format which can get a bit wild at times.
Folkwise shares the importance of folklore by drawing attention to the folklore happening around us every day through digital media.
The Fairy Tellers podcast explores what myths, legends, folklore, fables, and fairy tales have to say about cultures then and now.
The Appalachian Folklore Podcast describes itself as a “wild hike through the history and migration of the folk culture, stories, traditions, and haints hidden in hills and hollers of Appalachia.”
Morbid Curiosity, a history program, got high marks from one of the respondents. It describes itself as being about everything from serial killers to ghosts, ancient remains, and obscure medical conditions.
If you know of other podcasts that should be listed here, please let me know so I can update it.
Back around 2008 I became interested in establishing an institutional repository for the university where I worked. By 2010, having spoken to a wide variety of stakeholders both within and without the university, I had a pretty good draft of what I thought was possible. It’s been passed around ever since, but we never got a repository.
The modern research and teaching university must emphasize that it both creates knowledge that it disseminates directly through diverse channels and that it teaches others how to create knowledge. An institutional repository is foundational in any effort to highlight the university’s centrality in knowledge creation. The institutional repository is where we collect the raw materials that drive research, publish analysis and results, and open the institutional doors to higher education’s many publics.
In the last year, a variety of developments have greatly increased the interest in establishing frameworks for securing data and making it available in a highly configurable fashion. As various units became aware that there was a common interest, it became clear that the best way forward was to arrive at a common set of requirements that would lead to a widely-available resource that would suit the greatest number of users.
Over the past two decades of teaching in university settings, I have regularly been called upon to teach either a specialized research methods course for folklore studies or one for all the domains represented in an English department, from the very science-oriented field of linguistics to the belletristic domains of creative writing (sigh) and some of literary studies. I also regularly teach undergraduate and graduate courses that feature a research project whose end goal is a research paper, aka scholarly essay or article, aka term paper.
While I myself do not find the rather regimented nature of science writing to be the kind of writing I wish to do in all my essays, I have found it useful to master as a process for getting good results. That is, if you do all these things, then you have some assurance that you have done competent work. (Its worth or merit or utility will have to be decided by the field into which it is cast, like bread upon a pond surface, or perhaps it’s a lake, depending on the size of the field.)
For my students coming here for the visualization of the structure, the diagram is below. The OPML file that goes with it is also available and can be opened in most modern word processors. (I know Word will open it.) Before anyone splutters, obviously how much of this structure you realize depends on the nature and scope of your project. This is an idealized structure. Consider it a guide and not a specification.
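For those curious about what is inside the file, OPML is just nested XML, readable with Python’s standard library alone. Here is a minimal sketch; the two-entry outline is a made-up stand-in, not the actual file linked above:

```python
import xml.etree.ElementTree as ET

# A tiny stand-in OPML document: OPML is just XML with nested
# <outline> elements whose "text" attribute holds each entry.
OPML = """<opml version="2.0">
  <head><title>Research Paper</title></head>
  <body>
    <outline text="Introduction">
      <outline text="Research question"/>
    </outline>
    <outline text="Methods"/>
  </body>
</opml>"""

def outline_to_list(node, depth=0):
    """Flatten nested <outline> elements into (depth, text) pairs."""
    items = []
    for child in node.findall("outline"):
        items.append((depth, child.get("text", "")))
        items.extend(outline_to_list(child, depth + 1))
    return items

body = ET.fromstring(OPML).find("body")
for depth, text in outline_to_list(body):
    print("  " * depth + text)
```

Swap in `ET.parse()` with the downloaded file’s path to walk the real outline the same way.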
Here are the courses I am teaching this fall. The 432 is a regular feature now in our folklore course offerings and I have taught it with a focus on legends and information cascades for a few years now. The 370 is a new course, and I have to say I am looking forward to teaching a course with a non-folklore focus. I don’t get to do this often, and I am excited to see what students bring to the table.
ENGL 370. Interactive Fiction & Narrative Games. Branching narratives, interactive fiction, text adventures, CYOA all describe a form of entertainment—be it literary, performed in a group, or in a video game—in which a reader is given choices and their choices determine the nature and outcome of the story. This course explores the history of narrative games, from collaborative storytelling in oral cultures to robust open-world games to cinematic universes in which multiple storylines exist (and sometimes interact). Course inputs include reading, viewing, and playing. Course outputs include analytical explorations of forms and mechanisms and the development of fictions of your own.
ENGL 432. American Folklore. The subtitle of this course is “Legends, Conspiracy Theories, Cryptids, Oh My!” This course seeks to explore the world in which all of us are already immersed, an online sea of information and misinformation. What are the impulses behind these flows, and what are their diverse functions? From the moment that humans became capable of re-presenting reality, we were engaged in various forms of fiction. Some forms are obviously meant for entertainment, like tales and jokes, and other forms are meant to inform and guide us, like myths and histories. In-between are the stories we tell, the information we pass along, and the arguments we make in which we conjecture about the nature of reality. Individuals interested in this course should be aware that there is as much darkness as light in what we consider and should be prepared to handle topics objectively.
This semester I am teaching a class on Project Management in Humanities Scholarship. I have seen enough graduate students stumble when shifting from the managed research environment of course papers to the unmanaged research environment of the thesis or dissertation, that I thought it would be useful to try out some of the things we know about how best to manage projects in general as well as offer what I have learned along the way. The admixture of “experts agree this works” and “this works for me” will, I hope, open up a space in which participants find themselves with a menu of options from which they feel free to choose and try. Keep doing what works. Stop doing what doesn’t.
We are a month into our journey together and almost everyone has finally acceded to the course’s mantra that doing something is better than doing nothing (because the feeling of having gotten anything done can be harnessed to build momentum to get something more important done), but there are a few participants who are still frozen at the entry door to the workshop where each of us, artisan-like, is banging on something or other.
All of them have interesting ideas, but some are struggling with focus. I think this is where the social sciences enjoy an advantage. They have an entire discourse, which is thus woven into their courses and their everyday work lives, focused on having a research question. What that conventionally means is that you start with a theory (or model) of how something works; you develop a hypothesis about how that theory applies to your data (or some data you have yet to collect because science); and then you get your results in which your hypothesis was accurate to a greater or lesser degree.
Two things here: the sciences have the null hypothesis, which means they are (at least theoretically) open to failure.1 The sciences also have degrees of accuracy. Wouldn’t it be nice if we could say things like “this largely explains that” or “this offers a limited explanation of that” in the humanities? Humanities scholars would feel less stuck because they would be less anxious about “getting it right.” We all deserve the right to be wrong, to fail, and we also deserve the right to be sorta right and/or mostly wrong. Science and scholarship are meant to be collaborative frameworks in which each of us nudges understanding just that wee bit further. (We’re all comfortable with the idea that human understanding of, well anything, will never be complete, right? The fun part is the not knowing part.)
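For readers who want to see the machinery rather than take it on faith, the null hypothesis can be made concrete with a permutation test: shuffle the group labels and see how often chance alone produces a gap as large as the one you observed. The two groups and their scores below are invented purely for illustration:

```python
import random

def permutation_test(a, b, trials=10_000, seed=0):
    """Estimate how often a gap in group means at least as large as the
    observed one shows up when group labels are shuffled at random,
    i.e., under the null hypothesis that the labels don't matter."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        left, right = pooled[:len(a)], pooled[len(a):]
        if abs(sum(left) / len(left) - sum(right) / len(right)) >= observed:
            hits += 1
    return hits / trials  # the p-value

# Invented scores for two invented groups, purely for illustration.
group_a = [4, 5, 6, 5, 7]
group_b = [2, 3, 2, 4, 3]
print(f"p = {permutation_test(group_a, group_b):.3f}")
```

The returned p-value is the “degree of accuracy” gesture in miniature: a small value means the difference is hard to explain away as a labeling accident, and a large one means your hypothesis just failed, which, as argued above, should be allowed.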
The null hypothesis works very clearly when you are working within a deductive framework, but it is less clear when you are working in an inductive fashion. Inductive research usually involves starting with some data that you find interesting, perhaps in ways that you can’t articulate, and your “research question” really amounts to “why do I find this interesting?” Which you then have to translate/transform into “why should someone else find this interesting?” Henry Glassie once explained this as the difference between having a theory and needing data to prove it, refine it, extend it and having data and needing to explain it.
There is also a middle ground which might be called the iterative method, wherein you cycle between a theory or model, collecting data, and analyzing that data. Each moment in the cycle helps to refine the others: spending time with the data gives you insight into its patterns (behaviors, trends) which leads you to look into research that explores those patterns, trends, behaviors. Those theories or models then let you see new patterns in your texts that you had not seen before, or, perhaps, make you realize that, given your interest in this pattern, maybe you need different texts (data) to explore that idea.
I see a lot of scholars, junior and senior, stuck in the middle of this iterative method without realizing it, not knowing which moment to engage first. What should they read … first? (I have seen the panic in their faces.) What I tell participants in this workshop is that it doesn’t matter. They can start anywhere, but, and this is important, start. No one cares whether you start reading a novel (and taking notes) or reading an essay in PMLA (and taking notes). 99% of managing a project as an independent researcher is just doing something and not letting yourself feel like you don’t know where to start. Just start.
Will the outcome be the project they initially imagined? Probably not. But let’s be honest, that perfect project they initially imagined lived entirely in their heads—as it does for all of us. It was untroubled by anything like work. (That’s what makes it ideal!) It was not complicated by having to determine where we might publish the outcome, who might be interested, to what domain we might contribute. It was also unavailable to anyone else, inaccessible to anyone else, and probably incomprehensible to anyone else. As messy and subpar as the things we do in the hours we have are, in comparison to that initial dream, they are at least accessible to others, who will probably find them interesting and/or useful.
To be clear, I usually press workshop participants and students to start with data collection / compilation (and not with a theory). Mostly that’s because I am a folklorist (and sometime data scientist) and I feel at my most driven when a real-world phenomenon demands that I understand it. To a lesser extent, as comfortable as I am with my own theoretical background, I find the current explosion in all kinds of theories a bit overwhelming. I prefer to let the data tell me what theory I need to go learn, else I might end up going down the rabbit hole of great explanations and never get anything done!
The sciences are currently undergoing a pretty severe re-consideration of the “right to be wrong.” With the cuts in funding to so many universities — because, hey, the boomers got their almost free ride and shouldn’t have to pay for you — the American academy has shrunk, creating greater competition for the jobs that remain, which has meant that scientists often feel like they can’t fail. Failure must be an option when it comes to science, and scholarship. When it isn’t, we end up with data that has been, perhaps purposefully or perhaps unconsciously, misconstrued because the results need to be X. ↩
A recent comment I made on the current state of education in the humanities on LinkedIn drew a fair amount of attention. I’m not linking to that comment here as it was of a moment, but there are some things I have observed based both on being a parent of a particular kind of thinker as well as documenting similar kinds of thinkers out in the world. I call them world builders here, but they might also be called immersive thinkers.
Origins
In the car one morning on the way to her school I commented to my daughter that the rain had made driving a bit more difficult than usual and that I would have to make sure to keep two hands on the wheel. It was, for me in that moment, simply a metonym for paying attention, and, I confess, a way of letting my daughter know that her dad may not be paying as close attention to our conversation as we both often enjoyed. Over the years of a morning commute that got her to school and me to work, we had enjoyed a wide variety of conversations, which sometimes ran sufficiently wild, especially at her end, that I had to remind her, as a way of reminding myself, that driving was the higher priority.
A little too often my reminders came out more as chides, which I always regretted. As was often (thankfully) the case, my daughter performed some conversational judo on it by responding, “What if you had three hands?” Her first thought was that I could drive and wave to drivers nearby, but she quickly spun the idea out into a variety of possibilities before settling down into playing a variety of instruments with three hands: there was a three-handed piano piece, then a three-handed guitar melody, and then a three-handed trumpet call. The sounds grew wilder, weirder, and her laughter built from giggles to squeals.
Her first move displayed the power of divergent thinking, something which has been explored quite a bit over the past few decades in creativity studies, but her next move was to dwell in a particular domain, to immerse herself in a world, and to play with the possibilities there. For the time being, I would like to call that immersive thinking. It is surely related to that kind of thinking that we sometimes call rich mode or right brain thinking in a way that I want to spend more time thinking about — and to which I am open to suggestions![*]
World-building was, and is, like a reflex action for my daughter. From the time she could speak, she spun out stories. She usually enacted the stories, dramatizing them with props and costuming if she was a character or animating a wide variety of objects, some of them more obviously meant for such use and others not. I can’t, for example, count the number of times objects at restaurant tables came to life and led complex social lives when adult conversation became uninteresting to her. My wife and I saw utensils be sisters, salt and pepper shakers be parents, and a tented napkin become a home.
It was, and is, an amazing thing to watch, but as many creative individuals know, such an ability does not come without its penalties. While her school labeled her a “deep creative,” it seemed largely a way of admitting they were unable to come up with a plan on how to make a space within which she could learn and grow to suit her own abilities and interests. Don’t get me wrong: she did well (enough) in school, but that’s largely because we worked hard at home for her to adapt to the regimen at school. And so she got high marks, but those marks were also regularly accompanied by comments from, well-meaning and really nice, teachers that she “did not pay attention” as well as she should, that she was “daydreamy” or that “sometimes she just phones it in.”
One could perhaps fault the teachers, but I rarely find individuals are the problem in these circumstances. More often a system is at work. In this case, I think it’s fair to blame a larger educational ideology that has come to rely upon standardized tests as one of its central metrics. In a moment that resembles the classical economics parables about unintended consequences, what so many of us face, as parents in the paroxysms of our children or ourselves, is an entire educational system which many believe is headed precisely in the wrong direction for what look like reasonable, well, reasons.
Indeed, an entire cluster of industries has arisen around the wobbling of the educational infrastructure in our country. The technorati favor two flavors that are not necessarily mutually exclusive. The first flavor is that articulated by Ken Robinson, who argues that our schools are stuck in the industrial age, anxiously trying to turn out uniform widgets in a moment when standardization couldn’t be less useful – the assumption being that things are changing more quickly and less predictably than ever. I don’t subscribe fully to this latter notion, but it’s not hard to see that the current context for businesses favors only a few large incumbents with stability, and that employment with those incumbents, as two decades of layoffs and jobs moving from one part of the world to another have proved, is not stable. In other words, institutions have stability, but only individuals at the top of those institutions get to enjoy the fruits of that stability.
Outside of those narrow mountaintop retreats, there’s a whole host of changes taking place as industries transform in the face of an amazing amount of computing power. My own industry, higher education, is facing such a transition, but think about even the way manufacturing is changing as building components becomes less about removing metal by mill and lathe work or stamping and cutting but more about “printing” them by building up a part molecule by molecule. Suddenly, economies of scale matter less and sheer imagination matters more. (Well, you’ll still need quite a bit of capital to have such a “printer” at your disposal, but that’s a return to a history we have seen already – i.e., the original printing press!)
What to do with our little geek, our world builder?
Here’s the short of it: our daughter was a geek. She had all the classic geek traits: she preferred to be fully immersed in a problem or project or world, and she oscillated between wanting external affirmation for her accomplishments and not caring what others think. Most geeks I know are like this. Many of them truly believe they don’t need anyone’s approval, and for a few of them that may very well be true. I also know, speaking as a geek (I think) myself, that, yes, sometimes a nod from someone you respect is not only all you need, but it is something you really want.
A lot of curricula which have high geek probabilities have switched to more project-oriented pedagogies. We are seeing more of it in engineering, and it has always been a prominent part of architecture. But what to do with our geeks, our world builders in other domains? How do we re-rig systems at least to allow them to think the way they think?
An example from her experience:
For a time, our daughter was in the school choir. Every year the choir put on a musical. One year it was Charlie and the Chocolate Factory; another it was The Wizard of Oz. Every year students auditioned for a role in the play. Now, how do you suppose those auditions took place? Did they come after watching the film version or reading all or parts of the book? Did they come after listening to some of the story’s most famous passages and songs? That is, did they allow an immersive thinker an opportunity to do what they do best: get inside a world and look around, elaborate it, play with it? No, the auditions were songs from someplace else, handed out a week or so before the auditions. Students were told to practice the songs, do their best, and decisions would get made.
Now, that approach works if a student is procedurally-driven and understands the necessity, or already desires, adult approval. It doesn’t work at all for the student that needs to live and breathe inside a thing, to get a sense of it, to find their excitement there.
Fundamentally, this comes down to the difference between teachers as the center of a curriculum and students at the center. As a teacher myself, I know I can’t be all things to all students, and in a post to follow, I want to think more about how education might be made better for more kinds of learners than it currently is. In fact, I worry about one recent trend in particular: the rise of the master teacher and what that means for learning differences — here, learning differences are meant much more broadly than they are in the education industry.
In Fall 2023 I led a course on digital storytelling. In preparing for the course, I wanted to see what others were doing, and so I searched for course listings, tracked down syllabi, and compared assignments and foci. It was fascinating to see the range of things being done. One thing that I did not fully expect was how often a search for “digital storytelling” washed me up on data science beaches. The graphic below tells the story, but I also want to collect more links to see what I can learn. (See the list below.)
Some time last year, comments were requested on the matter of AI and copyright. I submitted the following.
I am writing for myself, but as a folklorist I am also writing with profound respect, and sadness, for our national tradition of enabling private profit at the cost of the public commonwealth. Like the pharmaceutical industry raiding traditions around the world in order to develop better, perhaps life-changing, medicines, we have allowed the large language models behind most of the more prominent AI platforms to harvest the knowledge of a great many individuals without those individuals receiving any acknowledgment, compensation, or share in the profit. Whether we call it “folk” or “mass,” we disenfranchise those who actually produce the materials from which we derive products.
We cannot fall back on user agreements which, in order for the basics of the web to work, had individuals consent to broad grants of copyright. We must acknowledge that most users posted texts, images, and other media assets to various platforms and sites in the interest of creating and maintaining various communities. That they were willing to be sold to advertisers, because that is the basis for American media production, should not in any way suggest that their materials, and thus the people themselves to some degree, can simply be given to AI platforms. At least the social media platforms gave them something of value in exchange. AI platforms are already monetized, seeking rent for creating an abstraction of a city built of neighborhoods built by others.
We cannot know what will be the eventual outcome of the development of these AI platforms, and I don’t think referencing the hype or the fear-mongering does any good here. What we can know is that a system’s integrity must be clear and checked throughout the process. Right now, we can say for certain that these systems were built without integrity when it comes to their data acquisition. If we do not figure this out, if we do not create useful guidelines for clarity and integrity, then we are somewhat dooming these systems to have further negative impacts.
I’m preparing to teach text analytics, the first time such a course has been offered at my university. I came across this great moment in John Scalzi’s Redshirts where statistical analysis is mentioned, but I can’t find a way to include it in the syllabus:
“So what you’re saying is all this is impossible,” Dahl said.
Jenkins shook his head. “Nothing’s impossible,” he said. “But some things are pretty damned unlikely. This is one of them.”
“How unlikely?” Dahl asked.
“In all my research there’s only one spaceship I’ve found that has even remotely the same sort of statistical patterns for away missions,” Jenkins said. He rummaged through the graphic elements again, and then threw one onto the screen. They all stared at it.
Duvall frowned. “I don’t recognize this ship,” she said. “And I thought I knew every type of ship we had. Is this a Dub U ship?”
“Not exactly,” Jenkins said. “It’s from the United Federation of Planets.”

Duvall blinked and focused her attention back at Jenkins. “Who are they?” she asked.
“They don’t exist,” Jenkins said, and pointed back at the ship. “And neither does this. This is the starship Enterprise. It’s fictional. It was on a science fictional drama series. And so are we.”
In a response to a video by Parker Settecase on the utility of boredom and of capitalizing on it by using a notebook, theorangecatmom noted:
I’m not that smart, but my Dad taught me to use my brain to entertain myself when bored pre-cellphones. I still make myself practice it when I’m in a waiting room or sometimes on my breaks at work. As a kid, he taught me to count things, find patterns in the stuff around me, play mental math games, stuff like that. As an adult, when I’m surrounded by people, I sometimes just listen or watch what they’re doing and think about why they might be doing it. I call it practicing being bored and it blows people’s minds that I do it on purpose.
I was interviewed by Perry Carpenter and Mason Amadeus of the Digital Folklore podcast and it was a blast! We had a wide-ranging conversation and they ended up pairing me with Lev Gorelov for their episode on “Statistically Conscious (Artificial Intelligence)”.
K.M. Kinnaird, Allison Chaney, and I have submitted our latest work with/on TED talks to the Journal of Cultural Analytics. For those interested and wanting to know more about what we have been up to for all these many months, here’s the introduction:
Mappings of texts to assigned or assumed genders qua gendered have been a part of studies of linguistic expressivity since Robin Lakoff speculated about the differences between men and women’s ways of speaking (Lakoff 1975). Like others, we find such explorations of differences in expression between men and women compelling, whether it is focused on modal meaning — expressions of a speaker’s certainty, or uncertainty found in tentative language like hedges, tag questions, intensifiers — or in affective meaning — expressions of a speaker’s attitude toward his/her audience which can be mapped to group composition, the relationship among participants, and their status.1 Few studies have been as clear-cut in their findings as Robin Lakoff’s initial speculations suggested, but the larger field of inquiry she engaged has enriched a variety of philological domains both in terms of gender but also in terms of power dynamics across and within groups. This inquiry has run either parallel to or resulted in far more nuanced appreciations of the ways that texts occur, the situations in which they occur, and how the texts themselves are either shaped by an event or shape the event in some fashion. As Patricia Sawin notes in her consideration of gender and power in situations where texts feature: “esthetic performance cannot be bracketed from social action,” and not only that but we must realize that “emergent performance can transform cultural models or social structures, [and] that esthetic performance is a central arena in which gender identities and differential social power based on gender are engaged” (Sawin 2002:48). That is, gender is in many instances as much a product of discourse as it is the producer of discourse.
Fascinated by the intertwined nature of gender and discourse, we wanted to explore what contribution the TED talks data set (Kinnaird and Laudun 2019), previously released through this journal, could provide to understandings of language use and gender. There is always, of course, the matter of what one is called, or how one is imagined by others, and then what one calls oneself, if anything at all, and/or how that self projects itself into the world in and through language. Here, we attempt to apply two lenses to explore the role of gender within discourse as manifested in TED talks. First, we investigate the representation of speakers by gender. Second, we explore how others (by gender) are being spoken of. Put another way, we first concern ourselves with who is speaking and second how they are speaking of others and of themselves. These twinned investigations represent two (of many) dimensions of cultural analytics: established topics of concern to the humanities and the human sciences and possible applications of machine learning to those topics, inviting either novel answers to familiar questions or new kinds of questions. Here we use techniques from supervised machine learning and statistical inference to explore the gender of the speakers. Having done that, we explore the gendering of agency in hopes of establishing a continuum across which speakers project themselves.
In the essay that follows, we begin by surveying recent work in cultural analytics that has explored the gendering of character spaces (Underwood, Bamman, and Lee 2018 and Jockers and Kiriloff 2016). Grounding our own consideration in semiotics and seizing upon the opportunity of working with spoken material, we examine how character spaces intersect with speaking subjects and subjects spoken. With our theoretical program laid out, we proceed with the difficult, and tendentious, matter of gendering the speakers of our texts, making it possible to divide the TED talks corpus into two subcorpora, one by women speakers and one by men speakers. In the third section of the essay we explore what actions are available to gendered subjects as well as how, using those subjects, we can gender verbs in such a way as to explore the continuum of gender as presented in the I’s of speakers.
For this present purpose, only texts from TED main events—i.e. not TEDx or other special events—are examined. First, we extend the speaker dataset within the TED talk data to include the gender of each speaker (as can be detected from public materials). We then use these identified genders to “gender” each TED talk. We then use these gendered texts to explore agency in the TED talks and how they differ based on both the speaker’s presented gender and the gender of represented subjects within the texts. In doing so, we seek to demonstrate the value in considering gender in an exploration of a text corpus as well as to connect computational text analysis to the broader conversation about TED talks. While the TED talk corpus is small, and, as we will explore more later, unbalanced, it does offer us an opportunity to examine a well-established, and popular, discourse stream that could offer us insights into gendered subject positions in English.
By mapping out the actions available to gendered agents, principally he and she, we hope to establish a continuum within which speakers may or may not engender themselves (as the I in their speech). We are not particularly happy with the fact that the work only reflects binary gender, but the fact remains that for much of the discourse of the last few decades in which TED talks have occurred, which are themselves affected by the decades (and centuries) which preceded them, the English language has largely offered two genders. Individuals operating within such a discursive regime are largely left to their own linguistic devices to represent themselves as well as to represent others. If we can establish some baselines, we might also be able to discover interesting experiments and innovations occurring in texts in a myriad of places.
Finally, we concede that our analysis is limited in a few ways; most notably that this work only focuses on gender (as currently presented) and not other axes of diversity, including race, income status, sexual orientation, and religious background/identity. Gender was easier to study, in that we could detect a speaker’s outward presentation of gender using computational techniques on the written documentation that was available on the TED website, relying on gendered pronouns about speakers in the documentation as a signal for gender. Examining other axes of diversity would require additional data (either self-reported surveys or additional press materials with explicit references to diversity markers) on each speaker or slightly different data scraping techniques with more explicit ‘rules’ for detecting diversity (such as euphemisms or cultural synonyms). This is not to say that other axes of diversity should not be studied, but rather to explain why our analysis is not immediately transferable to other markers of diversity. Instead we see our work as a first step, one that can provide a framework for extending these kinds of questions to more areas of diversity.
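The pronoun-as-signal approach described above can be roughly sketched as follows; the bios and the bare counting rule here are illustrative stand-ins, not the pipeline actually used in the study:

```python
import re

# Hypothetical speaker bios; the study itself scraped written
# documentation from the TED website.
BIOS = {
    "speaker_a": "She is a physicist. Her work focuses on dark matter.",
    "speaker_b": "He teaches history, and his books cover three centuries.",
    "speaker_c": "They research networks.",
}

FEMININE = {"she", "her", "hers"}
MASCULINE = {"he", "him", "his"}

def pronoun_signal(bio):
    """Label a bio by whichever set of gendered pronouns dominates;
    return None when neither does."""
    words = re.findall(r"[a-z']+", bio.lower())
    f = sum(w in FEMININE for w in words)
    m = sum(w in MASCULINE for w in words)
    if f > m:
        return "women"
    if m > f:
        return "men"
    return None  # ambiguous, or no gendered pronouns found

for name, bio in BIOS.items():
    print(name, pronoun_signal(bio))
```

Even this toy version surfaces the limitation conceded above: a bio with no gendered pronouns yields no signal, which is exactly why other axes of diversity would require different data or detection rules.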
Humans can’t feel wetness. There are some insects (and maybe other animals?) that can because they possess hygroreceptors. Humans do not. Our brain registers differences in temperature and pressure and converts them into “sweat rolling down” our necks. What’s fascinating about this is that our sense of temperature is itself a product of how fast heat is being transported away from our bodies.
Our bodies constantly produce heat, which means we constantly need to dump heat. If this process moves more slowly than our bodies prefer, we feel warm. If it moves far too quickly, like on cold days when we feel like the heat is “getting sucked out” of us, we feel cold.
The rate of heat transference depends upon the medium: wood feels warmer on our feet because it is slow to transfer heat. Air is even slower to transfer heat, which is why so many insulating materials—down in coats, fiberglass and rock wool batts in the walls of houses—are basically media that trap air. Ceramic transfers heat quickly, which is why ceramic tiles feel cold on your feet. (It’s also why it’s just as effective when placed above under-floor radiant heating: it’s the radiant part of the equation.) For those remembering grade school science experiments: yes, the ice cubes do melt more quickly on tile than on wood floors.
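One way to make this concrete is the standard contact-temperature model from heat transfer, in which the interface between skin and floor settles at an effusivity-weighted average of the two temperatures. The material property values in this sketch are rough textbook orders of magnitude, not measurements:

```python
from math import sqrt

def effusivity(k, rho, c):
    """Thermal effusivity sqrt(k * rho * c): how readily a material
    exchanges heat with whatever touches it."""
    return sqrt(k * rho * c)

def contact_temp(t_skin, e_skin, t_floor, e_floor):
    """Effusivity-weighted interface temperature when two bodies touch."""
    return (e_skin * t_skin + e_floor * t_floor) / (e_skin + e_floor)

# Rough order-of-magnitude properties (W/m·K, kg/m³, J/kg·K).
e_skin = 1000.0                        # a commonly cited approximation
e_wood = effusivity(0.17, 700, 2000)   # oak-ish flooring
e_tile = effusivity(1.0, 2300, 840)    # ceramic tile

for name, e in [("wood", e_wood), ("tile", e_tile)]:
    t = contact_temp(33.0, e_skin, 20.0, e)  # skin at 33 °C, floor at 20 °C
    print(f"{name}: skin surface settles near {t:.1f} °C")
```

On the same 20 °C floor, the model puts the skin surface several degrees colder on tile than on wood, which is the whole “cold tile” sensation: same floor temperature, different rates of heat loss.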
Different materials have different thermal properties, so heat transfer goes at different rates depending on the material. Most of our senses are based on change in general and sometimes rates of change in particular. Our three-dimensional vision is a product of our two eyes sending different signals to our brain and our brain compositing those signals. We “get the chills” when we have a fever because our temperature is lowering more quickly than our body prefers, and we feel like we are “burning up” with fever not when the temperature is stable but as the temperature increases.
It’s also why we can see in a wide range of light levels and hear in a wide range of volume levels: our brains are very good at detecting change, and, in the case of some folk illusions, like pressing our arms to door frames and then having them feel like they are lifting on their own, we can hack our brain’s preference for detecting differences and change for fun. It is also, sadly, how information merchants (marketers, information operators, among others) hack our brains to keep us engaged.
I want to think more about sensibility both in terms of processing information but also how it affects our relationship(s) to stories in the months ahead.
One of the reasons many of us make things is to understand them: to understand not only how the parts and pieces fit together and how they accomplish what they accomplish once assembled but also the working principles behind the assembly. Public.Resource.Org offers a terrific library of old films, mostly from the Department of Defense, that explain a wide variety of processes and principles. The list is impressive and includes, just within the electrical category, basic amplifiers, DC motors, AC motors, resistance, and capacitors.
While the example technologies are dated, the visualizations are, to my eye, just as good as anything we could offer today. Perhaps they are better precisely because the limitations of the era’s media forced the presenters to focus on what matters most and not simply what could be included. (Because we can is a principle familiar to many of us, but I bet many of us would also admit that sometimes that way leads to distraction or, at least, digression.)
Their home page lists a bunch of resources, but if you want, you can also go straight to their YouTube home page.
I was interviewed by Perry Carpenter and Mason Amadeus of the Digital Folklore podcast, and it was a blast! We had a wide-ranging conversation, and they ended up pairing me with Lev Gorelov for their episode on “Statistically Conscious (Artificial Intelligence)”.
In one of those necessary moments of trying to be more programmatic — that is, trying to remind ourselves why we do what we do in order to prompt ourselves to double-check and/or refine what we do when we teach — the folklore faculty were asked to come up with outcomes for the undergraduate concentration in folklore studies. I think we did reasonably well at the abstract level:
An understanding that culture is a dynamic process that is the result of anything from a small group to transnational networks of individuals, each of whom acts as a receiver and transmitter of information that both shapes how they see the world and is in turn shaped by the world as they experience it.
Any understanding of culture must be founded on a commitment to openness that may itself be impossible to achieve but is a worthwhile goal in itself.
That understandings are to be communicated in forms appropriate to the context, and that all understandings (findings) are momentary for the investigator and for the larger scholarly and/or scientific communities and/or publics in which they find themselves. Science is an ongoing dialogue with itself about the nature of reality and the nature of humanity within that reality.
Now we need to follow through at the level of the concrete: assignments we build into our courses, how we are going to evaluate those assignments, and how we are going to assess our own efforts with regards to these matters.
If you are interested in your writing having an audience, if you are interested in dialogue and not monologue, if you want to have some effect in/on the world, then you have to go where the people are and you have to find some way into the marketplace of ideas. The blogging landscape has shifted considerably in the almost 20 years I have been doing some version of it. Right now, personal blogs sitting on boutique infrastructures just are not seeing the same kind of circulation they once did. (To be clear, I never broke 200 reads in any single day, or if I did, it was only momentary.)
Right now, Medium seems to have found a decent niche for content creators. You can find me at Medium.com/@johnlaudun.
I am not entirely satisfied with Medium. For one, its only editing interface is the website itself: you can work off-line, but once your draft is on Medium and you edit there, you can’t easily get the draft back onto your local machine, barring copying and pasting. That also means that there is no way to download an archive of your work. Just as importantly, you can’t upload a back catalog of your work: Medium assumes anything you publish will be published now.
I have quite a back catalog of posts from my old WordPress site, and I guess I will work on getting them uploaded here, which will suffice as some kind of archive — and I can’t imagine there is much of an audience for that old stuff anyway. Still, it would be nice if some of these processes were easier…
My thanks again to the organizers of this year’s Louisiana Aging Network Association conference for the invitation to spend time with all of you. For those interested in materials from my talk, “Listening for the Past: Learning to Listen for How History Actually Gets Told,” the links are below:
The talk itself is now available on Medium. The texts are block quotes, and the takeaways are embedded in the text as slide images.
The slides are available for download from my portfolio.
Many thanks again for letting me spend time with you and for the amazing questions and conversations that followed.
My first serious research project as a folklorist was an attempt to understand the constructed spaces of Urban Appalachians. I received a lot of nice attention for it, and Erika Brady, who was then editing Southern Folklore, shepherded it steadily through to publication. For my dissertation, however, I focused on a kind of sociolinguistic study of oral histories. My original plan had been to return to Louisiana to interview black and white men and women who had worked in the sugar cane industry. I was curious to see who remembered what and how. I couldn’t, at the time, afford the project, and I knew of no funding agency interested in such work.
I took the basic idea, however, and re-focused my efforts on a set of speech communities in Bloomington, Indiana. I had, thanks to a friend, come across a murder that had taken place in 1946 that had galvanized the town and thus had become a kind of legend among middle-aged speakers and, as I discovered, a reference point for older speakers. The project was interesting to me for a number of reasons:
it took place among members of the Bloomington community who had no direct connection to the university itself, which as anyone who has visited the area knows, dominates the town in the present;
the speech communities reflected by the individuals I found were not unified, but they were related in ways that I could objectively describe;
the two speech communities, one black and one white, had been themselves cohesive in the past by all accounts, though they were in the present fragmented with the passing of various members.
While the murder was my starting point, I never really got around to a fine-grained analysis of the different accounts. For one, my ability as a white male researcher to access the full range of accounts was stymied by social stigmas experienced long ago but still painful in the present. That is, the older black men who would speak with me did not want to talk about the incident, pointing, indirectly through the other stories they told, to the racism that frightened them during the period. Older black women felt more free, if only because in some sense the murder revealed sexual peccadilloes in the white community. Those peccadilloes constrained older white women in their discussion and were transformed into a rape scene by older white men.
Fascinating stuff, and I will write about it one day, but what grabbed my attention in the moment was the range of materials that I gathered that were not in the expected discursive mode.1 That is, all the literature about oral history and life stories focused on narrative, usually grand narratives of the kind one only encounters from avowed, and revered, tellers in a community.
And yet, and yet, as I listened to hours upon hours of tape on my Sony Walkman D3 (now fondly remembered), I found myself with a diverse collection of relatively small narratives and a whole lot of material which was decidedly not narrative in nature. Often it was what I came to call oral exposition, though I never really offered a thorough definition of that term. That is, I had a whole lot of discourse that seemed narrative in nature only because the person was walking me through a neighborhood or landscape that had once existed, using the walk as a way to tour a lost world. It is a highly effective technique, of course (and one I encourage interviewers to use as a kind of memory prompt), but the narratives that are produced are not really stories so much as geographies: “This was here. That was there.” in the form of “And then, if you kept on going down Third Street you’d get to old man McCullough’s store.”
This kind of analysis resulted in very discrete texts that were, I now understand, marked up:
We can, however, make some distinctions between durations, sorting out lines and clauses by how long a situation lasted, allowing us, as Jean Ellis Robertson notes, to determine “what durations events were that people recall in chronological order or else not recall as a chronology” (1983:47). Such a scheme might also reveal whether people recall events of a fairly short duration or enduring situations. Robertson suggests the following time-scale index:
Table 3.1: Time-Scale (D=duration)
D0 - Period of time lasting up to a minute
D1 - Period of time greater than a minute and up to an hour
D2 - Period of time greater than an hour and up to a day
D3 - Period of time greater than a day and up to a week
D4 - Period of time greater than a week and up to a month
D5 - Period of time greater than a month and up to a year
D6 - Period of time greater than a year and up to a decade
D7 - Period of time greater than a decade
If we return to our first example, where Hugh Goble describes his work as an excavator, we can assign the following values to each of the lines:
(D7) He was in the excavating business,
(D2) so he called me to come up and showed me the job.
(D6) And we dug house basements.
(D6) And that was when they were remodeling a lot of filling stations,
(D6) making them super service and that sort of thing,
(D0) so I said, yeah, I’ll take it.
(D6) So I worked there about two years and a half.
(D2) And then we came back to Bloomington.
(D6) At that time, my brother-in-law—
he’s passed away—
but at that time he owned a furniture store, United furniture.
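Robertson’s index is easy enough to operationalize. A minimal sketch in Python, assuming calendar approximations (month = 30 days, year = 365 days) that the original scheme does not specify:

```python
# Robertson's (1983) time-scale index: map a duration in seconds to a
# code D0-D7. Month and year lengths are approximations (30 and 365
# days), an assumption not stated in the original scheme.
MINUTE = 60
HOUR = 60 * MINUTE
DAY = 24 * HOUR
WEEK = 7 * DAY
MONTH = 30 * DAY     # assumed
YEAR = 365 * DAY     # assumed
DECADE = 10 * YEAR

_THRESHOLDS = [
    (MINUTE, "D0"), (HOUR, "D1"), (DAY, "D2"), (WEEK, "D3"),
    (MONTH, "D4"), (YEAR, "D5"), (DECADE, "D6"),
]

def duration_code(seconds: float) -> str:
    """Return the D-code for a duration; 'up to X' is read as inclusive."""
    for limit, code in _THRESHOLDS:
        if seconds <= limit:
            return code
    return "D7"
```

So the “about two years and a half” of the Goble example, `duration_code(2.5 * YEAR)`, lands in D6, matching the hand annotation above.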
At the time, I had no idea that people were writing about things like TimeML. I knew that some portion of sociolinguistics was interested in this kind of discourse analysis, but everything with which I was familiar was much more oriented toward making generalizations about particular kinds of performances or about particular kinds of groups or about particular kinds of performances within particular kinds of groups. But I was, and am, interested in something more like structuralism: how these texts are part of a generative model that might possibly be located in the human mind.
It’s taken me a decade, D6 above, to discover geeks like me who delight in this kind of thing. (Again, my thanks to Tim Tangherlini.)
Of course, I have thoroughly enjoyed geeking out with the guys who make the crawfish boats and all sorts of other machines and tools that do real work, like making food come out of the dirt, but I have also observed that the academic audience for such work is very small and the interstice is an awkward one.
The number of folklorists who are interested in material culture studies has always been something you could count on two hands – at peak moments, when you also counted graduate students, you might need to take your socks and shoes off. It has been great to discover the history of technology, and I was delighted by the reception I encountered when I attended the annual meeting of the Society for the History of Technology, but there the interest in things agricultural and regional meant a lack of sexiness that I had already felt at the annual meetings of the American Folklore Society. I could carp about this all I want, but it’s not going to change anything. Folklore studies and anthropology have continued to drift apart, which means that academic folklorists in AFS are increasingly housed in departments of literature and/or languages. They are going to be more interested in texts than in things.
More importantly, the number of folklore jobs will always be limited and occasional. The number of jobs for people doing textual studies is larger, and the number of jobs for those interested in doing so “digitally” is enjoying some prominence now. As the saying goes – and I’m a folklorist so I gotta go with the proverb: carpe diem.
So I love the boats, and I love the guys who make them. And I hope the door that the book opens onto human creativity is as interesting to readers as it has been to me to write, but it will be the last of that work for a while for me. As my editor generously offered when I last saw him at the annual meeting of AFS in New Orleans: material culture is hard to write about well. It’s been incredibly difficult to balance spending time with my family, attending to the non-stop parade of inanities my university produces, and maintaining a steady stream of fieldwork experiences that I then transform into useful texts that I can review and fold into a scholarly script.
I’m not saying text studies are easier, but I am ready to spend some time with data that is already at hand and lets me focus on analytical models and possible connections with colleagues near and far.
I prefer discursive mode to David Herman’s text types. Mode, I think, reflects the generative nature of discourse itself: clauses get strung together, and hang together qua texts, through different structuring principles: narrative, locative, descriptive, etc. I’ll have more to say about this in a book which is tentatively titled How Stories Work. ↩
The MLA and CCCC have formed a commission, of some sort, to develop a policy, of some sort, with regards to ChatGPT. They are asking people to take a survey. I did, and I decided to save my answers so that I could think about them some more.
Has your institution, department, or other unit proposed or developed policies about ChatGPT or other AI text generation technologies? If yes, please describe it.
Like a lot of people, I wasn’t fully prepared for ChatGPT et al. to emerge quite so quickly as a force. Luckily for me, I had already planned a semester in which active learning in the classroom would take place. To that I simply added more in-class writing assignments, essentially dodging the issue of ChatGPT for the time being. In my discussions with my students, who were more clueless than some of the folk running around proclaiming the sky was falling, I simply noted that I expected them to turn in work that was theirs alone. But I should note that at the time I had not yet fully experimented with the possibilities. Having done so, I still remain fairly unconcerned: I was underwhelmed by ChatGPT’s responses to my prompts. What ChatGPT offered were the kind of competently to well-written non-answers produced by students who have been well groomed but ill educated.
Have you developed any classroom policies about students’ use of ChatGPT or other AI text generation technologies? If yes, please describe it.
I am working on a digital storytelling class for Fall 2023, and I think I am going to build at least one ChatGPT experiment into the class. Students already know the technology is out there: how can they use it for good? And by that perhaps there is also room for “their own good,” given the inequities they face. Honestly, if they wanna hijack institutional systems that already imagine them as cogs to be churned out for a labor market, who am I to blow against the wind?
What concerns, if any, do you have about use of ChatGPT and other AI text generation technologies in teaching?
My chief concern is that it will only devalue writing further, eroding the already weakened position of the humanities. Why write when you can have a bot of some kind do it for you? I think we’re going to see a host of weak scholarship and science get published in a lot of places – we’re all thinking lower-tier journals but we may also find ourselves surprised. Given that universities have farmed out tenure and promotion to journals and presses, why wouldn’t people do it?
I honestly don’t know that I won’t encounter such an article, or book, and be able to tell. But, again honestly, there is so much formulaic writing (and analysis) already out there getting shoved through various publication pipelines, that it might as well be bots. That is, I would argue that we have already boticized ourselves. (That’s a terrible verb, but you get my meaning.)
NucleCast Episode on Narrative (Click to embiggen.)
I think we can all agree that any story that makes people in general and policy makers in particular more aware of the importance of nuclear deterrence is important. It was a great pleasure, then, to be interviewed by Adam Lowther, host of the NucleCast podcast, about how stories work and how scientists, scholars, engineers, and others involved in nuclear deterrence can get better at telling stories that matter to us all.
Your research items reached 1,000 reads (Click to embiggen.)
ResearchGate recently notified me that I had reached a milestone: my research items had reached 1,000 reads. Ignoring for a moment the awkwardness of “research items,” which we can, I think, chalk up to ResearchGate making it possible to publish a variety of materials, I want to think about what “reads” means here, because in the age of citometrics et alum these kinds of quantifications may eventually play more of a role than any of us might wish.
As a member of a department personnel committee, I recently enjoyed reviewing the work of three terrific junior faculty members, all of whom I will note here deserve to be tenured and promoted without delay. All three offered not only impressive vitas and compelling portfolios of materials for personnel committee members to peruse but also very polished slide decks, each of which featured various composite scores of semesterly SEIs (Student Evaluations of Instruction). All three are smart enough to know that SEIs are biased in a variety of well-documented ways and, when they consist of 7 fairly subjective questions, offer little of statistical significance. They are also smart enough to know that the same institutions that don’t invest in faculty also tend to think things like SEIs are acceptable forms of assessment and even development. (The kind of professional development I have in mind would not only include funding for travel to conferences but also funding a teaching resource center staffed by people with a real focus on educating college-aged students as well as funding of teaching pairs and training for faculty on how to be better assessors of their own, and their colleagues’, approaches to teaching.)
So add up some scores, calculate an average, and include that in a graphic on a slide. Put another way, it doesn’t matter how meaningful, or meaningless, the number is, so long as it is a number.
And that probably reveals my attitude toward a milestone like 1000 reads, because … is it? Is it really 1000 reads? Or is it 1000 downloads? Or 1000 views of a page from which you could download the text, which is how Academia.edu seems to work. (More on that later.)
Google Scholar's Citations per Year (Click to embiggen.)
Google Scholar seems to hew to the more conventional approach to counting things by counting only citations. However, it is not entirely clear how they arrive at those counts: is it only from materials also deposited in/with Google Scholar? I ask because, to be honest, the portfolio of my materials with Google Scholar is not complete. Nor is it complete with ResearchGate or Academia.edu. Perhaps worse: the make-up of the portfolio on each is different, with Google Scholar having older materials, Academia having more conventionally humanities materials, and ResearchGate getting more computational materials.
In all honesty, there was no principled division of materials because I have never been quite sure which one was worth investing the effort to upload everything, and, really, my goal was to put everything in a GitHub repository. (I’m still working on this.)
In a better world, researchers could post their open access, or otherwise made accessible, materials in a repository of their choosing, and these sites/services would simply index things there. That has not happened, and my guess is that it is not going to happen. Academia wants you eventually to invest in a premium membership, like LinkedIn, and Google simply wants to keep you in the GooglePlex. ResearchGate seems to remain in the “if you get enough users, the money question will answer itself” phase. Their “About Us” page hints at perhaps eventually rolling out a job search service or … something.
In the meantime, we got numbers. And I guess you could put them in a slide deck. Or an annual performance evaluation. Or something!
Earlier today I was watching a documentary on a climatic event in the Middle Ages that, as they so often do, changed the course of history. The documentary’s argument relied heavily on one particular scholar, who was quite compelling, but early in the film’s diegesis the camera pans across the books in his study as a way of establishing his bona fides, as if to say: “Look here. He’s read all these books. He must be smart.” It’s a trope really, one that the past two years of video conferencing have made commonplace. I tend to kinda zone out during these moments, but the glazing of my eyes was halted when the camera panned to the complete set of Stith Thompson’s Motif-Index of Folk-Literature. (I think they are out of order.)
The Motif Index (Click to embiggen.)
Well, you don’t see that every day, and I was delighted and paid that much more attention to what followed.
Later, I noted what I had seen to fellow folklorists on the social media platform where we gather, and which is now relentless in its delivery of ads. Of late, a number of those ads have featured decks of storytelling cards or “storyteller tactics.” I can’t comment on the content of these things, because they are extravagantly priced: Fabula’s Storytelling Cards retail for $150, and Pip Decks’ Storyteller’s Bundle starts at $190 for just the digital deck and goes to $250 with the physical deck included. (For more, see Fabula and Pip Decks.)
What I can say is that folklore studies missed the boat when it comes to commoditizing the indices.
Escaping Criticism by Pere Borrel del Caso (Click to embiggen.)
I was recently asked to present to graduate students in our program some ways to develop their “net presence.” Given the many possibilities, and how fragmented the internet feels, I decided to distinguish between three categories/sites of activity:
sites that feature or index your work as a scholar/scientist
sites where you communicate more personally and directly about yourself and the reason for your work
sites where you network
There are perhaps, probably, other categories or kinds of sites, and if anyone has any recommendations, please feel free to make revision suggestions or create your own list and let me know so I can point people your way.
Sites that Feature or Index Work
There are a lot of (too many) places that collect scholarly work either for direct distribution (Academia.edu or ResearchGate) or to point to places where it can be found (Google Scholar or ORCid). In the case of the first two, they offer ease of use, centralization, and tracking and compiling of download statistics: both are fond of telling you how often you have been searched (and found) or how often a paper has been downloaded or cited. This is much more feedback than you get from a site you host yourself, or even from a lot of journals (at least in the humanities).
Sites that Offer a More Personal or Direct Vision
As convenient as indexing sites are, they are not great places to host things like personal, research, teaching, and inclusivity statements. Some may let you post a vita, but you will need to update and upload it regularly. And while many of the sites provide you a place to have contact information, they often prefer to keep you inside their walled garden. Finally, few of them offer you the chance to explore possibilities like making other kinds of materials available, like blog posts.
When it comes to personal websites, academics have a built-in SEO advantage: by having a university point to our website, it gets a bump in most page rank algorithms. With a little enterprise, one could link to a website from a Twitter account or an Instagram account or some place like Medium. Speaking of Medium, one could easily post there, and it might even be possible to build a small portfolio site there. (I have no idea how well that would work, but I wouldn’t be surprised if someone on Medium has written about using Medium as a kind of web host.)
Once upon a time, and to this day, I pay for a shared hosting service that allows me to run a custom WordPress site – the software is freely available from WordPress.org, but most people I know use WordPress.com. It’s free for a basic site, but if you want to own your own domain, things begin to get a bit expensive. There are other services like WordPress, e.g., SquareSpace, that offer a fairly easy user experience and a kind of one-stop shop for functionality. (WordPress has, I think, achieved the Microsoft Word moment in software where it can do almost anything well enough but at the cost of doing anything really well.)
There are alternatives to WordPress: if I were starting over, I would look closely at Tumblr. I know Tumblr has a reputation for weirdness, but the fact is that it is now run by the same people who run WordPress, you can point a custom domain at your account, and you can actually present a custom website (or at least a customized one).
I haven’t explored it yet, but the Humanities Commons lets you create a site of some kind. If you’re a member already, you would be a fool not to try it out. It could be everything you need.
Finally, if you want to geek out, there is the option that I used for this website (and this blog), GitHub Pages. Editing markdown files isn’t for everyone, but there is a certain satisfaction in knowing that everything you create is contained in plain text files and easily downloaded. You could even create files within the GitHub repo interface in your web browser. (I’m working on notes on how I set up this site and my teaching site that I will post a little later this spring.)
Sites Where You Network
And then there’s always establishing a presence in some sort of venue like Twitter or Mastodon (default to hcommons.social if you think you’re going to stay in the academy) for microblogging. (I suppose Instagram and/or TikTok have their academic regions as well; I’m just not familiar with them.) If you are thinking about leaving the academy, or even if you aren’t but you like to have your bases covered, set up an account on LinkedIn. It’s kludgey, to my mind, but it seems to have a steady presence in the corporate world. (And remember to link to your website!)
What to Present …
That’s a whole lot about where you can go, and it assumes that you already have a variety of things to present: essays, papers, syllabi. But as my colleague Maria Sever notes, “Thinking about how to target [a] site to multiple audiences (academic and non-academic) is [a] struggle.” To be honest, I have changed directions on what exactly I publish on my blog a couple of times.
When I first began back in the mid-aughts, I published anything and everything that came to mind. I used it like a blog. I captured not only research notes but also things my kid had said or done. I enjoyed the cross-talk between academics and technologists, but those “bloggy” conversations dried up as social media platforms emerged and became easier for more people to use and also as blogs themselves became places where people made a name for themselves and/or became publishers. As that happened, I posted less personal stuff and more professional stuff. A lot of my early experiments with Python got picked up by places like CNRS or Duke Library. I began to consider my words more carefully before I posted them, and soon my publishing stream dwindled.
I let the website lie fallow for a while, especially after I got hit by a copyright claim — a fallacious one that eventually got dropped (I’ll tell that story one day) — but as my time with the Army drew to an end and I knew I wanted to get back to academia, I also knew that I wanted to have as many conversations as I possibly could, and I wanted to try things out that I could then show others how to replicate. For that reason, I decided to shift from WordPress to GitHub Pages.
But in keeping with what I thought was the best part of keeping a blog, all those notes, I decided to still keep a blog, but to do so internally. I bounced around a number of apps, but in the end I decided I would go with an app I already had and used, DEVONthink. I had recently decided to pare down the number of apps I use and focus on using them more deeply. DEVONthink is just one of those apps that rewards you for your investment. (I tried Obsidian and a bunch of the other currently hot PKM apps, but DEVONthink offered me the least friction and the most functionality — web clipper? Check! Markdown editor with preview? Check! Easy import of files? Check! Easy export? Check! Done.)
So, for now, I have an internal bloggy kind of note-taking setup, and an external GitHub blog-like public site. I post to Twitter and Mastodon now and then. I can’t keep up with all the networking sites. I don’t know about you, but I have things I want to read, ideas I want to explore, and explorations I want to write about.
I don’t know that the term enshittification adds anything to the debate, but I understand why Cory Doctorow is trying to come up with a term that encapsulates most of the dynamics of platform capitalism – a term I had not encountered before but find really useful. The essence of Doctorow’s argument is that platforms begin by being “good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves.”
Most readers are familiar with the idea that “if you aren’t paying for it, then you’re the product.”1 Doctorow builds on this idea to demonstrate how, repeatedly, platforms like Amazon, Facebook, YouTube, and now TikTok have first ensured a user base; then captured businesses seeking to sell to those users – or, in a Russian-doll moment, influencers seeking to build a following so that they can sell themselves to businesses; and then, having captured the entire ecosystem, proceeded to raise the rent on the now-dependent businesses. In Amazon’s case, they actually auction off placement in search results, driving up business costs, which raises prices to consumers, but then allows Amazon, which sees all, to offer its own products in the more profitable segments at lower prices precisely because it does not charge itself for placement.
Doctorow’s hope is that eventually this amounts to self-strangulation. My take is that, as with Mark Twain, the news of Facebook’s imminent demise is exaggerated. Moreover, as it dies, and for however long it dies, Facebook will still make an enormous amount of money.
I wanted to credit the originator of this truism, if there was one. Like a lot of such sayings, this one does appear to have an origin, but it has been, like any folklore artifact, refined over the years. According to Quote Investigator, the saying began as “Television Delivers People,” the title of a short video by Richard Serra and Carlota Fay Schoolman which pointed out that television “consumers” were in fact what was consumed by advertisers. In an interesting development, Claire Wolfe wrote “Little Brother Is Watching You: The Menace of Corporate America” in 1999, where she anticipated the rise of surveillance capitalism: “Perhaps because you’re not the customer any more. You’re simply a ‘resource’ to be managed for profit.” Readers of Doctorow will, I think, note that the title of Wolfe’s essay echoes the title of one of Doctorow’s most successful novels, Little Brother. (For more from Quote Investigator: https://quoteinvestigator.com/2017/07/16/product/. And, FTR, 1973 is also the year that Soylent Green came out; for those familiar with the plot and its most famous line, the resonance with Serra and Schoolman’s project will be quite compelling.) ↩
A friend of mine pointed me to reporting by Futurity on the discovery of Ajami, a form of Arabic script modified to capture local spoken languages in Africa. It has apparently been in use for quite some time, and it undermines claims by colonial operatives that natives were illiterate. Far from it, as it turns out! Across West Africa people had taken the Arabic script that came with the spread of Islam and used it to capture the sounds of the languages they spoke, thus making it possible to record a variety of information. Even better: it appears that the effort to document Ajami has been institutionalized. (My training in linguistics is largely autodidactic, so I don’t know what the technical term is for this. A kind of creole perhaps? At the very least a hybrid written language?)
I’ve seen a number of almost/somewhat generic researcher positions that I think humanities majors with some computational abilities could not only manage but in which they could make a real difference. The convergence of quantitative and qualitative approaches, not only in the research that is often the focus of these jobs but also in the ability to communicate that research and to adapt to changing circumstances, is something we could really be developing in university programs.
Here’s one such position with the details of the particular organization removed, but I have seen so many positions like this that this almost feels like a template that’s been adapted:
Summary
The Researcher position supports the execution of research initiatives in support of organizational objectives and to the progression of the HR profession by producing content for publication online and in print. This position is focused on researching all things work, worker and workplace and may include topics such as human resources, workplace economic impacts, diversity, equity and inclusion, racial injustice, and worker benefits.
Responsibilities
Research & Data Analysis
Conduct primary and secondary research on work, worker, and workplace topics
Design and execute concurrent research studies from start to finish, including defining objectives, developing research plans, designing, and programming surveys, analyzing results, summarizing findings, and creating final deliverables
Adapt research practices to meet business needs
Content Development
Produce well-researched content for publication online and in print in compelling and creative ways to foster and elevate customer understanding and help achieve product and/or business impact.
Develop related content for multiple platforms, such as websites, email marketing, product descriptions, videos, and blogs.
Utilize industry best practices and familiarity with the organization’s mission to inspire ideas and content.
Project Management, Collaboration, & Communication
Organize schedules to complete drafts of content or finished projects, collaborating with team members within deadlines and ensure timely delivery of materials.
Act as a brand ambassador, ensuring that all projects fit the client’s style and voice
Communicate and collaborate internally and externally to support the research functions to bring high quality proposals, reports, deliverables, presentations, etc. to fruition and help bridge alignment within and across diverse teams to meet business objectives.
Qualifications
Experience
2 years of professional experience in research, project management, or data analysis
Proven time management skills, including prioritizing, scheduling, and adapting as necessary
Proficiency with computers, especially writing programs, such as Google Docs and Microsoft Word, Excel, Outlook, and PowerPoint
In “The End of Social Media and the Rise of Recommendation Media,” Michael Mignano describes the transformation of many so-called social media platforms into recommendation media platforms (Mignano 2022). He also argues that this is for the better: it will give users a better experience. Many responses to his essay point out that Mignano, like a lot of tech wonks, misunderstands what ordinary people saw in social media: they were interested in the social dimension, with the media piece being interesting because it made it easier to share a variety of forms of information.
There are a couple of things to add to this discussion, I think, the first of which is that one wonders what social media might have looked like if it wasn’t based on the usual American media model of being funded by advertising. What if Facebook had been a subscription service like Netflix? The need to generate revenue by selling users to advertisers meant that businesses had to make “sticky” content. To compete in a media landscape that can only be described as over-saturated, almost all those businesses found that fear and anger worked … or at least provided easy on-ramps, after which they could use a myriad of technologies developed by decades of market-focused psychological research to create addictive experiences.
All of this in an effort to turn social users into media consumers, which means that people like Mignano are really just in the same old business; it just has a lot more levers to pull, uses a lot more data, and has as much interest in making us better people, or a better nation of people, as bad old industries like big oil and big tobacco.
The lead article in this news-roundup isn’t about ChatGPT at all, but rather about the current trend among state governments to ban TikTok on state-issued devices and for public universities, usually in the same states, to ban TikTok on their wifi networks. The ostensible, and perhaps actual, reason for doing so is because of the data that TikTok can, or does, collect on its users with the additional factor of TikTok’s unclear relationship to/with the CCP-run Chinese government.
To be clear, governments, and their publics, should be concerned about data collection by social media platforms, as well as all other businesses and organizations, including themselves.1 Given the amount of data already available, what more the Chinese government, or any other entity, needs to know about each individual American citizen is really a matter of finer strokes of the brush.
Here’s a partial account of the data already out there:
| Year | Platform | Accounts | Details |
|------|----------|----------|---------|
| 2018 | Instagram, TikTok, YouTube | 235 million | profile name, real name, profile photo, likes, age, gender, + |
| 2018 | Facebook | 30 million | everything |
| 2018 | Facebook | 419 million | IDs, names, phone numbers |
| 2019 | Facebook | 540 million | IDs, comments, likes, “reaction data” |
| 2019 | Facebook | 533 million in 106 countries | IDs, phone numbers, “other info” |
| 2021 | LinkedIn | 500 million | full names, email, phone numbers, workplace information, + |
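To get a sense of scale, here is a quick back-of-the-envelope tally of the rows above. (These are account records as listed, with heavy overlap across leaks, so this is not a count of unique people.)

```python
# Accounts exposed per leak, in millions, as listed in the table above
leaks = {
    "Instagram/TikTok/YouTube 2018": 235,
    "Facebook 2018 (everything)": 30,
    "Facebook 2018 (IDs, names, phones)": 419,
    "Facebook 2019 (IDs, comments, likes)": 540,
    "Facebook 2019 (IDs, phones)": 533,
    "LinkedIn 2021": 500,
}

total = sum(leaks.values())
print(f"{total} million account records, roughly {total / 1000:.2f} billion")
# → 2257 million account records, roughly 2.26 billion
```

Even allowing for overlap, that is on the order of billions of records already in circulation.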
Given this data, any entity with the will and the means – and the means amount to sufficient computational power and data storage, each of which still gets cheaper every year – can generate custom material that addresses a user with the correct form and content to get inside their information bubble. That is now not only imaginable but entirely feasible.
When you add in the ability to run A/B tests to see what works, how well it works (and to whom the user passes the package along), and what does not work – functionality that already exists on almost all social media platforms – you have the ability to deliver, with remarkable precision, exactly the package you want delivered.
This is something I explored with the Army over the last two years, but with the rise of ChatGPT and other generative AIs, the sense that we are facing a new landscape has begun to creep into public discourse, as glimpsed in a recent report for Yahoo Finance, which notes that “90% of online content could be generated by AI by 2025.”
For the record, I think Kevin Roose, also writing for the NYT, has the right approach: it makes me feel a little sorry for younger people that so much of the world as they will encounter it will be generated for them, but not necessarily of their choosing.
The mantra, which should be a policy (or even a law?), for any organization should be: do not collect any data you are not prepared either to spend inordinate time and sums of money protecting or to lose. ↩
A little over a year ago Cory Doctorow echoed out to a larger audience a report by Nature on Carl Malamud’s development of “a full-text-searchable index of 100,000,000 scientific articles.” The catalog contains 355 billion words, and returns five-word snippets and citations in response to queries. It’s publicly available for all to mine and search.
I prefer to keep things simple, so when I am working in/on CSS I tend to use the more limited palette of named colors precisely because they are named and not a hexadecimal sequence.
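A minimal sketch of what that looks like in practice. (The hex values in the comments are the standard CSS equivalents of these named colors.)

```css
/* Named colors are self-documenting in a way hex sequences are not. */
body {
  background-color: ghostwhite;   /* instead of #f8f8ff */
  color: darkslategray;           /* instead of #2f4f4f */
}

a { color: steelblue; }           /* instead of #4682b4 */
a:hover { color: firebrick; }     /* instead of #b22222 */
```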
At some point I knew I needed to account for the two years I spent working for / in / with the Army. Army Dayz offers a chronology with some reflection. I also have notes on topical / intellectual matters that are, I hope, worth thinking about.
Scholarly XML is an extension for Visual Studio Code with a validator and autocomplete for features typically needed by academic encoding projects. It checks that XML is well-formed, validates a file when you open or modify it, makes schema-aware suggestions for elements, attributes, and attribute values, shows documentation from the schema for elements, attributes, and attribute values when available, and wraps selected text with tags using Ctrl+e. Most importantly, it does not require Java!
For those interested in the various abstractions about the “shape of stories” post Freytag’s triangle (or pyramid), I sat down one day to try to graph three of the more popular circles currently, er, circulating.
If you’re in need of free to use, and possibly free to adapt – what the legal types call derive – images and possibly audio, there are two places you should definitely bookmark:
Smithsonian Open Access encourages downloading, sharing, and reuse of the 4.4 million 2D and 3D digital items from its collections, with the promise of more to come. This includes images and data from across the Smithsonian’s 19 museums, nine research centers, libraries, archives, and the National Zoo. They note there’s no need to ask for permission.
Yale Center for British Art: http://britishart.yale.edu/collections/search.
The Lewis Walpole Library: http://images.library.yale.edu/walpoleweb/ usually allows free reproduction inside scholarly books and journals.
Rijksmuseum (change to English): https://www.rijksmuseum.nl/en/rijksstudio offers public domain, free to use.
Wellcome Library: http://wellcomeimages.org/. Public domain, free to use: amazing range of subject matter beyond medicine and science.
The Folger Shakespeare Library: http://luna.folger.edu/luna/servlet/FOLGERCM1~6~6. Pretty good policy about reusing material inside scholarly books and journals.
At LACMA, look for images marked “Public Domain High Resolution Image Available” – many from 18th century: http://collections.lacma.org/
The Metropolitan Museum of Art: http://www.metmuseum.org/research/image-resources#scholarly & via Images for Academic Publishing at Artstor: http://www.artstor.org/content/collaborations
NYPL has some lovely digitized pieces from 18th century, believe it or not, and all public domain: http://digitalcollections.nypl.org/.
Wikimedia Commons - includes notes about public domain images to identify them for use. For example: https://commons.wikimedia.org/wiki/File:Jean_Sim%C3%A9on_Chardin_The_Monkey_Antiquarian.jpg.
Digital Public Library of America http://dp.la/.
British Library on Flickr https://www.flickr.com/photos/britishlibrary/ Public domain images that they allow people to use are on their Flickr account.
Fisher Library in Toronto only charges for reproducing the images in digital format: very reasonable rates.
The PIMS in Toronto has an amazing collection: http://www.pims.ca/the-institute/directory-e-mail-and-telephone-contacts.
If, like me, you found yourself in need of the PDF-cropping abilities of Briss but have faced the wall that is Java on Apple Silicon, fear not: Homebrew has your back. For those not familiar with Homebrew, it is a package manager, much like the venerable MacPorts – which I used for years to manage my installation of Python before switching to Miniconda. All three are package managers that make it easy to install a variety of shell programs and Python (and other scripting language) libraries on your computer. All three work on a Mac. Homebrew also works on Linux, and Miniconda, like its larger sibling Anaconda, works on all three current OS platforms: macOS, Linux, Windows.
The mid-1970s was a golden time for Saturday morning science fiction, with ARK II and Land of the Lost combining with the animated Star Trek series to fill millions of American children with hopes for not only a technological future, but one where progress (and justice) were part of the fabric of the future. We were encouraged to think this way because the original Star Trek continued to play at various hours of the week in syndication. Shows like Logan’s Run and, for those who had access to public television, The Prisoner warned us that the future always bubbled with dystopian possibilities, but we were largely prone to ignore them, especially when, in 1977, the original Star Wars promised us that a small band of individuals who saw the importance of justice could strike a blow against the larger force that denied it.
We loved these shows, perhaps in no small part, because of the promise of a meaningful existence for all who wanted it. Children of baby boomers that we were, we had seen our parents work, succeed, but also become ever so slightly hollow. It’s not a coincidence that a few years into the next decade, John Mellencamp would have a hit with his song “Pink Houses,” pointing out that the promise of America had yet to be delivered evenly – not unlike William Gibson’s observation 20 years later that “the future is here; it’s just not evenly distributed.”
So when we piled into cinemas set in suburban strip mall parking lots, we had seen the weeds poking up through the concrete as we dropped our bikes to buy an afternoon matinee ticket. We were pre-teens, teens, and kids in our twenties, and while we hadn’t seen much of the world, we had glimpsed things and we had an adolescent sense of justice, right but usually for the wrong reasons.
We had been encouraged to see things this way by a generation of novels and films that often featured outlaw heroes, anti-heroes who did the right thing, sometimes for obscure reasons, and who were regularly punished for their efforts or, when there was a reward, it amounted to their surviving to the end of the story. There was no romantic resolution, no settling down to create something larger in Sergio Leone’s Man with No Name trilogy. Having done what he could, Clint Eastwood’s character survived the end of the film to ride away, nothing more.
We sensed without really knowing that there was in the works a backlash to the progress we longed for: in a few years, Reagan would be president and ketchup would be a vegetable in our school lunches. We were living through the moment in which the first of what would become big box stores were knocking out local small businesses: the shopping malls to which we biked were anchored by TG&Ys and K&Bs or Kmarts.
We had a sense, then, from both our slightest of experiences as well as from the steady stream of fiction we read and watched that the establishment was not easily defeated, and that there were more facets to the establishment than were readily discussed by our parents at the evening dinner table. We gathered from All the President’s Men and Three Days of the Condor that there was more to politics and government than was admitted to in press releases. We gathered from The Godfather that money and politics were intertwined, and we knew from listening to our parents of the compromises they had to make regularly to get along.
And so when the original Star Wars came out in theaters, and I mean the original original, we fell in love with Han Solo precisely because he shot first. Here was a man caught between two bureaucracies, the Empire and the Hutt mafia, just trying to make his way. He didn’t, from what we could see of the Millennium Falcon, live the high life. He was making it, but just barely. He didn’t even have big dreams: he wasn’t mooning, like Luke, for the future we all mooned for. He just wanted to get through his day and be alive at the end of it.
And so when some creepy green creature squeezed into the booth at the cantina and threatened Solo, we idolized him for seeing the mortal danger and moving to take it out first. It was precisely what we wished we could do, dispatch the technocrats who regularly hammered on us to make sure we completed our prep work before clocking in or who made us clock out before taking out the cardboard boxes at the end of our grocery store shift or who asked us to look the other way while they snorted coke or while they skimmed a little off the top for themselves.
What did we get for our compliance? Nothing. Based on the lines slowly being etched into our parents’ faces year by year, we suspected that this would be our fate, always to acquiesce to the corporate creepsters who worked the system, any system, only for their own personal gain and somehow also looked good to management.
We were, we realized, doomed to witness the success of assholes over decent people, and it chafed. When Han shot Greedo, we cheered because it was a well-deserved death of an asshole and the ordinary joe was the one who got to do it. It felt triumphant. It felt more triumphant, if I am being honest, than the destruction of the Death Star, because it felt like a personal victory, precisely the kind we never actually ever experienced and worried we never would.
So, in 1977 Han shot first, and it was a win for the little guy, but that kind of win could not be allowed to stand. The kind of morality that power needs to keep us compliant is the kind where you can only strike back if you have been struck first. To strike back as you are slowly being starved … well, that’s a no-no.
And so, as George Lucas himself ascended from a little guy with a dream to a big man with an IP stack that sold everything from action figures to Christmas specials, he became a bureaucracy himself. And one thing a bureaucracy cannot abide is independent action, especially if that action is to stymie one of its operatives. (And, let’s face it, the death of an operative is no more than an inconvenience from a bureaucracy’s point of view: Lucas’ portrayal of Jabba the Hutt’s indifference to Greedo’s death is spot on.)
With his newfound position at the top of a merchandising empire, a word I use quite purposefully, it was inevitable that Lucas reached back to re-write history. In doing so in 1997, he let slip what we all knew: Lucas’ sympathies now lay more with the Empire or the Hut than the rebellion or simply guys trying to make it through another day.
The most riveting scene in John Wick is the moment when the Russian mobster Viggo Tarasov tells us John Wick’s story:
John is a man of focus, commitment, sheer will… something you know very little about. I once saw him kill three men in a bar, with a pencil. With a fucking pencil. Then suddenly one day he asked to leave. It’s over a woman, of course. So I made a deal with him. I gave him an impossible task. A job no one could have pulled off. The bodies he buried that day laid the foundation of what we are now. And then my son, a few days after his wife died, you steal his car and kill his fucking dog.
The scene of course stands out from much of the rest of the film in that it is an extended dialogue between two characters who, at least nominally, care about each other. It’s also, as it happens, a father telling his son a fairy tale. (It’s not quite the right fairy tale, since the film’s writers confused the Russian word for boogeyman, babayka, with the more famous Baba Yaga, but, hey, given the license various fictions take with folk tales, it’s acceptable, and the series makes up for it by later having Wick check out Afanasyev’s Russian Folk Tales.)
The film needs these quiet moments of talking because it needs breaks between choreographed scenes of violence. What distinguishes the Wick movies from others is that the quiet moments are not light banter but allusions to a prior history that the characters share. Much of what we learn about Wick over the course of the film, and its sequels, is about what he used to do, with the clear indication that this is who he used to be.
The distinction between what he does and who he is is important. If John Wick were only an assassin, especially an assassin working for criminal elements, he would hardly be a sympathetic character. Instead, he is a retired assassin. He is, as Tarasov’s chronicle reveals, more than what he does, which was revealed to him when he met his soon-to-be wife.
We know from the chronicle, and from comments made by other characters, that John Wick was very good at his job. And so, having been a man of action for a very long time, Wick discovers love and retires. And then he loses his wife, who leaves behind a living memory of her in the dog, which Tarasov’s son proceeds to kill, initiating the chain of events – and by that I mean mostly a series of action choreographies – that resolves when everyone, well, pretty much everyone, is dead. The movie, the first one anyway, ends with something of a coda: Wick finds another dog and, having killed the people who killed the memory of his wife, staggers into the distance with a replacement memory that will, it is suggested, perhaps allow him some peace.
The parallels to Reeves’ own history are, of course, quite compelling, with the man behind John Wick having lost a partner and, if not retiring, at least being retiring, or reticent, in general. And so, at least in the case of the first film, one wondered if Reeves wasn’t himself working through things and, perhaps, seeking to put his action-figure days behind him.
In featuring a reluctant protagonist, especially a reluctant protagonist who was once a man of action called into action against his will, the film comfortably fulfills a fairly standard American trope, which has featured in everything from The Equalizer to Gran Torino — though Eastwood perhaps did it best in Unforgiven.
The strength of the reluctant hero narrative is in his knowing the price that must be paid for violence. With that knowledge, the audience understands, and is in fact thankful for, the burden that the hero takes up. We cheer him on because he is doing the dirty work for us — the other side of this particular trope is either individuals or organizations that do such “wet work” despite a lack of gratitude, which featured more in the 70s and 80s than in the present moment.
The resolution is for the reluctant hero to retire once again, as happens at the end of John Wick. That feels right. And it felt right up until John Wick 2 was announced. And then JW3. And now JW4. You begin to wonder “exactly how many times is this guy going to get dragged out of retirement?” or, at least, “exactly how unkillable is this guy?” At some point, John Wick ceases to be a reluctant hero and is simply the man he used to be, a relentless killing machine, and that is a far less sympathetic character. And yet, as the sequels make clear, there are sympathies to be had.
In all honesty, writing about film is something I have only ever done offline. Reading John DeVore on various films, as well as his takes on various facets of American culture, has perhaps made me braver than I should be in trying to add to the conversation.
As structuralism / grand theory re-emerges in the context of the humanities, I remembered that years ago I had compiled a small reader focused on Lévi-Strauss. I have scanned the items below to (OCRed) PDF if anyone is interested in any of the individual pieces or in the group as a whole. (I would link directly to the PDF[s], but these items are still under copyright, and I want to keep to fair use.)
Boon, James. 1985. Claude Lévi-Strauss. In The Return of Grand Theory in the Human Sciences, 159-176. Ed. Quentin Skinner. Cambridge University Press.
Lévi-Strauss, Claude. 1995. Myth and Meaning. Schocken Books.
Lévi-Strauss, Claude. 1971. The Deduction of the Crane. In Structural Analysis of Oral Tradition, 3-21. Ed. Pierre Maranda and Elli Köngäs Maranda. University of Pennsylvania Press.
Lévi-Strauss, Claude. 1996. The Story of Lynx. Tr. Catherine Tihanyi. University of Chicago Press.
A list of books I loaned out years ago, and apparently never got back, reminded me of some beloved non-fiction books that I am considering re-purchasing:
Trevor Corson’s Secret Life of Lobsters: How Fishermen and Scientists Are Unraveling the Mysteries of Our Favorite Crustacean
Hayden Carruth’s Sitting in: Selected Writings on Jazz, Blues, and Related Topics
And one book listed simply as Stonework: while I remember the small paperback well, I do not remember more. Perhaps it is Charles McRaven’s Stonework: Techniques and Projects.
It’s good to be reminded that things are not easy when it comes to things like analytics. For those already deep into it with well-established setups, it’s easy to forget how hard-won that setup might have been.
I was handed just such a reminder today when I decided that, as part of my effort to re-build my website using GitHub Pages – after a decade and a half on WordPress – instead of waiting for the web infrastructure to build the site so I could check changes, I would run it locally. This may strike many GH Pages users as obvious, but I was genuinely trying to develop a code-free website that I could then share with colleagues and students to get them started using a text editor and Git. What’s more gratifying than a website? Instant publishing.
GitHub Pages run on Ruby and use the Jekyll gem. A quick web search revealed that the best way to install Ruby on macOS was Homebrew, which is itself written in Ruby. Fine. I use conda to maintain my Python stack. It makes sense. And as an added bonus, people love Homebrew and I know it does a whole lot more.
Only that revealed that I had forgotten to install the Xcode Command Line Tools. That’s not so strange to me: when I used MacPorts for package management, installing the Tools was the first step – a step that became an almost annual task as Apple increased the frequency with which it released major versions of macOS.
xcode-select --install
Installation done, I re-ran the Homebrew installation command. Oops! I had forgotten to sign the license.
sudo xcodebuild -license
License agreed to, I re-ran the Homebrew installation command again. You agree to a few things … and it fails. Something about git. I did a few casual web searches that did not solve the problem until I remembered to do the obvious: paste the error message into the search: xcode-select: Failed to locate 'git'. At some point I stumbled upon the solution:
xcodebuild -runFirstLaunch
I also ended up clearing out the previous homebrew failed installs and starting somewhat from scratch:
sudo rm -rf /usr/local/Homebrew
That’s a bit of a winding path for me, and I have a reasonable amount of patience and an almost reasonable amount of awareness, if not actual knowledge. This would be overwhelming for a lot of new users. I understand now why so many courses in which the basics are taught feature such installations as part of a class meeting. They are just so many weird things that can go wrong, and it helps to have someone who can help you troubleshoot and who can also re-assure you that eventually you will have a working installation and you will not need to worry about any of this … until next time.
From Ladislav Matejka’s “Jakobson’s Response to Saussure’s Cours”:
In the parlance of the octogenarian Jakobson, the decomposition of the phoneme into concurrent distinctive features rejected Saussure’s “linearité du signifiant” and, thereby, one of the general principles of his Cours. In spite of this rejection, it is clear, however, that in the gradual development of distinctive feature theory Jakobson’s decades-long duel with Saussure’s concept of the phoneme had played a crucial role. In fact, it is perhaps not far from the truth to claim that without Jakobson’s life-long duel with Saussure’s Cours, there would not be Jakobson’s distinctive feature theory as we know it.
Matejka goes on to note that Jakobson early on rejected the absoluteness of Saussure’s antinomy between synchrony and diachrony: “every system necessarily exists as an evolution, whereas, on the other hand, evolution is inescapably of a systemic nature” (Jakobson 1928).
Matejka, Ladislav. 1997. Jakobson’s Response to Saussure’s Cours. Cahiers de l’ILSL 9: 169–176.
As part of a larger effort of getting rid of things I don’t need, which includes materials and links and notes that I have stashed all over my computer’s hard drive, I am spending Saturday night going through Safari’s Reading List. What’s worth keeping, I am saving to Pocket, and then I am deleting the rest.
Along the way, I came across Ben Thompson’s “Blogging’s Bright Future” from 2 February 2015. The essay begins with what was then breaking news: many pundits née bloggers were lamenting the demise of the blog. Thompson’s analysis is smart as always, and he observes that many were lamenting the demise of the single blogger as blogs sought to get bigger and deliver more readers to advertisers. Thompson’s model would be the one eventually adapted by others, leading to the rise of Substack and Matter, among others.
There’s plenty to discuss there, but what I was struck by was the following passage:
The truth, though, is that blogging has evolved. It is absolutely true that the old Sullivan-style – tens of posts a day, mostly excerpts and links, with regular essays in immediate response to ongoing news – is mostly over.
What I liked about my blog, this blog, when I first started it was how it was simply that, a web log, a place where I kept notes that were also public, so if someone asked me something and I had already written about it, I could simply point them to the blog.
And then the blog got attention, and people were looking at it, and it was getting linked to by Ivy League libraries and national research centers, and I got too nervous to post all the things that in fact made the blog a blog for me.
And along the way WordPress went from being blogging software to a publishing platform.
And all the fun went out of it, and all the utility, too.
Chunks of what people describe as this second-brain phenomenon strike me as what the blog, my blog, used to be. I don’t know if it will ever get back to that. There are some downsides to keeping things in public, but it does make me wonder about simply creating an internal blog.
If you have landed here, then you have been intrigued by the possibility of having Textacy do the work of delivering subject-verb-object triples out of your text data. It can be done, but there are some nuances to making things happen.
One of the first issues I encountered was getting ValueErrors for nlp.max_length when using Textacy’s built-in function; if I used spaCy itself to create a spaCy doc, everything was fine:
# Load the spaCy pipeline to be used
nlp = spacy.load('en_core_web_lg')

# Use the pipe method to feed documents
docs = list(nlp.pipe(texts_f))

# Checking to see if things worked:
docs[0]._.preview
Please note that Textacy does have a Corpus object. I have not used it yet, but it looks like you could simply feed it the list of spaCy docs. It allows you to bundle metadata with the texts – I would like to see examples of how people are using it.
corpus = textacy.Corpus("en_core_web_sm", data=docs)
spaCy has built-in PoS tagging; putting it to use looks like this:
# If we want to see all the nouns used
# as subjects in the test document:
subjects = [str(item[0]) for item in SVOs]
subjects_set = set(subjects)
print(f"There are {len(subjects_set)} unique subjects out of {len(subjects)}.")
print(subjects_set)
# Get out just the first person singular triples:
foriteminSVOs:ifstr(item[0])=='[i]':print(item)
It looks like the verb slot (the full verb phrase) contains more material than we want. If all we want is the verb itself, we will need to target the last item in the verb list.
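A minimal sketch of that targeting, using hand-made stand-ins for the triples (real triples from Textacy's subject-verb-object extraction hold spaCy tokens rather than strings, but the indexing is the same):

```python
# Hand-made triples: each slot is a list, mirroring the
# (subject, verb, object) shape used in the snippets above.
SVOs = [
    (["i"], ["would", "like"], ["coffee"]),
    (["she"], ["had", "been", "reading"], ["novels"]),
]

# The head verb tends to sit at the end of the verb phrase,
# so target the last item in each verb list.
head_verbs = [item[1][-1] for item in SVOs]
print(head_verbs)  # → ['like', 'reading']
```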
Many thanks to Daniel Kotik for the following HTML that can be dropped into a Jupyter notebook markdown cell:
<div class="alert alert-block alert-info"><b>NOTE</b>
Use blue boxes for Tips and notes.
</div>
<div class="alert alert-block alert-success">
Use green boxes sparingly, and only for some specific purpose
that the other boxes can't cover. For example, if you have a
lot of related content to link to, maybe you decide to use
green boxes for related links from each section of a notebook.
</div>
<div class="alert alert-block alert-warning">
Use yellow boxes for examples that are not inside code cells,
or use for mathematical formulas if needed.
</div>
<div class="alert alert-block alert-danger">
In general, just avoid the red boxes.
</div>
Please note that this essay reflects my own views and not those of the U.S. Army, the Department of Defense, or any other U.S. agency.
In my current work, I often find myself trying to think about the differences between information, misinformation, and disinformation, not because I think there is a difference in how information flows through online and offline networks but because the people with whom I work have an investment in characterizing some information as disinformation. It is not the case that I think all information is created equal: I believe in truth and facts, and while a sophisticated dialogue is to be had about what objectivity is, I think there are plenty of occasions on which, and reasons for which, it is worth attempting to be objective.
All that noted, the following began as a response to a question about deception and social media. It was a frustrating moment for me, because I thought I had begun to open up what the nature of information is: that it is the basis for human identity and groups and that we receive all information in a similar fashion and then evaluate it for alignment with the world as we understand it and/or want it to be. My quick exercise for this is for you to imagine something you feel reasonably sure is true and then imagine conveying that to someone else using the following construction: “I would like to inform you that … .” It’s perhaps a bit formal, but it’s entirely possible. Now, take the reverse of what you think to be true and imagine yourself saying to someone: “I would like to disinform you that … . “ It’s quite awkward. That’s because for most of us, what we pass along is what we think to be true.
1
First, I would suggest that when it comes to information, it is better to imagine it as a constant flow that circulates in and through individuals and groups. The same bit of information may pass through one individual or group and find no reception and either die there or get passed along in a truncated or peremptory fashion, but even that repetition can be dangerous: we have seen literally thousands of instances of information passed along in one group as “silly” be taken seriously by another group. The prime example of this is “Birds Aren’t Real,” which began entirely as an off-hand joke, grew into a collaborative internet fiction, and then got taken up by what amount to “true believers.”
For the record, this dynamic has long been present in the small groups that aggregate into larger groups that we call societies — another terminology for this is microcultures that accumulate into a larger culture. Folklorists have been exploring and mapping this dynamic since at least the end of the Second World War. Some of the initial reports on “fake news” were based on experiments conducted by Andrew Vaszonyi in the late forties. He and Linda Degh would describe these experiments, and the way information flows through small groups with different affinities, in a series of essays published in the 1970s, noting that the same information could be passed through various groups with different valences and would either die or thrive depending upon the group’s receptivity — what they termed affinity.
As has become clear, what various internet “platforms” have done, and this includes both social media and games as well as a variety of websites, is to harness extant human-information dynamics for the purpose of commoditizing the humans. Information becomes for them simply a matter of holding users. The tools at their disposal are of two kinds. First, there is over 50 years of research into human psychology, both pure research to understand human nature and applied research into how to use basic human programming, in terms of essential cognitive functions as well as common cultural functions, to capture, hold, and harness human attention to drive sales. Look no further than the constant re-arranging of grocery stores to take advantage of research both into human way-finding in general and into how people navigate stores in particular. A second example would be the incredible refinement of casinos and gaming machines.
The second tool at their disposal is the large-scale experimental structure of their own platforms. With so many people present, the ability to run A/B testing at scale and automatically is simply unimaginable to most of us. And yet it happens every time you are on almost any site that carries DoubleClick ads, let alone a site like Amazon. There is simply too much at stake for most corporations not to be constantly testing ways to capture, hold, and monetize your attention.
So far as I can tell, our adversaries are not yet at the same level of operations as the corporations involved, but that is only from what I can tell from OSINT/my own reading. I have not been briefed into anything more. I base my conclusions on the fact that at least the Russians still appear to be in “see what sticks” mode with humans being the primary creators of content. (Interestingly, the burnout for Russian trolls is about the same as that for Facebook moderators.)
If there is going to be automation of information operations, I would suggest it will be likely the Chinese that get there first. First, they have the infrastructure of both people and computing power (and the ability to create more computing power with their own system of fabs) and second they are already sitting on top of an unbelievable amount of data, since a number of sites/services chose to use cheaper services provided by Chinese providers than AWS, Azure, CloudFlare, etc. And some sites/services actually maintained their data “in the clear” allowing easy access to the data as it transited between users and a server located in Shanghai. (One social media site stored the data unencrypted on such servers.)
On the matter of deception itself, it would appear that there is sufficient anxiety amongst a number of groups that information that enacts or articulates the anxiety is sufficient cause to receive it and transmit it again. (You can think of information flows as being like leaves riding the top of a creek: in some cases leaves collect to the side because there happens to be a small eddy there.) Folklorists, among others, have long documented the ways that legends and rumors, as well as a host of other forms (e.g., memes), are forms of projection that individuals use within various groups both to create and maintain the groups themselves. (Remember, all human relationships are largely informational in nature.)
2
Deception as a term is problematic in the current moment. The construction of social reality is a participatory, dynamic process in which information flows through individuals and the groups which they populate and becomes not simply the foundation for their reality but is the reality itself. In other words, reality is information-based. (Some might argue, and quite accurately, that economies are the true foundation, but decades of economic research have revealed the central role of information in any economic function.)
Part of the way economics works, whether in a totalitarian or a free market environment, is that those with more resources (power and money) have greater access to information and also greater capabilities to transmit information. That is, they can cast wider and deeper nets to collect information and they can also broadcast more widely and have the resources to do so for longer. This was true before the internet, and it is just as true now.
What the internet revolutionaries imagined was that it would make it possible for those at the margins of power to exchange information for personal development and that this self-enrichment would slowly make its way up the networks of individuals into communities and then into larger and larger economies such that life for all would slowly become richer, more informed, and, the thinking went, more democratic as more and more individuals connected with each other.
In any given community, there are those with more access to resources and those without, those who are more central to a group and those who are not. How communities allow people to live happily at the margins is one test of a community’s resilience. In many traditional communities, some of the central spaces are actually given over to those who might otherwise be at the margins: in some Native American communities those who were uncomfortable with themselves — often because they were homosexual or transgender — were considered to have two souls, and thus have greater access to the spiritual realm. In many traditional European communities, marginalized individuals often became a group’s healer or its historian or storyteller. This model was so resilient that as European societies became larger and larger, they maintained distinct places of honor for scholars and artists.
There are other kinds of misfits within any given community as well: those with misanthropic or violent tendencies. Almost every community has had to deal with such individuals and found a way to channel or at least blunt their impact. In one south Louisiana community with which I am familiar, every girl of a certain age knew to avoid a particular man during town communal events like festivals and parades, especially when he had a bit to drink. This lore passed among young women, and it appears to have made it possible for the community to function without any other intervention. (Please note that I am not saying this is an ideal, or even decent, solution to a social problem; merely that it was the one this community pursued. I have brushed up against similar situations in other research I have done — how much this kind of information makes up women’s culture in some communities is something that has been examined elsewhere. For now, from at least one point of view, we have caused a great number of individuals within our communities to pre-occupy themselves with information that, had the offenders been dealt with otherwise, would have freed up that information space for other things.)
In short, many communities had ways to isolate troublesome individuals, and that meant that the information they sought to transmit had nowhere to go. Those familiar with life BI (before the internet) will remember family reunions or other kinds of gatherings where the belligerent uncle (or aunt, but usually an uncle) or town elder would try to gather an audience, but whose pronouncements often fell on deaf ears or no ears. Well, the belligerent uncles found the internet, and have over the past decade proceeded to network and build a collaborative, if also often highly divergent, account of how wrong the status quo is.
It is so easy to do that even ordinary people with ordinary ideas but arguments on very particular things have joined in doing so. Two things have amplified this dynamic: first, networks connect and so ordinary people with particular concerns find themselves connected with cranks with universal grievances. Second, bad actors who once had to content themselves with trying to infiltrate social groups can now simply post information into these networks without having to leave St. Petersburg or wherever else they might lurk. It’s that easy.
Deception suggests agency, usually an external agent seeking to divert or corrupt someone from the truth, or at least from a commonly held belief that many regard as true. The same goes for disinformation: we want to separate misinformation from disinformation because … why? Because misinformation is incorrect or untrue information passed along accidentally or without intent to do harm, whereas disinformation is intentional.
I would like to suggest an alternative grammatical scheme. I challenge you to think of a fact or some other reportable bit of information, and then place it within the following syntax: “I would like to inform you that … ” Now take that same bit of information, negate it, and place it within the following syntax: “I would like to misinform you that … ” Also try “I would like to disinform you that … ” Both are in fact true statements, are they not? And yet the awkwardness of the latter two is obvious as the words tumble out of your mouth.
While there might be mathematical theories of information, information itself is not mathematical. The negation of a negative does not make it a positive. All information is a positive: individuals use bits of information to construct their realities. Information we regard as untrue or incorrect is just as useful to some individuals in their construction of reality as true or correct information. What we now find troubling is that it appears that many individuals actually prefer untrue or incorrect information because it is often easier to digest or creates a situation in which they are the heroes, or victims, of the moment. Folklorists had some sense of this BI (Before Internet), but we are now struggling to articulate how the dynamic changed due to the scale of online networks as well as the algorithmic nature of those networks, most of which are actually commercial properties in which the attention of individuals is what is sold.
There are a lot more sources than these, but these are the ones I consistently consider in order to keep up with the (textual) context within which the Army operates.
Serious Security Publications
War on the Rocks bills itself as the “producer of essays and podcasts by experts and/or with deep experience in foreign policy and national security issues.”
The way this work unfolds is often just through sheer aggregation. As you collect more and more examples, you begin to build patterns in your mind that emerge as intuition. Much of what people do in statistical learning is to recreate algorithmically what the brain does on its own. It often requires much more data, but that also means that claims made about the patterns are substantiated by the scale of the exercise. Nonetheless, you can make perfectly good claims based on data at a smaller scale. Do not be afraid to lean into your intuitions. Also, do not be disappointed if your first intuitions do not pan out. Inevitably, if you press on with data collection, more patterns emerge. Often those patterns and insights are the better ones because they did not come at first, when you are more likely to see what you want to see. Rather, coming later, they came in spite of you not seeing them. This is their strength.
My work in cultural analytics / folklore studies is focused on understanding the role that discourse plays in the nature and spread of online and offline texts. My principal interest is in narrative texts, in understanding how they are constructed, deployed, and received both because of the ways narrative activates our imaginations and the ways that narrative, as one of many modes of discourse, seems able to make words stick together as they travel across social networks. My focus on the somewhat larger horizon of discourse, as opposed to strictly narrative, is the outcome of years of close examination of actual vernacular texts as they passed between individuals both in face-to-face interaction and online.
While I began this work in folklore studies, I found I needed to expand the scope of my engagement in order to find those areas of overlap that exist between the humanities, the social sciences, and data and information sciences in the belief that there is not only strength in diverse perspectives and collaborations but also real opportunity to find tractable insights into larger questions and problems facing the world in which we live and work.
My current research streams converge on the nature of narrative because I am particularly interested in refining our understanding of modes of discourse so that we can more successfully not only distinguish narrative from other kinds of texts but build a better model of narrative itself. Addressing this question draws on work in folklore studies, information science, cognitive science, corpus linguistics (and stylistics), and computational approaches to the humanities and social sciences.
One strand of this work focuses on legends, all of which have long been distributed by traditional (oral) social networks, but many of which first made great leaps in distance via the first information networks constituted by regional, and later international, newspaper networks. While the project to make the historical case is still underway, contemporary manifestations in the 2016 clown legend cascade and the recurring legend of abandoned trucks highlight the ways in which off-line and on-line social networks not only amplify each other but transform the information that passes through them in ways we have not fully documented. This work established the need for closer scrutiny of legend as a form as well as the need to consider alternate methods for evaluating the way legends spread and/or saturate on/off-line social networks.
At the heart of this lies the nature, and status, of narrative itself, a much-vaunted but still remarkably poorly understood mode of discourse. My efforts here are to understand how individuals use narrative to shape various dimensions of the world as they understand it: time, space, the interactional order. The goal of this work is to build a computational model of narrative such that we can discern narrative from other modes of discourse and begin to understand its place within the larger stream of vernacular discourse, itself situated within global information flows. One argument, for example, has been that the cognitive mirroring enabled by narrative helps in the spread of fake news and in radicalization processes. And yet the status of legends as narrative within folklore studies has long been subject to discussion, calling into question the narrative nature of those forms which rest upon legend, like fake news. A proper accounting of narrative within legend and fake news would go a long way to clarifying the dynamics of these phenomena.
The relevance of form to our understanding of information flows is at the heart of the collaboration with Katherine Kinnaird of Smith College. Taking TED talks as a corpus upon which we can build a set of methodologies and test various assumptions, we have, first, established a clean data set available to anyone. Second, we examine the talks as words, performing not only the usual inspections of topics across time and domain but also attending to matters of gender and seeking to understand the relationship between TED talks, often described as “thought leading,” and information flows contemporary with them. In the process, we have uncovered profitable forms of collaboration and dialogue that we are capturing in Me Think Pretty One Day, a book focused on collaboration between the humanities and the data sciences (under development).
The oldest strand of work that continues to generate some activity began in the wake of the 2005 hurricanes that struck Louisiana, when I embarked upon an exploration of the relationship between culture and landscape. Driven to understand how individuals worked and lived in places dismissed as wetlands, I discovered a tradition of invention that had developed the crawfish boat, an amphibious vehicle that tumbled out of a loose network of Cajun and German farmers and fabricators. Explaining this phenomenon required actor-network theory, and I continue to refine that work as a way of engaging my home discipline of folklore studies in the necessity of re-thinking our work in light of developments in network theory.
A few months ago I mused elsewhere online that civilization would end in portals. That observation came after a period of travel in which I had to wade not only through my own organization’s portal but through another organization’s as well. I had also applied for a few jobs and submitted recommendations for students and colleagues, and I was portaled out. Some organizations were renting their portal from the same vendor, and so it would both recognize you and not recognize you. Somewhere in Dante’s inferno, there are portals. I am sure of it.
It’s not clear to me how much value portals bring to any organization except the appearance for management of “having done something.” This kind of check-box-ism is, of course, the first and last resort for the kinds of managers who can slow a good organization, trip a decent organization, and appear to flock to bad organizations, who, being bad organizations, cannot discern between good and bad management.
So far as I can tell, when it comes to portals, the logic of such management appears to be “the more the better.” And hapless employees are then forced to sign onto and off a variety of portals just to get the basics done. One portal for travel management. Another for travel reimbursement. Another for health records. Another by the insurer. Another by the hospital or medical provider. Yet another portal for performance evaluation.
None of these portals talk to each other, and so the chief task of the employee or patient appears to be to enter the same data over and over and over again all while juggling multiple login identities and a variety of password parameters — this site requires symbols; this site rejects symbols — and captchas — because who hasn’t fulfilled their lifetime quota of clicking on tiles that contain fire hydrants?
And none of this even touches the portals-as-platforms to which we subscribe, which create similar amounts of drudgery for us. Take, for example, a recent interaction with ResearchGate, which emailed me the following:
And when I clicked on the link in the email, it took me to this page:
There is no reason, absolutely no reason, that that information could not have been in the original email. And if it had been, I would have appreciated the lack of friction ResearchGate offered me. Instead, I clicked, and had to log in!, only to learn this rather small fact.
From some manager’s point of view, they have created engagement. From my point of view, they’ve taken something decent and good and portalized it.
In a recent article in Inc, Maria Haggerty concludes that the most important qualities to look for in individuals who may be, or are, high performers are:
long-term commitment to a specific domain: This describes a person who is committed to making an increasing difference to one domain over a sustained period of time.
questing disposition: When confronted with a challenge, this person becomes excited and wants to pursue that challenge, seeing it as an opportunity to reach the next level of performance.
connecting disposition: A person whose instinct, when confronted with a challenge, is to actively reach out and connect with others who can help address it together.
I look at that list and think: that sounds like you are describing a researcher, or at least a research mindset.
The Stevie Awards are, according to their website, “the world’s premier business awards … created in 2002 to honor and generate public recognition of the achievements and positive contributions of organizations and working professionals worldwide.” I learned about them through a LinkedIn post about their storytelling webinar, which bills itself as:
Every business has a story, but effective business storytelling is a lot harder than it seems. Corporate storytelling has become the go-to approach for every marketer to get their brand noticed, and moreover, valued by current and potential customers.
Of course I am curious, but I also wonder where the boundary for an interest in narrative lies. There’s a good research project, perhaps a dissertation, lying there for someone to pursue: all the ways the business world uses stories, storytelling, and narrative. I once did a survey of how culture is used in the business literature, but that was a while ago.
Malcolm Gaskill’s “Quitting Academia” is perhaps a predictable entry in a well-established genre, but it is still stunning for its sweep and its honesty about not everything being good in the proverbial “old days.”
In Fall 2020, UL-Lafayette is going to offer for the first time a course on Digital Folklore and Culture. I will be teaching it alongside the American Folklore course, which I have for the past few years taught as “America in Legend Online and Off,” but which I have lately adapted to “collect some data and understand it.” There is, I think, a possible sequence to be had with the two courses: the first one focusing on participants encountering a variety of vernacular forms and, perhaps, examining them as individual artifacts, and the second course then taking on more features of a course in culture analytics, with participants encouraged to curate a small collection, perhaps even imaginable as a corpus, and then making some forays into analysis “at scale.”
It would be nice to have them as a sequence, since that would mean that the introductions — to folklore and to folklore studies — could be safely housed in the lower-level course, allowing the upper-level course to move more quickly. Given our curriculum and the way our students encounter it, that isn’t going to happen any time soon, and so if I want to try this out, I will need to discover a path that allows people to enter in at the 400-level course and not feel like they are lost.
Some part of this could be satisfied by having available a module introducing folklore studies, with a focus on digital folklore forms. I have begun the EdX 101 course as a way to help me think through how I might structure and script such a module: they are very fond of the lecture-exercise model that delivers content in short bursts that are immediately reinforced. I’m also taking Microsoft’s DAT256x: Essential Math for Machine Learning on edX, and I like that the lectures start with only a talking head but then move to a series of slides. (And I note that the slides don’t have to be great to work.)
I don’t know if I need to think through the Digital Folklore and Culture course before thinking about the introductory module, but edX has the following questions as the first project activity:
What are the ultimate aims of this course?
What do we want learners to know after taking this course? What should they be able to do?
How does this influence (a) what is taught, (b) how it’s taught, and (c) how students are assessed and graded?
What are the ultimate aims of this course? Ultimately, I want participants to have a folkloristic lens as one way to look at the world. All of us will have a variety of responses to various things others say and do, and we can examine both their actions and speech for veracity — myth busting in some places or calling bullshit in others — but I would also want participants in any course I teach to be able to ask “Why does this person think they are saying this or doing this? What is their understanding of this situation?” I don’t need, nor want, participants to excuse inexcusable behavior or beliefs, but the only way I think we have of changing behaviors and beliefs is to understand what underlies them.
What should learners be able to do after taking this course? Participants should be able to identify a vernacular artifact and to begin to sketch out its possible traditional, or perhaps simply cultural, dimensions.
How does this influence the course’s design? This is the hardest question. And it needs to be answered in parts:
One of the things I have consistently done in recent courses is to turn away from textbooks and books and towards articles drawn from scholarly databases, with the hope of establishing in the minds of participants what scholarship at least looks like, if not the beginning of an ability to understand how it works and how they might interact with it. What I haven’t done is discover ways to assess how well they are mastering the scholarly/scientific paradigm, bar certain parameters of the final paper. There need to be more, smaller assignments: a single annotated bibliographic entry, for example.
But this does not address the central topic of Digital Folklore and Culture as outlined in the previous two answers: identify vernacular artifacts and explore their traditional dimensions. This should also be a series of discrete exercises that can be assessed early, often, and incrementally.
Today marks the 33rd day of quarantine, or, rather, a state-wide policy of staying at home. Others elsewhere living under other circumstances will count a different number of days. I count 33 days since Friday, March 13, when the university where I work announced that classes were cancelled for the following Monday and Tuesday and that when Wednesday dawned, all classes would be online.
I was somewhat luckier than most. I had begun to have conversations with my students that week about what it would mean if we had to go online, and so we had made plans together, which helped, I think, the eventual deployment. I remember quite clearly working through some of the finer points of how we would conduct ourselves in my eleven o’clock class when, as class was finishing, one of my students looked at his phone and announced, “Oh, it’s official. We’re going online.” (Of course, my university announced it first on Twitter, and then about an hour later sent an email to faculty.)
So, it’s been a month — well, three and a half weeks really — and I have learned a lot about teaching online, appreciating that how you gauge comprehension is a fundamental shift between the two environments. In face-to-face lectures and discussions, you have an entire range of facial expressions, gestures, and postures that reveal to you the scope and depth of someone’s understanding of the material being examined. A slight eyebrow furrow can lead you to re-state a proposition with a different set of words that raises not only that person’s eyebrows but a host of others. A different person’s posture reveals they are having a bad day or, perhaps, they haven’t prepared for class, prompting you to think about ways to re-engage them, give them reason to seize the next opportunity to examine the material for themselves, looping them back into the next discussion. All of this changes online, and the number of solutions that some learning management systems offer to assess student learning now begins to make sense — though, I confess, I continue to think that any number of them are rather unimaginative and, honestly, somewhat trivializing of any content which must pass through them.
A couple of other things tumble out of my experience of online teaching so far, the first of which is time management, which I glimpse not only through the lens of my screen but also through watching my own high-school-aged child adapt to the change in circumstances. While my daughter spends hours in front of the computer, I am not entirely sure that it is an effective use of her time. That is, I think she confuses time spent staring at the screen with time spent working. I don’t think I am being unfair here, because I can be equally guilty of allowing myself the “quick break” to watch a YouTube video, sometimes educational, like something from 3Blue1Brown or StatQuest, but just as likely, if I am being honest, to be the highlights from a Premier League game or a woodworking video (that I justify as avocational advancement). What my daughter lacks, what my students lack, and perhaps even what I lack, is the regimentation of the varied workday. My daughter is quite clear about it: she was quite used to her day being broken up into chunks, each of which allowed her to focus quite clearly on the task in front of her, confident that there would be a change of class, a change of topic, and, perhaps, a change of pace. This kind of clear set of steps accompanied by variation is one way to be productive. As an adult I use it quite often. Indeed, I am entirely reliant now on being good at scheduling my day in a way that gives me the opportunity to focus intensely on a particular task, but often that focus is driven by the fact that it is bounded: I know that I can push because coming up at two o’clock, for example, I am going to break for coffee and a stroll into the garden (or what we would like to be a garden at some point in its stunted existence).
Finally, there is the matter of writing. No matter what I teach, I think the one thing that I can contribute to my students’ own personal and intellectual development is the ability to write well: to develop ideas, to base those ideas on clearly-defined inputs, and then to communicate those ideas, analytical or argumentative, well. If anything should be conducive to writing, it’s the online environment. After all, at its base, the internet is simply bits being sent from one computer to another, mostly in the form of words (or word-like things, such as HTML tags). Or, put another way, much of our electronic communication, especially among my students, is based on some form of texting — the particular application/platform within which they text is less important than the fact that they exchange words so readily.
So you would think that shifting to all-online teaching would be a boon to the teaching of writing, but so many people are so anxious about writing that you actually spend a considerable amount of time as an instructor giving them confidence, and that often comes in the form of one-on-one sessions before and after class, in the hallway, or in your office. I now spend a considerable amount of time inside Teams doing much the same, but it is far more difficult and takes far more time. (And, to be honest, this kind of effort is not rewarded institutionally: we have so devalued the teaching of writing that it’s really a wonder it gets taught at all.)
Henry Glassie once observed that there were two great traditions of scholarship in folklore studies, one oriented toward data and the other toward theory. In the one oriented toward data, the analyst pieces together what theory she needs in order to explain the data at hand. Done well, such studies, Glassie noted, often offered data in excess of theoretical explanation, leaving the door open to future analyses by other analysts with different theories. In the tradition of scholarship oriented toward theory, the analyst begins with a theoretical construct and seeks out data to affirm it, revise it, refuse it.
Neither tradition is better than the other, and, in all honesty, these aren’t separate traditions so much as two poles within the domain of folklore studies, though this axis of attention surely exists in other domains as well. At least in the American tradition(s), there are “no ideas but in things.” On the whole, we tend to look somewhat askance at what we term “ungrounded” theoretical work, which we too often dismiss as “philosophizing.” (Philosophy has, of course, its own sets of objects, often the process of thinking itself, but done poorly it does open itself up to having no objects at all.)
Strangely enough, we are more likely to accept work that is at the other end of the axis: folklore studies has a long history of valuing the collection of objects of various kinds. The rationale for such valuation is often twofold: one is the notion of salvage that lies at the heart of folklore studies — that the preservation of material that would otherwise be lost to history is an important act, and valuable contribution, in and of itself; the other is that such data is fertile ground for the theoretical development and model-building that will surely follow. Both facets are in fact included in the Journal of American Folklore’s charter published in the very first issue: “it is obviously more important to gather materials which may form the basis of later study than to pursue comparison with insufficient materials; especially as the collection must be accomplished at once, if at all, while the comparison may safely be postponed” (7).
Most work in folklore studies occupies the space between these two poles, with the responsibility falling upon the analyst to decide what matters more to her: the particularity of the data or the universality of the theory. Henry Glassie described himself as an analyst more interested in the former, and it is not uncommon to see folklorists, and other analysts, in fact deriving their theories from the data itself: theory is simply a further abstraction from the patterns usually embedded in the data. How portable the derived theory is is up to readers to determine, but it is quite common for an idea first articulated in one study to get taken up in another study, and then, through the slow accumulation of citations, to develop into its own theoretical nexus.
In fact, quite a few of the bodies of work that we consider to be theoretical in nature really arose because their authors felt that the data before them was either not adequately explained or not addressed at all by the theories available to them. (This might be what the beginning of a paradigm shift looks like in the humanities: a lack of explanation or a lack of coverage.) Imagine, for example, being a literary critic in the 1970s interested in Monique Wittig’s Les Guérillères and having only New Criticism available to you to think about/through the novel. As a mechanic friend of mine might say: you don’t have the tools for the job. In some cases, some analysts simply wait for the tools to be developed, but other analysts decide to start building things for themselves. Sometimes they continue on their own, and sometimes they are joined by others.
Or sometimes they are part of a collection of like-minded analysts who find that what they are interested in isn’t even conceivable in the current theory (or theories). This is what happened with Richard Bauman, who found himself slowly assembling the pieces of an interpretive and ideational framework that became known as “performance theory” in folklore studies, but it wasn’t long, thanks to the interdisciplinary nature of folklore studies, before it slipped its reins and became part of conversations in disciplines focused on more traditional kinds of performances, like theater studies, or on more formal kinds of performances, like communication studies. In his observation about the two traditions, Glassie observed that Bauman was an example of someone who enjoyed collecting data but largely saw it as a way to develop, extend, or refine the theory which was his central concern in much of what he did.
And so now you find yourself as apprentice authors in a field like folklore studies, seeking to find a place to start, and more established scholars like your faculty keep giving you what seem like evasive answers, which too often sound like elaborate, and occasionally articulate, versions of “it depends.”
Because it does.
It depends on what your own interests and investments are, but you also need to recognize that the axis of attention does demand that any analysis possess both data and some theoretical orientation. Time is short in a semester, that’s a given, but the press of time sometimes results in people engaging in needless wheel-spinning because they do not have the traction that results from having a clear sense of what their data is or what their theory might be.
You can, however, use this axis of attention as a way to gauge the nature of your project, and perhaps what it requires. If you have only one or two examples of a given phenomenon, and that is all you are likely to have, that means your work needs to have a very developed theoretical framework that makes those one or two data points compelling examples of some larger phenomenon. If you have twenty or thirty examples, then it is likely you will require less theoretical orientation and will spend more time in your analysis, compiling and collating materials into interesting categories and trends. (This is still small data by data science measures but fairly large data by humanities standards.)
This also means knowing your own strengths, orientations, investments, interests, and (imagined and/or hoped-for) intellectual trajectory — while we sometimes imagine it as not like those other things, the academy is a kind of marketplace of ideas and approaches, and the work you publish will mark you as a particular kind of scholar. This is dynamic, of course, and there are plenty of scholars who have changed their research agenda, for a variety of reasons, and enjoyed a switch from one orientation to another. (And I’ve seen it go both ways, so it’s not always towards abstraction.)
This is an archived version of the page for Research that used to appear on the original website, johnlaudun.org, which ran on WordPress.
I like to make things. I make a lot of things with words, and those things get called essays or books, but I’ve also used words to make things like grants, CDs, television programs, databases, and code. (Words words words.) Here are a few things I’ve made (a complete list of such things can be found on my vita):
The Makers of Things
The Amazing Crawfish Boat is my book on how a bunch of Cajun and German farmers and fabricators invented a traditional amphibious boat. It’s the first book-length ethnographic study of material folk culture in Louisiana – really, the first ethnography in Louisiana studies since Post’s Sketches.
[Image: An Olinger Boat]
The idea for the book came in the wake of the 2005 hurricanes, when a national debate erupted about the nature of land (in Louisiana) and what it meant to re-build an American city (New Orleans). A lot of land got dismissed as “wetlands”, which, it seemed in the view of most pundits, was really not land at all. I thought it would be interesting to investigate how people in Louisiana actually imagined the landscape on which they live and work, and what I found was an amazing series of adaptations and innovations, the most iconic of which is the crawfish boat. There’s more information on the book and the project behind it.
The Shape of Small Stories
My more recent work has focused on Why Stories Matter, where I explore the shape of stories both as a form as well as an experience. From local legends about treasure to contemporary legends about Slender Man, I’m interested in how stories shape our experience of the world and how we shape the world through stories. I ground my explorations not only in my home field of folklore studies but also in contemporary work in cognitive and computational models of narrative. A lot of the work you see on the Logbook that has to do with textual analysis/text mining using Python is part of this work.
[Image: The Way Louisiana Treasure Legends Work]
Text Analytics
As I have explored the shape of stories and begun to develop an understanding of ways to describe and/or analyze narrative computationally, I have assembled a small collection of scripts in Python that, for now, is simply known as Useful Python Scripts for Texts and is available on GitHub. Given interest in it, and my own commitment to developing a computational folkloristics that will pair well with the work of other folklorists, like Tim Tangherlini, working in this area, I have begun to draft a larger text that describes what work can be done.
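To give a sense of the kind of small, reusable script such a collection contains — this is an illustrative sketch of my own for this page, not code copied from the repository — here is a minimal word-frequency profile of a text, the sort of first step most text-mining work begins with:

```python
import re
from collections import Counter

def word_frequencies(text, top_n=10):
    """Return the top_n most common words in a text, case-folded.

    Tokenization here is deliberately naive (runs of letters and
    apostrophes); real work would swap in a proper tokenizer.
    """
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

sample = "The treasure was buried, the treasure was lost, and the treasure was found."
print(word_frequencies(sample, top_n=3))
# → [('the', 3), ('treasure', 3), ('was', 3)]
```

From a profile like this, it is a short hop to comparing frequency distributions across a small corpus of legends, which is where the more interesting folkloristic questions begin.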
Louisiana Studies & Digital Humanities
I have done a lot of work in Louisiana studies, both in terms of producing original research but also in trying to find more ways to engage the diverse audiences interested in folk culture:
In 2003 or so, I joined the faculty and staff at the Center for Louisiana Studies. The state of the Archives of Cajun and Creole Folklore and the dream of leaping forward a technology or two provided me with the reason to write a grant to the Grammy Foundation. With those funds we made the best possible digital copy of taped recordings, and then we used those digital copies to open up the Archives to a variety of interested individuals with a variety of purposes. We ended up with some pretty amazing results, as you can hear for yourself in the first two CDs released under the Louisiana Folk Masters brand: Varise Conner and Women’s Home Music.
[Image: The first Louisiana Folk Masters CD]
The idea for Louisiana Folk Masters was born out of a desire to make the folk culture – real folk culture and not the stuff too often served up in the popular media – more accessible. I dreamed up a series of products that would have as their basis either materials already in the Archives of Cajun and Creole Folklore or materials that were being generated with the Archives in mind. The CDs were just the first step. Television was next. As luck would have it, Louisiana Public Broadcasting was interested in expanding its approach to the genre of “human interest” stories. I worked with LPB on two profiles: one on Creole filé maker John Colson and another on Cajun Mardi Gras mask maker Lou Trahan. (Clickable links to the videos coming soon.)
I’ve also written grants for a number of other projects – mostly because I like to see what happens when you come up with something new and fun: what can others do with it?
Humanities Research and the Tourism Commission. While I was still involved heavily with the Center for Cultural and Eco-Tourism, the good folks from Acadia Parish came to the Center and asked for help brainstorming possible ways to improve their tourism infrastructure. We eventually proposed Rich the First Time, a media archive and database that would consist of high-quality inputs gathered by folklorists (mostly our students) that would be available for a variety of outputs.
In 2007 or so, the director of the Humanities Resources Center, the dean of the College of Liberal Arts, and I began a conversation about what it would take to support faculty and students in their research and publishing in the new era of cyberinfrastructures. We decided we needed a room full of equipment that could do anything someone was willing to dream up and try out. The Louisiana Digital Humanities Lab was born in that moment.
You may have arrived here looking for the forms I created for field surveys, media logging, and archiving. (Specific links are to the Scribd pages.) You may also be interested in my collection of interview tips.
While we may never know when COVID-19 first appeared, we can definitely date the moment here in the homeland when people realized that maybe they should take it seriously. It was the day the state closed K-12 schools for the month. It was also the day that the local university decided to cancel classes for two days and then re-open as an online-only institution. That was the day the toilet paper really began to fly (off the shelves).
It was a day like any other day for me. I drove the girl to way-too-early-in-the-morning track practice, came home, had a cup of coffee, prepared for class, and went to campus. In class, we discussed our contingency plan, and even managed to squeeze in a bit of discussion about the assigned reading.
As class ended, one of my students who is an RA (a residential assistant in a university dorm) announced that he had just gotten word that the university was in fact going online. Okay, we decided, good thing we had a plan. Everyone filed out. I went upstairs and attended a webinar on alternative ways to approach grading papers. It was just me, a grad student, and the faculty member who organized it, and we had to huddle around a laptop — because the room’s equipment was, of course, not working — but we enjoyed ourselves and the physical intimacy made it feel less like a webinar and more like a conversation.
Afterwards I headed home, where I heard that the governor had announced that the state was closing all public schools until the middle of next month. Oh, I thought. Now things are going to get goofy.
I decided that the best thing I could do was grab our standing household grocery list, add a few items for a long-ish weekend, and head to the closest grocery store and get a shop in before all the parents picking up kids from school, and knowing they wouldn’t be going back for a month, decided they needed to stock up for the apocalypse.
Too late.
When I walked into the store, I didn’t really worry that the cart I grabbed was the last one: this particular store isn’t necessarily the most organized, and they are often running low on carts. And it wasn’t that crowded as I worked my way through the produce. But by the time I cleared through the meat section and was heading to the back corner of the store to pick up milk and eggs, it became clear something was weird: there was a line of carts.
As I crossed the middle aisle that runs the length of the store, I saw that the line of carts ran from the back to the front. As I continued on my way to the back corner of the store, I was following the line of carts. As I turned the corner to go forward again to the bread aisle, I was following the line of carts. The line of carts was wrapping itself around the store.
And the line wasn’t moving, only growing longer.
I looked at the handful of items in my cart, and I turned to the store employee who had his phone out to photograph the line. I apologized as I told him that I was abandoning my cart.
“No problem,” he said. “I’ll push it back into the cold walk-in.”
“Thank you.”
“You know we open at five in the morning?”
“I’ll see you then.”
And I left and came home and stayed home until the sun went down.
This Christmas my mother wanted to give my daughter a locket that had been her mother’s. A marvel of the jeweler’s craft, the locket offered a glimpse into another world: when opened, two leaves, in addition to the cover, pop out to allow four photos to be seen at once. Inside were the photos my grandmother had placed there of her husband and three children. It was easy to imagine the meaning such a magical mechanism made possible: she held her entire family in her hand, could take her world in at a glance.
Such mechanics are hardly called for in an era of smartphones with bigger screens, higher resolution, and the ability to hold thousands of images, as opposed to the four thumbnail-sized, grainy, black-and-white photos pressed so carefully into the locket, but my mother wanted my daughter to have something from her past. My daughter was struck by the artistry of the locket, but the four photos meant nothing to her. As she considered it, my daughter offered that she might wear it, but she would want to replace the photos.
My mother’s face first fell, and then the blood drained from it. After a pause, she continued as if my daughter had not responded, offering to build a shadow box for the locket, so it could be hung on the wall, since people these days didn’t wear things like lockets.
The conversation that followed proceeded awkwardly and not without hurt feelings on both sides, with my mother feeling like the past, her past, was being too quickly hurled into oblivion and my daughter feeling like she was not being allowed to live her life as she chose. My wife and I encouraged a change of conversation, but I had glimpsed in the exchange something with which I was already struggling.
For in another part of our house, there is a drawer in which I keep a handful of mementoes, a pocket watch that belonged to my grandfather and a pocket knife that belonged to my father. Both men are now gone. Having never enjoyed wearing a wrist watch, I used to keep my grandfather’s watch in my pocket, but it was replaced with a smartphone over a decade ago. I don’t see myself going back. And while I like my dad’s pocket knife, I had already in some way inherited from him the habit of keeping a knife handy by having a pocket knife of my own, and, to be honest, I like the greater number of features my Swiss Army knife possesses over my dad’s simple pocket knife.
So both the watch and the knife rest in a drawer, where, every so often I glimpse them, take them out, think about the men who once carried them, and then lay them to rest again. I may have once shown them to my daughter, but they are not a part of her world. While my father was a part of her world, my grandfather was not, having died almost a quarter of a century before she came into existence. And so these mementoes of mine are mine alone. If such objects like the watch and the knife have any meaning for her, it is in my attachment to them and her attachment to me. Otherwise, when I too am gone and she has to decide where things go, it is just as likely that the watch and the knife will be given or sold.
If anything remains, it will be the act of keeping a knife in a pocket, which brings me to an interesting intersection. I have two kinds of mementoes from my grandmothers. From my maternal grandmother, I have an Oster Kitchen Center, which I use regularly, and I also have her way of making spaghetti sauce, which I make every week for my daughter. From my paternal grandmother, I have an afghan, which doesn’t get much use, but I cook dishes I learned from her, like crawfish étouffée and gumbo. My daughter often requests these.
What I am left with in my thoughts is that the best things we leave behind are not tangible things like lockets, watches, and knives but intangible things like recipes and other such small actions, many of which don’t really strike us as an inheritance, or even heritage. I’m sure some will respond that it’s about making memories and not keeping memory objects, but I don’t know that I ever set out to make a memory with spaghetti sauce.
I guess what I want to say to my mother, and to the many like her who fear we are leaving the past, their past, behind, is that you cannot determine the past for the future; only the future gets to choose that. So if there is a lesson in this holiday moment about the gift of the past, it may very well be: you had better be nice. Because if you aren’t, you may very well end up forgotten.
A number of responses to some of the fury over inconsistencies or, in some cases, betrayals of whatever we now want to call the Star Wars (er, multiverse?) have insisted that any attempt at seriousness is silly: the Star Wars franchise is for kids. The impulse to stifle discussion puzzles me: why wouldn’t you want to talk about something? Are we just supposed to consume such things as bits of mental candy, forgetting them the moment they’re swallowed?
More to the point, the Star Wars films did not start out as children’s films. Rather, with Luke Skywalker at the center, they were quite clearly films about, and for, adolescents. The films resonated across the country, and around the world, because they spoke to an adolescent’s sense that the world was asleep, and they were trapped among the sleepers (aka adults). Freed of somnolence, we would find our life’s true calling, which would not only be meaningful to us but meaningful to others.
Like Luke, we longed, as adolescents continue to long, to learn that we are actually already part of something larger and that destiny has been waiting merely to drop a droid in our midst who will set in motion a series of events more exciting than anything we can imagine, especially those of us who grew up in American suburbs and watched our parents pull out of driveways, dozens along any given street, of a morning and back in again of an evening, while we spent our days stuffed into classrooms with too many students and not enough care, including our own. Uncle Owen was the father we had, and Aunt Beru was the mother we wished we had, patient with our own impulses to get off the moisture farm.
Once on our way, we would need only our own desire to learn and to act, and this was what made Luke so powerful in the first film, A New Hope, and which continued to draw us to him as a figure in the movies that followed, despite (or perhaps because of) his tendency to whine, which was also our own.
The fault of the prequels was to take the centrality of will away from us and to turn it into some biological predetermination: a Jedi was not something you willed yourself into becoming; rather, being a Jedi was something you were born into. For those of us lost in the shuffle of the working and middle classes, we had had enough of people born into their roles. Anakin was awful precisely because Lucas had lost the plot: he was the kid in our class who had always gotten special privileges because his parents were wealthy or, worse, the teachers had anointed him as special—it didn’t help that Anakin was blue-eyed and blond. Oof, really? We all had had enough of those kids in our lives. (Why had Lucas not realized that the pairing of dark-haired Han Solo with blond Luke Skywalker had worked so well? We could, at least us boys, alternate between the two.)
With The Force Awakens it felt like someone at Lucasfilm was getting back to basics: we had our lonely adolescent—okay, again on a desert planet, because teenagers don’t get lonely on planets with plants?—and she had been literally abandoned by her parents and left to scrounge for herself. We also had another adolescent who, forced into the adult world, recognizes the wrongness of it and chooses to escape. Finn even, like so many of us, “fakes it until he makes it,” claiming to be part of the resistance so Rey will like him.
So far, so good. And so long as that part of the story remains front and center, we will likely forgive the storytelling combine—it’s hard to call this a team when it feels too often like a machine designed to produce commodities rather than a compelling narrative—its tendency to rehash, well, just about everything. I get that Lucas didn’t anticipate making anything beyond the first film so he went ahead and put the big finish there, so we got Death Stars 1 and 2. But the Death Planet? (And one that sucks the life out of a sun?) Come on!
And while I began this essay in praise of the angsty adolescent, the power of the first film was that we had only one—we are all going to pretend like the angsty adolescence of Anakin never happened, okay? Now we have three: Rey, Finn, and Ben Solo/Kylo Ren. This is one weird triangle. (And why not make the Kylo Ren figure feminine as well? Anakin 1 and 2 as well as young Luke have given us more than enough “I need a dad hug” for the duration of the franchise.)
The idea of a “trusted system” probably can be attributed to David Allen as much as to anyone else. Certainly the idea is his within the current zeitgeist. Even if you have not heard of him you probably have heard the ubiquitous three letters associated with him, GTD. Allen’s focus is on projects and tasks, but the idea of a trusted system applies just as well to any undertaking. For folks who type for a living, be it words in a sentence or functions in a line of code, ideas are just as important as tasks when it comes to accomplishing projects. Allen’s GTD system has a response to ideas, but it largely comes down to putting things in folders.
But as anyone who works with ideas knows, sometimes you don’t know where to put them. And, just as importantly, why should you have to put them in any particular place? In the era of computation – that is, in the era of grep and #tag – having to file things, at least right away, seems an anachronism, a return to a paper era that often forced us to ignore the way the human mind works. That is, when operating in rich mode the mind is capable of grasping diffuse patterns across a range of items in a given corpus, but finding those items when they are filed across a number of separate folders, or their digital equivalent, directories, is tedious work. grep solves some of that problem, of course.
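A one-liner is enough to make the point: pull every tagged note out of an arbitrarily messy directory tree. The directory name and tag below are stand-ins for your own corpus:

```shell
# list every file, however deeply filed, that carries a given tag;
# "notes/" and "#fieldwork" are example stand-ins
grep -rl '#fieldwork' notes/
```

Where you filed a note stops mattering; the tag travels with the text.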
I have largely committed, in the last few weeks, to using DevonThink as the basis for my workflow, because I like its UI and its various features and because it makes casual use so easy – and when I am sitting in my campus office, I need things to be casually easy.
But the more I learn about DevonThink’s artificial intelligence, the more I want to be able to tweak it, add my own dimensions to it. For example, DevonThink readily gives you a word frequency list, but what if I want to exclude common words from that list? I know a variety of command line programs that allow me to feed them a “stop list”, a list of words to drop from consideration (and indeed these lists are sometimes known as “drop lists”) when presenting me a table of words and the number of times they appear in a given corpus. I am also guessing that when DT offers to “auto group” or “auto classify” a collection of texts, it is using some form of semantic, or keyword, mapping to do so. What if I would like to tweak those results? Not possible. This is, of course, the problem with closed applications.
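For the curious, the stop-listed word frequency table is a classic Unix pipeline, and a rough sketch of what this looks like outside a closed application follows; corpus.txt and stoplist.txt (one word per line) are hypothetical file names:

```shell
# word-frequency table that honors a stop list: split on non-letters,
# lowercase everything, drop empty lines and stop words, then count
tr -cs '[:alpha:]' '\n' < corpus.txt \
  | tr '[:upper:]' '[:lower:]' \
  | sed '/^$/d' \
  | grep -v -x -F -f stoplist.txt \
  | sort | uniq -c | sort -rn
```

And, unlike a closed application, every stage of the pipeline is open to tweaking.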
The other problem with applications like DevonThink and MacJournal, as much as I like both of them, is that you can do a lot within them, but not so much without. While neither application holds your data captive – both offer a variety of export options – a lot of their functionality, titles, tags, and the like, exists only within the application itself.
Having seen what these applications can do and how I use them, would it be possible to replicate much of the functionality I prefer in a plain text system that would also have the advantage of, well, being plain text? As the Linux Information Project notes:
Plain text is supported by nearly every application program on every operating system and on every type of CPU and allows information to be manipulated (including, searching, sorting and updating) both manually and programmatically using virtually every text processing tool in existence. … This flexibility and portability make plain text the best format for storing data persistently (i.e., for years, decades, or even millennia). That is, plain text provides insurance against the obsolescence of any application programs that are needed to create, read, modify and extend data. Human-readable forms of data (including data in self-describing formats such as HTML and XML) will most likely survive longer than all other forms of data and the application programs that created them. In other words, as long as the data itself survives, it will be possible to use it even if the original application programs have long since vanished.
Who doesn’t want their data to be around several millennia from now? On a smaller horizon, I once lost some data to a Windows NT crash that could not be recovered even with three IT specialists hovering over the machine. (To be fair to Windows NT, I think I remember the power supply was just about to go bad and that it was going to take the hard drive with it.) Ever since that moment, I have had a tendency to want to keep several copies of my data in several places at the same time. Both DropBox and our NAS satisfy that lingering anxiety, but both of them are largely opaque in their process and they largely sync my data as it exists in various closed formats.
And as the existence of this logbook itself proves, I have problems with focus, and there is something deeply appealing in working inside an environment as singularly focused as a terminal shell. That is, I really do daydream about having a laptop which has no GUI installed. All command line, all the time. Data would be synced via rsync or something like it, and I would do various kinds of data manipulation via a small set of scripts that I would maintain via Git or something like it.
Now, the chief problem plain text systems have, compared to other forms of content management, is the lack of a built-in way to hold metadata, and so the system I have sketched out defaults to two conventions about which I am ambivalent but which I feel offer reasonable working solutions.
The first of these conventions is the filename. Whether I am writing in MacJournal or making a note in my notebook, I tend to label most private entries with their date and time. In MacJournal this looks like this: 2012-01-04-1357. In my Moleskine notebook, every page has a day header and each entry has its own title. Diary entries are titled with the time they were begun. So a date-time file naming convention will work for those notes.
When I am reading, I write down two kinds of things: quotes and notes. Quotes are obvious, but notes can range from short questions to extended responses and brainstorming. Quotes are easily named using the Turabian author-date system, which would produce a file name that looks like this: Author-date-pagenumber(s). Such a scheme requires that a key be kept somewhere that decodes author-dates into bibliographic entries. What about notes? I think the easiest way to handle this is using author-date-page-note. In my own hand-written notes, I tend to mark page numbers for quotations with parentheses and page numbers for notes with square brackets, but I don’t know that running regexes against filenames is how I want to handle that distinction.
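Because the metadata lives in the filename, getting it back out is just a matter of splitting on hyphens; the filename below is a made-up instance of the author-date-page scheme:

```shell
# split an author-date-page filename back into its fields;
# 'Author-2001-45.txt' is a hypothetical example of the scheme
f='Author-2001-45.txt'
IFS=- read -r author date page <<EOF
${f%.txt}
EOF
echo "$author ($date), p. $page"   # prints: Author (2001), p. 45
```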
Filenames handle the basics of metadata, in some fashion, but obviously not a lot, and I am being a bit purposeful here in trying to avoid overly long filenames. For additional metadata, I think the best way to go is with Twitter-style “hashtags”. E.g., #keyword.
Where to put the tags: at the beginning, like MultiMarkdown or AsciiDoc, or at the end, where they don’t interfere with reading? I haven’t decided yet. I use MultiMarkdown, and PHP Markdown, almost by default when writing in plain text. The current exception is that I am not separating paragraphs with a blank line, which is how most Markdown variants mark a paragraph break. This is just something I am trying, because when I am writing prose with dialogue or prose with short paragraphs, the additional white space looks a bit nonsensical. The fact is, after years of being habituated to books, I am used to seeing paragraphs begin with an indent and no extra line spacing. It’s very tidy looking, and so I am playing with a script through which I pass my indented prose notes and which replaces the tab characters, \t, with newline characters, \n, before passing the text on to Markdown.
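A minimal stand-in for that script is a single tr call, since the transformation is just tab-to-newline; note.txt is a hypothetical file name:

```shell
# turn tab-indented paragraphs back into blank-line-separated ones;
# pipe the output on to whatever Markdown processor you use
tr '\t' '\n' < note.txt
```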
Now, this system is extremely limited: it doesn’t handle media. It doesn’t handle PDFs. It doesn’t handle a whole host of things, but that is also its essence. It’s a work in progress. I will let you know how it goes. Look for the collection of scripts to appear on GitHub at some point in the near future.
Some Further Notes on a Plain Text System (2012-01-04-1657)
If you are working in plain text, you are probably still going to want some way of structuring your text, that is marking it up just a little so that you can do a variety of things with it. As I have already noted, the way that I know best is a variant of Markdown known as MultiMarkdown. But there are other systems out there: I have always been intrigued by the amazing scope of reStructuredText and I am somewhat impressed by AsciiDoc. (By way of contrast, I have always hated MediaWiki markup: it is almost incomprehensible to me.) The beauty of reStructuredText is that you can convert it to HTML or a lot of other formats with docutils. Even better is Pandoc, which converts back and forth between Markdown, HTML, MediaWiki, man, and reStructuredText. Oh my!
You can get Pandoc through a standalone installer or you can get it through MacPorts. To get MacPorts, however, you need the latest version of Xcode, which brings me to the topic of the moment: a plain text system is really founded on the Unix way of doing things, which means that your data is in the clear but you as an operator must be more sophisticated. Standalone applications like MacJournal and DevonThink, which I keep mentioning not at all because they are inadequate but because they are so good and because I use them when I am more in an “Apple” mode of doing things, are wonderful because you download them and all this functionality is built in. At the command line, not only do you assemble the functionality you want out of a variety of small applications, but in order to install or maintain those applications you need to have a better grasp of what requires what, also known as dependencies.
The useful Python script Blogpost, a command line tool for uploading posts directly to a WordPress site, is available through a Google Code project, which requires that you get a local copy through Mercurial, a distributed version control system, which is easily available … through MacPorts. There are other ways to get it, but allowing MacPorts to keep track of it means that you have an easier time getting it updated. This works much like Mac’s Software Update functionality, or the new badges that come with the Mac App store that tell you that updates are available. No badges at the command line, but if you allow MacPorts, also known as a package manager, to, well, manage your packages, then all you need to remember to do is to run update once a week or so and all of that stuff is taken care of for you.
And so to summarize the dependencies:
Blogpost -> Mercurial -> MacPorts -> Xcode
Package managers, like MacPorts, only keep track of things locally, that is on the one machine on which they are installed, and not across several machines. It’s a bit of a pain to replicate all these steps across various machines, and so I now understand the appeal of debconf for Ubuntu users. I don’t quite know how to make that happen for myself, but I am open to suggestions.
Markdown vs AsciiDoc vs reStructuredText (2012-01-07-1345)
I have been using Markdown for five or more years now. It’s very easy to use, and it does what it does well: provide an easy means for writing documents that can be transformed into HTML. Because of its close ties to HTML, it has all of that markup language’s limitations, which is to say it is focused on presentation and not meaning. I have tried, in my own personal use, to reserve underscores for the titles of works, as opposed to using asterisks, which typically achieve the same effect. Markdown in its original state supports neither footnotes nor tables; MultiMarkdown and PHP Markdown Extra do, but they share in Markdown’s limitations.
I am mostly happy to live within those limits, but there are two other lightweight markup schemes out there that are worth considering: AsciiDoc and reStructuredText.
On Markups (2012-01-26-1432)
For general purposes, Markdown, as well as the other “plain text markup languages”, serves very well. I do not, however, find Markdown very conducive when I am writing either for myself or writing to think. For one, I generally prefer an indented line for the beginning of a paragraph, with no blank line above or below. It’s especially useful when you are writing either a series of short paragraphs or bits of dialogue, where the Markdown convention could very well leave half your screen filled with white space.
I also find the Creole language’s use of equal signs for headers a better option than hash signs, which Creole reserves for numbered lists. Using the hash sign for lists also resolves the problem of having numbers get out of order as you write a list. Markdown of course fixes this as it converts to HTML, but you still have some confusion in the plain text original.
Now, one solution to developing my own markup language would be to fork a version of Markdown, in whatever programming language I would prefer to work in – there are versions of Markdown in Perl, PHP, Python, and Ruby (and I am sure there are more versions in other programming languages). My problem is that I have a pretty extensive back catalog of entries in my WordPress database, 1034 posts as of today, and most of them are in Markdown. I also have over 200 notes in MacJournal, most of which are by default marked up similarly.
It would be easy, I think, to write a script, using something like awk, to go through those posts and replace \n\n with \n\t. The same would be true for numbered lists – replace lines beginning ^\d+\. with # – and most uses of the hash signs for headings would be similar.
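Such a conversion might be sketched with sed and awk’s paragraph mode; post.txt is a stand-in for one exported entry, and this handles only the two cases just described:

```shell
# "1." style list items become Creole-style "#" items, then
# blank-line-separated paragraphs become tab-indented ones
sed 's/^[0-9][0-9]*\. /# /' post.txt \
  | awk 'BEGIN { RS = "" } { printf "\t%s\n", $0 }'
```

The awk idiom here is paragraph mode: setting RS to the empty string makes each blank-line-separated block one record.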
But instead of working with over one thousand bits of text, and with no real interest in double-checking if everything came out correctly, the better solution might be to proceed with my new markup language and then simply write a quick script to change it to Markdown when I decide to make a text public.
Mind, only some of this is brought about by my current return to command line geekery. It’s also the case that my favorite note-taking application, MacJournal, cannot sync all of my devices easily. Two (or more) computers via Dropbox? No problem. iPhone and iPad … well, you can sync but only through the abomination of getting both machines on the same network, setting them up to sync, etc. This is silly. I already have my MacJournal data sitting in a Dropbox account. My iPhone can connect to my Dropbox account. Sync to that.
MacJournal can’t do that. The cool new journaling application Day One can sync many devices through DropBox, but it currently cannot hold images and it does not feature tags. (I suppose one could make tags work the same way I make them work in my textCMS, as hash tags, e.g. #tag, but that only offers me searchability not my preferred way of working with tags, via browsing.) And Day One stays away from plain text files for storage, preferring a variant of the Mac OS plist for formatting entries in the file system. And, too, I have to abide by its preferred markup language, which is Markdown, and not one of my own choosing. But, its UI is quite nice.
It’s true. As a younger man, I sat in my little apartment in Bloomington, Indiana and I strummed my guitar and aspired to write songs. Before that, I remember writing lyrics in the study I had while living outside of Truxton, New York. I think I even recorded some of the songs – pieces of melodies, of lyrics, of vibes as the kids would say – on cassette tape.
Consider this lyric found on a 5 x 8 card a bit of juvenilia:
I had the carburetor cleaned and checked
With her line out she’s humming like a turbojet
Propped her up in the yard on concrete blocks
for a new clutch plate and a new set of shocks
Took her down to the carwash and checked the plugs and points
I’m going out tonight and rock the joint.