AI: Literacy and costs

I have hesitated to enter the fray (so to speak) about writing and large language models and generative artificial intelligence (AI). My hesitation is because all of the persistent chatter and hype and exaggeration doesn’t leave a whole lot of room for contrary opinions. I am glad I paused before writing anything about it because, while the hype persists, I am beginning to see a more tempered tone, especially in the world outside of higher education.

Another key part of my hesitation is connected to the fact that I’m just not that interested in it. Well, maybe that’s not totally true. My interest is no different from other technological changes that technical and professional communicators have had to consider. The persistent hype that this is something different doesn’t hold a lot of sway—particularly for me. If one reads the scholarship in TPC (technical and professional communication), the same angst is visible in work on word processors and then content management systems. You can see the same discussions about social media (which we now know is not monolithic, though in early analyses and investigations it seemed to be a singular thing). In other words, we can track a host of technological innovations historically and see the same sort—almost the exact sort—of arguments currently circulating around AI.*

Higher ed remains in a frenzy with committees, task forces, workshops, and a push to get all sorts of things into the publication pipeline (and even a new interdisciplinary journal), while the rest of society seems not as interested.

In any case, while not fully my thing, I do recognize that TPC academic degree programs need to grapple with how to integrate AI and discussions of it into the classroom. The same approach to integration in higher ed is applicable to how workplace teams can approach critical discussions of the place, or not, of AI in the work of writing and communicating.

At a recent academic conference, there were a handful of presentations on AI. I said some of the things I write about here out loud at that conference. However, from those discussions, I will carry forward two important points. First, there was a panel of presentations by Chenxing Xie, Manushri Pandya, and Jinzhe Qiao on “responsible AI.” There wasn’t a clear definition of responsible (and most folks who know me know I lurv a definition), but the examples of how that played out in different courses were important work. I greatly appreciate the term “responsible” as a descriptor and explanation for an approach to AI’s use. There seems to be an un(der)-explored value to responsible as it relates to AI.

Second, when reading responsible alongside Nupoor Ranade’s term, “augmented,” TPC has the beginning of a way to talk, both theoretically and practically, about AI as it relates to the entire communication circuit and the entirety of the types and kinds of communication TPC practitioners undertake. Ranade’s “augmented” highlights that AI is only an addition. It adds to existing practices and processes. How then might TPC consider responsible augmentation of AI? This is a much better question than the knee-jerk reaction that we have to be using this tool.

For me this responsible augmentation starts with leaning into technological literacy, which has been a hallmark of TPC programs since their beginnings (way back in the 1950s). Technological literacy has always been keyed to the what, when, why, and how of including or excluding a tool. The same is true for AI tools. Technological literacy means the ability to use, understand, evaluate, and critique technology (i.e., tools, platforms, apps, etc.).**

I would further add that technological literacy from a programmatic perspective means ensuring that issues of technology are embedded throughout the curricula. Students need to be made aware, particularly in regard to use, that learning how to use one tool will transfer to similar tools, and that a new tool can become part of the process of work and of making. Technological literacy moves TPC beyond surface-level use to more of what could be seen as responsible augmentation. AI does really well with basic things, the same things we’ve been using Google and other search engines for for many years now. So I’ve yet to really understand what the hype is about. We’ve been doing “prompt engineering” since the day Google launched.

And as Brenton Faber and Johndan Johnson-Eilola observed some 20+ years ago: “To respond to new technological uses—not merely new pieces of software, but new ways of working and communicating—in ways that are both fast and effective, technical communicators must routinely engage in work at both the applied and theoretical levels.”& This engagement in ways of working and communicating is crucial since AI sure can’t do the critical thinking we want students to do. Nor can it be used effectively without a human.

And it is also clear that, since so much writing is context driven, the so-so content AI may generate will need heavy editing. Of course, that editing requires someone—a person—to do that work based on the complexity of the context. I chuckle when I see a tweet/skeet reference the poor feedback the tool gives on writing. Like, what?! It wasn’t meant to do that, nor would it be able to.

This sort of surface-level thinking about pedagogical application also misses some of the important parts of what a writing course does. When Jonathan Alexander reminds us that writing is a way of thinking, he underscores that teaching writing is not just about the deliverable; it should also be about the act of making, which is creative, critical, instrumental, imaginative, playful, productive, and a whole host of other pairs. In a writing course, the practice and process of the doing are a key part of learning how to write and a key part of writing to learn about the subject of the writing. That means LLMs and AI can’t be used to skip the practice or the drafting. They can be used as part of it, but then students have to be able to explain why they are using them and to what end. That is the same sort of critical reflection many in TPC already ask students to do when writing and designing.

The critical aspects of AI as a tool that I’m not really seeing discussed, and that need to be discussed both in our classrooms and our workplaces, are those associated with costs. This consideration is necessary to truly understand what it means to do responsible augmentation of AI in TPC classrooms and in the workplace.

The important role of the technical communicator has always been helping organizations consider what a thing makes in relation to the costs of that making. I find it ridiculously ironic that technical (and professional) communication has spent years and years, and so many arguments and words, on the value that we add. Here is the precise moment to demonstrate that value by focusing on the tools and technologies (as they relate to communication and content) and the COSTS of those technologies. I use costs here to deliberately bring to the forefront both monetary concerns and larger concerns—the costs—of not doing the critical work necessary on issues beyond the quick fix and productivity. How do we engage, assess risk, and measure harm as these costs are associated with learning and literacy? What are the consequences of continuing the almost blind acceptance that we have to be using these tools?

So little has been written or considered in tech comm circles (and teaching circles) about the costs: environmental costs, which include things like data centers, cables and infrastructure, and WATER. For an academic field that has embraced social justice issues, I find it disconcerting that every conversation about AI does not start with environmental costs.

Labor costs are another area that seems to be lost in the AI conversation. Not enough attention is being given to how these technologies change the balance in the workplace between organization and employee, how they affect the nature of jobs, and how to regulate them. The writers’ strike in Hollywood was a great example of tensions that are only now beginning to be understood. If the tech is likely to get “better” and faster, what do workplace organizations need to be doing to consider how to use it ethically and effectively for their own business goals, and, as importantly, what does this do to the idea of work and labor? The tool isn’t going to take a job. It’s going to change the job, and then how that job is managed will change. That’s the conversation we need to be having.

And we’ve only done a fair (to poor) job of really talking about ethical costs. Issues of bias are well known, and Halcyon Lawrence’s work will remain foundational in TPC (and other areas). However, it seems TPC is content to plow ahead with classroom use, giving only an ethical nod acknowledging the bias and using the tools anyway. Ethical costs also include issues of proprietary information, copyright, and use. In the workplace, there is much more risk in using the tool to create anything, because if your company is doing work for someone else, there are legal and ethical constraints. That means you can’t upload company or client information, and you can’t upload your logo or proprietary information. But it seems everyone assumes that everyone knows this, when in fact they do not. Not to mention, have we forgotten that every time we put information into the tool, that information is taken by the owners of that tool to do with as they want?

So as this year moves forward and the initial “cool factor” starts to wane, I so hope TPC starts to engage in discussions that acknowledge the field’s own knowledge and its importance as the field grapples with both issues of literacy and the costs associated with that literacy.

Wishing you health, peace and joy!


*Hell, I research historical technical communication ca. 1350-1550, and the monks weren’t really excited about the printing press either.

** This definition is an abbreviated version of the extended definitional framework proposed by Marjorie Rush Hovde and Corinne Renguette (2017). Hovde and Renguette’s work does the heavy lifting for the field in summarizing the scholarship and offering the themes found within it, and calls to mind Lee-Ann Kastman Breuch’s (2002) work on technological literacy and pedagogy.

& p. 142 from Faber, Brenton, & Johnson-Eilola, Johndan. (2002). Migrations: Strategic thinking about the future(s) of technical communication. In Barbara Mirel & Rachel Spilka (Eds.), Reshaping technical communication: New directions and challenges for the 21st century (pp. 135-148). Lawrence Erlbaum Associates.