John Warner presents a thought-provoking idea: to produce original thinking, one must first engage with the ideas of others.
Exploring Generative AI in Writing
As I worked on my manuscript, More Than Words: How to Think About Writing in the Age of AI, I delved into the world of large language models.
My main focus was on tools like ChatGPT and its successors, as well as the various iterations of Claude.
At the outset, I was eager to uncover how these technologies could enhance my writing process.
However, my journey led me down an unexpected path, prompting reflections on the broader implications of my findings.
Critical Engagement with AI Technologies
A concern I keep encountering is the uncritical acceptance of generative AI in educational contexts.
This worry extends beyond the most extreme missteps, such as creating an AI that portrays Anne Frank in a way that ends up favoring the Nazis.
I’m also noticing a tendency among educators and institutions to embrace this technology wholeheartedly, primarily dazzled by its novelty, without truly examining its impact on teaching and learning.
Recognizing this trend has prompted me to adopt a more critical stance.
I agree with Marc Watkins that AI is "unavoidable" but not "inevitable," and so I approach generative AI with caution, treating unbridled enthusiasm as reckless.
I can see how it might serve educational purposes, but I still doubt that it serves the fundamental objectives of learning.
My skepticism has been amplified by firsthand experiences with these language models.
In areas where I have substantial expertise, their guidance often led me astray in subtle ways.
This experience left me hesitant to rely on them for topics outside my realm of knowledge, where I could potentially overlook critical inaccuracies.
Engagement Over Superficiality
Each effort to shortcut the writing process revealed to me that I was bypassing deep engagement—a crucial component of meaningful writing.
Take, for example, my attempt to explore the intricacies of personal taste and aesthetic appreciation.
I wanted to weave in thoughts from Kyle Chayka’s Filterworld: How Algorithms Flattened Culture. Confident from my previous reading and review of the book, I sought a refresher on Chayka’s notion of “algorithmic anxiety.” When I asked ChatGPT for a summary, it responded adequately, but failed to help me contextualize that information in a way that enriched my writing.
Ultimately, I had to revisit the book itself, realizing that genuine inspiration came from deep engagement with the text rather than from a summary.
This pattern has led me to reevaluate my writing approach.
While my style is inherently personal, crafting original and compelling insights hinges on engaging with the foundational ideas of others rather than relying on superficial overviews.
My journey has reinforced my belief that writing is a continuous exploration.
The initial spark of an idea ignites the writing process, yet through writing, those ideas morph and deepen in unexpected ways.
Ultimately, meaningful writing should embody this transformative journey, revealing the distinct insights of the author.
The aim should be to uncover new knowledge, offering fresh perspectives to the audience.
If a writer fails to discover anything significant throughout the creative process, the endeavor loses its essence.
My attempts to use language models for quick summaries left me feeling disillusioned, as I realized I wasn't engaging with genuine intelligence but with statistical output.
The absence of a human touch made it difficult for me to draw on my own humanity.
I recognize that others might find genuine creativity in these models, yet I question the value of seeking inspiration from generalized probabilities instead of unique insights.
I take pride in the engaging and thought-provoking material within More Than Words.
However, I remain receptive to the idea of reassessing my perspectives through the views of others.
Open dialogue is essential, and it is something large language models fundamentally lack, because there is no intentionality behind their output.
To think otherwise is to misunderstand the situation.
While these technologies offer potentially useful tools, at their core they remain misleading about what they actually are.
The extraordinary capabilities of AI continue to evolve, yet in my personal work, they often fall short of delivering true meaning.
Source: Inside Higher Ed