Why I Don’t Use AI for My Emails (Much)


The rise of AI writing tools and plugins such as Flowrite or Quickmail makes it easier to cope with the ever-growing flood of emails landing in our inboxes by letting us send sophisticated, automated responses. Finally, we can spend less time on email and more on the things we love.

But should we?

I recently listened to an HBR podcast on AI’s generativity mechanisms (thanks for the suggestion, Ricardo!). While I agree with many of the AI use cases discussed there, it struck me when the presenter described a professor who uses AI to auto-generate email answers, letting ChatGPT send polite declines to many people.

Read on to understand why I think this example (at least currently) might not be a good idea, and what you can do instead to answer emails in a more ethical, time-effective way.

The Problem with AI E-Mail Writing

So, Simon, why are you opposed to writing AI-generated emails?

Right off the bat, I want to make something clear: I am NOT entirely against it. I genuinely believe that AI is a useful tool for many e-mail writing tasks, such as grammar correction, brainstorming, sentiment analysis and much, much more.

What I do have a problem with, however, is a misconception.

In the HBR podcast, the speaker talks about a busy professor who has automated his emails using some sort of automation service (e.g., Zapier) and connected ChatGPT to it. If I understand correctly, forwarding an email to a dedicated address triggers the mechanism, which takes the forwarded email as input, then generates and sends a reply automatically, without his involvement. In the prompt (the instructions you hand over to tools such as ChatGPT), he specifies that the reply should be a very, very polite decline and include some tips on further resources. The professor then claims that he received wonderful emails from individuals thanking him for taking the time to provide such a lovely response and such great resources.
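For readers curious what such a setup might look like under the hood, here is a minimal sketch, assuming the official OpenAI Python client and a plain SMTP server. The addresses, model name and prompt wording are my own illustrative assumptions, not the professor’s actual configuration.

```python
# Minimal sketch of an "auto-decline" email pipeline, for illustration only.
# Everything concrete here (addresses, SMTP host, model, prompt) is assumed.
import smtplib
from email.message import EmailMessage

from openai import OpenAI  # official openai package, v1+ interface

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DECLINE_PROMPT = (
    "You are replying on behalf of a very busy professor. Write a very polite "
    "decline to the email below and suggest a few general resources the sender "
    "could look at instead.\n\n{email_body}"
)


def generate_decline(email_body: str) -> str:
    """Ask the model for a polite decline plus pointers to further resources."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": DECLINE_PROMPT.format(email_body=email_body)}],
    )
    return response.choices[0].message.content


def send_reply(to_address: str, subject: str, body: str) -> None:
    """Send the generated reply via SMTP (hypothetical server and credentials)."""
    msg = EmailMessage()
    msg["From"] = "professor@example.edu"
    msg["To"] = to_address
    msg["Subject"] = "Re: " + subject
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.edu", 587) as server:
        server.starttls()
        server.login("professor@example.edu", "app-password")
        server.send_message(msg)


# As described in the podcast, forwarding an email to a dedicated address
# triggers something like this, with no human review before sending:
# send_reply(sender_address, original_subject, generate_decline(original_body))
```

The point is not that this is hard to build; it is that the recipient never learns that the “lovely response” was produced this way.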

But we need to ask: why did he receive such nice replies?

My email writing setup (aka my desk) during my time in Adelaide.
Source: Simon Beuse, 2021

The replies the professor received were so warm and thankful because the receivers believed that he, a busy expert, had taken some of his valuable time to give them personal input on how to improve or where to find additional information.

This is, in my view, a misconception.

If those people had been told that this was an auto-generated answer with generic resources, would they have thanked him so much?

However, we cannot deny that these tools are here to stay, and they are incredibly useful. While we are all still figuring out how to live with AI, here is how I currently think we can use it more ethically.

How I Write My E-Mails

When I say I don’t use AI in my emails, I mean that I either declare its use explicitly or use it so lightly that, in my view, no declaration is required.

I still use AI when writing emails, for the reasons below. But when I do, and to the extent that I do, I aim to let the receiver know.

I do use AI for the following to craft effective emails:

Are you stuck? Use AI tools to brainstorm.
Source: https://i.gifer.com/eTh.gif

  • Brainstorming
    You can load ChatGPT, Bard or other tools with an email that was sent to you to help you brainstorm a client’s or colleague’s difficult problem. It probably won’t produce the perfect solution, but it helps you consider approaches you might not have thought of. However, be mindful not to load confidential or private emails into these tools, as you are handing information over to their providers, regardless of whether that data is later used to train models.

  • Grammar Correction
    Grammarly is great for a quick polish of your emails, but ChatGPT can sometimes point out your mistakes even faster (and with less frustrating, buggy behaviour). Just make sure to retain your voice, or adjust the result for your organization if ChatGPT makes it sound too formal or out of place.
    Note: I received a Grammarly license from my uni, and the AI feature on my particular license was set to “disabled” by the school. Perhaps an indicator that educators do not yet know how to respond to AI in universities?

  • Highlighting / Structuring
    Need to send an important email to hundreds or thousands of people? You can make sure your point comes across by loading your (non-confidential!) draft into ChatGPT or Bard and asking it to emphasize certain points. You can then either adopt its suggestions manually or revise them to match your personality. A short sketch of how such prompts might look follows this list.
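To make these three uses concrete, here is a minimal sketch of what the prompts could look like, assuming the official OpenAI Python client. The prompt wording and the model name are my own assumptions, and the same confidentiality caveat applies: only paste drafts you would be comfortable handing to a third-party service.

```python
# Minimal sketch of "assistive" email prompts (brainstorm, grammar, structure).
# Prompts and model name are illustrative assumptions, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = {
    "brainstorm": ("List a few possible angles I could take when replying to "
                   "this email:\n\n{text}"),
    "grammar": ("Point out grammar and spelling mistakes in this draft, "
                "without rewriting it in a different voice:\n\n{text}"),
    "structure": ("Suggest how to restructure this draft so that the key "
                  "points stand out:\n\n{text}"),
}


def assist(task: str, text: str) -> str:
    """Run one of the assistive prompts over a (non-confidential!) draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPTS[task].format(text=text)}],
    )
    return response.choices[0].message.content


# Example: gather brainstorming angles, then write the actual reply yourself.
# print(assist("brainstorm", "Hi Simon, could you review our rollout plan ..."))
```

In all three cases the model only suggests; the final wording stays yours, which is exactly why I consider these uses “light”.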

More Tips for Writing Effective, Ethical, AI-Assisted E-Mails

The key, in my opinion, is to make the reader aware of what they have in front of them, or to use the tools in a non-intrusive way. What do I mean by that?

If you are using AI assistance in your writing, that’s okay. We all want to read emails that are concise, clear and to the point. AI can help with that.

But we must try to avoid misconceptions whenever possible. So here is what I do:

This blog post outlines why Germans might be better email writers.
Source: am, 2023

  • Use it heavily only with a disclaimer.
    I let people know if I used AI heavily in an email by putting a disclaimer at the end (e.g., “Please note the creation of this message has been assisted by AI.”). We have had “This is an automated message” at the end of automated email replies for years, for much the same reason.

  • Use it in a minor way without declaration.
    For example, Grammarly offers AI-assisted grammar correction and sentence suggestions. As these don’t significantly alter the idea or “flow” of your writing, I find there is little risk of misleading the other side. It’s similar to Microsoft Word’s auto-correct.

  • Use it for brainstorming.
    No idea how to respond to an email, how to handle a tricky situation, or how to open an email in an interesting way? Use AI tools to gather some brainstorming items and then adapt them.

  • Use it for sentiment analysis.
    You can ask AI tools what the sentiment of an email is (e.g., if you are unsure and the text is long enough). The result is often off when there is little text or the other person is a non-native speaker, but for longer emails it might give you some useful hints (a small sketch follows this list). However, be aware that anything you dump into ChatGPT or similar tools might be used for training purposes, so don’t paste anything confidential in there.
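Here is the small sketch mentioned above: one hypothetical way to ask a model for an email’s sentiment, again assuming the official OpenAI Python client. The model name and the label set are my own choices for illustration.

```python
# Minimal sketch of asking a model for the sentiment of a long, non-confidential
# email. Model name and the three-label scheme are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def email_sentiment(email_body: str) -> str:
    """Return a one-word sentiment label: "positive", "neutral" or "negative"."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": ("Classify the overall sentiment of the following email "
                        "as positive, neutral or negative. Answer with one word "
                        "only:\n\n" + email_body),
        }],
    )
    return response.choices[0].message.content.strip().lower()


# Treat the result as a hint, not a verdict, especially for short emails or
# messages written by non-native speakers.
```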

Conclusions

I don’t remember where I heard it (it might have been the same HBR podcast mentioned above), but until AI output is something the masses expect and recognize, I believe we should take extra care not to deceive the receiver. For example, we question the legitimacy of photos far more rigorously today because we know they might have been altered in Photoshop; its existence has become common knowledge. Perhaps once the same is true for AI-generated text, we can drop the disclaimers and use these tools much, much more?

In the end, the only thing I am advocating for at this point is to think about how you use these tools and how to avoid misconceptions. Perhaps ask yourself before sending: “Would I be mad to find out this was AI-generated?” or “Would I mind if this text had been heavily influenced by AI text generation?”

Use clear and concise language, proofread your emails carefully and perhaps brainstorm some ideas using AI. And if all you are doing is trying to provide information quickly (e.g., a short LinkedIn post with photos and links), readers might appreciate the time you saved.

By the way: this is the first time I used AI to generate a blog post outline (though I altered it 100%). The content, though, as you might realize, is and always will be me.

What is your experience with AI tools in writing emails? Do you use them? If so, how? Do you agree or disagree with my thoughts above? What do you think about the misconception?

As we are all trying to figure out how to use these tools, as always, I would love to hear your thoughts.

Note: I recently had a good conversation with friends and colleagues (thanks, Vib!) about when to use AI for writing, and it got me thinking: when will the use of AI become “assumed” in society? “Assumed” meaning that you immediately think, “this might be AI-generated”. Once that is the case, perhaps we should use it even more, because there is little deception left. I want to use it more, but I also want to be ethical, and I don’t want to be “slowed down” by thinking too much about ethics and too little about possibilities. A dilemma I hope we gain more insight into soon. Towards more efficient, less deceptive communication! Perhaps by my next article, my view and yours on AI use will already have changed. This post will probably become outdated very quickly.
