How to increase your individual productivity with free generative AI
Even if, like me, you refuse to use ChatGPT for content creation
A new study co-authored by OpenAI discusses how large language models (LLMs) like ChatGPT have the potential to significantly increase the speed and efficiency of many worker tasks.
Roles like mine, heavily reliant on programming, data analysis, and content creation, are estimated to have the highest exposure to generative AI tools. It’s not surprising that with the help of LLMs, about 20% of my tasks are being completed faster at the same level of quality.
And that’s essentially just from using the free version of ChatGPT, which means I don’t yet have access to the updated model that can also browse the internet, or to a tool approved for use with sensitive data.
What kinds of tasks am I automating or streamlining with ChatGPT?
For me, it’s primarily about efficiently retrieving pieces of publicly available information that fit these two conditions:
1. Comprise common knowledge that an LLM is likely to have seen numerous times during training, and therefore have a low probability of being misrepresented.
2. Can be quickly checked for accuracy, or will have negligible impact if the information turns out to be wrong.
I’ve learned that tasks with these two characteristics are great for generative AI. Below are some examples that hopefully will inspire you to find similar opportunities to delegate repetitive work so you can use the time saved to focus on being a good leader, or producing higher-quality results for your creative work.
1. Explain technical concepts to a client, colleague, or mentee
The other day a manager asked me to urgently review a proposal document he was writing. The document had an image titled “System State Diagram”, when in fact it depicted a process flow diagram. When preparing my feedback, instead of writing down the differences between the two diagrams myself, I asked ChatGPT to compare and contrast the two concepts. I then copied the answer and added a note recommending that he keep the title but replace the image with an actual state diagram, which was the right visualization for the document. Not having to search for or write an explanation on my own made it possible for me to finish the review in the limited time I had between two client meetings.
Tasks like this are common in my job and in many other roles that involve giving feedback on other people’s technical work. For example, if it’s part of your responsibilities to teach employees how to use commercial software, approve PowerPoint presentations, or perform peer code reviews, it should be possible to delegate to ChatGPT many of your “explaining” tasks that don’t involve proprietary information. This is particularly true when you’re already familiar with the concepts that need to be explained, and thus can easily check the accuracy of the AI-generated content.
2. Fix code
Yesterday a mentee was getting an error in a Python notebook. An online search for the error message showed that the issue was an incompatibility between the latest versions of two packages. But what if the recommended solution (downgrade one of the packages) isn’t feasible? In seconds, ChatGPT provided alternative code using a different library. We could immediately confirm that the proposed fix worked by simply running the new code in the notebook.
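I can’t reproduce the mentee’s notebook here, so below is a minimal sketch of the same pattern with a made-up (but common) incompatibility: code that imports joblib through scikit-learn fails on recent scikit-learn versions, and instead of downgrading, the suggested fix is to switch to the standalone joblib library.

```python
# Hypothetical illustration of a "switch libraries instead of downgrading" fix.
# Older tutorials load saved models through scikit-learn's bundled joblib:
#
#     from sklearn.externals import joblib   # ImportError on recent scikit-learn
#     model = joblib.load("model.pkl")
#
# Rather than pinning an old scikit-learn version, use the standalone joblib
# package, which exposes the same load/dump API.
import joblib

model = joblib.load("model.pkl")  # "model.pkl" is a placeholder file name
print(type(model))
```

As in the notebook case, the nice part is that this kind of fix is trivially verifiable: you re-run the cell and either the error is gone or it isn’t.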
3. Help streamline random tasks
There are so many tasks I’m currently delegating to ChatGPT that it’s even hard to decide which examples to share.
Today I needed to write an email in Italian. Since the content didn’t include any sensitive information, I wrote it in English, then asked ChatGPT to translate for me.
(TRUE) 1. Comprise common knowledge that an LLM is likely to have seen numerous times during training, and thus have a low probability of being misrepresented.
ChatGPT explains: “My strongest language is English, but I can also work reasonably well with languages such as Spanish, French, German, Italian, Portuguese, Dutch, Russian, Chinese, Japanese, Korean, and many others.”
(TRUE) 2. Can be quickly checked for accuracy, or will have negligible impact if the information turns out to be wrong.
I understand Italian reasonably well, so it was easy for me to confirm that the translation was in good shape before sending the email. If I had to write in, say, Korean, a language I don’t know, I’d find another way to validate the content. In that case I’d probably use Google Translate to translate the text back into English.
The main benefit of using ChatGPT for writing content like this is that you can simulate a human conversation and ask for tweaks, like making the tone of the email less formal—something you can’t do with a tool like Google Translate.
In their current state, LLMs are great when all we need is a classic, tried-and-true answer to a question
The scenarios above were highly suitable for the current state of LLMs because none required a creative or innovative solution. What I needed was, “just the facts, ma'am.”
The same rules work with personal tasks as well. For instance, last week I wanted to make savory crepes for a quick lunch before my next meeting. Rather than search for a recipe, try to guess the best link to pick from the search results, close an annoying subscription pop-up, and scroll down a thousand words about the good memories that particular food blogger associates with eating crepes to check the ingredients, I simply asked ChatGPT.
(TRUE) 1. Comprise common knowledge that an LLM is likely to have seen numerous times during training, and thus have a low probability of being misrepresented.
The model has certainly seen a large number of savory crepe recipes during training.
(TRUE) 2. Can be quickly checked for accuracy, or will have negligible impact if the information turns out to be wrong.
Since I’ve made crepes many times before, I could easily validate that the ingredients and proportions made sense, so the risk of trusting ChatGPT’s instructions was very low.
If, instead of a quick meal, I wanted a recipe that would expand my food horizons, rather than relying on a chatbot, I’d have looked for a recipe by Yotam Ottolenghi or another talented chef.
At least for the moment, generative AI is not the place to go to for things that require “the genius that is the domain of human beings.”
"I wish I could say that the advances in AI will make it easier to create hits, obviously it won't. Hits are created by genius. And data sets plus compute plus large language models does not equal genius. Genius is the domain of human beings and I believe will stay that way."And data sets plus compute plus large language models does not equal genius. Genius is the domain of human beings and I believe will stay that way.”
Take-Two CEO Strauss Zelnick, answering a question about how emerging AI tools may affect the development of games like Grand Theft Auto.
It’s possible that at some point models specifically trained on content from top chefs will become a good source for cooking exploration, but LLMs are not there yet. Take this quote from a food writer who wasn’t impressed with the recipe provided by ChatGPT for Moroccan Spiced Meatballs with Yogurt Dipping Sauce (emphasis mine):
It uses beef (something I see often in North American online recipes), not lamb, as in the cookbooks in my library. When I asked for a recipe with lamb it subbed out the beef for lamb, without adjusting any of the other seasonings. This is similar to the “one sauce to cover them all” mentality I keep running into online and in restaurants. Many of us from non-Western food cultures know that lamb and beef have different flavour profiles and adjust our spicing and cooking accordingly.
—Jasmine Mangalaseril, The Incredible Blandness of ChatGPT
This is also why, when I’m writing an opinion piece, I don’t use LLMs at all.
Wait, but isn’t creative content considered one of the primary use cases for generative AI?
It may be so, but while I’ve read about people reporting increased productivity using chatbots to generate marketing content or create article drafts from lists of talking points, that’s not the job I want to hire a chatbot to do for me.
Three main reasons for that:
1. When creating content for work, I am not allowed (for good reasons) to submit proprietary information to a third-party chatbot API. This limits the usefulness of LLMs, ruling out document summarization, classification, etc.
2. When writing an article for general consumption like this one, I can’t stand the bland content I get when I ask GPT to argue a point I want to make. Moreover, trying to fact-check its plausible BS only slows me down.
3. Creating content without the help of AI also makes it much easier to have it reflect my viewpoints and/or interpretation of recent data or findings, as opposed to merely following the patterns and information present in ChatGPT’s training data.
Of course, when you’re tired and lacking ideas, LLMs can offer a quick solution. For example, I just asked ChatGPT for theme ideas for a birthday party for a 40-year-old who loves the movie Apollo 13. The suggestions were pretty boring, but when in a pinch…
Last thought: the goal of productivity is not to maximize activity
My purpose for increasing individual productivity is to reduce how much time I spend on drudgery or repetitive work, not to maximize activity.
By asking ChatGPT to explain a technical concept, find the fix for a coding issue, or translate content into another language, I have protected the “well spent” hours I devote to thinking through new ideas and projects that lead to my best work and, more importantly, increased my downtime.
Research on naps, meditation, nature walks, and the habits of exceptional artists and athletes reveals that mental breaks not only replenish attention and increase creativity, but are also a key productivity booster. The next time you use generative AI to produce quality work faster, don’t forget to use the extra time to step away from your work or routine tasks and allow your body and mind to rest, repair, and rejuvenate.