When ChatGPT first came out, a lot of people assumed that the new AI tool would operate much like an online search engine: just input a request and wait for your answer. However, as many users soon learned, it would take more than a few keywords to generate the type of content they needed. So, is there an art to communicating with this new form of technology? In this episode, Gene Marks and Anna Bernstein, a staff prompt engineer at Copy.ai, advise small business owners on how to write better prompts for text-based generative AI tools.
Podcast Key Highlights
What Does a Prompt Engineer Do?
- Prompt engineers design backend prompts, which are the complex and unseen prompts that users can slot their language into.
- Prompt engineers not only build AI tools and the infrastructure behind them; they also figure out how to accomplish that from a prompt side. They spend a lot of time with the language models, developing new techniques and strategies for getting them to do what the user asks of them.
- Prompt engineers primarily focus on conducting research and figuring out how to communicate with these language models.
What is a Prompt?
- A prompt is any piece of information you’re feeding into the AI tool to get something back from it.
- In the case of text-to-text models (large language models), prompts are what you say in the conversation to get the AI tool to say something back.
How Did Anna’s Background Prepare Her to Work as a Prompt Engineer?
- Having majored in English and Arabic language translation, Anna had a deep understanding of how to analyze language and create very precise speech that could elicit a specific response. These same skills are what enabled her to create the necessary backend prompts for the AI being built at Copy.ai.
- Anna’s aptitude for language also enabled her to refine the tone being used by the earlier versions of Copy.ai’s language models.
How Do I Write Better AI Prompts?
- Instead of approaching AI tools like a search engine and just feeding them a bunch of keywords, be rich and precise in your prompts.
- Feel free to include adjectives in your prompts; while most language models are very literal, they also have a huge amount of subjective understanding of human concepts.
- The more specific you can be, the better; if you can accurately describe what you’re doing on a subjective, artistic, or creative level, the result you get back will be a million times better than if you simply threw general words at it.
- While it’s great to provide the AI with as much relevant context as possible, remember that you need to give that context in a coherent way so it’s really legible to the AI.
What is the Process for Writing an Effective AI Prompt?
- Start by providing the AI with the context of your copy; give it a role to play as a narrator or writer.
- Next, present the AI with a well-labeled library of resources; this section can include a blog brief or a blog outline.
- After that, define who exactly the AI is writing for; doing so will help it better understand its target audience.
- Lastly, to finish out the structure, you can take that command you started with and put it at the end of your prompt to prevent any confusion.
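The steps above amount to a repeatable prompt template: context and role first, labeled resources in the middle, audience, then the command restated at the end. A minimal sketch of that structure in Python follows; the role, labels, and wording here are illustrative assumptions, not Copy.ai’s actual backend prompts.

```python
def build_prompt(role, resources, audience, command):
    """Assemble a structured prompt following the four steps above:
    1) context/role, 2) labeled resources, 3) audience, 4) final command."""
    parts = [f"You are {role}."]                 # step 1: context and role
    for label, content in resources.items():     # step 2: well-labeled resources
        parts.append(f"{label}:\n{content}")
    parts.append(f"Audience: {audience}")        # step 3: who the AI is writing for
    parts.append(command)                        # step 4: restate the command last
    return "\n\n".join(parts)

# Hypothetical usage for the spaghetti-and-meatballs example:
prompt = build_prompt(
    role="a well-known food writer who loves Italian cooking",
    resources={
        "Blog brief": "A friendly how-to post on spaghetti and meatballs.",
        "Blog outline": "1. Intro 2. Ingredients 3. Method 4. Serving tips",
    },
    audience="home cooks who aren't super used to cooking",
    command="Use the Blog brief to write a compelling blog post that follows the Blog outline.",
)
print(prompt)
```

Note that the final command reuses the exact labels given to the resources ("Blog brief", "Blog outline"), so the model doesn’t have to hunt back through the prompt to figure out what the task refers to.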
Will AI Technology Ever Reach a Point Where It Makes Prompt Engineers Obsolete?
- Despite potential advances in AI technology, these tools will always need a human to tell them why they’re generating a particular type of code or written content.
- Essentially, the more powerful a language model is, the more work a prompt engineer will have since these new systems will require really high quality backend prompts.
What Should Small Business Owners Know About Using AI?
- In order for the AI to carry out your prompts efficiently, you need to clean your databases.
- While newer small business owners may not be at a stage to hire a full-time prompt engineer, there are tools that can generate repeatable, generative-AI-based workflows for whatever they need, without requiring them to laboriously build the workflows and figure out the prompts themselves.
Transcript
The views and opinions expressed on this podcast are for informational purposes only, and solely those of the podcast participants, contributors, and guests, and do not constitute an endorsement by or necessarily represent the views of The Hartford or its affiliates.
You’re listening to the Small Biz Ahead podcast, brought to you by The Hartford.
Our Sponsor
This podcast is brought to you by The Hartford. When the unexpected strikes, The Hartford strikes back for over 1 million small business customers with property, liability, and workers compensation insurance. Check out The Hartford’s small business insurance at TheHartford.com.
You are listening to The Small Biz Ahead Podcast, presented by The Hartford.
Gene: Hey, everybody, it’s Gene Marks and I am back with you again for The Hartford Small Biz Ahead podcast. Thank you so much for joining us today.
Gene: We’re going to talk about AI. We’re going to be talking about prompts and generative AI and getting the most from your AI database, whether it’s ChatGPT or something else. Our guest today is Anna Bernstein. Anna is a staff prompt engineer at Copy.ai, and I know you’ve got lots of questions about all of that. We’re going to get into all of that. But Anna, first of all, thank you so much for joining. I have a lot of questions for you, and I’m glad you took the time.
Anna: Thank you for having me. I’m excited to be here.
Gene: First of all, before we go any further, because I told you before we started recording, I flash your face and your famous video up in a slide to groups that I talk to. What is the name of your video again, that’s garnered so many views? It’s like six tips for writing great prompts. Something like that, right?
Anna: Nine. We can look it up. I created the video and recorded it. The marketing team was more responsible for the copy…
Gene: It’s fine. What I will tell you guys, if you’re watching or listening to this, because this is going on YouTube as well, is that when you guys are finished watching this video, just search for Anna’s name, Anna Bernstein, at Copy.ai writing prompts. You’ll find the video. And it’s great. It’s 15 minutes long and it tells you a lot about writing prompts, which we’re going to talk all about prompts.
Gene: But first of all, let’s talk about you, Anna. You work at Copy.ai. You’re a staff prompt engineer. So first of all, tell me what is a staff prompt engineer, what is Copy.ai, and how did you arrive here at this position?
Anna: So Copy.ai is a generative text to text AI company, but also a workflow automation platform. Our main product is called Workflows. And how it works is that you can type in a natural language query. Just to put it simply, you can type in plain English the type of workflow you want, and it will just generate that workflow with all the steps. And you can then scale that workflow sort of like an Excel spreadsheet. You can just put in as many inputs as you want and run it and scale the generative AI processes that work for you.
Anna: And in order to lead into what my job is as the staff prompt engineer at Copy.ai, also I’m head of prompt engineering, whatever, the actual process of taking that natural language and, on the backend, generating a pre-built workflow from it, with pre-built interstitial prompts that help you generate copy along the way, requires a lot of backend prompt engineering.
Anna: So the vast majority of my job is actually working on building and designing backend prompts, these sort of big, complicated unseen prompts that users can slot their language into. They don’t have to see those prompts ever. They’re under the hood, and they usually chain together with multiple other big backend prompts to make the AI capable of things that it otherwise usually isn’t, like in the case of our products, building these workflows.
Anna: And so another big part of my job is not just building these tools and the infrastructure behind them, but even figuring out how to do that from a prompt side. So I spend a lot of time with the models, developing new techniques and strategies for getting them to do what I want. The majority of those aren’t relevant to front-end users. They get a bit complex and aren’t super necessary for the average user. And frankly, some of them are proprietary information.
Anna: But the main thing is that I’ve spent a lot of time, I don’t want to know how many hours at this point, figuring out how to communicate with these models. And a lot of that research is also relevant to front-end users and that type of prompting.
Gene: So you’re doing this so that the front-end users don’t have to, right?
Anna: That’s right.
Gene: And also, just to take a step back, not to insult the intelligence of our audience in any way, but what is a prompt? Explain to us what a prompt is.
Anna: So a prompt is any piece of information you’re feeding to the AI to get something back from it. In the case of text to text models, large language models, it’s just like what you say in the conversation. And then that causes the AI to say something back.
Gene: And I guess that’s just when we talk about AI, there’s a lot of different large language models out there. The most popular, that we all know of course, is ChatGPT and OpenAI. So obviously when you log into ChatGPT, and guys, if you’re watching this, you’re listening to this and you haven’t played with ChatGPT, I strongly recommend that you do because there are a lot of business uses. Another topic for another day. But the first thing it’s asking for is a prompt, it’s a question, like what’s a good recipe for turkey meatballs? Or give me an itinerary.
Gene: And I think what I’m getting from what you’re saying, Anna, is that when I ask a generative AI large language model, ChatGPT, a question, there are a bunch of other prompts going on based on my prompt that are behind the scenes, that’s trying to dig out the response for me. And it seems like that’s kind of the job that you do at Copy.ai. Am I correct in saying that?
Anna: I can’t speak for OpenAI as a model…
Gene: You can’t speak for ChatGPT, yeah.
Anna: But yes, for a product like ours that’s built on top of model providers like OpenAI, the value we’re giving is that we have these very, very complicated backend prompt structures that can take your prompts and through a series of other prompts really transform it into something useful for you.
Gene: Got it. All right, that makes sense. Anna, tell me, when you were a little girl, did you look up at your parents and say, “Mommy and Daddy, one day I want to be a prompt engineer at Copy.ai?”
Anna: That’s exactly what I said. And they said, “What are you talking about?” Well…
Gene: Well, that gets me to my next question. Yeah, so I mean, no one, first of all, people in 2023 don’t know what a prompt engineer is, although now it’s becoming a lot more, you know, I’m sure you have to explain to many of your friends what you do and what your title means.
Gene: But it makes sense, and it’s a perfect example of a job, one of many that AI is creating. You hear in the news all the time of AI getting rid of jobs, and it will get rid of jobs, but it will create many jobs as well, like yours. So what’s the training? My son is a mechanical engineer. You’re not an engineer, engineer, I guess, like a mechanical or a software engineer. What is your background or what is your training?
Anna: Yeah, I, when I…
Gene: Where’d you go to college?
Anna: Sorry?
Gene: Where’d you go to college?
Anna: I went to Macalester College in Minnesota. I studied English. I also studied Arabic language translation. Please don’t ask me to speak in Arabic. My Arabic is a mix of toddler Arabic and medieval poetry Arabic. But I’ve always been fascinated with language, with the analytical side of language. I think writing poetry, you are creating these, it’s a little bit like you’re creating these bizarre but very, very precise speech acts sort of, that are meant to elicit a response, in that case from a reader.
Anna: But when I’m writing prompts, I often employ the same part of my brain, at least, where I’m trying to elicit a specific response from the AI through, in some cases kind of convoluted things. But yeah, my background was in historical and sometimes biographical research, and on the literary side, working on literary journals and that type of thing.
Anna: And then from Copy.ai’s side, I think it’s useful to say where they were coming from. Our product used to look really different back in 2021, and there were these prompts for each tool, and they were written by our co-founder, one of our co-founders, Chris Lu, who’s great. And he is a very forward-thinking person. And he had this idea of our product can be better, and I think what’s going on is the prompts. I think it’s actually the prompts that aren’t good enough. And this was actually, it was a really forward-thinking thing at the time.
Gene: What an innovative thing to think about, right?
Anna: Yeah. And it was like, what if everyone had just started, the term prompt engineering had just started being used. And he was like, what if there was a role of prompt engineer? And they tried a lot of different people in that role from different backgrounds. None of them quite worked out. And then I met one of the other employees there and he was like, “You’re smart. You’re a language person. Your brain works in a weird way that might be useful for this. Do you want to give this a shot?” So I started on contract with them. They were like, “Can you fix tone?” And I was like, “I can try.”
Gene: I get asked that question all the time, “Can you fix tone?” I wouldn’t even know what to do if someone asked me that question, but here you were saying, “I can try,” right?
Anna: Yeah. So it was a lot of trial and error. And then I did that and was hired full time. And now there’s a lot of voices about how to do prompt engineering well, and there are more and more papers about advances in prompt engineering, which I certainly keep up with. At the time that I was starting, so much of my job was giving us an edge through coming up with new prompting strategies. And actually, that’s still a big part of my job.
Anna: But there was no rule book. People are like, “How did you learn this?” And I’m like, “I taught myself.” There was nothing, not that there was nothing to learn, but there were certainly no formal resources on it that I…
Gene: Sure. Yeah, you’re on the bleeding edge of this stuff.
Anna: Yeah. So a lot of it was just time sort of monk-like spent with these models.
Gene: And I imagine your background, though, is one that, particularly getting back to when you were writing poetry and crafting a small piece of narrative, you’re spending so much time on each word and trying to draw out the meaning or better meaning or better context for each single word that you’re working with, which is exactly I guess what you need to do when you write a good prompt, is you have to give a great amount of thought to the words that you’re using.
Anna: To the words, to the structure, to the landscape you’re setting up with it. Yeah. Yeah. There’s lots of…
Gene: Fascinating.
Anna: Yeah.
Gene: Yeah, it’s fascinating. Okay, so let’s talk about writing prompts as it is now. Just full disclosure, you work for Copy.ai, and each company is different, each large language model is different. I get that. But I guess I’m asking more in a, you know, as general as you can be as far as advice that you give. I do have a lot of clients and I do have a lot of people that I’m speaking to now, industry association business owners, and they’re starting to play around a little bit with ChatGPT is the big, that’s their first…
Gene: And obviously we’ve got coming down, very shortly we’ll be having Google Duet for Workspace and Microsoft Copilot. So there’s going to be a lot of, and even in Microsoft Copilot, which is being introduced in phases, but the way it’s working with Excel is one that you have giant spreadsheets that you can ask questions of it to give you back what product lines are showing the best profit margins, that kind of thing based on that data.
Gene: And I think as their capabilities expand and then as the language models that they can reach into, like accounting systems and those kinds of things, you’re going to have a lot more businesses really not only relying on these generative AI products, but more importantly they’re going to need help extracting data from that and asking questions. So what advice would you give to a Luddite who is starting out in ChatGPT and they’re asking it a question and they’re not getting the answers that they need? What have you learned about writing a good prompt?
Anna: Yeah, I mean, I think there are certain tasks that are more information finding, like asking a question, and then there are tasks that are more generative, like write me X, Y, Z thing. The general advice, the most important advice I can give for front-end prompt engineering is to be rich and precise. I think a lot of people approach it like they’re interacting with Google, where they’re kind of just shouting keywords at it. And I want you to think about that as if you were talking to a human being and you were doing that, and they were like, “Whoa, okay.” And that’s not going to give you the best result. You want to be a little more human with it.
Anna: At the same time, you want to be extremely precise. It’s not quite like interacting obviously with a human being in that, you know, one of my favorite comparisons is Amelia Bedelia. It is so hyper literal, it’s human-ish, but it’s very, very, very literal. And so you really have to be careful about exactly what you’re saying. And then in terms of the richness of it, the sort of human side of it, one of the wonderful things about these tools is that they do have some amount of, they have a huge amount of subjective understanding of human concepts. So it’s like a robot, but it’s like a robot that knows what the word effusive means. And you can go ahead and pile on adjectives and whatever to give it this better rounded understanding of what you’re doing.
Anna: And I actually like to think of this sort of richness as another type of precision in a way, like you’re hitting the nail on the head both in terms of the literal, logistical instructions you’re giving it and you’re hitting the nail on the head conceptually at the same time as well. And if you can really, really accurately describe what you’re doing on a subjective level or an artistic or creative level, often the result you’ll get back is a million times better than if you threw general words at it and wanted it to use its inference to do the thing that you have in your head.
Gene: Great advice. Let’s take a few actual examples so people can walk away thinking about this. Say I wanted to use, again, I’ve got to use ChatGPT, but say I said like, “Hello, ChatGPT. Can you write a blog post for me on making spaghetti and meatballs?” How would you approach that? What would be your prompts to that? Would you start out just generally like, “Hi, please write a blog post on how to make spaghetti and meatballs.”
Anna: I mean, it’s not the worst prompt in the world. I’ve seen worse. But I generally tell people if they’re really working on this prompt to use this tent-like structure, where for every human interaction we have a huge amount of context, whether that’s, even if it’s someone coming up to you on the street, someone’s coming up to you on the street, that’s context in and of itself. What are they wearing? Even getting a random email, you’re opening an email, and that’s context. The subject line is in all caps, that’s context.
Anna: So we want to give the AI the same tools of context to work with. So you want to start with like, I am trying to write a blog, or one approach that I would recommend is like I’m trying to write a blog, blah, blah, blah. There’s also the tried and true method of, an oldie but a goodie at this point. I mean, oldie, it’s like a year old, but…
Gene: I know. Nothing’s really that old.
Anna: But of telling the AI who it is, giving it a role to play. You are a well-known blog writer, food writer who loves Italian food. And you can put that either in the message space or you can put that, for ChatGPT in particular, in the custom instructions, which we also call the system prompt. And so you start with that, but you give it that context. You don’t start in medias res, which is what I see a lot of people do.
Anna: And then I like to move on to what I call the resources section, which this is where I see a lot of people start. So that’s the information you have about the blog, the brief, maybe the outline for it, the audience, whatever. It’s useful to give audience. It’s useful to say, this is who I am, this is who you are, this is who we’re speaking to. And so you go through those, you kind of give that library of resources, whatever it is, very well labeled. You choose a label for each one, like blog brief, blog outline, you label it. And then I…
Gene: I got a copy. I’m sorry to interrupt you because I know you’re on a roll here, but when you’re talking about giving the brief, when you’re talking about labeling it, when you talk about giving it a voice, whatever, so I just asked a simple question, write a blog about making spaghetti and meatballs, but what I’m getting back from you is that no, no, no, your prompt is going to be much more involved than that. This language model needs more information from you. Is that right?
Anna: Yeah. I mean, I think most people for the result they’re looking for, this is a better approach. If you truly want just a generic blog about spaghetti and meatballs and have no other ideas than that, then please, maybe throw in some good, please write me a compelling blog about spaghetti and meatballs. Not the worst prompt in the world. And then there’s the next step, which is please write me a compelling blog about spaghetti and meatballs for people who aren’t super used to cooking, or maybe people who love Italian food, or whatever it is. And then you add in…
Gene: Or garlic lover.
Anna: Yeah, but to finish out the structure I was talking about, that would actually be the next part of the structure, is that you can take that command you started with and actually put it at the end of that prompt. You want to end with this final command or even reiteration of what the actual task is. So it doesn’t have to go hunting back through the prompts in terms of its attention patterns, whatever. It can just find the task at the end, know what the task is really well.
Anna: And you also in that sort of final command line, I talked about labeling the resources really well, if you have resources, if you don’t, that’s fine, but if you have included those, you want to use those same labels in the final command. So you’d say, “Use the blog brief to write a blog about spaghetti and meatballs that follows the outline I gave you,” and you had labeled those blog brief and outline.
Anna: So basically the underlying principle to a lot of this advice is that it has limited brain power per generation. And for us, we actually conserve brain power sometimes by consolidating what we’re saying in terms of write the blog about it and we already know what it is. And that doesn’t take up more brainpower for us. It just makes the sentence shorter. But for the AI, that’s just that tiny extra little bit of brainpower where it’s like I have to figure out what it is first before I do this task. So you just want to take everything that you possibly can off its plate so that it can focus on the central task that you’re giving it. So more is not always better. It’s specifically more.
Gene: You know what you remind me of is it’s like you’re talking to an assistant. I mean, if I was working with somebody and I said to them, “I need you to write me a blog on spaghetti and meatballs,” that assistant would say to me, “Okay, well, I don’t know. What tone should I use? How long should it be?”
Anna: Exactly.
Gene: You know what I mean? Who’s the audience? I mean, so…
Anna: Why are we doing this?
Gene: Yeah. Why are we doing this? Is this for a large group? Or is this for just one person who would be interested in this? Is the person reading this from Italy or from somewhere else? I mean, all these different things that, and I guess what you’re saying is you want to try and give as much context as possible to the gen AI platform to help it, like you would be giving an assistant to do. Is that, am I saying that…
Anna: You want to give it as much relevant context as possible, but you really want to give that context in a coherent way where it’s really legible to the AI, because there’s such a thing as too much context and there’s such a thing as giving even relevant context in a really confusing way that’s just going to put more on its plate as it tries to figure out how to use it.
Gene: Okay. Let’s take another example. Say a job description. Say I’m looking to hire a prompt engineer for my company and I’m going to put a job posting on LinkedIn for this, and I need help writing out this job description. Okay, say I went to ChatGPT, say I asked you, you’re the prompt engineer, so you’re going to ask ChatGPT. What prompt would you give to a ChatGPT to help write a job description? What more information am I not including right now in this just general request?
Anna: Well, I would probably start with giving it again, just that trick of a bit of a role, like you are a recruiter who’s an expert at writing job descriptions and you’re an expert in this field maybe. Then I would say, I am writing this job description to put it on LinkedIn, and I would maybe even add a little bit more of a motivator there, even a bit of emotional weight, like because it’s important that we find a high quality candidate for this job.
Gene: Wow.
Anna: Or that’s what we’re trying to do with this. And that could be helpful. And then I would say job details. I would not say job description because that kind of repetition, because it’s like I’m writing a job description, but the job description’s already here. So I’d say job details and then I would put the details of the job, and then I would say, “Use the job details to write an accurate and compelling job description that would appeal to high quality candidates for a role.” I’d be a little more specific than that, but yeah.
Gene: That’s great. I know we only have a few more minutes left, and this advice is super helpful. I mean, is this the kind of stuff that a typical business owner should know? Your job is very specific in writing prompts for Copy.ai and for your large language model. And like you said at the very beginning, you’ll have a customer, they’re going to put in their general, their starting prompts, but it’s your prompts behind the scenes that are taking what they’re asking for and trying to ask it better of the database. Do you think that AI will ever evolve to the point where you are out of a job, where AI will be writing its own prompts, it’ll get smart enough to do what you are doing? Is this a limited career?
Anna: If it’s ever smart enough to write the kind of prompts I have to write for it, I would love to be involved in making it that smart, but I don’t think so, only because the type of prompting I have to put together is, again, this mix of human, it’s this human precision that there’s sort of a push and pull with these models.
Anna: On the one hand, they need to be very, very precise. It’s good that they’re hyper literal for my work because I need them to do these really strange tasks. I essentially need them to generate bizarro code, and I’m walking them through reasoning for that and then generating code and all these things through these examples. So that precision is very valuable. But they also have to understand things like human motivation and reasoning because that’s part of why they’re generating code or that type of code.
Anna: And so there’s a push and pull where human communication relies really strongly on inference. It relies really strongly on well-intentioned and useful willful interpretation, because none of us are very precise when we’re speaking or even writing. So these models are human, but require precision at the same time in a way that I think there’s so much value there to knowing how to do that. At the same time, I used to, to go back to your question more specifically, I used to be worried that my job was going to be temporary. For the first year of it, new models would come out and everyone would be like, “This is it. It’s the end of prompt engineering.” And I’d be like, “Oh, no.”
Anna: And then I would have so much more work because there was a new model, and not only would it function differently, but it would be capable of a lot more through prompt engineering. And I think I used to, and I think a lot of people still think of it this way, that if this is the slope of what the models are capable of, and this is what we expect them to do, that prompt engineering lives in the triangle in between and is just going to get smaller and smaller.
Gene: Got it.
Anna: I also used to believe it was that way, but nowadays, I think it’s more like this, where the more powerful the model is, the more work I have because the more it’s capable of with really high quality prompt engineering.
Gene: It’s a great answer. And I just, to lead us out, I think most business owners themselves, I mean, the advice that I’ve been trying to give that I think is important, I think that, first of all, for prompts to work well, the databases have to be good. And most of my clients’ databases are not very good. I bet you Copy.ai’s databases are a lot better and getting better. So cleaning up your data is definitely a big thing.
Gene: And I don’t know if a typical small business owner would hire a prompt engineer per se for their business, but I think it’s a safe assumption, Anna, that engineers like yourself are being employed by software companies all over the world so that when we’re making requests of their databases or their generative AI large language models, there’s been work done by you to make sure that my prompt is getting utilized in the right way to bring me back the information I want. Is that a safe assumption to make?
Anna: Yeah, I think so. I think right now the role is often conflated or combined with a machine learning role. I mean, it is a machine learning role, but I see people get in touch with me trying to hire for a prompt engineer, but they also want someone who is coming from machine learning. And I think there are many good prompt engineers out there who are coming from a machine learning background, but I also think you will get a lot of maybe machine learning experts who don’t quite realize how complicated it can get with the prompting. So I see those two roles dividing more in the future, working closely together, but more dedicated prompt engineering roles.
Anna: As for whether you’d see small businesses hiring a prompt engineer, I kind of agree, but it’s almost like part of what, not to plug my own product or business again, but it’s part of what we’re doing at Copy.ai, where they shouldn’t have to. There are tools out there that help to generate these repeatable AI-based, gen AI utilizing workflows for whatever they need without them having to laboriously build it themselves and figure out the prompts themselves.
Gene: Right. It almost seems like you’re sort of a middleware. I mean, you’ve got the user interface, you’ve got the backend, and then you’ve got your role to facilitate that communication between the two, to make sure the results that come out are useful. And that’s how I look at it.
Anna: I think that was more true for earlier in our product. I would say nowadays, my work is surprisingly enmeshed in the backend.
Gene: Good, good.
Anna: Yeah. Again, there’s parts of it that need to be generated that are in natural language, and then there’s parts that need to be generated that are code, and there’s every intersection between the two. And our system moves in and out of AI generative steps that I’m in charge of, into programmatic steps that are interstitial, and then back into those generative AI steps. And it kind of has all become almost like one thing.
Gene: Anna Bernstein is a staff prompt engineer at Copy.ai, giving us lots of great information and background about what prompt engineering is all about, how it will impact our business, and even a few great pieces of advice on writing good prompts. Anna, I can’t tell you how much I’ve enjoyed this conversation, and thank you so much for joining us.
Anna: Thank you so much. This has been great.
Gene: It has been fun. Everybody, you have been watching and listening to the Hartford Small Biz Ahead podcast. My name is Gene Marks. If you need any tips or advice or help in running your business, please visit us at smallbizahead.com or SBA.thehartford.com. Thanks for watching or listening. We will see you again soon. Take care.
Thanks so much for joining us on this week’s episode of The Hartford Small Biz Ahead podcast. If you like what you hear, please give us a shout out on your favorite podcast platform. Your ratings, reviews, and your comments really help us formulate our topics and help us grow this podcast. So thank you so much. It’s been great spending time with you. We’ll see you again soon.
Download Our Free eBooks
- Ultimate Guide to Business Credit Cards: The Small Business Owner’s Handbook
- How to Keep Customers Coming Back for More—Customer Retention Strategies
- How to Safeguard Your Small Business From Data Breaches
- 21 Days to Be a More Productive Small Business Owner
- Opportunity Knocks: How to Find—and Pursue—a Business Idea That’s Right for You
- 99 New Small Business Ideas