The 5‑Question Checklist That Makes AI Worth It For Small Businesses

Scott Armbruster
December 9, 2025
6 min read

Most small businesses do not fail with AI because the tech is bad. They fail because they pick the wrong problem.

They spend money on chatbots no one uses, dashboards no one trusts, or "automation" that quietly dies six weeks after launch. The pattern is almost always the same. The idea sounded exciting, but no one asked a few simple questions before starting.

This is where a clear checklist helps.

Below is a 5‑question decision frame you can use before you spend a single dollar on AI. Use it to sort ideas into three buckets: pursue now, park for later, or politely kill.


When is AI actually worth it for a small business?

AI is not magic. It is a lever. It only helps if it pulls on something that already matters in your business.

As a rule of thumb, AI is worth exploring when:

  • The task happens often,
  • The cost of doing it manually is meaningful, and
  • "Good enough" output is acceptable, not just perfection.

If you are a mid‑career professional or owner in a 5 to 100 person company, your best AI opportunities usually sit in a few places: repetitive communication, simple analysis, documentation, routing work, and basic forecasting.

To find them, walk through this checklist.


The 5‑Question AI Opportunity Checklist

You can use this on a whiteboard in 15 minutes. Take one process at a time, then ask:

Question 1: Is this problem expensive enough to matter?

If AI solved this, what would actually change in your numbers or your calendar?

Look at three types of cost:

  • Time cost: How many hours per week or month go into this task?
  • Error cost: What does it cost when humans get it wrong?
  • Delay cost: What is the cost of waiting too long to act?

A useful rule: if solving the problem well would not move at least one of these by 10 to 20 percent, it is probably not an AI priority.

Example: A 12‑person marketing agency I worked with had account managers spending about 5 hours per week writing first‑draft reports for clients. That is 5 hours x 12 people x 4 weeks. Roughly 240 hours per month.

Once we noticed that number, it was obvious that even a 30 percent reduction was worth exploring. That is 72 hours per month they could reallocate to client calls or sales.
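
If you want to make this math concrete, here is a minimal back-of-the-envelope sketch in Python using the agency numbers above. The expected reduction and the hourly value are assumptions, not guarantees; swap in your own figures.

```python
# Back-of-the-envelope time-cost estimate, using the agency example above.
hours_per_person_per_week = 5
people = 12
weeks_per_month = 4
expected_reduction = 0.30   # assumed reduction, not a guarantee
hourly_value = 75           # hypothetical loaded cost per hour, in dollars

monthly_hours = hours_per_person_per_week * people * weeks_per_month  # 240
hours_saved = monthly_hours * expected_reduction                      # 72.0

print(f"Hours spent per month: {monthly_hours}")
print(f"Hours potentially freed: {hours_saved:.0f}")
print(f"Rough monthly value: ${hours_saved * hourly_value:,.0f}")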

If you cannot point to a number that changes, the "opportunity" is probably just a toy.

Keep it if: You can name a specific time, error, or delay cost you would be happy to reduce by at least 10 to 20 percent.

Drop it if: The benefit sounds like "would be nice" instead of "this would really help."


Question 2: Is the process repeatable and reasonably clear?

AI thrives where there is a pattern, even if the pattern is a bit messy.

Ask yourself:

  • Could I explain how this task is done in 5 to 10 bullet points?
  • Do different people at my company follow a similar approach, even if their styles differ?
  • Are there examples of good and bad outcomes from the past 3 to 6 months?

If the answer is no, you do not have an AI problem. You have a process problem.

Example: A boutique recruitment firm wanted "AI to find better candidates." After a short review, we found there was no consistent intake process. Each recruiter asked different questions and used different criteria.

Before AI could help, they needed a simple standard: required skills, nice‑to‑haves, dealbreakers, and culture flags. Once that template existed, AI could help draft outreach messages, screen resumes, and summarize interviews.

Keep it if: You can document the current process on one page and pull 10 examples of past work.

Fix first if: Everyone does it differently and no one agrees what "good" looks like.


Question 3: Is "good enough" acceptable, or does it need perfection?

This is where many AI ideas quietly die.

AI is excellent at "good enough" at scale. It is bad at life‑or‑death precision or anything that absolutely must be correct on the first try.

Ask:

  • What happens if AI gets this 10 to 20 percent wrong?
  • Can a human review or lightly edit before anything critical goes out?

AI is a strong fit when humans can stay in the loop and act as editors or approvers.

Good candidates:

  • Drafting emails, proposals, job descriptions, and FAQs
  • Summarizing long calls, documents, or customer feedback
  • Generating variations for marketing copy or social posts

Weak candidates:

  • Final tax filings
  • Legal documents you will sign
  • Safety‑critical instructions

For high‑risk tasks, AI can still help with internal drafts or research, but it should not be the final authority.

Keep it if: A human can quickly review and fix mistakes, and the cost of an occasional error is low.

Rethink it if: The task requires near‑perfect accuracy and errors carry legal, financial, or safety risk.


Question 4: Do we have the data and access needed to make this work?

Many AI projects fail here. Not because the model is bad, but because it has nothing useful to work with.

Ask:

  • Do we know where the relevant information lives? (Email, CRM, folders, shared drives.)
  • Can we get that information into one place, or at least make it accessible through tools we already use?
  • Do we have at least a few dozen examples of the task we want AI to help with?

Example: A 20‑person logistics company wanted AI to "answer customer questions automatically." The first question I asked was, "Where do answers live today?"

They were scattered across 3 inboxes, a Google Drive, and one person’s memory. The real project was a knowledge base, not AI. Once they organized 120 common questions and answers, an AI assistant became trivial to bolt on.

AI cannot organize chaos for you. It can scale what you already have.
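
To see why the organized data is the real project, here is a deliberately tiny sketch. It is not the logistics company's actual system, and a real assistant would use an LLM with retrieval, but both depend on the same thing: a source of truth someone has already written down.

```python
# A deliberately tiny FAQ lookup. The value is in the organized Q&A pairs,
# not in this code; the questions and answers below are made up.
knowledge_base = {
    "where is my shipment": "Track it via the link in your confirmation email.",
    "what are your delivery hours": "Deliveries run weekdays, 8am to 6pm.",
    "how do i change a delivery address": "Email support before 5pm the day prior.",
}

def answer(question: str) -> str:
    q = question.lower().strip("?! .")
    # Naive keyword overlap; crude, but enough to make the point.
    best = max(knowledge_base, key=lambda k: len(set(k.split()) & set(q.split())))
    if set(best.split()) & set(q.split()):
        return knowledge_base[best]
    return "I don't know yet; routing you to a human."

print(answer("Where is my shipment?"))
```

Notice that every hard decision here happened before any code ran: someone had to collect the questions and write the answers.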

Keep it if: You can point to the source‑of‑truth data and you control access to it.

Pause it if: The information lives in other people’s heads or in systems you cannot connect.


Question 5: Who will own this for 90 days?

The most important question is usually the most uncomfortable.

Technology rarely fails from lack of features. It fails from lack of ownership. Someone needs to:

  • Decide how this will be tested,
  • Collect feedback from users,
  • Adjust prompts, workflows, or rules,
  • Decide whether to keep, expand, or kill the experiment.

If no one is willing to own the AI experiment for 90 days, do not start.

In small businesses, the best owner is usually a process expert, not an IT person. Someone who lives close to the work and feels the pain of the problem.

Example: At a 35‑person software company, two attempts at AI customer support had already failed. On the third try, they assigned a senior support rep to own the experiment for one quarter. She adjusted prompts weekly, rewrote canned responses, and flagged edge cases. Within 8 weeks, they shifted 25 percent of incoming tickets to the AI assistant while keeping satisfaction scores steady.

The tech did not change. Ownership did.

Keep it if: A named person will own results and iteration for 90 days.

Kill it if: Ownership feels like "everyone" or "IT will figure it out."


A short real‑world example

Let us run this checklist on a common idea.

Scenario: You run a 25‑person B2B service company. You are considering AI to help with proposal writing.

  1. Is it expensive enough to matter?
    • Each proposal takes about 3 hours of a senior consultant’s time.
    • You send about 15 proposals per month.
    • That is 45 senior hours per month. Even a 30 percent reduction frees about 13.5 hours.
  2. Is the process repeatable and clear?
    • You already have a few proposal templates.
    • There is a standard set of sections: introduction, problem, approach, pricing, timeline, and case studies.
    • You can easily find 20 previous "good" proposals.
  3. Is "good enough" acceptable?
    • You are happy for AI to generate a messy first draft, as long as a human refines it.
    • Final pricing and terms will always be reviewed.
  4. Do we have the data and access?
    • Templates live in Google Docs.
    • Past proposals are stored in a shared folder.
    • Client notes sit in your CRM.
  5. Who will own this for 90 days?
    • One senior consultant agrees to own the pilot.
    • She will track time saved and proposal win rates for 3 months.

This scores well across all five questions. It is a strong AI candidate.

Now imagine the same checklist for "AI to predict which clients will churn" when you have only 60 clients and inconsistent data. The answers would be very different.


Your 15‑minute action: Build your first AI opportunity map

Set a 15‑minute timer and do this on paper or a whiteboard.

  1. List 5 to 10 recurring processes.
    • Examples: proposals, onboarding, reporting, customer follow‑up, scheduling, basic analysis, documentation.
  2. Pick 3 that feel painful.
    • Where do you see a lot of hours, frustration, or delays?
  3. Run the 5‑question checklist for each:
    • Is it expensive enough to matter?
    • Is the process repeatable and clear?
    • Is "good enough" acceptable?
    • Do we have the data and access?
    • Who will own this for 90 days?
  4. Rank them:
    • Circle the one that scores strongest across all five questions (a quick scoring sketch follows this list).
    • That is your first serious AI experiment.
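
If you want something slightly more structured than circling, here is one optional way to score the exercise: rate each question 0 (no), 1 (partly), or 2 (clearly yes), then rank by total. The candidate processes and scores below are purely illustrative.

```python
# Rank candidate processes by a simple checklist score.
# Questions: expensive, repeatable, good-enough ok, data/access, owner.
# All names and scores below are illustrative, not real data.
candidates = {
    "proposal writing": [2, 2, 2, 2, 2],
    "client reporting": [2, 1, 2, 1, 1],
    "churn prediction": [1, 0, 1, 0, 0],
}

for name, scores in sorted(candidates.items(), key=lambda kv: -sum(kv[1])):
    print(f"{sum(scores):>2}/10  {name}")
```

A process that scores 8 or higher, like the proposal example earlier, is a strong first experiment. Anything under 5, like churn prediction with thin data, belongs in the "park for later" bucket.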

You do not need a full "AI strategy" to start. You need one well‑chosen problem, a clear process, and someone who owns the experiment.


Your turn

Where are you currently considering AI in your business, and how does it score on this 5‑question checklist?

Reply with one process you are thinking about and your quick answers to the five questions, and I will help you pressure‑test whether it is a strong AI opportunity or something to park for later.
