AI Needs Forbes Training
What luxury hospitality standards can teach us about building AI that anticipates needs and removes friction.
I'm standing at the front desk of Las Alcobas Napa Valley, a newly built luxury hotel in wine country. A guest approaches and asks where the spa is. I don't point across the courtyard and say "over there." I step out from behind the desk and walk them there myself.
It's my last year of college and somehow I've landed on the opening team of this property, despite having zero hospitality experience. I'm not entirely sure why they hired me, but I'm grateful they did. Over the next several months, I'll learn what it means to provide anticipatory service, not through grand gestures, but through a hundred micro-behaviors that become muscle memory.
Years later, as we're hurtling through the AI boom, I keep thinking about that training. Not because I'm nostalgic for walking people to the spa, but because the expectations set for me at that front desk are exactly what we should be requiring of AI.
Forbes Training
In luxury hospitality, there's a framework for how you're supposed to interact with guests. It's Forbes Travel Guide's 800+ service and facility standards, the criteria hotels are judged on to earn their Five-Star, Four-Star, or Recommended ratings. Among staff, we just called it "Forbes training."
The framework obsesses over micro-behaviors:
- Smile within the first few seconds of contact
- Use the guest's name at least twice
- Offer a warm farewell
- Never point - always escort
- Anticipate needs before the guest asks
That last one is the most important: anticipatory service. Knowing what someone needs before they realize they need it.
Here's what that looks like in practice: A guest mentions they have an early flight tomorrow. A decent front desk agent says "okay, have a safe trip." A Forbes-trained agent hears "early flight" and extrapolates the full workflow: offer a wake-up call, confirm breakfast timing, arrange car service, have the bill ready at checkout. All without being asked for each step individually.
This isn't about being fancy. It's about removing friction. The guest shouldn't have to think through the details of their departure logistics.
The training rewired my brain. I was constantly contextualizing, forecasting, memorizing, and thinking actively. I had to always be on my game. Providing exceptional anticipatory service meant better reviews on TripAdvisor, more referrals from travel agents, and a higher likelihood that corporate bookers would choose us over competitors.
Where AI Falls Short
I recently tried a new virtual assistant. It's designed to look and act like a real person and is connected to my email, calendar, and other productivity tools.
I ask: "What's on my calendar for the rest of the day?"
It responds: "Nothing - your day is clear."
Except I have an important afternoon call scheduled. I can see it in my actual calendar.
I call it out. The assistant apologizes and says it will refresh its memory. Then it sits in silence. I wait. Is it working? Is it done? Eventually I have to ask if it's finished updating. It confirms that yes, it's done now.
Three points of friction where there should be zero.
One mistake created permanent supervision overhead. I now can't trust this assistant. Before every answer it gives me, I'm checking my actual calendar to verify. I'm no longer being assisted, I'm managing the tool.
This is the pattern with many AI tools. Too much handholding. Too much manual prompting. Too much oversight.
I tell Claude I'm presenting to a client tomorrow. It helps me refine the deck, but then it waits. A front desk agent hears "early flight" and immediately thinks: wake-up call, breakfast, car service, checkout. The full sequence. Claude hears "client presentation" and stops at the deck.
The friction isn't that AI can't do more, it's that it has to be told. It asks permission for obvious next steps instead of just taking them. Or it announces it's updating memory, then goes silent, leaving you wondering if it's working or broken.
Even when AI has the access it needs, it often won't use it proactively. I ask: "Did Sarah ever respond about that proposal?" The AI has my email. It could check. Instead: "I don't have that information." So I go look myself.
AI tools with memory features hiccup constantly. They forget key information. They remember your preferences one day and ignore them the next. You've told the system three times that you're based in NYC and prefer morning meetings, but next week it's asking what timezone you're in and suggesting 4pm slots.
When a Five-Star concierge messes up your dinner reservation, you start questioning everything else. Same with AI. One missed calendar entry and you're double-checking every output. At that point, what are you even using the assistant for? You're doing all the remembering, all the connecting, all the prompting. That's not assistance. That's management overhead.
Here's what the Forbes standards tell us we should be aiming for: AI that anticipates the full workflow. AI that proactively uses the access it has. AI that offers the obvious next step without being prompted. AI that completes tasks without abandoning you mid-process.
AI for the Everyday Person
Most hotel guests don't even know the Forbes standards exist. They're not meant to. You're not supposed to walk into a Five-Star hotel with a checklist, counting how many times the front desk agent uses your name or timing how quickly they smile. The standards aren't there to create explicit expectations for guests; they're there to subtly guide your experience.
You just show up and the service adapts to you. You don't need to know the protocols or phrase your requests a certain way. The Forbes framework works precisely because it operates invisibly.
That's the bar for AI. Not creating tools that work well for people who understand AI, but tools that work for anyone who opens a prompt screen. No special techniques required. No learning curve for how to talk to AI properly.
Infrastructure like the Model Context Protocol is helping. It lets AI reliably connect to calendars, email, project management tools. But access isn't enough. Just like a front desk agent needs judgment about when to check the reservation system, AI needs to know when to proactively use the information it has.
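That judgment call, "I was asked something my connected tools can answer, so check them," can be sketched as a routing policy. Everything below is hypothetical: the tool names, the keyword hints, and the routing logic are invented for illustration and are not the actual Model Context Protocol API.

```python
# Illustrative sketch: prefer proactively using granted access over
# answering "I don't have that information."
# Tool names and keyword hints are hypothetical, not a real MCP schema.

AVAILABLE_TOOLS = {"email": "search_email", "calendar": "read_calendar"}

# Keywords suggesting which connected source could answer the question.
TOOL_HINTS = {
    "email": ["respond", "reply", "proposal", "inbox"],
    "calendar": ["meeting", "schedule", "calendar", "free"],
}

def handle(question: str) -> str:
    """Route a question to a connected tool instead of giving up."""
    q = question.lower()
    for source, keywords in TOOL_HINTS.items():
        if source in AVAILABLE_TOOLS and any(k in q for k in keywords):
            return f"checking {source} via {AVAILABLE_TOOLS[source]}..."
    return "I don't have that information."

# "Did Sarah ever respond?" should trigger an email check, not a shrug.
print(handle("Did Sarah ever respond about that proposal?"))
```

The real judgment problem is of course much harder than keyword matching, but the design choice is the point: when access exists, the default should be to use it, not to report its absence.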
We're Getting There
We're moving in the right direction. AI agents - autonomous systems that can execute multi-step workflows - are a real step forward. The technology exists for better anticipatory service.
But "agents" is still deeply B2B vocabulary. It's not mainstream reality yet. The everyday person isn't using agentic solutions to automate their personal life. They're using ChatGPT, Claude, Gemini. Chat interfaces that still require significant prompting and handholding, and that, in many cases, can't perform the actions we need them to.
The gap isn't in what's possible, it's in what people actually have access to through the tools they're using day-to-day. And those tools need to pass the Forbes standards. Not just the enterprise solutions.
The Standard
This essay isn't about dismissing AI. It's about establishing a clear standard so we know what to look for and what to leave behind.
Forbes training works because quality has consequences. Exceptional service creates better reviews, more referrals, higher likelihood that guests choose you over competitors. Get the experience right and guests return. Miss the mark and they find another hotel.
The same is true for AI. We'll adopt the tools that anticipate our needs and operate invisibly. We'll abandon the ones that create friction and erode trust. The market will reward AI that meets the Forbes standards, not because users are explicitly demanding it, but because exceptional service drives adoption, retention, and referrals.
The reasonable objection: hotels operate in controlled environments with predictable patterns. A front desk agent can't accidentally delete your files or send an email to the wrong person. AI can. The stakes are different. Maybe the current friction, asking for permission and waiting for prompts, isn't a downside but a safety feature. Maybe it's responsible design until we solve for reliable autonomy.
But here's what that argument misses: most of the friction we're experiencing isn't about high-stakes decisions. It's about low-stakes execution failures. Forgotten context. Incomplete handoffs. Failing to check email when it has access. Asking what timezone you're in when it already knows.
There's a massive gap between asking permission to check your calendar and autonomously making financial decisions. AI should operate more proactively. A front desk agent doesn't ask permission to check if a restaurant is open - they just check. The same should apply to AI checking your calendar or reviewing an email thread.
The standard should be: assistance that feels effortless because all the work is invisible. No cognitive load. No decision fatigue. No supervision overhead.
The Forbes framework gives us an anchor point, not for critiquing what AI can't do yet, but for articulating what we're building toward. It helps us evaluate which tools are worth using, verbalize what's frustrating us, and hold builders to a higher bar.
Current limitations aren't permanent. But that doesn't mean we should passively accept them. We should expect AI to meet the same standard we expect from human service: anticipatory, personalized, consistent, and invisible.