In 1989, Tim Berners-Lee designed the World Wide Web so scientists could share research papers by clicking links. Every technology decision since — HTML, CSS, JavaScript, responsive design — has been built on the same assumption: a person with eyes and a cursor is on the other end. That assumption is about to break.
A Brief History of Building for Eyeballs
The early web was text and hyperlinks. Mosaic introduced inline images in 1993 and gave us the point-and-click interface. Netscape created JavaScript in 1995 so pages could respond to mouse clicks and keystrokes. CSS arrived in 1996 to standardize visual presentation — fonts, colors, layouts — for human eyes. Dreamweaver popularized WYSIWYG editing in 1997, cementing the idea that web content is fundamentally a visual medium.
Every framework that followed — React, Vue, Angular, Tailwind — extended this philosophy. Make it beautiful. Make it interactive. Make it feel right when a human uses it. None of them were designed for a software agent that has no screen, no mouse, and no concept of "scrolling down."
This matters because, for the first time, a significant share of your customers are not browsing your website themselves. They are sending an AI agent to do it for them.
What Happens When an AI Agent Visits Your Website
When an AI agent tries to interact with a website built for humans, it faces a series of problems that compound on each other.
Dynamic content is invisible. Modern websites render most content through JavaScript after the page loads. A booking calendar, a menu with prices, a list of available time slots — these elements do not exist in the raw HTML. They are assembled by the browser's JavaScript engine. An agent requesting that page gets a shell of empty containers.
Forms are designed to stop machines. CAPTCHAs exist specifically to block non-human visitors. Even reCAPTCHA v3, which runs invisibly, scores users based on behavioral patterns like mouse movement and typing cadence — patterns that AI agents fundamentally lack.
Every site is different. Date pickers, time selectors, custom dropdowns — each restaurant's booking widget uses its own proprietary JavaScript components with unique interaction models. There is no standard for how a booking form should work, which means an agent trained on one breaks on the next.
Visual cues have no machine equivalent. "Available times are shown in green." "Click and drag to select your seating area." These instructions are meaningless to an agent that cannot perceive color or spatial layout.
The Compound Failure Math
Here is the part that surprises people. Even the best AI browser agents post impressive numbers on individual actions: Claude Computer Use reports 86% success, AutoGPT 81%. But booking a table is not a single action. It is a sequence: find the venue, navigate to the booking page, select a date, pick a time, enter guest count, fill in contact details, submit, confirm.
At 85% accuracy per step, a ten-step flow succeeds only about 20% of the time. That is 0.85 multiplied by itself ten times. Even high per-step accuracy produces an unacceptable end-to-end failure rate when chained across a multi-step process.
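The arithmetic is easy to verify. A minimal sketch, using the article's own figures of 85% per-step accuracy and ten steps:

```python
def chain_success(per_step: float, steps: int) -> float:
    """End-to-end success rate when every step must succeed independently."""
    return per_step ** steps

# 85% per-step accuracy across a ten-step booking flow
rate = chain_success(0.85, 10)
print(f"{rate:.1%}")  # prints "19.7%"
```

Raising per-step accuracy helps less than intuition suggests: even at 95% per step, the same ten-step flow still fails roughly four times in ten.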
This is not a problem that gets fixed by making agents smarter. It is an architectural problem.
"Asking an AI agent to use a website is like asking a human to read a database dump — technically possible, but architecturally wrong."
What APIs Do Differently
An API is purpose-built for machines the way a website is purpose-built for people. The differences are fundamental:
Where a website serves HTML with visual markup, an API returns JSON with structured data. Where a website requires clicking links and scrolling, an API accepts HTTP requests to defined endpoints. Where a website varies by device, session, and A/B test, an API returns the same structure every time. Where a website renders megabytes of HTML, CSS, JavaScript, and images, an API returns kilobytes of pure data.
When an AI agent calls an API, there is no scraping, no form-filling, no visual interpretation. It sends a request — "Is there a table for four at 8pm on Saturday?" — and receives a structured response it can immediately act on.
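As a sketch of that exchange in code — the endpoint, field names, and response shape below are hypothetical, not any real platform's API:

```python
import json

# Structured request an agent might send (hypothetical endpoint):
#   GET /v1/availability?date=2025-06-14&time=20:00&party_size=4
request_params = {"date": "2025-06-14", "time": "20:00", "party_size": 4}

# Structured response the server might return — no HTML to scrape,
# no form to fill, no visual layout to interpret:
response_body = """
{
  "available": true,
  "slots": [
    {"time": "19:45", "table": "T12"},
    {"time": "20:00", "table": "T7"}
  ]
}
"""

data = json.loads(response_body)
if data["available"]:
    # The agent can act on the matching slot immediately
    slot = next(s for s in data["slots"] if s["time"] == request_params["time"])
    print(f"Bookable: table {slot['table']} at {slot['time']}")
```

The entire exchange is a few hundred bytes of unambiguous data, versus the megabytes of markup and scripts a browser-driving agent would have to render and interpret.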
Industries That Already Figured This Out
The hospitality dining industry is not the first to face this challenge. Other sectors solved it years or even decades ago.
Travel got there first. The Global Distribution System (GDS) dates back to 1960 when American Airlines built Sabre, the first computerized reservation system. Today, Sabre, Amadeus, and Travelport control roughly 97% of the booking market and process millions of machine-to-machine transactions daily. When you ask an AI to "find me a flight from Bangkok to Tokyo," it can query structured APIs and return real-time results because the travel industry built this machine-readable layer proactively.
E-commerce made APIs the product. Stripe processed over $40 billion during Black Friday and Cyber Monday 2025 alone. Businesses on Stripe generated $1.9 trillion in total payment volume that year. Shopify's Admin and Storefront APIs power headless commerce for millions of merchants. Seven lines of code with Stripe can process a payment that used to require a multi-page checkout form.
Hospitality dining has no equivalent. There is no GDS for restaurants. No universal protocol for querying menu data, checking table availability, or executing a booking across independent venues. OpenTable, Resy, and SevenRooms each operate as closed ecosystems. The independent venues that make up the backbone of cities like Bangkok have nothing at all.
The Scramble to Retrofit Machine-Readability
The web is now rushing to bolt on machine-readable layers after the fact. Several new standards have emerged in the past two years:
llms.txt, proposed in September 2024 by Answer.AI co-founder Jeremy Howard, is a plain-text Markdown file placed at a website's root that provides AI-optimized summaries of what the site contains. Over 844,000 websites had implemented it by October 2025, including Anthropic, Cloudflare, and Stripe. It is a step forward — but it only tells AI what exists, not how to interact with it.
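For a sense of what that looks like in practice, here is an illustrative llms.txt for a hypothetical restaurant, following the proposal's structure of a title, a blockquote summary, and sections of annotated links (the venue and URLs are invented):

```markdown
# Sukhumvit Bistro

> Modern Thai restaurant in Bangkok. 45 seats, open Tue–Sun, 17:00–23:00.

## Menu

- [Full menu with prices](https://example.com/menu.md): dishes, prices, dietary tags

## Reservations

- [Booking policy](https://example.com/booking.md): party sizes, deposits, cancellation terms
```

An agent reading this learns what the venue offers and where the details live — but note that every link still points at content to read, not an endpoint to call.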
agents.txt goes further, specifying API endpoints, authentication methods, rate limits, and policies that an AI agent can read and automatically adapt to. This is closer to what hospitality actually needs — but it still requires the venue to have API endpoints in the first place.
These standards are informational overlays. They tell agents what exists but do not provide the transactional endpoints to actually do things like book a table. That requires real APIs.
The Dual-Stack Future
The emerging model is what some are calling the "dual-stack" web presence. Every business needs two parallel interfaces to the same underlying data:
The human layer is the traditional website — beautiful photography, atmospheric descriptions, an interactive reservation widget, social proof from reviews. Everything that makes a person feel confident about choosing your venue.
The machine layer is a set of API endpoints and structured data feeds. JSON endpoints for menu items with prices, dietary tags, and allergens. Real-time availability checks. A booking endpoint that accepts structured parameters. Schema.org markup. An OpenAPI specification. This is what AI agents consume.
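A machine-layer response for a single menu item might look like the following — the field names are illustrative, not a published schema:

```json
{
  "item": "Tom Yum Goong",
  "price": {"amount": 320, "currency": "THB"},
  "dietary_tags": ["gluten-free"],
  "allergens": ["shellfish"],
  "available": true
}
```

The same facts a human reads off a photographed menu page arrive here as typed fields an agent can filter, compare, and act on directly.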
Both layers access the same database. A table reserved through the website is the same table reserved through the API. The only difference is the interface — one is designed for human perception, the other for programmatic access.
The API economy is projected to reach $34.17 billion by 2032. Businesses exposing machine-readable interfaces gain access to every AI agent as a distribution channel without per-click advertising costs. In an era where AI is collapsing the traditional marketing funnel, this is not optional infrastructure. It is competitive survival.
Why Hospitality Has Lagged — and How to Catch Up
The reason restaurants have not built APIs is straightforward: they should not have to. Individual restaurants are small businesses with limited tech budgets. Unlike airlines or hotel chains, most cannot justify building and maintaining API infrastructure. The technology stack is fragmented across dozens of disconnected systems — POS, reservations, delivery aggregators, inventory, loyalty — each with its own limited or nonexistent API.
In Bangkok, the fragmentation is extreme. Many venues use LINE for reservations — messaging, not structured data. Some still use paper reservation books. International platforms like OpenTable have limited penetration in Southeast Asia. There is no dominant reservation platform, which means there is no dominant API.
The solution is not asking every restaurant to hire a developer. It is building the middleware layer — the equivalent of what GDS did for airlines — that aggregates, standardizes, and exposes venue data through machine-readable endpoints. Venues plug in through a simple management dashboard. AI agents connect through a single, consistent API. The venue gets both a human-facing web presence and a machine-facing data layer without building either from scratch.
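The core of that middleware is an adapter layer: one small translation per source system, one consistent schema out. A minimal sketch, with invented backends and field names:

```python
# Each venue backend exports data in its own shape; the middleware maps
# every shape onto one consistent schema (all names here are illustrative).

def from_pos(raw: dict) -> dict:
    """Adapter for a hypothetical POS export."""
    return {"date": raw["biz_date"], "slots": sorted(raw["open_times"])}

def from_sheet(raw: dict) -> dict:
    """Adapter for a hypothetical shared-spreadsheet export."""
    return {"date": raw["Date"], "slots": sorted(raw["Free slots"].split(", "))}

ADAPTERS = {"pos": from_pos, "sheet": from_sheet}

def availability(venue_id: str, system: str, raw: dict) -> dict:
    """The single, consistent response an AI agent sees for any venue."""
    return {"venue_id": venue_id, **ADAPTERS[system](raw)}

# Two venues on different source systems, one uniform output:
a = availability("bistro-a", "pos",
                 {"biz_date": "2025-06-14", "open_times": ["20:00", "18:30"]})
b = availability("bistro-b", "sheet",
                 {"Date": "2025-06-14", "Free slots": "19:00, 21:00"})
print(a)
print(b)
```

The design point is that each new venue costs one adapter, while every AI agent integrates once against the uniform `availability` shape — the same leverage GDS gave airlines.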
The web was built for people sharing knowledge. The next layer of the web is being built for agents completing transactions. The venues that add that machine layer now will be the ones AI agents can find, understand, and book through. The rest will have beautiful websites that increasingly fewer customers ever see.