Before Amazon built its own delivery network, its package deliveries were handled by third-party carriers such as the U.S. Postal Service and UPS. Determining the most cost-effective way to guarantee on-time delivery was simply a matter of comparing carrier rates.
But as customer demand for faster delivery grew, so did the need for more delivery options, and the Amazon Delivery Service Partner network was born. Amazon Delivery Service Partners are the final step in the complex logistics network that mediates between fulfillment centers (FCs), where inventory is stored, and customers’ doorsteps.
After leaving the FCs, packages move to sort centers, which aggregate shipments from multiple FCs, and then to delivery stations, where vans are loaded for “last mile” delivery to customers. Connecting all these different facilities are “lanes”, both air and ground. Optimizing first-party delivery means not only computing routes through this network but provisioning it — staffing fulfillment centers, procuring trucks, and the like — in advance, on the basis of demand forecasts.
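The facility-and-lane structure described above can be pictured as a directed graph: facilities are nodes, and lanes are edges with costs. The toy sketch below (all facility names and costs are made up for illustration) finds the cheapest lane sequence from a fulfillment center to a delivery station.

```python
from collections import defaultdict

# Toy model of the outbound network: facilities are nodes, and "lanes"
# (ground or air legs) are directed, costed edges. All names and costs
# here are hypothetical.
lanes = [
    ("FC1", "SortCenterA", 120),
    ("FC2", "SortCenterA", 90),
    ("SortCenterA", "StationX", 60),
    ("SortCenterA", "StationY", 75),
]

graph = defaultdict(list)
for origin, dest, cost in lanes:
    graph[origin].append((dest, cost))

def cheapest_path(graph, start, goal):
    """Exhaustive search for the cheapest lane sequence (fine for a toy graph)."""
    best = None
    stack = [(start, [start], 0)]
    while stack:
        node, path, cost = stack.pop()
        if node == goal:
            if best is None or cost < best[1]:
                best = (path, cost)
            continue
        for nxt, c in graph[node]:
            if nxt not in path:  # avoid revisiting facilities
                stack.append((nxt, path + [nxt], cost + c))
    return best

path, cost = cheapest_path(graph, "FC1", "StationX")
print(path, cost)  # ['FC1', 'SortCenterA', 'StationX'] 180
```

In the real network, route computation is only half the problem; as the article notes, the lanes and facilities themselves must be provisioned in advance from forecasts.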
Next week, at the annual meeting of the Institute for Operations Research and the Management Sciences (INFORMS), Andrea Qualizza, a senior principal scientist in Amazon’s Supply Chain Optimization Technologies (SCOT) organization, will present a talk titled “Fulfillment planning and execution in a first party network”, which introduces Amazon’s approach to solving this complex optimization problem.
That approach is the result of a multiyear effort coordinated by dozens of the company’s senior scientists, including Russell Allgor, a vice president and chief scientist, and Granville Paules, a senior principal scientist, both in Amazon’s Global Delivery organization; Qualizza; and Tim Tien, a senior principal technologist, and Narayan Venkatasubramanyan, a senior principal scientist, both of SCOT.
“Back when Amazon relied entirely on third-party logistics, the responsibility ended with the injection of every shipment into one of the many hubs of the third-party carriers in a timely fashion,” Venkatasubramanyan explains. “Decisions on how and when to fulfill each demand were entirely based upon contractually agreed rate cards that specified costs at the shipment level.”
As the network evolves, Venkatasubramanyan says, “the first-party regime poses two new challenges, one relatively obvious but hard, the other not so obvious and harder. The first challenge relates to building, provisioning, and staffing the outbound network based on a forecast of customer demand. The second is how to use the outbound network we have built to respond to actual customer demand, which includes making the fastest offers to customers and fulfilling the resulting promises efficiently and reliably.”
The fulfillment plan
“The first step in optimizing first-party delivery,” Qualizza says, “is the development of a fulfillment plan. This requires solving a sequence of optimization models that take into account demand forecasts, the cost of acquiring transportation resources, and staffing for our outbound network.”
“As we become more certain of our needs and our capability to acquire certain resources, our plans need to adjust accordingly,” Venkatasubramanyan adds. “With the increasing reliance on a first-party network, we now have to plan for the right amount of capacity in the outbound network — including the buffer capacity needed to cover for the uncertainty in demand. That’s analogous to how we manage the safety stock of inventory we carry in our FCs.
“That brings up another consideration, which is that our plans can’t be too different from what we said yesterday. At times, that can pose a challenge in the planning process, but it allows us to build a network that becomes comfortable with processes and ultimately more efficient than having constant change.”
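The safety-stock analogy above can be made concrete with a standard newsvendor-style buffer: plan capacity as the demand forecast plus a margin sized to the forecast's uncertainty. The service target and numbers below are illustrative, not Amazon's.

```python
from statistics import NormalDist

# Sketch of the safety-stock analogy: planned outbound capacity is the
# forecast mean plus a buffer that scales with forecast uncertainty.
# The 95% service target and the inputs below are illustrative.
def planned_capacity(mean_demand, demand_std, service_level=0.95):
    z = NormalDist().inv_cdf(service_level)   # ~1.645 for a 95% target
    return mean_demand + z * demand_std

print(round(planned_capacity(10_000, 800)))  # 11316: forecast + buffer
```

The noisier the forecast (larger `demand_std`), the more buffer capacity the plan carries, which is the same tradeoff the article draws with FC safety stock.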
The second step in optimizing first-party delivery is execution. Once a plan is in place, Qualizza explains, it needs to be “communicated to execution systems that in turn decide how we make delivery promises to our customers, what items go in what box, which warehouse handles it, and which path each box will take to get to the customer.”
That communication happens at two levels, Qualizza says. At a high level, the plan specifies origin-destination targets — that is, what percentage of the demand in a given locality should be met by a given fulfillment center.
“For example, within a zip code, how much of the demand do we plan to fulfill from warehouse one, warehouse two, warehouse three?” Qualizza says. “Likely, warehouses nearby in your region will aim to fulfill most of your demand. That depends, of course, on the network topology, connectivity, demand, and so on, which are accounted for during planning.”
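An origin-destination target can be thought of as a per-zip-code distribution over warehouses. The sketch below (the zip code, warehouse names, and shares are all hypothetical) samples a fulfilling warehouse according to planned shares.

```python
import random

# Illustrative origin-destination targets: for each zip code, the
# planned share of demand each warehouse fulfills. The zip code,
# warehouse names, and shares are all made up.
od_targets = {
    "98109": {"FC_Kent": 0.6, "FC_Spokane": 0.3, "FC_Portland": 0.1},
}

def pick_warehouse(zip_code, rng=random):
    """Sample a fulfilling warehouse according to the planned shares."""
    shares = od_targets[zip_code]
    return rng.choices(list(shares), weights=list(shares.values()), k=1)[0]

# Shares for each zip code should sum to 1.
assert abs(sum(od_targets["98109"].values()) - 1.0) < 1e-9
print(pick_warehouse("98109"))  # usually the nearby, high-share warehouse
```

As the quote notes, the actual shares fall out of planning over the network topology, connectivity, and demand, rather than being set by hand.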
At a lower level, the plan specifies time-based resource-level trajectories, or the assignment of fulfillment responsibilities to particular facilities at particular times. These trajectories can, for instance, help preserve resources that may be uniquely positioned to serve faster demand.
“The resources we’re talking about here are, for instance, what we call a lane, which is a way to get from a node to another node,” Qualizza says. “Nodes could be warehouses, sort centers, air hubs, or delivery stations. The lane typically consists of one or more trucks, scheduled over time. It could be one truck at the end of the day. It could be three trucks scattered throughout the day. Trajectories are meant to assign the right volume to actually fill the truck in a way that the truck does not go out empty, nor do we leave a handful of packages on the dock.”
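The truck-fill tradeoff Qualizza describes — neither dispatching a nearly empty truck nor stranding a handful of packages on the dock — can be sketched as a simple threshold rule. The capacity and fill threshold below are invented for illustration.

```python
# Toy sketch of the truck-fill tradeoff on a lane: hold volume until a
# departure can be filled reasonably, but never strand packages past the
# last departure. Capacity and threshold values are illustrative.
TRUCK_CAPACITY = 1000   # packages per truck
MIN_FILL = 0.5          # don't dispatch below 50% full

def dispatch_plan(queued, departures_left):
    """Packages to load on the next truck, or 0 to hold them."""
    if departures_left == 1:
        return min(queued, TRUCK_CAPACITY)   # last truck: take all we can
    if queued >= MIN_FILL * TRUCK_CAPACITY:
        return min(queued, TRUCK_CAPACITY)
    return 0  # hold the handful of packages for a later, fuller truck

print(dispatch_plan(420, 3))   # 0: below the fill threshold, wait
print(dispatch_plan(420, 1))   # 420: last departure, send what we have
print(dispatch_plan(1200, 2))  # 1000: a full truck, remainder queues
```

The planned trajectories effectively pre-compute this kind of volume assignment across all of a lane's scheduled departures, rather than deciding truck by truck.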
Targets and trajectories are computed jointly during the planning phase and are consistent with one another. They are the main point of contact between planning and fulfillment execution.
Executing a fulfillment plan is the job of individual facilities within the delivery network. The resourcing decisions at those facilities have already been made based on demand forecasts; the greater the uncertainty of those forecasts, the more buffering the resource provision requires.
“We are going to have online controllers that make decisions in an online fashion — as demand materializes, or as page views materialize, or as packages approach the SLAM [scan, label, apply, manifest] line, which is responsible for applying a label to a package within a few hundred milliseconds,” Qualizza explains. “The online controllers operate under strict time latencies and, as a consequence, operate with a limited set of information when making decisions.
“To aid online controllers’ decisions, input steering signals direct work toward resources that are underutilized with respect to the plan, as opposed to resources where we are above the plan.”
Even with this additional steering mechanism, online controllers make decisions one at a time, which could lead to inefficiencies. A decision made at a particular point in time — say, where to fulfill a particular demand or how to route a particular shipment — might later turn out to be suboptimal. To identify and resolve these inefficiencies, the SCOT researchers pair online controllers with offline controllers that can re-evaluate all demand at once and revise demand decisions whose processing has not yet begun.
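A minimal sketch of this pairing, using "utilization relative to plan" as the steering signal: the online controller assigns each unit greedily, one at a time, while the offline controller re-evaluates everything at once and may reassign only decisions whose processing has not begun. The two-FC setup and plan numbers are invented for illustration.

```python
# Hedged sketch of the online/offline controller pairing. The steering
# signal is simply each FC's utilization relative to its planned share;
# the plan numbers and two-FC setup are illustrative, not Amazon's.
plan = {"FC1": 60, "FC2": 40}    # planned units per fulfillment center
actual = {"FC1": 0, "FC2": 0}

def online_assign():
    """One-at-a-time decision: steer the next unit toward the FC
    furthest below its planned share."""
    fc = min(plan, key=lambda f: actual[f] / plan[f])
    actual[fc] += 1
    return fc

assignments = [online_assign() for _ in range(10)]

def offline_revise(assignments, started):
    """Re-evaluate all demand at once: decisions already in processing
    are kept; the rest may be reassigned to best match the plan."""
    fixed = assignments[:started]
    counts = {fc: fixed.count(fc) for fc in plan}
    revised = list(fixed)
    for _ in assignments[started:]:
        fc = min(plan, key=lambda f: counts[f] / plan[f])
        counts[fc] += 1
        revised.append(fc)
    return revised

print(assignments)                            # skews 60/40 toward FC1
print(offline_revise(assignments, started=4))
```

Here the greedy online pass already lands close to plan, but with richer cost structure the offline pass can undo early one-at-a-time decisions that later prove suboptimal, exactly the inefficiency the article describes.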
“If you think of this whole thing as analogous to a jigsaw puzzle, the plan is like the picture on the box,” Venkatasubramanyan says. “When customers are placing orders, they’re giving you one piece of the puzzle at a time. They are not handing you the pieces in the right order. In some cases, they’re not even handing you the right pieces. And what we’re trying to do is re-create that picture, because everyone else has relied on it — the people who wrote contracts on trucks, the people who hired associates three weeks in advance based on the picture, et cetera. The whole idea is to create a plan and get everyone to pull in the same direction.”