Placement Documentation

The placement API service was introduced in the 14.0.0 Newton release within the nova repository and extracted to the placement repository in the 19.0.0 Stein release. This is a REST API stack and data model used to track resource provider inventories and usages, along with different classes of resources. For example, a resource provider can be a compute node, a shared storage pool, or an IP allocation pool. The placement service tracks the inventory and usage of each provider. For example, an instance created on a compute node may be a consumer of resources such as RAM and CPU from a compute node resource provider, disk from an external shared storage pool resource provider and IP addresses from an external IP pool resource provider.

The types of resources consumed are tracked as classes. The service provides a set of standard resource classes (for example, DISK_GB, MEMORY_MB, and VCPU) and the ability to define custom resource classes as needed.
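As an illustration, a minimal sketch of defining a custom resource class over the HTTP API is shown below. The endpoint URL and auth token are placeholders for a real deployment, and the CUSTOM_FPGA name is only an example; custom class names must carry the CUSTOM_ prefix.

    # Minimal sketch: defining a custom resource class via the placement API.
    # PLACEMENT_URL and the token below are placeholders, not real values.
    import requests

    PLACEMENT_URL = "http://placement.example.com"     # assumed endpoint
    HEADERS = {
        "X-Auth-Token": "ADMIN_TOKEN",                  # assumed Keystone token
        "OpenStack-API-Version": "placement 1.7",       # PUT /resource_classes needs >= 1.7
    }

    # Custom resource class names must begin with the CUSTOM_ prefix.
    resp = requests.put(f"{PLACEMENT_URL}/resource_classes/CUSTOM_FPGA",
                        headers=HEADERS)
    print(resp.status_code)  # 201 if newly created, 204 if it already existed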

Each resource provider may also have a set of traits which describe qualitative aspects of the resource provider. Traits describe an aspect of a resource provider that cannot itself be consumed but that a workload may wish to specify. For example, the available disk may be solid-state drives (SSD).
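A hedged sketch of associating a standard trait with an existing resource provider follows; the endpoint, token, and provider UUID are placeholders.

    # Minimal sketch: tagging a resource provider with the STORAGE_DISK_SSD trait.
    import requests

    PLACEMENT_URL = "http://placement.example.com"      # assumed endpoint
    HEADERS = {
        "X-Auth-Token": "ADMIN_TOKEN",                   # assumed token
        "OpenStack-API-Version": "placement 1.6",        # traits API needs >= 1.6
    }
    RP_UUID = "4e8e5957-649f-477b-9e5b-f1f75b21c03c"     # placeholder provider UUID

    # Trait updates must echo the provider's current generation to avoid
    # overwriting concurrent changes.
    rp = requests.get(f"{PLACEMENT_URL}/resource_providers/{RP_UUID}",
                      headers=HEADERS).json()

    resp = requests.put(
        f"{PLACEMENT_URL}/resource_providers/{RP_UUID}/traits",
        headers=HEADERS,
        json={
            "resource_provider_generation": rp["generation"],
            "traits": ["STORAGE_DISK_SSD"],
        },
    )
    print(resp.status_code)  # 200 on success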

Chapter One – Usages

1.1 Placement Usage

1.1.1 Tracking Resources 

The placement service enables other projects to track their own resources. Those projects can register/delete their own resources to/from placement via the placement HTTP API.
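For example, a service could register (and later remove) a provider with a couple of plain HTTP calls. The sketch below assumes a placeholder endpoint and token, and an illustrative provider name.

    # Minimal sketch: registering and deleting a resource provider.
    import requests

    PLACEMENT_URL = "http://placement.example.com"       # assumed endpoint
    HEADERS = {
        "X-Auth-Token": "ADMIN_TOKEN",                    # assumed token
        "OpenStack-API-Version": "placement 1.20",        # >= 1.20 returns the created provider
    }

    # Register a provider, e.g. a shared storage pool.
    resp = requests.post(f"{PLACEMENT_URL}/resource_providers",
                         headers=HEADERS,
                         json={"name": "shared-storage-pool-1"})
    provider = resp.json()
    print(provider["uuid"])

    # Deleting the provider unregisters it (and its inventories) from placement.
    requests.delete(f"{PLACEMENT_URL}/resource_providers/{provider['uuid']}",
                    headers=HEADERS)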

The placement service originated in the Nova project. As a result, much of the functionality in placement was driven by nova's requirements. However, that functionality was designed to be sufficiently generic to be used by any service that needs to manage the selection and consumption of resources.

How Nova Uses Placement

Two processes, nova-compute and nova-scheduler, host most of nova’s interaction with placement.

The nova resource tracker in nova-compute is responsible for creating the resource provider record corresponding to the compute host on which the resource tracker runs, setting the inventory that describes the quantitative resources that are available for workloads to consume (e.g., VCPU), and setting the traits that describe qualitative aspects of the resources (e.g., STORAGE_DISK_SSD).
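At the HTTP level, the inventory update performed by the resource tracker looks roughly like the following sketch; the endpoint, token, and compute node UUID are placeholders, and the quantities are only illustrative.

    # Minimal sketch: setting a compute node provider's inventory.
    import requests

    PLACEMENT_URL = "http://placement.example.com"        # assumed endpoint
    HEADERS = {
        "X-Auth-Token": "ADMIN_TOKEN",                     # assumed token
        "OpenStack-API-Version": "placement 1.26",
    }
    COMPUTE_RP_UUID = "0e9ae50a-9b57-46b2-ae57-8ce73c2a57f3"   # placeholder UUID

    # Inventory updates must include the provider's current generation.
    rp = requests.get(f"{PLACEMENT_URL}/resource_providers/{COMPUTE_RP_UUID}",
                      headers=HEADERS).json()

    resp = requests.put(
        f"{PLACEMENT_URL}/resource_providers/{COMPUTE_RP_UUID}/inventories",
        headers=HEADERS,
        json={
            "resource_provider_generation": rp["generation"],
            "inventories": {
                "VCPU": {"total": 16, "allocation_ratio": 16.0},
                "MEMORY_MB": {"total": 32768, "allocation_ratio": 1.5},
                "DISK_GB": {"total": 500, "reserved": 10},
            },
        },
    )
    print(resp.status_code)  # 200 with the stored inventories on success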

If other projects — for example, Neutron or Cyborg — wish to manage resources on a compute host, they should create resource providers as children of the compute host provider and register their own managed resources as inventory on those child providers. For more information, see Modeling with Provider Trees below.

The nova-scheduler is responsible for selecting a set of suitable destination hosts for a workload. It begins by formulating a request to placement for a list of allocation candidates. That request expresses quantitative and qualitative requirements, membership in aggregates, and in more complex cases, the topology of related resources. That list is reduced and ordered by filters and weighers within the scheduler process. An allocation is made against a resource provider representing a destination, consuming a portion of the inventory set by the resource tracker.
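The kind of query the scheduler issues can be sketched as follows; the endpoint and token are placeholders, and the requested amounts are only illustrative.

    # Minimal sketch: asking placement for allocation candidates.
    import requests

    PLACEMENT_URL = "http://placement.example.com"         # assumed endpoint
    HEADERS = {
        "X-Auth-Token": "ADMIN_TOKEN",                      # assumed token
        "OpenStack-API-Version": "placement 1.29",
    }

    resp = requests.get(
        f"{PLACEMENT_URL}/allocation_candidates",
        headers=HEADERS,
        params={
            "resources": "VCPU:2,MEMORY_MB:4096,DISK_GB:20",   # quantitative requirements
            "required": "STORAGE_DISK_SSD",                    # qualitative requirement (trait)
            "limit": 10,
        },
    )
    body = resp.json()
    # Each allocation request names one or more providers and the amounts to
    # consume from each; provider_summaries describes their inventories.
    for candidate in body["allocation_requests"]:
        print(candidate["allocations"])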

Modeling with Provider Trees

Overview

Placement supports modeling a hierarchical relationship between different resource providers. While a parent provider can have multiple child providers, a child provider can belong to only one parent provider. Therefore, the whole architecture can be considered as a “tree” structure, and the resource provider on top of the “tree” is called a “root provider”. (See the Nested Resource Providers spec for details.)

Modeling the relationship is done by specifying a parent provider via the POST /resource_providers operation when creating a resource provider.
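A hedged sketch of creating a child provider under an existing compute node provider (for example, a pool of NIC virtual functions) follows; the endpoint, token, provider name, and parent UUID are placeholders.

    # Minimal sketch: creating a child resource provider in a provider tree.
    import requests

    PLACEMENT_URL = "http://placement.example.com"          # assumed endpoint
    HEADERS = {
        "X-Auth-Token": "ADMIN_TOKEN",                       # assumed token
        "OpenStack-API-Version": "placement 1.20",           # parent_provider_uuid needs >= 1.14
    }
    COMPUTE_RP_UUID = "4e8e5957-649f-477b-9e5b-f1f75b21c03c" # placeholder parent (root) UUID

    resp = requests.post(
        f"{PLACEMENT_URL}/resource_providers",
        headers=HEADERS,
        json={
            "name": "compute-1-eth0-vf-pool",                # illustrative name
            "parent_provider_uuid": COMPUTE_RP_UUID,
        },
    )
    child = resp.json()
    # root_provider_uuid identifies the top of the tree the child belongs to.
    print(child["parent_provider_uuid"], child["root_provider_uuid"])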

Note: If the parent provider hasn't been set at creation time, you can also parent a resource provider afterwards via the PUT /resource_providers/{uuid} operation. However, re-parenting a resource provider is not supported.
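Setting the parent after creation can be sketched like this; the UUIDs, endpoint, and token are placeholders.

    # Minimal sketch: parenting a previously un-parented resource provider.
    import requests

    PLACEMENT_URL = "http://placement.example.com"           # assumed endpoint
    HEADERS = {
        "X-Auth-Token": "ADMIN_TOKEN",                        # assumed token
        "OpenStack-API-Version": "placement 1.20",
    }
    CHILD_UUID = "b1a6a3a2-26ba-4d0f-9207-33a7a1a1cd4f"       # placeholder UUIDs
    PARENT_UUID = "4e8e5957-649f-477b-9e5b-f1f75b21c03c"

    # PUT requires the provider name in the body, so fetch it first.
    rp = requests.get(f"{PLACEMENT_URL}/resource_providers/{CHILD_UUID}",
                      headers=HEADERS).json()

    resp = requests.put(
        f"{PLACEMENT_URL}/resource_providers/{CHILD_UUID}",
        headers=HEADERS,
        json={"name": rp["name"], "parent_provider_uuid": PARENT_UUID},
    )
    print(resp.status_code)  # 200 on success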

The resource providers in a tree — and sharing providers as described in the next section — can be returned in a single allocation request in the response of the GET /allocation_candidates operation.

This means that the placement service looks up a resource provider tree in which resource providers can collectively contain all of the requested resources. This document describes some case studies to explain how sharing providers, aggregates, and traits work if provider trees are involved in the GET /allocation_candidates operation.
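As one illustration, a query that combines quantitative requirements, a required trait, and aggregate membership might look like the following sketch; the aggregate UUID, endpoint, and token are placeholders.

    # Minimal sketch: allocation candidates constrained by trait and aggregate.
    import requests

    PLACEMENT_URL = "http://placement.example.com"            # assumed endpoint
    HEADERS = {
        "X-Auth-Token": "ADMIN_TOKEN",                         # assumed token
        "OpenStack-API-Version": "placement 1.29",             # member_of here needs >= 1.21
    }

    resp = requests.get(
        f"{PLACEMENT_URL}/allocation_candidates",
        headers=HEADERS,
        params={
            "resources": "VCPU:1,MEMORY_MB:1024,DISK_GB:10",
            "required": "HW_CPU_X86_AVX2",
            # Only consider providers associated with this aggregate, e.g. one
            # that a sharing DISK_GB provider also belongs to.
            "member_of": "7fa8d7b8-7cec-4c54-b4e2-04e1a1b0ee5b",  # placeholder aggregate UUID
        },
    )
    for candidate in resp.json()["allocation_requests"]:
        # A single candidate may draw VCPU/MEMORY_MB from a compute node tree
        # and DISK_GB from a sharing provider in the same aggregate.
        print(candidate["allocations"])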

Attribution

OpenStack Foundation (2022), Placement Documentation, URL: https://docs.openstack.org/zed/admin/

This work is licensed under the Creative Commons Attribution 3.0 License (https://creativecommons.org/licenses/by/3.0/).
