OpenStack Nova setup

In the [placement] section of nova.conf, configure access to the Placement service, and comment out or remove any other options in that section. Then populate the nova-api database, register the cell0 database, and create the cell1 cell.

With the control-plane services in place, the deployment can be integrated with Ceph. When creating the Ceph pools, the final argument of the pool-creation command is the number of placement groups. On the OpenStack Glance node, install the python-rbd package.
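A minimal sketch of these steps, following the layout of the upstream install guide; the controller hostname, the PLACEMENT_PASS placeholder, the images pool name, and the placement-group count of 128 all depend on the deployment and are assumptions here:

    # /etc/nova/nova.conf -- [placement] section (all other options removed)
    [placement]
    region_name = RegionOne
    project_domain_name = Default
    project_name = service
    auth_type = password
    user_domain_name = Default
    auth_url = http://controller:5000/v3
    username = placement
    password = PLACEMENT_PASS

The database and cell steps are normally run as the nova user with nova-manage:

    # Populate the nova-api database
    su -s /bin/sh -c "nova-manage api_db sync" nova

    # Register the cell0 database
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

    # Create the cell1 cell
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

On the Ceph side, a pool is created with the placement-group count as the final argument, and the Glance node needs the RBD Python bindings:

    # 128 here is the number of placement groups
    ceph osd pool create images 128

    # On the Glance node (Debian/Ubuntu; use yum install python-rbd on RHEL/CentOS)
    sudo apt-get install python-rbd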

Copy the Ceph configuration file to the nova-compute, cinder-backup, cinder-volume, and glance-api nodes, and add the keyrings for the corresponding client users to those nodes. The OpenStack Nova nodes need the keyring file for the nova-compute process, and they also need to store the secret key of the client user that Cinder uses.
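A sketch of these steps, assuming client users named client.glance, client.cinder, and client.cinder-backup as in the upstream Ceph/OpenStack integration guide; the node hostnames and the example UUID are placeholders:

    # Copy the Ceph configuration file to each OpenStack node
    ssh {openstack-node} sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf

    # Ship each keyring to the node that needs it
    ceph auth get-or-create client.glance | ssh {glance-api-node} sudo tee /etc/ceph/ceph.client.glance.keyring
    ceph auth get-or-create client.cinder | ssh {cinder-volume-node} sudo tee /etc/ceph/ceph.client.cinder.keyring
    ceph auth get-or-create client.cinder-backup | ssh {cinder-backup-node} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring

    # The nova-compute process also reads the client.cinder keyring
    ceph auth get-or-create client.cinder | ssh {nova-compute-node} sudo tee /etc/ceph/ceph.client.cinder.keyring

    # Send the client.cinder secret key to the compute node for libvirt
    ceph auth get-key client.cinder | ssh {nova-compute-node} tee client.cinder.key

    # On the compute node: wrap a generated UUID in a libvirt <secret> definition
    # (usage type "ceph") in secret.xml, then register it and load the key
    sudo virsh secret-define --file secret.xml
    sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 \
        --base64 $(cat client.cinder.key)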

The libvirt process needs this secret key to access the cluster while attaching a block device from Cinder.

Beyond storage, Nova can also shape how guests are placed on CPUs and memory. Some workloads have very demanding requirements for memory access latency or bandwidth that exceed what a single NUMA node can provide. For such workloads, a guest NUMA topology can be requested, and vCPUs and memory can be allocated asymmetrically across the virtual NUMA nodes, which can be important for some workloads; refer to the NUMA topology guide for the per-node hw:numa_cpus.N and hw:numa_mem.N options. Note that Hyper-V does not support CPU pinning. By default, guest vCPUs float freely across host CPUs, which allows for features like overcommitting of CPUs.
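As a sketch, a flavor requesting two guest NUMA nodes with an asymmetric vCPU/memory split might look like the following; the flavor name and the exact split are illustrative, and the flavor itself must already define 6 vCPUs and 8192 MB of RAM for these per-node values to add up:

    # Guest NUMA node 0: vCPUs 0-1 and 2048 MB; node 1: vCPUs 2-5 and 6144 MB
    openstack flavor set m1.numa \
        --property hw:numa_nodes=2 \
        --property hw:numa_cpus.0=0,1 \
        --property hw:numa_cpus.1=2,3,4,5 \
        --property hw:numa_mem.0=2048 \
        --property hw:numa_mem.1=6144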

In heavily contended systems, this default provides optimal overall system utilization at the expense of the performance and latency of individual instances. Some workloads require real-time or near-real-time behavior, which is not possible with the latency introduced by the default CPU policy. For such workloads it is beneficial to bind each guest vCPU to a dedicated host CPU; this process is known as pinning. No instance with pinned CPUs can use the CPUs of another pinned instance, which prevents resource contention between instances.

To force this behavior, set the dedicated CPU policy on the flavor, as shown in the examples after the thread-policy discussion below. Host aggregates should be used to separate pinned instances from unpinned instances, as the latter will not respect the resourcing requirements of the former. When running workloads on SMT hosts, it is important to be aware of the impact that thread siblings can have: thread siblings share a number of hardware components, and contention on these components can impact performance. To control how threads are used, a CPU thread policy should be specified. For workloads where sharing benefits performance, use thread siblings.

For workloads whose performance is impacted by contention for these shared resources, use non-thread siblings or non-SMT hosts. Finally, for workloads whose performance is only minimally impacted, use thread siblings if available; this is the default, but it can be set explicitly, as shown below.
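A sketch of the corresponding flavor extra specs; the flavor name m1.large.pinned is a placeholder, and hw:cpu_thread_policy only takes effect together with the dedicated CPU policy:

    # Pin each guest vCPU to a dedicated host CPU
    openstack flavor set m1.large.pinned --property hw:cpu_policy=dedicated

    # Place the instance on thread siblings (fails if the host has none)
    openstack flavor set m1.large.pinned --property hw:cpu_thread_policy=require

    # Avoid thread siblings: the sibling of every pinned CPU is left unused
    openstack flavor set m1.large.pinned --property hw:cpu_thread_policy=isolate

    # Use thread siblings when available; this is the default policy
    openstack flavor set m1.large.pinned --property hw:cpu_thread_policy=prefer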

Applications are frequently packaged as images. For applications that require real-time or near-real-time behavior, image metadata can be used to ensure that created instances are always pinned, regardless of flavor. To configure an image to use pinned vCPUs and avoid thread siblings, set the corresponding image properties, as shown below. When flavor and image both express a CPU policy, they are reconciled as follows: if the flavor specifies a CPU policy of dedicated, that policy is used. If the flavor explicitly specifies a CPU policy of shared and the image specifies no policy or a policy of shared, the shared policy is used; but if the image specifies a policy of dedicated, an exception is raised.
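A sketch of the image-side configuration; the image name centos-rt is a placeholder, and hw_cpu_policy / hw_cpu_thread_policy are the image-property counterparts of the flavor extra specs:

    # Every instance booted from this image is pinned and avoids thread siblings
    openstack image set centos-rt \
        --property hw_cpu_policy=dedicated \
        --property hw_cpu_thread_policy=isolate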

By setting a shared policy through flavor extra specs, administrators can prevent users from configuring CPU policies in images and thereby impacting resource utilization. To configure this policy, set it explicitly on the flavor, as shown below. If the flavor does not specify a CPU thread policy, then the CPU thread policy specified by the image, if any, is used.
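A sketch, again with a placeholder flavor name:

    # Explicitly set the shared policy so that image metadata cannot request pinning
    openstack flavor set m1.large --property hw:cpu_policy=shared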


