The Ceilometer project monitors and collects data about resource usage in a cloud infrastructure.
Mission: To provide an infrastructure to collect any information needed regarding OpenStack projects. Ceilometer was designed so that rating engines could use this single source to transform events into billable items, a process we label "metering". As the project came to life, collecting an increasing number of meters across multiple projects, the OpenStack community realized that a secondary goal could be added to Ceilometer: become the standard way to meter, regardless of the purpose of the collection. This data can then be pushed to any set of targets using the publishers mentioned in the Publishers section.
If you divide a billing process into three steps, as is commonly done in the telco industry, the steps are:

1. Metering: collecting usage data
2. Rating: converting that usage into a price
3. Billing: producing the customer invoice
Ceilometer's initial goal was, and still is, strictly limited to step one, metering. From the beginning the project chose not to go into rating or billing, because the variety of possibilities seemed too large for the project to ever deliver a solution that would fit everyone's needs, from private to public clouds. Ceilometer is a component of the Telemetry project; its data can be used to provide customer billing, resource tracking, and alarming capabilities across all OpenStack core components.
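The three-step split above can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the meter names, prices, and volumes are made up and are not part of any OpenStack API.

```python
# Hypothetical illustration of the metering -> rating -> billing split.
# Step 1, metering: raw usage samples, as Ceilometer would collect them.
samples = [
    {"meter": "cpu_hours", "volume": 10.0},
    {"meter": "cpu_hours", "volume": 5.0},
    {"meter": "storage_gb", "volume": 100.0},
]

# Step 2, rating: a rating engine maps each meter to a price per unit.
prices = {"cpu_hours": 0.05, "storage_gb": 0.01}

def rate(sample):
    """Turn one usage sample into a billable line item."""
    return sample["volume"] * prices[sample["meter"]]

# Step 3, billing: aggregate the rated items into an invoice total.
invoice_total = sum(rate(s) for s in samples)
print(invoice_total)  # 10*0.05 + 5*0.05 + 100*0.01 = 1.75
```

Ceilometer deliberately stops at step one; steps two and three are left to external rating and billing engines.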
Previously, Ceilometer used MySQL for backend storage. That job has since been handed off to a separate Application Collaboration called Gnocchi, which handles the backend using MongoDB. Ceilometer also has a Command Line Interface (CLI).
Each of Ceilometer's services is designed to scale horizontally. Additional workers and nodes can be added depending on the expected load. Ceilometer offers two core services:
polling agent - a daemon that polls OpenStack services and builds Meters. Polling agents query an API or other tool to collect information at a regular interval. This is the less preferred method of collection because of the load that polling can impose on the API services.
notification agent - a daemon that listens to notifications on the message queue, converts them to Events and Samples, and applies pipeline actions. It takes messages generated on the notification bus and transforms them into Ceilometer samples or events; this is the preferred method of data collection. If you are working on an OpenStack-related project that uses the Oslo library, you are kindly invited to talk to one of the project members about how you could quickly add instrumentation for your project. The heart of the system is the notification daemon (agent-notification), which monitors the message queue for data sent by other OpenStack components such as Nova, Glance, Cinder, Neutron, Swift, Keystone, and Heat, as well as Ceilometer internal communication.
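The shape of the samples the notification agent emits can be illustrated with a plain dictionary. The field names below follow Ceilometer's documented sample format; the values (instance and project identifiers) are made up for the example.

```python
from datetime import datetime, timezone

# A made-up sample shaped like the data the notification agent emits.
# Field names follow Ceilometer's sample format; values are illustrative.
sample = {
    "name": "cpu",                      # meter name
    "type": "cumulative",               # cumulative, gauge, or delta
    "unit": "ns",                       # unit of measure
    "volume": 1234567890,               # the measured value
    "resource_id": "instance-0001",     # hypothetical resource
    "project_id": "demo-project",       # hypothetical tenant
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

assert sample["type"] in ("cumulative", "gauge", "delta")
print(sample["name"], sample["volume"])
```

Every meter is one of three types: cumulative values only grow (e.g. total CPU time), gauges are point-in-time readings, and deltas record changes between readings.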
This file is shared under the GNU General Public License (GPL). You are free to download it and use it as you like. You can either use the Archimate Tool and download the model in .archimate format, or use your own tool that supports The Open Group's Open Exchange File format. Both files are compressed into a .zip file.
Mission: Heat, as the OpenStack Orchestration program, is to create a human- and machine-accessible service for managing the entire lifecycle of infrastructure and applications within OpenStack clouds. Why Heat? It makes Clouds Rise!
Heat is the main project in the OpenStack Orchestration program. It implements an orchestration engine to launch multiple composite cloud applications based on templates in the form of text files that can be treated like code. A native Heat template format is evolving, but Heat also endeavours to provide compatibility with the AWS CloudFormation template format, so that many existing CloudFormation templates can be launched on OpenStack. Heat provides both an OpenStack-native REST API and a CloudFormation-compatible Query API.
A Heat template describes the infrastructure for a cloud application in a text file that is readable and writable by humans, and can be checked into version control, diffed, etc. Infrastructure resources that can be described include servers, floating IPs, volumes, security groups, users, and more.
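A minimal HOT (Heat Orchestration Template) can be sketched as a Python mapping (shown here instead of the usual YAML). The top-level keys and resource types are the documented ones; the resource names and the flavor/image/size values are placeholders.

```python
import json

# Sketch of a minimal HOT template as a mapping. The keys follow the
# HOT format; server and volume properties are placeholder values.
template = {
    "heat_template_version": "2016-10-14",
    "description": "One server with a small volume",
    "resources": {
        "my_server": {
            "type": "OS::Nova::Server",
            "properties": {"flavor": "m1.small", "image": "cirros"},
        },
        "my_volume": {
            "type": "OS::Cinder::Volume",
            "properties": {"size": 1},   # size in GiB
        },
    },
}

# Like the real text file, this can be serialized, diffed, versioned:
print(json.dumps(template, indent=2)[:80])
```

Each entry under "resources" names a resource, states its OpenStack type, and supplies the properties Heat needs to create it.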
Heat also provides an autoscaling service that integrates with Telemetry, so you can include a scaling group as a resource in a template.
Templates can also specify the relationships between resources (e.g. this volume is connected to this server). This enables Heat to call out to the OpenStack APIs to create all of your infrastructure in the correct order to completely launch your application.
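Creating resources "in the correct order" is a dependency-ordering problem, which can be sketched with a topological sort. This is a toy model, not Heat's actual engine, and the resource names are made up.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Toy dependency graph: each resource maps to the resources that must
# exist before it. This loosely mimics how an orchestrator orders
# its calls to the OpenStack APIs.
deps = {
    "volume_attachment": {"server", "volume"},
    "server": {"security_group"},
    "volume": set(),
    "security_group": set(),
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # dependencies always appear before their dependents
assert order.index("server") < order.index("volume_attachment")
```

Because the relationships are declared in the template, Heat can derive this ordering itself rather than requiring the user to script the creation sequence.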
Heat manages the whole lifecycle of the application - when you need to change your infrastructure, simply modify the template and use it to update your existing stack. Heat knows how to make the necessary changes. It will delete all of the resources when you are finished with the application, too.
Heat primarily manages infrastructure, but the templates integrate well with software configuration management tools such as Puppet and Chef. The Heat team is working on providing even better integration between infrastructure and software.
Keystone is an OpenStack service that provides API client authentication, service discovery, and distributed multi-tenant authorization by implementing OpenStack's Identity API. It supports LDAP, OAuth, OpenID Connect, SAML and SQL. It offers Role Based Access Control (RBAC).
Keystone provides a single point of integration for OpenStack policy, catalog, token and authentication. It handles API requests (service & admin APIs) and provides configurable catalog, policy, token and identity services. Each of these services can use a standard backend such as LDAP, SQL, or a key-value store (KVS). Most people will use the Identity service as a point of customization for their current authentication services.
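Authentication against Keystone's v3 API is a POST whose JSON body has a well-known shape. The structure below follows the Identity API v3 password method; the user, domain, project names and the password are placeholders.

```python
import json

# Body of a POST to /v3/auth/tokens (Identity API v3, password method).
# The structure is the documented one; the credentials are placeholders.
auth_request = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",               # placeholder user
                    "domain": {"name": "Default"},
                    "password": "secret",         # placeholder
                }
            },
        },
        "scope": {
            "project": {"name": "demo", "domain": {"name": "Default"}}
        },
    }
}

print(json.dumps(auth_request)[:60])
```

A successful response carries the token in the X-Subject-Token header; the token is then passed to other OpenStack services, which validate it against Keystone.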
The OpenStack Object Store project, known as Swift, offers cloud storage software so that you can store and retrieve lots of data with a simple API. Swift is a highly available, distributed, eventually consistent object/blob store; organizations can use it to store lots of data efficiently, safely, and cheaply. It's built for scale and optimized for durability, availability, and concurrency across the entire data set, which makes it ideal for storing unstructured data that can grow without bound.
Swift = Object Storage Structural Logical Architecture
The Object Storage system organizes data in a hierarchy, as follows:
Account. Represents the top level of the hierarchy. Your service provider creates your account and you own all resources in that account. The account defines a namespace for containers. A container might have the same name in two different accounts. In the OpenStack environment, account is synonymous with a project or tenant.
Container. Defines a namespace for objects. An object with the same name in two different containers represents two different objects. You can create any number of containers within an account. In addition to containing objects, you can also use the container to control access to objects by using an access control list (ACL). You cannot store an ACL with individual objects. In addition, you configure and control many other features, such as object versioning, at the container level. You can bulk-delete up to 10,000 containers in a single request. You can set a storage policy on a container with predefined names and definitions from your cloud provider.
Object. Stores data content, such as documents, images, and so on. You can also store custom metadata with an object.
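The account/container/object hierarchy above maps directly onto Swift's URL layout. A small sketch (the account, container and object values are placeholders; the /v1/{account}/{container}/{object} path shape is Swift's):

```python
# Swift addresses every object as /v1/{account}/{container}/{object}.
# The values below are placeholders; the path shape is Swift's.
def object_path(account, container, obj):
    """Build the URL path for an object in Swift's hierarchy."""
    return f"/v1/{account}/{container}/{obj}"

path = object_path("AUTH_demo", "photos", "2024/cat.jpg")
print(path)  # /v1/AUTH_demo/photos/2024/cat.jpg

# The same object name in another container is a different object:
assert object_path("AUTH_demo", "backup", "2024/cat.jpg") != path
```

Note that the object name may itself contain slashes, giving the appearance of folders, but the hierarchy is strictly account, container, object.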
Swift = Object Storage Behavioural Services Architecture
Mission: The Image service (glance) provides a service where users can upload and discover data assets that are meant to be used with other services. This currently includes images and metadata definitions.
What is a virtual machine image? A virtual machine image is a single file which contains a virtual disk that has a bootable operating system installed on it. Virtual machine images come in different formats: Raw, AKI, AMI, ARI, ISO, OVF, QCOW2, UEC Tarball, VDI, VHD, VHDX, VMDK.
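Many of these formats can be told apart by magic bytes at the start of the file. A minimal sniffing sketch, covering only QCOW2 (whose header begins with the four bytes "QFI\xfb") and falling back to raw, which has no magic at all; the other formats are left out:

```python
# QCOW2 images start with the 4-byte magic "QFI\xfb"; raw images have
# no magic at all. A minimal sniffing sketch (other formats omitted).
QCOW2_MAGIC = b"QFI\xfb"

def sniff_format(header: bytes) -> str:
    """Guess the image format from the first bytes of the file."""
    if header.startswith(QCOW2_MAGIC):
        return "qcow2"
    return "raw"  # fallback: raw has no distinguishing header

print(sniff_format(QCOW2_MAGIC + b"\x00" * 28))  # qcow2
print(sniff_format(b"\x00" * 32))                # raw
```

Glance itself records the format as image metadata (the disk_format property) rather than relying on sniffing at serve time.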
Glance image services include discovering, registering, retrieving, and converting the format of virtual machine (VM) images. Glance has a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image.
Glance = Images Structural Logical Architecture
VM images made available through Glance can be stored in a variety of locations, from simple filesystems to object-storage systems like the OpenStack Swift project.
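Querying image metadata through the v2 REST API is a GET with filter parameters in the query string. A sketch that builds such a request URL; the endpoint host is a placeholder, while the /v2/images path and the status/disk_format filters are Glance's.

```python
from urllib.parse import urlencode

# Glance v2 lists images at GET /v2/images; filters go in the query
# string. The endpoint below is a placeholder; the path is Glance's.
endpoint = "http://glance.example.com:9292"   # hypothetical endpoint

def list_images_url(**filters):
    """Build the URL to query image metadata with optional filters."""
    query = urlencode(filters)
    return f"{endpoint}/v2/images" + (f"?{query}" if query else "")

print(list_images_url(status="active", disk_format="qcow2"))
```

The actual image bytes are retrieved separately, from /v2/images/{image_id}/file.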
Mission: To implement services and libraries to provide on demand, self-service access to Block Storage resources. Provide Software Defined Block Storage via abstraction and automation on top of various traditional back-end block storage devices.
The block storage system manages the creation, attaching and detaching of the block devices to servers. Block storage volumes are fully integrated into OpenStack Compute and the Dashboard allowing for cloud users to manage their own storage needs. In addition to local Linux server storage, it can use storage platforms including Ceph, CloudByte, Coraid, EMC (VMAX and VNX), GlusterFS, IBM Storage (Storwize family, SAN Volume Controller, XIV Storage System, and GPFS), Linux LIO, NetApp, Nexenta, Scality, SolidFire and HP (StoreVirtual and StoreServ 3Par families).
Cinder virtualizes the management of block storage devices in software. Cinder provides end users with a self-service REST or CLI API to request and consume those resources without requiring any knowledge of where their storage is actually deployed or on what type of device. This is done through the use of either a reference implementation (LVM) or plugin drivers for other storage.
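A self-service volume request through the REST API carries a small JSON body. The structure below follows the Block Storage API's volume-create call; the name and description are placeholders.

```python
import json

# Body of a POST to /v3/{project_id}/volumes in the Block Storage API.
# "size" is in GiB; the name and description are placeholders.
volume_request = {
    "volume": {
        "size": 10,
        "name": "demo-volume",
        "description": "scratch space",
    }
}

print(json.dumps(volume_request))
assert volume_request["volume"]["size"] > 0
```

Nothing in the request says where the volume will live or on what device; that decision is the scheduler's, which is exactly the abstraction Cinder provides.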
Cinder uses a SQL-based central database that is shared by all Cinder services in the system. The amount and depth of the data fits a SQL database quite well. For small deployments this seems like an optimal solution. For larger deployments, and especially if security is a concern, Cinder will be moving towards multiple data stores with some kind of aggregation system.
The Auth Manager component is responsible for users, projects, and roles. It can be backed by a database or LDAP. It is not a separate binary, but a Python class that is used by most components in the system.
The scheduler: decides which host gets each volume.
The volume service: manages dynamically attachable block devices. There can be many of these, of different types, each of which has a specific driver.
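The scheduler's job can be sketched as a filter-and-weigh pass over candidate hosts. This is a toy model, not Cinder's actual scheduler, and the host names and capacities are made up.

```python
# Toy filter/weigh scheduler: keep hosts with enough free space,
# then pick the one with the most. Hosts and sizes are made up.
hosts = {
    "storage-1": {"free_gb": 500},
    "storage-2": {"free_gb": 120},
    "storage-3": {"free_gb": 900},
}

def schedule(volume_size_gb):
    """Pick the host with the most free space that can fit the volume."""
    candidates = {h: s for h, s in hosts.items()
                  if s["free_gb"] >= volume_size_gb}       # filter step
    if not candidates:
        raise RuntimeError("no valid host found")
    return max(candidates, key=lambda h: candidates[h]["free_gb"])  # weigh

print(schedule(200))  # storage-3 has the most free space
```

Real schedulers apply many filters (capabilities, availability zones, backend state) and configurable weighers, but the two-phase shape is the same.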
Mission: To implement services and associated libraries to provide massively scalable, on demand, self-service access to compute resources, including bare metal, virtual machines, and containers.
Nova = Compute Structural Logical Architecture
The Nova Application Collaboration comprises multiple server processes, each performing a different function. The user-facing interface is a REST API; internally, Nova components communicate via an RPC message-passing mechanism, with the Conductor component mediating many of these interactions.
The API servers process REST requests, which typically involve database reads/writes, optionally sending RPC messages to other Nova services, and generating responses to the REST calls. RPC messaging is done via the oslo.messaging library, an abstraction on top of message queues. Most of the major nova components can be run on multiple servers, and have a manager that is listening for RPC messages. The one major exception is nova-compute, where a single process runs on the hypervisor it is managing (except when using the VMware or Ironic drivers).
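The RPC pattern oslo.messaging provides can be mimicked in-process. A toy sketch (not oslo.messaging's actual API) of the two basic operations: cast, which is fire-and-forget, and call, which waits for a reply; the method and handler names are made up.

```python
# In-process mock of the two RPC primitives Nova services rely on:
# cast() sends a message without waiting; call() waits for a result.
class ToyRPCServer:
    """Stands in for one Nova service's RPC manager (a toy model)."""

    def __init__(self):
        self.log = []

    def cast(self, method, **kwargs):
        # Fire-and-forget: record the message, return nothing.
        self.log.append((method, kwargs))

    def call(self, method, **kwargs):
        # Request/response: dispatch to a handler and return its result.
        handler = getattr(self, method)
        return handler(**kwargs)

    def get_instance_count(self, project):
        return 3 if project == "demo" else 0   # made-up handler

server = ToyRPCServer()
server.cast("start_instance", instance_id="i-1")       # no reply expected
print(server.call("get_instance_count", project="demo"))  # waits for 3
```

In real Nova the transport between the two sides is a message queue, which is what lets each manager process run on a different server.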
Nova also uses a central database that is (logically) shared between all components.
Nova = Compute Behavioural Services Architecture
These are the REST and CLI services offered by Nova for Compute. Each service below has one or more sub-services or commands, which can be found here: https://developer.openstack.org/api-ref/compute (we also chose not to show the many deprecated services).