This topic provides an overview of the structure and components of Diego, the new container management system for Cloud Foundry.
Cloud Foundry has used two architectures for managing application containers: Droplet Execution Agents (DEA) and Diego. With the DEA architecture, the Cloud Controller schedules and manages applications on the DEA nodes. In the newer Diego architecture, Diego components replace the DEAs and the Health Manager (HM9000), and assume application scheduling and management responsibility from the Cloud Controller.
Refer to the following diagram and descriptions for information about the way Diego handles application requests.
View a larger version of this image at the Diego Design Notes repo.
The Cloud Controller passes requests to stage and run applications to the Cloud Controller Bridge (CC-Bridge).
The BBS tracks desired LRPs, running LRP instances, and in-flight Tasks. It also periodically analyzes this information and corrects discrepancies to ensure consistency between DesiredLRP and ActualLRP counts.
Components in the Diego core run and monitor Tasks and LRPs. The core consists of the following major areas:
Diego Brain components distribute Tasks and LRPs to Diego Cells, and correct discrepancies between ActualLRP and DesiredLRP counts to ensure fault tolerance and long-term consistency. The Diego Brain consists of the Auctioneer.
- Uses the auction package to run Diego Auctions for Tasks and LRPs
- Communicates with Cell Reps over SSL/TLS
- Maintains a lock in the BBS that restricts auctions to one Auctioneer at a time
Refer to the Auctioneer repo on GitHub for more information.
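At its core, an auction is a placement decision: each Cell Rep reports its state, and the Auctioneer places the work on the best-scoring Cell. The real auction package weighs many factors, but the idea can be sketched as below; all names here are illustrative, not the actual Diego API, and the score is reduced to free memory only.

```go
package main

import "fmt"

// CellState is a simplified view of what a Cell Rep reports
// during an auction: its identity and remaining capacity.
type CellState struct {
	ID        string
	FreeMemMB int
}

// pickCell returns the Cell with the most free memory, a crude
// stand-in for the Auctioneer's real multi-factor scoring.
// The boolean reports whether any Cell was available.
func pickCell(cells []CellState) (string, bool) {
	best, found := "", false
	bestFree := -1
	for _, c := range cells {
		if c.FreeMemMB > bestFree {
			best, bestFree, found = c.ID, c.FreeMemMB, true
		}
	}
	return best, found
}

func main() {
	cells := []CellState{
		{ID: "cell-a", FreeMemMB: 512},
		{ID: "cell-b", FreeMemMB: 2048},
	}
	winner, _ := pickCell(cells)
	fmt.Println("placing LRP instance on", winner) // cell-b
}
```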
Diego Cell components manage and maintain Tasks and LRPs.
- Represents a Cell in Diego Auctions for Tasks and LRPs
- Mediates all communication between the Cell and the BBS
- Ensures synchronization between the set of Tasks and LRPs in the BBS and the containers present on the Cell
- Maintains the presence of the Cell in the BBS
- Runs Tasks and LRPs by asking the in-process Executor to create a container and execute the work
Refer to the Rep repo on GitHub for more information.
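The Rep's synchronization duty is essentially a set difference: containers the BBS expects but the Cell lacks must be created, and containers the Cell has but the BBS no longer wants must be destroyed. A minimal sketch of that comparison, with instance GUIDs standing in for the Rep's much richer per-container state:

```go
package main

import "fmt"

// diffContainers compares what the BBS says should run on this Cell
// with the containers actually present, returning the instance GUIDs
// to create and to destroy. (Illustrative only; the real Rep also
// tracks container lifecycle state, not just presence.)
func diffContainers(desired, actual []string) (toCreate, toDestroy []string) {
	want := map[string]bool{}
	have := map[string]bool{}
	for _, g := range desired {
		want[g] = true
	}
	for _, g := range actual {
		have[g] = true
	}
	for _, g := range desired {
		if !have[g] {
			toCreate = append(toCreate, g)
		}
	}
	for _, g := range actual {
		if !want[g] {
			toDestroy = append(toDestroy, g)
		}
	}
	return
}

func main() {
	create, destroy := diffContainers(
		[]string{"lrp-1", "lrp-2"}, // the BBS view of this Cell
		[]string{"lrp-2", "lrp-3"}, // containers actually on the Cell
	)
	fmt.Println("create:", create, "destroy:", destroy)
}
```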
- Runs as a logical process inside the Rep
- Implements the generic Executor actions detailed in the API documentation
- Streams STDOUT and STDERR to the Metron agent running on the Cell
Refer to the Executor repo on GitHub for more information.
- Provides a platform-independent server and clients to manage Garden containers
- Defines the Garden-runC interface for container implementation
Forwards application logs, errors, and application and Diego metrics to the Loggregator Doppler component
Refer to the Metron repo on GitHub for more information.
- Maintains a real-time representation of the state of the Diego cluster, including all desired LRPs, running LRP instances, and in-flight Tasks
- Ensures consistency and fault tolerance for Tasks and LRPs by comparing desired state (stored in the database) with actual state (from running instances)
- Acts to keep the DesiredLRP and ActualLRP counts synchronized in the following ways:
  - If the DesiredLRP count exceeds the ActualLRP count, requests a start auction from the Auctioneer
  - If the ActualLRP count exceeds the DesiredLRP count, sends a stop message to the Rep on the Cell hosting an instance
- Monitors for potentially missed messages, resending them if necessary
Refer to the Bulletin Board System repo on GitHub for more information.
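The count-synchronization rules above reduce to a simple comparison per process. A hypothetical sketch of that decision, ignoring the crashed, evacuating, and stale-instance cases the real BBS convergence also handles:

```go
package main

import "fmt"

// converge compares the desired and actual instance counts for one
// process and returns the corrective action the BBS would take.
// (Simplified; real convergence acts on full instance records.)
func converge(desired, actual int) string {
	switch {
	case desired > actual:
		// Too few instances: ask the Auctioneer to place more.
		return fmt.Sprintf("request start auction for %d instance(s)", desired-actual)
	case actual > desired:
		// Too many instances: tell a Cell's Rep to stop some.
		return fmt.Sprintf("send stop message for %d instance(s)", actual-desired)
	default:
		return "in sync"
	}
}

func main() {
	fmt.Println(converge(3, 1)) // request start auction for 2 instance(s)
	fmt.Println(converge(1, 3)) // send stop message for 2 instance(s)
	fmt.Println(converge(2, 2)) // in sync
}
```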
- This “blobstore” serves static assets that can include general-purpose App Lifecycle binaries and application-specific droplets and build artifacts.
Refer to the File Server repo on GitHub for more information.
- Brokers connections between SSH clients and SSH servers running inside instance containers
- Provides dynamic service registration and load balancing through DNS resolution
- Provides a consistent key-value store for maintenance of distributed locks and component presence
Refer to the Consul repo on GitHub for more information.
The Diego BBS stores data in MySQL. Diego uses the Go MySQL Driver to communicate with MySQL.
Refer to the Go MySQL Driver repo on GitHub for more information.
The Cloud Controller Bridge (CC-Bridge) components translate app-specific requests from the Cloud Controller to the BBS. These components include the following:
- Translates staging requests from the Cloud Controller into generic Tasks and LRPs
- Sends a response to the Cloud Controller when a Task completes
Refer to the Stager repo on GitHub for more information.
- Mediates uploads from the Executor to the Cloud Controller
- Translates simple HTTP POST requests from the Executor into complex multipart-form uploads for the Cloud Controller
Refer to the CC-Uploader repo on GitHub for more information.
- Listens for app requests to update the DesiredLRP count and updates DesiredLRPs through the BBS
- Periodically polls the Cloud Controller for each app to ensure that Diego maintains accurate DesiredLRP counts
Refer to the Nsync repo on GitHub for more information.
- Provides the Cloud Controller with information about currently running LRPs to respond to cf app APP_NAME requests
- Monitors ActualLRP activity for crashes and reports them to the Cloud Controller
Refer to the TPS repo on GitHub for more information.
The following three platform-specific binaries deploy applications and govern their lifecycle:
The Builder, which stages a CF application. The CC-Bridge runs the Builder as a Task on every staging request. The Builder performs static analysis on the application code and does any necessary pre-processing before the application is first run.
The Launcher, which runs a CF application. The CC-Bridge sets the Launcher as the Action on the DesiredLRP for the application. The Launcher executes the start command with the correct system context, including working directory and environment variables.
The Healthcheck, which performs a status check on a running CF application from inside the container. The CC-Bridge sets the Healthcheck as the Monitor action on the DesiredLRP for the application.
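The "correct system context" the Launcher establishes amounts to running the start command with the app's working directory and environment set. A simplified sketch using os/exec (the real Launcher execs inside the container; the paths and variables here are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// launch runs an app's start command the way the Launcher conceptually
// does: through a shell, in the app's working directory, with the
// app's environment variables applied.
func launch(startCmd, workDir string, env []string) ([]byte, error) {
	cmd := exec.Command("/bin/sh", "-c", startCmd)
	cmd.Dir = workDir // working directory for the app
	cmd.Env = env     // replaces the inherited environment entirely
	return cmd.CombinedOutput()
}

func main() {
	out, err := launch("echo listening on $PORT", "/tmp", []string{"PORT=8080"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // listening on 8080
}
```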
Buildpack App Lifecycle implements the Cloud Foundry buildpack-based deployment strategy.
Docker App Lifecycle implements a Docker deployment strategy.
- Monitors DesiredLRP and ActualLRP states, emitting route registration and unregistration messages to the Cloud Foundry router when it detects changes
- Periodically emits the entire routing table to the Cloud Foundry router
Refer to the Route-Emitter repo on GitHub for more information.
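Detecting changes reduces to diffing the last-emitted routing table against the current one. A minimal sketch, mapping a route to a single backend address (real route-emitter messages carry more detail, such as multiple backends per route):

```go
package main

import "fmt"

// routeDiff compares the previously emitted routing table with the
// current one and returns the registration and unregistration
// messages to send. Tables map a route (host name) to one backend
// address; a changed backend is re-registered.
func routeDiff(old, current map[string]string) (register, unregister []string) {
	for route, backend := range current {
		if old[route] != backend {
			register = append(register, fmt.Sprintf("register %s -> %s", route, backend))
		}
	}
	for route := range old {
		if _, still := current[route]; !still {
			unregister = append(unregister, fmt.Sprintf("unregister %s", route))
		}
	}
	return
}

func main() {
	old := map[string]string{"app.example.com": "10.0.0.1:61001"}
	cur := map[string]string{"app.example.com": "10.0.0.2:61005"}
	reg, unreg := routeDiff(old, cur)
	fmt.Println(reg, unreg)
}
```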