In June 2010, the NGA floated the idea through NCOIC of using a community cloud to support international disaster response efforts. This came on the heels of the Haiti Earthquake disaster in which response efforts were severely hampered by the inability to securely share information in a flexible, on-demand manner. From the NGA description of the desired capabilities, it was clear that the Virtual Organization (VO) concept from the grid computing arena was exactly what they were asking for.
The idea of implementing a VO management system (VOMS) in an open-source cloud was communicated back through NCOIC to the NGA. Two years passed before the funding was lined up, and NCOIC and its members were finally on contract in October 2012. Work commenced to implement a simple VOMS using the OpenStack open-source cloud software stack. This involved augmenting the OpenStack security and identity service (Keystone) to use an external MongoDB database to manage VO information when a user authenticated to a VO domain. The OpenStack storage service (Swift) was also modified to apply policy to the VO attributes granted to federation members.
While this approach worked, it was quite insecure and hard-coded for a single function: the communication with the MongoDB server was not protected, and the only federated operation that could be managed was reading and writing Swift storage containers. Half-way through the project, after gaining a much better understanding of how Keystone worked, the project personnel realized that a much more general federation capability could be built.
Nonetheless, the initial VOMS implementation was completed and demonstrated as part of the final outbrief in September 2013 in Tysons Corner, VA. Over 100 people from across government and industry attended with great interest, and afterwards there were several offers of follow-on presentations and discussions. Then, in October 2013, the federal government shut down over political disputes, and all of the positive momentum was lost.
Fortunately, internal IR&D funding was obtained at The Aerospace Corporation starting in FY13. (IR&D projects at Aerospace are traditionally funded for three years.) The KeyVOMS IR&D project leveraged the object model introduced in Keystone V3 to implement and manage VOs. A Keystone domain "owned" users and projects. Services registered in the Keystone Service Catalog could be associated with projects, as could users. Whenever a user authenticated to a Keystone domain and project, endpoint filtering was used to return a filtered service catalog containing only those services the user was authorized to use, as part of an encrypted authorization token. By deploying a stand-alone Keystone V3 service, service endpoints for services from different clouds could be registered and associated with different domains and projects. Hence, a Keystone V3 domain was tantamount to a VO, and this stand-alone Keystone server was the KeyVOMS server. Only the Keystone policy file had to be changed to define three additional roles: VOMS_Admin, VO_Admin, and VO_Site_Admin.
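As an illustration, such roles could be wired into a Keystone-style JSON policy file along the following lines. This is only a sketch of how role-based rules are expressed in Keystone policy files; the rule names shown here are hypothetical, and the actual KeyVOMS policy rules are not reproduced in this report.

```json
{
    "vo_manage":      "role:VOMS_Admin",
    "vo_admin":       "role:VOMS_Admin or role:VO_Admin",
    "vo_site_admin":  "role:VOMS_Admin or role:VO_Admin or role:VO_Site_Admin"
}
```

Each rule grants an operation to the named role and, by chaining the "or" clauses, to any role above it in the administrative hierarchy.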
The other main development was a VO Policy Enforcement Point (PEP). OpenStack services are built using the Web Server Gateway Interface (WSGI), and the KeyVOMS-enabled services followed suit. WSGI is a pipeline construct whereby incoming web requests can be inspected and modified at successive stages in the pipeline. The VO PEP pipeline stage knew how to validate authorization tokens with KeyVOMS. Hence, any federation member with the proper authorization token could access a federation service being made available to the federation by its "owner", regardless of where or on which cloud the service was hosted.
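A minimal sketch of such a PEP as WSGI middleware follows. The class and callback names (VOPolicyEnforcementPoint, validate_token) are illustrative, not taken from the KeyVOMS code base; in practice, token validation would be a call out to the KeyVOMS server rather than a local callback.

```python
class VOPolicyEnforcementPoint:
    """WSGI middleware that rejects requests lacking a valid VO token."""

    def __init__(self, app, validate_token):
        self.app = app                        # next stage in the WSGI pipeline
        self.validate_token = validate_token  # hypothetical check against KeyVOMS

    def __call__(self, environ, start_response):
        token = environ.get("HTTP_X_AUTH_TOKEN")
        if token and self.validate_token(token):
            # Token checks out: pass the request down the pipeline.
            return self.app(environ, start_response)
        # Otherwise, short-circuit with a 401 before the service is reached.
        start_response("401 Unauthorized", [("Content-Type", "text/plain")])
        return [b"VO authorization token missing or invalid"]
```

Because the PEP sits in front of the service as an ordinary pipeline stage, the service itself needs no federation-specific code.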
Three different KeyVOMS-enabled services were built for demonstration purposes: an RSS feed, a map data/icon server, and a file server. Multiple instances of these services were stood up in two different clouds: AWS and PDNS. Different demo VOs were stood up wherein different VO members could discover and access different VO services at either cloud, based on their VO authorizations. Multiple conference presentations were given and papers published.
In August 2017, NIST and IEEE started a joint working group on federated clouds. The idea was for NIST to define a Federated Cloud Reference Architecture wherein all of the fundamental federation actors and their functions were described. This will essentially define federation for the US government. This reference architecture could then be used by IEEE to identify areas of possible and needed federation-specific standards, and to take them through the international standardization process.
Concomitant with this, the Open Geospatial Consortium (OGC) started its Testbed-14 (TB-14) process, which included several tasks to demonstrate specific, narrow federation-related functions. One of the TB-14 deliverables was the Federated Cloud Engineering Report (OGC 18-090r1). This document used the NIST Federated Cloud Reference Architecture as a yardstick to evaluate existing federation-related systems, tools, and standards, in addition to evaluating the other federation tasks. Once approved, this document will be released to the public, but perhaps more importantly will go to the OGC TB-14 sponsors, which include the NGA, USGS, NOAA, FAA, the European Space Agency, Ordnance Survey in the UK, and others.
As the product of a NIST public working group, the draft reference architecture document is publicly available. A finished version should enter a public comment period in December 2018, or shortly thereafter. The OGC Engineering Report should be made public after its final internal review and approval, also in the December 2018 timeframe, pending any requested changes.
These technical developments and documentation concerning cloud federation all have a direct lineage to the environment and collaborative projects created by NCOIC and its membership. The continuing impact that cloud federation will have can be ultimately attributed to NCOIC.
Dr. Craig A. Lee, Senior Scientist, The Aerospace Corporation
December 6, 2018