Please find attached the material discussed today for your reference.
From: main@... <main@...>
On Behalf Of Andrew Wilkinson
The approach of a Specification within a given Blueprint to fully define a POD aims to allow exactly the example you mentioned, i.e. the addition of a server without changing the blueprint.
One way or another, though, the POD is different and needs to be captured somehow.
Let’s discuss in the working group later.
I would like to comment on the “5G use case” example, which suggests multiple deployments (and thus multiple blueprints) depending on the number of servers, etc.
I must say I don’t love the implied idea of the LCM / 5G capacity management process having to switch between blueprints when adding a server.
I would rather see one blueprint for the 5G use case, covering 5G Core Network “Cloud” deployment, where the blueprint specification contains constraints regarding the “Cloud” type, for example a constraint restricting the “Cloud” to a single-server edge cloud.
So in essence, this contribution proposes unifying the example’s 8 deployments (and “Blueprints”) into a single Blueprint, in order to avoid “Blueprint fragmentation with partially overlapping fragments” from the outset.
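To make the single-blueprint idea concrete, here is a minimal sketch of a 5G blueprint whose specification carries a “Cloud”-type constraint, so that the deployment size is governed by the constraint rather than by switching blueprints. All field and function names below are illustrative assumptions, not Akraino definitions.

```python
# Hypothetical sketch: one 5G blueprint whose specification constrains the
# "Cloud" type, instead of a separate blueprint per deployment size.
BLUEPRINT_5G = {
    "name": "5g-core-network-cloud",
    "constraints": {
        # restrict this specification to a single-server edge cloud
        "cloud_type": "edge",
        "max_servers": 1,
    },
}

def satisfies(constraints, pod):
    """Check a concrete POD against the blueprint's constraints."""
    return (pod["cloud_type"] == constraints["cloud_type"]
            and pod["servers"] <= constraints["max_servers"])

# Adding a server violates the single-server constraint, so either the
# constraint is relaxed or a different specification is selected.
print(satisfies(BLUEPRINT_5G["constraints"], {"cloud_type": "edge", "servers": 1}))  # True
print(satisfies(BLUEPRINT_5G["constraints"], {"cloud_type": "edge", "servers": 2}))  # False
```

Under this sketch, growing a POD beyond the constraint is an explicit specification change within the same blueprint rather than a jump to a different blueprint.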
>>Responses below – this gets to the key question of what a “Blueprint” is and how it can be tailored and evolve.
At the last Akraino developer conference and break-out session, some clarity started to emerge. But several questions remain on the granularity of a blueprint. My intention with this email is to start the conversation by asking some high-level questions.
For every use case, there would be a set of deployments.
For each deployment, there would be blueprints.
For a given deployment, the intention is to have a small number of blueprints.
For instance, in the case of the 5G use case, the deployments could be as follows:
One blueprint in each of the above deployments is expected to satisfy the 5G use case.
For each deployment, the intention is to have a minimum number of blueprints to choose from.
>>Yes, but they have to be appropriate for the organization deploying them. My small set may be different from your small set.
>>They have to be customizable to be relevant to a wide audience.
For example, for a Multi-Server edge cloud deployment, the following blueprints are one possibility:
There are a few questions raised, and we are wondering whether there is a need to have modularity in the blueprints.
>>One proposal that I’ve started to sketch in the charter doc is the concept of a Blueprint Specification that accompanies a more generic Blueprint
>>The Specification would precisely and declaratively define the HW and SW in a POD and the LCM approach to a deployment of the POD described by that Blueprint + Blueprint Specification.
>>The Blueprint Specification would need to be layered (e.g., HW, networking, virtualization layers, etc.) and would allow one to do exactly what you described above (Openstack-x86HW-Ubuntu-NoSDNCtrl-v1 OR K8S-x86HW-Ubuntu-OVN-SRIOVCtrl-v1, etc., in the same Blueprint)
>>Subsequent releases of Akraino for a given Blueprint would add options that could be chosen to form the Blueprint’s Specification
>>The individual functions/plugins/HW in a precise Blueprint’s Specification would be selected from the Blueprint Specification Template.
>>The Blueprint Specification Template would contain all the options supported for a given Blueprint at a given Akraino release.
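The Template-plus-Specification idea above can be sketched as follows. This is a minimal illustration, assuming a layered template of per-release options from which one concrete Specification is selected; the layer names and data shapes are assumptions, not the Akraino data model.

```python
# Hypothetical sketch of a layered Blueprint Specification Template and the
# selection of one precise Specification from it.
TEMPLATE = {  # all options supported for this Blueprint at a given Akraino release
    "virtualization": ["openstack", "k8s"],
    "hw": ["x86"],
    "os": ["ubuntu"],
    "sdn_controller": ["none", "ovn-sriov"],
}

def build_specification(template, choices):
    """Form a precise Specification by selecting one option per layer."""
    for layer, choice in choices.items():
        if choice not in template.get(layer, []):
            raise ValueError(f"{choice!r} is not a supported option for layer {layer!r}")
    return choices

# Both of these live in the same Blueprint, differing only in their Specification
# (mirroring the Openstack-x86HW-Ubuntu-NoSDNCtrl-v1 vs K8S-... example above):
spec_a = build_specification(TEMPLATE, {"virtualization": "openstack", "hw": "x86",
                                        "os": "ubuntu", "sdn_controller": "none"})
spec_b = build_specification(TEMPLATE, {"virtualization": "k8s", "hw": "x86",
                                        "os": "ubuntu", "sdn_controller": "ovn-sriov"})
```

A subsequent Akraino release would then extend the option lists in the template rather than adding new blueprints.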
>>We have to support different OS versions and distros in different PODs for sure, but should one POD support a mix in the (single) POD?
>>Technically it could, but I feel it’ll make the definition and management extremely complex.
>>But in forming the definition of a blueprint we should consider it for the future, as we may then want to structure that definition to support multiple options of the same component.
>>I don’t think so, if an SDN controller is a plugin option within the Blueprint’s Specification. If not, then yes.
>>I could deploy OpenStack in a Network Cloud Blueprint by selecting different controllers from the set of controller options (e.g., Neutron without a controller, ODL, or TitaniumFabric controller, etc. – assuming all were available and tested in a given Akraino release of the ‘Network Cloud’ Blueprint)
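As a small self-contained sketch of the controller-as-plugin point: swapping controllers edits only the Specification, while the Blueprint itself is untouched. The option names mirror the examples in the thread; the structure and function are illustrative assumptions.

```python
# Hypothetical sketch: the SDN controller as a plugin option within the
# Blueprint's Specification (not the Akraino data model).
CONTROLLER_OPTIONS = ["neutron-no-controller", "odl", "titanium-fabric"]

def select_controller(spec, controller):
    """Swap controllers by editing the Specification; the Blueprint is untouched."""
    if controller not in CONTROLLER_OPTIONS:
        raise ValueError(f"controller not tested in this release: {controller!r}")
    return {**spec, "sdn_controller": controller}

spec = {"virtualization": "openstack", "sdn_controller": "neutron-no-controller"}
spec = select_controller(spec, "odl")  # same Blueprint, different Specification
```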
>>I wouldn’t assume so by default
>>Let’s say, for example, a blueprint + spec require L2 only between deployed HW and use only MAC learning – swapping the fabric switches out doesn’t have to change the deployment of the POD if the two switches have the same capability.
>>The key question is: is the switch fabric managed by the LCM and deployment tools of the Blueprint? If so, then you’d need to at least have an option for the different switches in the Blueprint Specification (or have a different Blueprint)
>>Again, I don’t think so – this would explode the number of Blueprints
>>A different selection of functionalities from those supported in the Blueprint Specification Template for a given Akraino release would allow one to deploy different SW within the same Blueprint
>>BUT the same question arises about mixing in the same POD deployed by a given Blueprint + Specification, as you raised for the OS mixing
>>Think we need more discussion here
>>I’d see this as a change to the blueprint specification, not the blueprint itself – i.e., one selects a different OS SW “plugin” from the template
Just a few questions for discussion :)