Re: Use cases and Blueprints

Andrew Wilkinson

Hi All,


Please find attached the material discussed today for your reference.






From: main@... <main@...> On Behalf Of Andrew Wilkinson
Sent: Thursday, September 06, 2018 6:53 AM
To: main@...
Subject: Re: [akraino] Use cases and Blueprints


The approach of a Specification within a given Blueprint that fully defines a POD aims to allow exactly the example you mentioned, i.e., the addition of a server without changing the blueprint.


One way or another though the POD is different and needs to be captured somehow.


Let’s discuss in the working group later




From: main@... <main@...> On Behalf Of Reith, Lothar
Sent: Thursday, September 06, 2018 2:06 AM
To: main@...
Subject: Re: [akraino] Use cases and Blueprints




I would like to comment on the example “5G use case”, which suggests multiple deployments (and thus multiple blueprints) depending on the number of servers etc.


I must say I don’t love the implied idea of the LCM/5G capacity management process having to switch between blueprints when adding a server.


I would rather see one blueprint for the 5G use case for 5G Core Network “Cloud” deployment, where the blueprint specification contains constraints regarding the “Cloud” type, for example a constraint restricting the “Cloud” to the case where it is a single-server edge cloud.


So in essence, this contribution proposes unifying the example’s 8 deployments (and “Blueprints”) into a single Blueprint, in order to avoid “Blueprint fragmentation with partially overlapping fragments” from the outset.
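To make the proposal concrete, a single blueprint could carry the deployment variants as a constraint in its specification rather than as separate blueprints. The following Python sketch is purely illustrative; all names (the cloud-type strings, the field names) are invented for this example and are not from any Akraino artifact.

```python
# Hypothetical sketch: one 5G blueprint whose specification constrains the
# "Cloud" type, instead of a separate blueprint per deployment size.
# All identifiers below are invented for illustration.

ALLOWED_CLOUD_TYPES = {
    "core-network-cloud", "multi-server-edge", "single-server-edge",
    "two-server-edge", "headless-edge",
}

def check_cloud_constraint(spec: dict) -> bool:
    """A POD specification satisfies this blueprint only if its cloud
    type is one of the allowed deployment variants."""
    return spec.get("cloud_type") in ALLOWED_CLOUD_TYPES

# Adding a server changes the POD specification, not the blueprint:
pod = {"cloud_type": "single-server-edge", "servers": 1}
assert check_cloud_constraint(pod)

pod["servers"] += 1
pod["cloud_type"] = "two-server-edge"  # still the same blueprint
assert check_cloud_constraint(pod)
```

Under this sketch the capacity-management process only rewrites the specification when a server is added; the blueprint it points at never changes.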




From: main@... [mailto:main@...] On Behalf Of Andrew Wilkinson
Sent: Thursday, September 6, 2018 00:00
To: main@...
Subject: Re: [akraino] Use cases and Blueprints


>>Responses below – this gets to the key definition of what a “Blueprint” is and how it can be tailored and evolve.


From: main@... <main@...> On Behalf Of Srini
Sent: Wednesday, September 05, 2018 1:31 PM
To: main@...
Subject: [akraino] Use cases and Blueprints




At the last Akraino developer conference and break-out session, some clarity started to emerge, but several questions remain on the granularity of a blueprint. My intention with this email is to start the conversation by asking some high-level questions.


For every use case, there would be a set of deployments.

For each deployment, there would be blueprints.

For a given deployment, the intention is to have a small number of blueprints.


For instance, in case of 5G use case, the deployments can be as follows:


  • Core network cloud deployment
  • Multi-server edge cloud deployment
  • Single server edge cloud deployment
  • Two server edge cloud deployment
  • Headless edge deployment
  • Service Orchestration deployment
  • Regional Cloud controller deployment
  • Regional orchestration deployment



One blueprint in each of the above deployments is expected to satisfy the 5G use case.

For each deployment, the intention is to have a minimal number of blueprints to choose from.



>>Yes, but they have to be appropriate for the organization deploying them. My small set may be different from your small set.

>>They have to be customizable to be relevant to a wide audience


For example, for a Multi-Server edge cloud deployment, the following blueprints are one possibility:


  • Openstack-x86HW-Ubuntu-NoSDNCtrl-v1
  • K8S-x86HW-Ubuntu-OVN-SRIOVCtrl-v1


A few questions have been raised, and we are wondering whether there is a need for modularity in the blueprints.


  1. A given Edge Cloud may not have all uniform servers. Some servers may be legacy; some may have the latest processor. In future, they may be augmented with add-on accelerators, or there could be compute nodes with next-generation processors. Also, compute nodes could be from different OEMs. Every time a new node is introduced or enhanced with new add-on accelerators, would that be considered a new blueprint, or a new version of the existing blueprint? New version?



>>One proposal that I’ve started to sketch in the charter doc is the concept of a Blueprint Specification that accompanies a more generic Blueprint

>>The Specification would precisely and declaratively define the HW and SW in a POD and the LCM approach to a deployment of the POD described by that Blueprint + Blueprint Specification.

>>The Blueprint Specification would need to be layered (e.g., HW, networking, virtualization layers etc.) and would allow one to do exactly what you described above (Openstack-x86HW-Ubuntu-NoSDNCtrl-v1 OR K8S-x86HW-Ubuntu-OVN-SRIOVCtrl-v1 etc. in the same Blueprint)


>>Subsequent releases of Akraino for a given Blueprint would add options that could be chosen to form the Blueprint’s Specification

>>The individual functions/plugins/HW in a precise Blueprint’s Specification would be selected from the Blueprint Specification Template.

>>The Blueprint Specification Template would contain all the options supported for a given Blueprint at a given Akraino release.
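The Template-vs-Specification relationship described above can be sketched in a few lines of Python. This is only an illustration of the idea, not a proposed format: the layer names, option strings, and the `validate_spec` helper are all invented for this example.

```python
# Hypothetical sketch: a Blueprint Specification Template lists the options
# supported per layer at a given Akraino release; a concrete Blueprint
# Specification selects exactly one option per layer from it.
# All names below are invented for illustration.

# Template for an imaginary blueprint, release 1.
TEMPLATE = {
    "virtualization": {"openstack", "k8s"},
    "hardware": {"x86HW"},
    "os": {"ubuntu"},
    "sdn_controller": {"none", "ovn-sriov", "odl"},
}

def validate_spec(spec: dict) -> None:
    """Check that a concrete specification selects only supported options
    and covers every layer the template defines."""
    for layer, choice in spec.items():
        if layer not in TEMPLATE:
            raise ValueError(f"unknown layer: {layer!r}")
        if choice not in TEMPLATE[layer]:
            raise ValueError(f"{choice!r} not supported for layer {layer!r}")
    missing = TEMPLATE.keys() - spec.keys()
    if missing:
        raise ValueError(f"layers not specified: {sorted(missing)}")

# Both example deployments become specifications of the SAME blueprint:
validate_spec({"virtualization": "openstack", "hardware": "x86HW",
               "os": "ubuntu", "sdn_controller": "none"})
validate_spec({"virtualization": "k8s", "hardware": "x86HW",
               "os": "ubuntu", "sdn_controller": "ovn-sriov"})
```

A subsequent Akraino release would then grow the option sets in the template rather than spawn new blueprints, which is the fragmentation concern raised earlier in the thread.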


  2. Is the OS version expected to be common across all servers? If there is flexibility, is adding a new OS version considered a new blueprint or a new version of the existing blueprint? New version?


>>We certainly have to support different OS versions and distros across different PODs, but should a single POD support a mix?

>>Technically it could, but I feel it would make the definition and management extremely complex

>>But in forming the definition of a blueprint we should consider it, as we may later want to structure that definition to support multiple options of the same component.


  3. Does any support for a new site-level orchestrator require a new blueprint? Yes?


  4. Does any support for a new SDN controller require a new blueprint? Yes?



>>I don’t think so, if an SDN controller is a plugin option within the Blueprint’s Specification. If not, then yes.

>>I could deploy OpenStack in a Network Cloud Blueprint by selecting different controllers from the set of controller options (e.g., Neutron without a controller, ODL or TitaniumFabric controller etc. – assuming all were available and tested in a given Akraino release of the ‘Network Cloud’ Blueprint)


  5. Does any support for new fabric switches require a new blueprint? Yes?


>>I wouldn’t assume so by default

>>Let’s say, for example, a blueprint + spec require only L2 between deployed HW and use only MAC learning – swapping out the fabric switches doesn’t have to change the deployment of the POD if the two switches have the same capability.

>>The key question is whether the switch fabric is managed by the LCM and deployment tools of the Blueprint. If so, then you’d need at least an option for the different switches in the Blueprint Specification (or a different Blueprint)


  6. Does any addition of further SW packages to the NFVI require a new blueprint? Yes?


>>Again, I don’t think so – this would explode the number of Blueprints

>>A different selection of functionalities from those supported in the Blueprint Specification Template for a given Akraino release would allow one to deploy different SW within the same Blueprint

>>BUT the same question arises about mixing within the same POD deployed by a given Blueprint + Specification, as you raised for OS mixing

>>Think we need more discussion here


  7. If there is a version change in OpenStack (say moving from Newton to Pike), the SDN Controller, or K8S, does it require a new blueprint or a new version of the blueprint? New version?

>>I’d see this as a change to the blueprint specification, not the blueprint itself – i.e., one selects a different OS SW ‘plugin’ from the template
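>>In sketch form, the version bump is just a different selection from the template’s option set. The names below (the template dict, the version strings) are illustrative only, not any real Akraino format:

```python
# Hypothetical: an OpenStack version bump is a new selection from the
# Blueprint Specification Template, not a new blueprint.
# All names are invented for illustration.
TEMPLATE_OPTIONS = {"openstack": {"newton", "pike"}}

spec_v1 = {"openstack": "newton"}
spec_v2 = {"openstack": "pike"}  # same blueprint, new specification

# Both selections are valid against the same template:
for spec in (spec_v1, spec_v2):
    assert spec["openstack"] in TEMPLATE_OPTIONS["openstack"]
```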


Just a few questions for discussion. :-)





