Re: Akraino Charter
I think many (most?) vendors who submit a blueprint do intend to support the exact specification of the blueprint they submit, either as an in-house solution or commercially as an ecosystem vendor.
I also concur with Oliver’s note that the goal of Akraino is to have an end-to-end configuration for a particular edge use case which is complete, tested, and production-deployable (as stated above, either in-house or commercially). I think that was always the original intent of the Akraino project as stated back at the original Portland meetup in May.
This does require a fair amount of specificity as to HW and SW components. I believe this is different from a scenario in OPNFV, which is more of a reference platform.
We may want to have different ‘levels’ of specificity; i.e., reference platforms that are more modular, and then blueprints that define a specific end-to-end deployable configuration, which could be based on a ‘reference’ or scenario. I think that was the meat of the discussion yesterday in the community call about having different levels (but I had to drop early from that call).
From: main@... [mailto:main@...] On Behalf Of Andrew Wilkinson
Sent: Friday, September 07, 2018 6:58 AM
Subject: Re: [akraino] Akraino Charter
Just one point to the comment:
“but vendors will in most cases not support the exact combination as integrated/tested in Akraino,”
I think that’s what we can enable with a given blueprint and precise layer options by means of a (set of) precise specification(s). An infra/service vendor will then have the option to offer and support a configuration of an Akraino blueprint (in whatever commercial model, e.g., support contracts), and VNF vendors can certify their applications for running on a POD deployed using a given blueprint with an exact blueprint specification(s).
Or an operator can decide to support that specification for a blueprint themselves in house.
From: main@... <main@...> On Behalf Of fzdarsky@...
On Fri, Sep 7, 2018 at 12:02 AM Srini <srinivasa.r.addepalli@...> wrote:
The current code is only the seed code that AT&T generously donated to kick-start the project. Obviously there are pieces missing, and maybe pieces we don't need or may want to do differently later. I wouldn't extrapolate from the seed code to the project's mission/focus that the community needs to agree on.
Then, notice the code is integration code. It pulls in pieces from Airship and other upstreams (which themselves are development projects).
OK, if you're not talking about patches for backporting but about fixing/adding functionality, this would be forking, and I hope we'll establish a clear upstream-first policy to prevent forking. Apart from the fact that it would not be good citizenship to not upstream, or to not give upstream the chance to address gaps first, we'd accumulate technical debt that we don't want. I often hear the argument that these patches are "temporary" to enable us to move fast, and that there's the intention to eventually upstream... which from experience hardly ever happens later.
It's different, of course, if we're talking about plugins, like in your Barometer example.
Could you maybe elaborate why you feel it matters whether building images is part of the project or not?
In my view, we should eventually build images to make internal testing and test-driving by users easier. But I'd like to avoid the trap of putting too much emphasis on the images: how they are built, how to harden them, how to tune/optimize them, etc. I'm just mentioning this, as it's a topic that typically comes up sooner or later (like recently in ONAP). And it's important to understand that users will likely throw away our images and rebuild & re-test anyway.
I'd expect that we'll identify gaps as a result of the integration. If it's a gap in an upstream project, we should absolutely strive to address those gaps there and never carry patches against an upstream project. If it's a functional gap not addressed by our current upstreams, we should try to find projects that do something similar and see whether we can extend those. Developing ourselves should be last resort and then as an independent (sub-)project that will need to prove its value to the larger community over time. Plus it should be possible to swap it out for other solutions if they develop elsewhere.
Whatever code we produce ourselves should of course strive towards production quality. But it's not our task to create patches to fix upstream projects or even do backports to an upstream's stable branch! That's the job of the upstream projects themselves.
And because everyone, both upstreams and us, has finite engineering resources, they have to trade off how much time they spend on backports to older, stable branches vs. how much time they invest in new features. Plus how many HW/SW configurations they are able to fully test.
Now consider an edge stack that consists of dozens of components. It's already difficult to find combinations of versions of components that play well together, let alone the complexity of doing backports across all of them.
Next, consider how users would consume Akraino: Would they just download our/upstream images and run them in production? Of course not! They'd have to get commercial support from vendors (unless they have many engineers to burn going DIY), but vendors will in most cases not support the exact combination as integrated/tested in Akraino, so the effort we put in there has a relatively low ROI...
TBH, this all makes the goal of "production quality" for the whole Akraino stack aspirational rather than realistic...