
Re: [REC] Install and Networking deployment

Tina Tsou
 

Dear Andy et al,

We talked about it at the REC meeting just now.

Radio Edge Cloud and SEBA weekly meeting

When:
Thursday, 30 July 2020
1:00pm to 2:00pm
(GMT+00:00) UTC

Where:
https://zoom.us/j/941470503


Thank you,
Tina ^ ^

On Jul 30, 2020, at 6:21 AM, 詹子儀 Andy Chan via lists.akraino.org <andychan=iii.org.tw@...> wrote:

Dear Community,

We have successfully installed StarlingX before, and we are now in the process of installing REC on our server. (We used the build 237 image.)

I was wondering
(1)    In the last response, we learned that we can study user_config.yaml to understand the REC installation. But after finishing that study, I am still confused about how to modify and rewrite the configuration.

(2)    If the testing environment has three different IP addresses, is it necessary to use routes or to modify VLANs?

(3)    I succeeded in building controller1, and from the diagram we can deploy using controller 1, but how do controller2 and controller3 receive the signal to install? In StarlingX we need to put the servers into PXE mode, but in REC we have no idea how the deployment is triggered.

........................................................
詹子儀 | Andy Chan
Office: +886 6607-3242
Email: andychan@...

-----Original Message-----
From: CARVER, PAUL <pc2929@...>
Sent: Tuesday, July 21, 2020 9:54 PM
To: 詹子儀 Andy Chan <andychan@...>; technical-discuss@...
Cc: 蔡其達 FRANK C. D. TSAI, Ph.D. <ftsai@...>; 詹凱元 Kai Yuan Jan <kevinjan@...>
Subject: RE: [REC] Install and Networking deployment

The physical networking we use is shown in the diagram at the bottom of this page: https://wiki.akraino.org/display/AK/Radio+Edge+Cloud+Validation+Lab



The labels on the lines explain that we use two cables per server to carry the OAM, storage and infra traffic and two cables per server to carry the tenant/application traffic. In the OpenEdge servers the IPMI traffic is carried over the single cable to the chassis management port.



There’s some flexibility in the networking, but the first node in the cluster must have layer 3 connectivity to the IPMI/BMC interface of the other nodes in the cluster in order to perform the installation. The user_config.yaml gives you the ability to define multiple networks. Typically we create a VLAN on the switch for each network. Generally we define an “infra internal”, “infra external” and “storage” network in addition to the IPMI network for the base functionality of the platform. Then we also define several additional networks for the RIC that are dependent on the specifics of the carrier network into which the RIC is being deployed.
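To make the network list above concrete, here is a minimal, purely illustrative sketch of how such networks might be laid out in user_config.yaml. Every key name, VLAN ID and address below is an assumption for illustration only, not the authoritative REC schema; check it against the example user_config.yaml on the Akraino wiki before use.

```yaml
# Hypothetical sketch only -- all key names, VLANs and CIDRs here are
# illustrative assumptions; consult the wiki example for the real schema.
networking:
  infra_external:            # the "infra external" network
    mtu: 1500
    network_domains:
      rack-1:
        cidr: 192.168.10.0/24
        gateway: 192.168.10.1
        ip_range_start: 192.168.10.100
        ip_range_end: 192.168.10.200
        vlan: 310
  infra_internal:            # the "infra internal" network
    network_domains:
      rack-1:
        cidr: 192.168.11.0/24
        vlan: 311
  infra_storage_cluster:     # the "storage" network
    network_domains:
      rack-1:
        cidr: 192.168.12.0/24
        vlan: 312
```

Each such network would map to a VLAN trunked on the switch as described above; the IPMI/BMC addresses themselves are normally configured out of band on the servers.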



You are correct that the controller nodes also function as worker nodes. You can configure all the same VLANs on every server.



There’s a simple example user_config.yaml here: https://wiki.akraino.org/display/AK/REC+Installation+Guide#RECInstallationGuide-Exampleuser_config.yaml





From: technical-discuss@... <technical-discuss@...> On Behalf Of 詹子儀 Andy Chan
Sent: Tuesday, July 21, 2020 01:08
To: technical-discuss@...
Cc: 蔡其達 FRANK C. D. TSAI, Ph.D. <ftsai@...>; 詹凱元 Kai Yuan Jan <kevinjan@...>
Subject: [Akraino Technical-Discuss] [REC] Install and Networking deployment



Dear Community,



We have successfully installed StarlingX before and we are now in the process of installing REC on a 4U Network Appliance, Adlink ALSP-7400, in preparation for a demo at GlobeCom in December. (We used build 237 image)



I was wondering

1.    if there is a network diagram describing the interconnection of the controller nodes and worker nodes for REC?

*    Something similar to https://docs.starlingx.io/_images/starlingx-deployment-options-duplex3.png for StarlingX, where the diagram clearly shows which nodes are connected by (a) Data Network (b) Management Network (c) IPMI Network (d) OAM Network, and
*    With that, it would be easier to figure out how to configure the external network and the internal network



2.    For user_config.yaml, are there specific keywords for which modification is a MUST? (And is there any place where I can find samples to reference?)



3.    Does Kubernetes work if only 3 controller nodes and no worker nodes are used? If yes, doesn’t that imply a controller node plays a dual role as both a controller and a worker? (and possibly also as a Ceph storage node?)



Sincerely Yours,



........................................................
詹子儀 | Andy Chan
Office: +886 6607-3242
Email: andychan@...










Akraino Technical Community Call (Weekly) - Thu, 07/30/2020 1:00pm-2:00pm #cal-reminder

technical-discuss@lists.akraino.org Calendar <technical-discuss@...>
 

Reminder: Akraino Technical Community Call (Weekly)

When: Thursday, 30 July 2020, 1:00pm to 2:00pm, (GMT+00:00) UTC

Where: https://zoom.us/j/919148693


Organizer: technical-discuss@...

Description:

Akraino Technical Community Call: TSC updates to technical community and deeper dives into topics as applicable. Meeting content posted to Technical Community Wiki.
Meeting Lead: Kandan Kathirvel, Akraino TSC Chair


Akraino Edge Stack is inviting you to a scheduled Zoom meeting.
Join from PC, Mac, Linux, iOS or Android:
https://zoom.us/j/919148693
Or iPhone one-tap : US: +16699006833,,919148693# or +16465588656,,919148693#
Or Telephone: Dial(for higher quality, dial a number based on your current location):
US: +1 669 900 6833 or +1 646 558 8656 or +1 877 369 0926 (Toll Free) or +1 855 880 1246 (Toll Free)
Meeting ID: 919 148 693
International numbers available:
https://zoom.us/u/adnlim1pfM


Akraino Technical Community Call (Weekly) - Thu, 07/23/2020 1:00pm-2:00pm #cal-reminder

technical-discuss@lists.akraino.org Calendar <technical-discuss@...>
 

Reminder: Akraino Technical Community Call (Weekly)

When: Thursday, 23 July 2020, 1:00pm to 2:00pm, (GMT+00:00) UTC

Where: https://zoom.us/j/919148693


Organizer: technical-discuss@...

Description:

Akraino Technical Community Call: TSC updates to technical community and deeper dives into topics as applicable. Meeting content posted to Technical Community Wiki.
Meeting Lead: Kandan Kathirvel, Akraino TSC Chair


Akraino Edge Stack is inviting you to a scheduled Zoom meeting.
Join from PC, Mac, Linux, iOS or Android:
https://zoom.us/j/919148693
Or iPhone one-tap : US: +16699006833,,919148693# or +16465588656,,919148693#
Or Telephone: Dial(for higher quality, dial a number based on your current location):
US: +1 669 900 6833 or +1 646 558 8656 or +1 877 369 0926 (Toll Free) or +1 855 880 1246 (Toll Free)
Meeting ID: 919 148 693
International numbers available:
https://zoom.us/u/adnlim1pfM


Re: [REC] Install and Networking deployment

Paul Carver
 

The physical networking we use is shown in the diagram at the bottom of this page: https://wiki.akraino.org/display/AK/Radio+Edge+Cloud+Validation+Lab

 

The labels on the lines explain that we use two cables per server to carry the OAM, storage and infra traffic and two cables per server to carry the tenant/application traffic. In the OpenEdge servers the IPMI traffic is carried over the single cable to the chassis management port.

 

There’s some flexibility in the networking, but the first node in the cluster must have layer 3 connectivity to the IPMI/BMC interface of the other nodes in the cluster in order to perform the installation. The user_config.yaml gives you the ability to define multiple networks. Typically we create a VLAN on the switch for each network. Generally we define an “infra internal”, “infra external” and “storage” network in addition to the IPMI network for the base functionality of the platform. Then we also define several additional networks for the RIC that are dependent on the specifics of the carrier network into which the RIC is being deployed.

 

You are correct that the controller nodes also function as worker nodes. You can configure all the same VLANs on every server.

 

There’s a simple example user_config.yaml here https://wiki.akraino.org/display/AK/REC+Installation+Guide#RECInstallationGuide-Exampleuser_config.yaml

 

 

From: technical-discuss@... <technical-discuss@...> On Behalf Of 詹子儀 Andy Chan
Sent: Tuesday, July 21, 2020 01:08
To: technical-discuss@...
Cc:
蔡其達 FRANK C. D. TSAI, Ph.D. <ftsai@...>; 詹凱元 Kai Yuan Jan <kevinjan@...>
Subject: [Akraino Technical-Discuss] [REC] Install and Networking deployment

 

Dear Community,

 

We have successfully installed StarlingX before and we are now in the process of installing REC on a 4U Network Appliance, Adlink ALSP-7400, in preparation for a demo at GlobeCom in December. (We used build 237 image)

 

I was wondering

  1. if there is a network diagram describing the interconnection of the controller nodes and worker nodes for REC?

  2. For user_config.yaml, are there specific keywords for which modification is a MUST? (And is there any place where I can find samples to reference?)

  3. Does Kubernetes work if only 3 controller nodes and no worker nodes are used? If yes, doesn’t that imply a controller node plays a dual role as both a controller and a worker? (and possibly also as a Ceph storage node?)

 

Sincerely Yours,

 

........................................................
詹子儀 Andy Chan
Office: +886 6607-3242
Email: andychan@...

 


[REC] Install and Networking deployment

詹子儀 Andy Chan <andychan@...>
 

Dear Community,

 

We have successfully installed StarlingX before and we are now in the process of installing REC on a 4U Network Appliance, Adlink ALSP-7400, in preparation for a demo at GlobeCom in December. (We used build 237 image)

 

I was wondering

(1)   if there is a network diagram describing the interconnection of the controller nodes and worker nodes for REC?

*  Something similar to https://docs.starlingx.io/_images/starlingx-deployment-options-duplex3.png for StarlingX, where the diagram clearly shows which nodes are connected by (a) Data Network (b) Management Network (c) IPMI Network (d) OAM Network, and

*  With that, it would be easier to figure out how to configure the external network and the internal network

 

(2)   For user_config.yaml, are there specific keywords for which modification is a MUST? (And is there any place where I can find samples to reference?)

 

(3)   Does Kubernetes work if only 3 controller nodes and no worker nodes are used? If yes, doesn’t that imply a controller node plays a dual role as both a controller and a worker? (and possibly also as a Ceph storage node?)



Sincerely Yours,

 

........................................................
詹子儀 Andy Chan
Office: +886 6607-3242
Email: andychan@...

 


Akraino Technical Community Call (Weekly) - Thu, 07/16/2020 1:00pm-2:00pm #cal-reminder

technical-discuss@lists.akraino.org Calendar <technical-discuss@...>
 

Reminder: Akraino Technical Community Call (Weekly)

When: Thursday, 16 July 2020, 1:00pm to 2:00pm, (GMT+00:00) UTC

Where: https://zoom.us/j/919148693


Organizer: technical-discuss@...

Description:

Akraino Technical Community Call: TSC updates to technical community and deeper dives into topics as applicable. Meeting content posted to Technical Community Wiki.
Meeting Lead: Kandan Kathirvel, Akraino TSC Chair


Akraino Edge Stack is inviting you to a scheduled Zoom meeting.
Join from PC, Mac, Linux, iOS or Android:
https://zoom.us/j/919148693
Or iPhone one-tap : US: +16699006833,,919148693# or +16465588656,,919148693#
Or Telephone: Dial(for higher quality, dial a number based on your current location):
US: +1 669 900 6833 or +1 646 558 8656 or +1 877 369 0926 (Toll Free) or +1 855 880 1246 (Toll Free)
Meeting ID: 919 148 693
International numbers available:
https://zoom.us/u/adnlim1pfM


Upcoming Event: Akraino Technical Community Call (Weekly) - Thu, 07/09/2020 1:00pm-2:00pm #cal-reminder

technical-discuss@lists.akraino.org Calendar <technical-discuss@...>
 

Reminder: Akraino Technical Community Call (Weekly)

When: Thursday, 9 July 2020, 1:00pm to 2:00pm, (GMT+00:00) UTC

Where: https://zoom.us/j/919148693


Organizer: technical-discuss@...

Description:

Akraino Technical Community Call: TSC updates to technical community and deeper dives into topics as applicable. Meeting content posted to Technical Community Wiki.
Meeting Lead: Kandan Kathirvel, Akraino TSC Chair


Akraino Edge Stack is inviting you to a scheduled Zoom meeting.
Join from PC, Mac, Linux, iOS or Android:
https://zoom.us/j/919148693
Or iPhone one-tap : US: +16699006833,,919148693# or +16465588656,,919148693#
Or Telephone: Dial(for higher quality, dial a number based on your current location):
US: +1 669 900 6833 or +1 646 558 8656 or +1 877 369 0926 (Toll Free) or +1 855 880 1246 (Toll Free)
Meeting ID: 919 148 693
International numbers available:
https://zoom.us/u/adnlim1pfM



Re: About Akraino REC "RIC Platform" and APIs related to CU/DU-specific parameters

Paul Carver
 

The scope of what the REC platform can provide will be determined by the number of contributors. A lot of AT&T’s current focus on the REC relates to internal use of the REC in its current state, so adding contributors who want to do more with it will be a big determining factor in how much we expand the scope.

 

In the diagram shown here https://wiki.akraino.org/display/AK/REC+Architecture+Document the REC project is currently focused on the section in orange with the intention of supporting the section in light blue. The REC team is not actively working on the components in the RIC Platform or RIC xApps (although other non-REC teams within AT&T are).

 

Aspirationally, I would like to see the REC support CU/DU components on top of the same base (the orange section of the diagram), but we’ll need more contributors in order to create that integration. The primary focus of the REC team is on the CI system, with the build and packaging of components into installable images, as well as the CD system, with the automated execution of a zero touch install and the running of automated tests. We need help from the teams responsible for the RIC, CU and DU to work on the integration of those components as well as providing automated tests for the interfaces of those components.

 

From: 蔡其達 FRANK C. D. TSAI, Ph.D. <ftsai@...>

Sent: Sunday, July 5, 2020 22:52
To: CARVER, PAUL <pc2929@...>; technical-discuss@...
Cc: 詹子儀 Andy Chan <andychan@...>; 詹凱元 Kai Yuan Jan <kevinjan@...>
Subject: About Akraino REC "RIC Platform" and APIs related to CU/DU-specific parameters

 

Dear Paul,

 

A quick question –

Will functions related to upstreaming CU/DU-specific parameters be up to each vendor’s own implementation as an add-on to the REC platform?

 

 

 

Based on REC Architecture Document

https://wiki.akraino.org/display/AK/REC+Architecture+Document

 

 

For REC architecture, we have from top to bottom

(1)    O-RAN-SC  --  with RIC xApps and RIC Platform

(2)    Telco Appliance -- with APIs, Middleware, Deployment

(3)    Hardware

 

If I developed a RIC xApp (say, with the function of adjusting the transmission power of a base station) and would like to leverage Akraino REC: if REC is to offer any utility APIs to accelerate the development, would that be the “RIC Platform”?

(since the “Telco Appliance” part is generic cloud infrastructure, not specific to the RIC)

Or, specifically, can I say that the “RIC Platform” is actually the O-RAN nRT-RIC software, such as the recent Amber release, packaged into a deployment package? (In that case I shall study the code of the Amber release instead of the code of the REC.)

 

Sincerely,

Frank

 

 

P.S. So far, from the architecture document, the “RIC Platform” has these components: a1mediator, appmgr, dbass, e2mgr, e2term, jaegeradapter, rtmgr, submgr, vespamgr, tiller, kong, rsm.

These seem quite generic, so will functions related to CU/DU-specific parameters be up to each vendor’s own implementation as an add-on?

 

 

 

From: CARVER, PAUL [mailto:pc2929@...]
Sent: Monday, June 8, 2020 8:52 PM
To:
蔡其達 FRANK C. D. TSAI, Ph.D. <ftsai@...>; technical-discuss@...
Cc:
詹子儀 Andy Chan <andychan@...>; Sean Xie <sean.xie@...>
Subject: RE: About Akraino REC installation (min 3 controller nodes with 192GB minimum per node)

 

I couldn’t remember where those numbers came from so I asked a few people who were involved when we started the project. I know we never did any testing to try to find the absolute minimum disk required, so it looks like we just documented the actual disk sizes of the hardware we used for testing the system as a whole. We have tested REC on Dell and HP machines with far more disks than are required because we were able to borrow the servers from other projects rather than buying specifically for REC, but the numbers you’re asking about are derived from our “reference platform”, i.e. the specific cluster that Jenkins automatically installs the REC onto in our Continuous Deployment pipeline as described in https://wiki.akraino.org/display/AK/Radio+Edge+Cloud+Validation+Lab

 

In that cluster we are using the Nokia OpenEdge single height, half width server blade which is equipped with two 480GB M.2 boards (i.e. something that looks similar to a RAM module plugged into the motherboard) and two 960GB 2.5 inch SSDs in front mounted slots. Since the minimum cluster size is three controllers and zero workers the total disk capacity of a minimum size cluster on this specific hardware is 6 of the M.2 boards and 6 of the 2.5 inch disks. That’s where the numbers on that wiki page are coming from.
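The drive arithmetic above can be sanity-checked with a short sketch. The per-blade counts come from this thread; the helper function itself is just an illustration, not REC tooling.

```python
# Per-server drive configuration of the Nokia OpenEdge blades described
# above: two 480 GB M.2 boards plus two 960 GB 2.5" SSDs.
M2_PER_SERVER, M2_SIZE_GB = 2, 480
SSD_PER_SERVER, SSD_SIZE_GB = 2, 960

def cluster_disks(servers: int) -> dict:
    """Drive counts and raw capacity (GB) for a cluster of `servers` nodes."""
    return {
        "m2_boards": servers * M2_PER_SERVER,
        "m2_total_gb": servers * M2_PER_SERVER * M2_SIZE_GB,
        "ssds": servers * SSD_PER_SERVER,
        "ssd_total_gb": servers * SSD_PER_SERVER * SSD_SIZE_GB,
    }

# Minimum cluster (3 controllers, 0 workers): 6 M.2 boards and 6 SSDs,
# i.e. 6 x 480 GB = 2880 GB, the "2.8 TB" quoted on the wiki page.
minimum = cluster_disks(3)
```

For the full 5-blade chassis this gives 10 M.2 boards and 10 of the 2.5 inch drives, matching the counts quoted in this thread.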

 

In actuality, the specific cluster in our CD cluster is using all 5 blades in a single chassis for a total of 10 of the 480GB M.2 boards and 10 of the 960GB 2.5 inch drives, but one of the M.2 boards in each server is unused. We had considered doing RAID1 on the pair of M.2 boards but these servers do not support hardware RAID and we decided that adding software RAID was not a priority since our target deployment has three controllers and two workers. All of the 2.5 inch drives in the cluster are managed by Ceph. In order to use Ceph in REC it is highly recommended to give two physical disks per server to Ceph, but the exact size of the disks isn’t very important.

 

We have other REC clusters in various labs deployed with 4 or 3 servers, so it certainly isn’t necessary to have 10 of each drive type.

 

From: 蔡其達 FRANK C. D. TSAI, Ph.D. <ftsai@...>
Sent: Thursday, June 4, 2020 10:24
To: CARVER, PAUL <pc2929@...>; technical-discuss@...
Cc:
詹子儀 Andy Chan <andychan@...>; Sean Xie <sean.xie@...>
Subject: RE: About Akraino REC installation (min 3 controller nodes with 192GB minimum per node)

 

Dear Paul,

 

Thanks very much for the clarity. It gives me a better understanding of the underlying design considerations.

For the kernel and latency issue, yes, some benchmarking is needed. For us, though, the plan is first to make sure functionality is OK; later, rather than sticking with kernel 3.10 and tuning its performance, we shall fix the driver issues and upgrade to kernel 4.14.

So, as a deployment unit of 5 servers in a 3U chassis has been a convenient size, where 3 of them are controllers and the other 2 servers are not controllers, may I trust that the “Total SSD-based OS Storage: 2.8 TB (6 x 480GB SSDs)” specified in the “Hardware Requirements” is an empirical value also based on 5 servers?

Thanks again,

Frank

 

From: CARVER, PAUL [mailto:pc2929@...]
Sent: Thursday, June 4, 2020 10:02 PM
To:
蔡其達 FRANK C. D. TSAI, Ph.D. <ftsai@...>; technical-discuss@...
Cc:
詹子儀 Andy Chan <andychan@...>; Sean Xie <sean.xie@...>
Subject: RE: About Akraino REC installation (min 3 controller nodes with 192GB minimum per node)

 

Here are my answers. Nokia provided the original seed code for REC/TA based on previous closed source work they had done, so there might also be historical factors that I’m not aware of.

 

  1. Yes, there are several components in the system that use a quorum-based mechanism where you need more than 50% of the total components to be in communication with each other in order to differentiate between a failure and a split-brain scenario. That effectively makes the HA minimum 3 instead of 2. There could conceivably be any odd number greater than 1 for HA, but all of our testing has been based on 3.

  2. Yes, “node” refers to physical servers. Our focus in REC/TA is on integration testing a full platform suitable for production deployment in edge locations where there isn’t a pre-existing general purpose cloud. The distinction between RIC (an O-RAN SC project) and REC (an Akraino project) is that RIC is purely software, whereas REC is concerned with providing the cloud infrastructure hardware on top of which the RIC can be deployed.

  3. For the joint work between Nokia and AT&T to create the initial release of REC/TA, we selected the Nokia OpenEdge chassis based platform as the reference hardware platform for our testing. We also did some testing on Dell and HP hardware and have subsequently seen support for Ampere’s ARM servers added, but we started with the Nokia OpenEdge, which packages 5 servers into a 3U chassis. As such, the vast majority of our testing has been on clusters of 5 servers where 3 of them are controllers and the other 2 servers are not controllers. Our use case is primarily deploying a large number of small clusters in many different edge locations rather than a general purpose datacenter cloud, so the deployment unit of 5 servers in a 3U chassis has been a convenient size.

  4. I don’t think we’re currently using DPDK. REC/TA had its roots in a previous Nokia closed source OpenStack deployment system, so Open vSwitch DPDK support was provided with OpenStack Neutron, but REC/TA is not intended as an OpenStack system. It is intended as a pure Kubernetes system that makes use of a few OpenStack components such as Ironic for baremetal deployment and Keystone for authentication.

  5. The kernel version was selected specifically after some benchmarking of latency. It’s not a realtime kernel, but latency is a major consideration for the RIC, which is the first and primary application that we designed REC to support. That’s not to say that it’s the only kernel version that will support the latency requirements, but if you use a different kernel then it would definitely be a good idea to run some latency benchmarks. I would have to dig through some old documentation to refresh my memory on what tests we ran. I know cyclictest is a popular latency benchmark but I’m pretty sure it wasn’t the only one that we compared.
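The majority-quorum arithmetic in answer 1 can be sketched generically as follows. This is an illustration of the general rule, not REC code:

```python
def quorum(members: int) -> int:
    """Smallest group that is more than 50% of `members`."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """Members that can fail while the survivors still hold quorum."""
    return members - quorum(members)

# 2 nodes tolerate no failure (a 1-1 split is indistinguishable from a
# failure), while 3 nodes tolerate one -- hence the HA minimum of 3.
```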

 

From: technical-discuss@... <technical-discuss@...> On Behalf Of ??? FRANK C. D. TSAI, Ph.D.
Sent: Thursday, June 4, 2020 06:58
To: technical-discuss@...
Cc:
詹子儀 Andy Chan <andychan@...>; Sean Xie <sean.xie@...>
Subject: [Akraino Technical-Discuss] About Akraino REC intallation (min 3 controller nodes with 192GB minimum per node)

 

Dear community,

 

 

I was wondering if I can sort out some of my questions of Akraino (while I’m using Adlink Adlink H/W ALPS-7400 as the H/W platform for REC, which is a 4U network appliance with 4 computing nodes).

 

Per REC Installation Guide (*), controller nodes (3 required), worker nodes (all optional)

 

  1. Is the purpose of 3 controller nodes for HA (high availability)? 

[note: the reason I suppose so is because our past experience with StarlingX using two controller nodes for HA]

 

  1. When we use the term “node” in the document, do we mean physical nodes? Or it can be logical nodes (like VM)?

[note; I suppose it’s physical nodes because in the same page Hardware Requirements, it reads “Minimum of 3 nodes” and if it’s for HA, we need physical node]

 

  1. When we say “controller nodes (3 required), worker nodes (all optional)”, do we mean to let the controller nodes play dual role also as worker nodes by running applications on top of any controller node? 

 

Or we mean to say that REC deployment must have exactly 3 controller nodes, and after the 3 controller nodes are established, we can (and we should) then gradually add worker nodes to scale out?

 

  1. Is it a must for the NIC of a worker node to support DPDK? if no DPDK is supported, what would be the most significant impact (like some function XYZ is then not available) ?

 

  1. Is using 4.14 is simply because 4.14 newer than 3.10?  Or, it’s because some components of REC must rely on 4.14?

 

[note: I so inquire is because some ALPS-7400 peripheral driver is currently compatible with kernel 3.10, yet Build-237 uses kernel 4.14.  After downloading the ISO (Build-237), we plan to replace the kernel modules from 4.14 to 3.10.]

 

Thank you very much for your guidance,

 

 

Sincerely,

Frank

 

(*) https://wiki.akraino.org/display/AK/REC+Installation+Guide#RECInstallationGuide-HardwareRequirements

 

 


About Akraino REC "RIC Platform" and APIs related to CU/DU-specific parameters

蔡其達 FRANK C. D. TSAI, Ph.D. <ftsai@...>
 

Dear Paul,

 

A quick question –

Will functions related to upstreaming CU/DU-specific parameters be up to each vendor’s own implementation as an add-on to the REC platform?

 

 

 

Based on REC Architecture Document

https://wiki.akraino.org/display/AK/REC+Architecture+Document

 

 

For REC architecture, we have from top to bottom

(1)    O-RAN-SC  --  with RIC xApps and RIC Platform

(2)    Telco Appliance -- with APIs, Middleware, Deployment

(3)    Hardware

 

If I developed a RIC xApp (say, one that adjusts the transmission power of a base station) and would like to leverage Akraino REC, and if REC is to offer any utility APIs to accelerate the development, would those APIs be part of the “RIC Platform”?

(as the “Telco Appliance” part is generic cloud infrastructure, not specific to the RIC)

 

Or, specifically, can I say that the “RIC Platform” is actually the O-RAN near-RT RIC software, such as the recent Amber release, packaged into a deployment package? (If so, I shall study the code of the Amber release instead of the code of the REC.)

 

Sincerely,

Frank

 

 

P.S. So far, per the Architecture document, the “RIC Platform” has these components: a1mediator, appmgr, dbaas, e2mgr, e2term, jaegeradapter, rtmgr, submgr, vespamgr, tiller, kong, rsm

 

These seem quite generic, so will functions related to CU/DU-specific parameters be up to each vendor’s own implementation as an add-on?

 

 

 

From: CARVER, PAUL [mailto:pc2929@...]
Sent: Monday, June 8, 2020 8:52 PM
To:
蔡其達 FRANK C. D. TSAI, Ph.D. <ftsai@...>; technical-discuss@...
Cc:
詹子儀 Andy Chan <andychan@...>; Sean Xie <sean.xie@...>
Subject: RE: About Akraino REC installation (min 3 controller nodes with 192GB minimum per node)

 

I couldn’t remember where those numbers came from so I asked a few people who were involved when we started the project. I know we never did any testing to try to find the absolute minimum disk required, so it looks like we just documented the actual disk sizes of the hardware we used for testing the system as a whole. We have tested REC on Dell and HP machines with far more disks than are required because we were able to borrow the servers from other projects rather than buying specifically for REC, but the numbers you’re asking about are derived from our “reference platform”, i.e. the specific cluster that Jenkins automatically installs the REC onto in our Continuous Deployment pipeline as described in https://wiki.akraino.org/display/AK/Radio+Edge+Cloud+Validation+Lab

 

In that cluster we are using the Nokia OpenEdge single height, half width server blade which is equipped with two 480GB M.2 boards (i.e. something that looks similar to a RAM module plugged into the motherboard) and two 960GB 2.5 inch SSDs in front mounted slots. Since the minimum cluster size is three controllers and zero workers the total disk capacity of a minimum size cluster on this specific hardware is 6 of the M.2 boards and 6 of the 2.5 inch disks. That’s where the numbers on that wiki page are coming from.
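Paul’s per-blade inventory makes the wiki’s totals easy to reproduce. A minimal back-of-envelope sketch (illustrative only; the constants simply restate the blade inventory described above):

```python
# Illustrative arithmetic, not REC code: reproduce the wiki's storage
# totals for a minimum cluster of 3 controller blades, where each
# OpenEdge blade has two 480 GB M.2 boards (OS) and two 960 GB 2.5" SSDs
# (given to Ceph), per the description in this thread.
CONTROLLERS = 3
M2_PER_BLADE, M2_GB = 2, 480
SSD_PER_BLADE, SSD_GB = 2, 960

os_storage_tb = CONTROLLERS * M2_PER_BLADE * M2_GB / 1000    # 2.88 TB
ceph_storage_tb = CONTROLLERS * SSD_PER_BLADE * SSD_GB / 1000  # 5.76 TB

print(os_storage_tb, ceph_storage_tb)
```

The 2.88 TB figure matches the “Total SSD-based OS Storage: 2.8 TB (6 x 480GB SSDs)” rounding quoted later in the thread.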

 

In actuality, the specific cluster in our CD pipeline is using all 5 blades in a single chassis for a total of 10 of the 480GB M.2 boards and 10 of the 960GB 2.5 inch drives, but one of the M.2 boards in each server is unused. We had considered doing RAID1 on the pair of M.2 boards but these servers do not support hardware RAID and we decided that adding software RAID was not a priority since our target deployment has three controllers and two workers. All of the 2.5 inch drives in the cluster are managed by Ceph. In order to use Ceph in REC it is highly recommended to give two physical disks per server to Ceph, but the exact size of the disks isn’t very important.

 

We have other REC clusters in various labs deployed with 4 or 3 servers, so it certainly isn’t necessary to have 10 of each drive type.

 

From: 蔡其達 FRANK C. D. TSAI, Ph.D. <ftsai@...>
Sent: Thursday, June 4, 2020 10:24
To: CARVER, PAUL <pc2929@...>; technical-discuss@...
Cc:
詹子儀 Andy Chan <andychan@...>; Sean Xie <sean.xie@...>
Subject: RE: About Akraino REC installation (min 3 controller nodes with 192GB minimum per node)

 

Dear Paul,

 

Thanks very much for the clarity. It gives me a better understanding of the underlying design considerations.

For the kernel and latency issue, yes, some benchmarking is needed. For now, though, we plan to make sure functionality is OK first; later, rather than sticking with kernel 3.10 and tuning performance, we shall fix the driver issues and upgrade to kernel 4.14.

So, as the deployment unit of 5 servers in a 3U chassis (3 of them controllers and the other 2 not) has been a convenient size, am I right that the “Total SSD-based OS Storage: 2.8 TB (6 x 480GB SSDs)” specified in the “Hardware Requirements” is an empirical value also based on 5 servers?

Thanks again,

Frank

 

From: CARVER, PAUL [mailto:pc2929@...]
Sent: Thursday, June 4, 2020 10:02 PM
To:
蔡其達 FRANK C. D. TSAI, Ph.D. <ftsai@...>; technical-discuss@...
Cc:
詹子儀 Andy Chan <andychan@...>; Sean Xie <sean.xie@...>
Subject: RE: About Akraino REC installation (min 3 controller nodes with 192GB minimum per node)

 

Here are my answers. Nokia provided the original seed code for REC/TA based on previous closed source work they had done, so there might also be historical factors that I’m not aware of.

 

1)      Yes, there are several components in the system that use a quorum-based mechanism where you need more than 50% of the total components to be in communication with each other in order to differentiate between a failure and a split-brain scenario. That effectively makes the HA minimum 3 instead of 2. Conceivably any odd number greater than 1 could provide HA, but all of our testing has been based on 3.
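The quorum arithmetic behind this answer can be sketched in a few lines (illustrative only, not REC code; the function names are hypothetical):

```python
# Why quorum-based HA needs at least 3 members, and why an even member
# count adds no extra fault tolerance.

def quorum(members: int) -> int:
    """Smallest majority: strictly more than 50% of total membership."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """How many members can fail while a majority can still form."""
    return members - quorum(members)

for n in (1, 2, 3, 4, 5):
    print(n, quorum(n), tolerated_failures(n))
# A 2-member cluster tolerates 0 failures: from one side, a dead peer
# and a split brain look identical, so the practical HA minimum is 3.
```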

 

2)      Yes, “node” is referring to physical servers. Our focus on REC/TA is on integration testing a full platform suitable for production deployment in edge locations where there isn’t a pre-existing general purpose cloud. The distinction between RIC (an O-RAN SC project) and REC (an Akraino project) is that RIC is purely software whereas REC is concerned with providing the cloud infrastructure hardware on top of which the RIC can be deployed.

 

3)      For the joint work between Nokia and AT&T to create the initial release of REC/TA we selected the Nokia OpenEdge chassis based platform as the reference hardware platform for our testing. We also did some testing on Dell and HP hardware and have subsequently seen the support for Ampere’s ARM servers added, but we started with the Nokia OpenEdge which packages 5 servers into a 3U chassis. As such, the vast majority of our testing has been on clusters of 5 servers where 3 of them are controllers and the other 2 servers are not controllers. Our use case is primarily on deploying a large number of small clusters in many different edge locations rather than general purpose datacenter cloud, so the deployment unit of 5 servers in a 3U chassis has been a convenient size.

 

4)      I don’t think we’re currently using DPDK. REC/TA had its roots in a previous Nokia closed source OpenStack deployment system, so Open vSwitch DPDK support was provided with OpenStack Neutron, but REC/TA is not intended as an OpenStack system. It is intended as a pure Kubernetes system that makes use of a few OpenStack components such as Ironic for baremetal deployment and Keystone for authentication.

 

5)      The kernel version was selected specifically after some benchmarking of latency. It’s not a realtime kernel, but latency is a major consideration for the RIC which is the first and primary application that we designed REC to support. That’s not to say that it’s the only kernel version that will support the latency requirements, but if you use a different kernel then it would definitely be a good idea to run some latency benchmarks. I would have to dig through some old documentation to refresh my memory on what tests we ran. I know cyclictest is a popular latency benchmark but I’m pretty sure it wasn’t the only one that we compared.
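For anyone re-running latency benchmarks on a different kernel, a minimal, hypothetical sketch of checking cyclictest results; the sample line below is an assumed example of the tool’s per-thread summary output, whose exact layout may vary by rt-tests version:

```python
# Illustrative sketch (not from this thread): extract min/avg/max latency
# (microseconds) from a cyclictest per-thread summary line. SAMPLE is an
# assumed example of the output format, not captured from a real run.
import re

SAMPLE = "T: 0 ( 7757) P:95 I:1000 C:  59998 Min:      2 Act:    3 Avg:    3 Max:      23"

LINE_RE = re.compile(r"Min:\s*(\d+).*Avg:\s*(\d+).*Max:\s*(\d+)")

def parse_latencies(line: str) -> dict:
    """Return {'min', 'avg', 'max'} latencies from one summary line."""
    m = LINE_RE.search(line)
    if not m:
        raise ValueError("unrecognized cyclictest line")
    lo, avg, hi = map(int, m.groups())
    return {"min": lo, "avg": avg, "max": hi}

print(parse_latencies(SAMPLE))
```

For a RIC-style workload the Max column is the figure of interest, since worst-case rather than average latency is what the kernel choice was benchmarked for.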

 

From: technical-discuss@... <technical-discuss@...> On Behalf Of ??? FRANK C. D. TSAI, Ph.D.
Sent: Thursday, June 4, 2020 06:58
To: technical-discuss@...
Cc:
詹子儀 Andy Chan <andychan@...>; Sean Xie <sean.xie@...>
Subject: [Akraino Technical-Discuss] About Akraino REC installation (min 3 controller nodes with 192GB minimum per node)

 

Dear community,

 

 

I was wondering if I could sort out some of my questions about Akraino (I’m using the Adlink ALPS-7400 as the H/W platform for REC; it is a 4U network appliance with 4 computing nodes).

 

Per REC Installation Guide (*), controller nodes (3 required), worker nodes (all optional)

 

1.      Is the purpose of the 3 controller nodes HA (high availability)?

[note: I suppose so because of our past experience with StarlingX, which uses two controller nodes for HA]

 

2.      When we use the term “node” in the document, do we mean physical nodes? Or can they be logical nodes (like VMs)?

[note: I suppose they are physical nodes, because the same Hardware Requirements page reads “Minimum of 3 nodes”, and if it is for HA we need physical nodes]

 

3.      When we say “controller nodes (3 required), worker nodes (all optional)”, do we mean that the controller nodes also play a dual role as worker nodes, by running applications on top of any controller node?

 

Or do we mean that a REC deployment must have exactly 3 controller nodes, and that after the 3 controller nodes are established we can (and should) then gradually add worker nodes to scale out?

 

4.      Is it a must for the NIC of a worker node to support DPDK? If DPDK is not supported, what would be the most significant impact (e.g., some function XYZ becoming unavailable)?

 

5.      Is kernel 4.14 used simply because 4.14 is newer than 3.10? Or is it because some components of REC must rely on 4.14?

 

[note: I ask because some ALPS-7400 peripheral drivers are currently compatible only with kernel 3.10, yet Build-237 uses kernel 4.14. After downloading the ISO (Build-237), we plan to replace the kernel modules, going from 4.14 back to 3.10.]

 

Thank you very much for your guidance,

 

 

Sincerely,

Frank

 

(*) https://wiki.akraino.org/display/AK/REC+Installation+Guide#RECInstallationGuide-HardwareRequirements

 

 


Cancelled Event: Akraino Technical Community Call (Weekly) - Thursday, 2 July 2020 #cal-cancelled

technical-discuss@lists.akraino.org Calendar <technical-discuss@...>
 

Cancelled: Akraino Technical Community Call (Weekly)

This event has been cancelled.

When:
Thursday, 2 July 2020
1:00pm to 2:00pm
(UTC+00:00) UTC

Where:
https://zoom.us/j/919148693

Organizer: technical-discuss@...

Description:

Akraino Technical Community Call: TSC updates to technical community and deeper dives into topics as applicable. Meeting content posted to Technical Community Wiki.
Meeting Lead: Kandan Kathirvel, Akraino TSC Chair


Akraino Edge Stack is inviting you to a scheduled Zoom meeting.
Join from PC, Mac, Linux, iOS or Android:
https://zoom.us/j/919148693
Or iPhone one-tap : US: +16699006833,,919148693# or +16465588656,,919148693#
Or Telephone: Dial(for higher quality, dial a number based on your current location):
US: +1 669 900 6833 or +1 646 558 8656 or +1 877 369 0926 (Toll Free) or +1 855 880 1246 (Toll Free)
Meeting ID: 919 148 693
International numbers available:
https://zoom.us/u/adnlim1pfM


Upcoming Event: Akraino Technical Community Call (Weekly) - Thu, 07/02/2020 1:00pm-2:00pm #cal-reminder

technical-discuss@lists.akraino.org Calendar <technical-discuss@...>
 

Reminder: Akraino Technical Community Call (Weekly)

When: Thursday, 2 July 2020, 1:00pm to 2:00pm, (GMT+00:00) UTC

Where: https://zoom.us/j/919148693


Organizer: technical-discuss@...

Description:

Akraino Technical Community Call: TSC updates to technical community and deeper dives into topics as applicable. Meeting content posted to Technical Community Wiki.
Meeting Lead: Kandan Kathirvel, Akraino TSC Chair






Cancelled Event: Akraino Technical Community Call (Weekly) - Thursday, 25 June 2020 #cal-cancelled

technical-discuss@lists.akraino.org Calendar <technical-discuss@...>
 

Cancelled: Akraino Technical Community Call (Weekly)

This event has been cancelled.

When:
Thursday, 25 June 2020
1:00pm to 2:00pm
(UTC+00:00) UTC

Where:
https://zoom.us/j/919148693

Organizer: technical-discuss@...

Description:

Akraino Technical Community Call: TSC updates to technical community and deeper dives into topics as applicable. Meeting content posted to Technical Community Wiki.
Meeting Lead: Kandan Kathirvel, Akraino TSC Chair




Upcoming Event: Akraino Technical Community Call (Weekly) - Thu, 06/18/2020 1:00pm-2:00pm #cal-reminder

technical-discuss@lists.akraino.org Calendar <technical-discuss@...>
 

Reminder: Akraino Technical Community Call (Weekly)

When: Thursday, 18 June 2020, 1:00pm to 2:00pm, (GMT+00:00) UTC

Where: https://zoom.us/j/919148693


Organizer: technical-discuss@...

Description:

Akraino Technical Community Call: TSC updates to technical community and deeper dives into topics as applicable. Meeting content posted to Technical Community Wiki.
Meeting Lead: Kandan Kathirvel, Akraino TSC Chair



