15:01:07 <number80> #startmeeting CentOS Cloud SIG meeting (2016-04-14)
15:01:07 <centbot> Meeting started Thu Apr 14 15:01:07 2016 UTC.  The chair is number80. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:07 <centbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
15:01:16 <rbowen> Still in that land-between-the-timezones between the US and the rest of the world.
15:01:37 <number80> rbowen, jzb: we all use UTC for meetings right?
15:01:53 <rbowen> Yes. Except when we don't. :-)
15:01:58 <number80> ack
15:02:02 <rbowen> I think all #centos-devel meetings use UTC.
15:02:06 <number80> #topic roll call
15:02:09 <dmsimard> o/
15:02:10 <number80> who do we have here?
15:02:11 <rbowen> o/
15:02:21 <number80> #chair dmsimard rbowen kbsingh
15:02:21 <centbot> Current chairs: dmsimard kbsingh number80 rbowen
15:02:27 <trown> o/
15:02:28 <mengxd_> o/
15:02:45 <number80> #chair trown mengxd_
15:02:45 <centbot> Current chairs: dmsimard kbsingh mengxd_ number80 rbowen trown
15:02:46 <jpena> o/
15:02:52 <number80> #chair jpena
15:02:52 <centbot> Current chairs: dmsimard jpena kbsingh mengxd_ number80 rbowen trown
15:02:54 <number80> agenda is here
15:02:58 <number80> https://etherpad.openstack.org/p/centos-cloud-sig
15:03:05 <kbsingh> you can have a chair, you can have a chair, everyone can have a chair!
15:03:11 <number80> ok, let's start with the bright news of the day :)
15:03:16 <number80> #topic newcomers
15:03:21 <number80> hello mengxd_ :)
15:03:28 <mengxd_> hello team
15:03:31 <rbowen> Welcome! It's so good to have a new name on the list.
15:03:39 <number80> yeah
15:03:43 <rbowen> Tell us what you're here for?
15:03:45 <kbsingh> Xiandong Meng from IBM.
15:03:49 * kbsingh does an intro
15:04:01 <kbsingh> I am currently the IBM Cloud Architect with a focus on cloud enablement for Power platforms. I want to join the Cloud SIG to enable RDO for CentOS/ppc64le. This may be a staged delivery, but I do know there are many OpenPower users looking for that.
15:04:10 <kbsingh> ^ is mengxd_
15:04:25 <rbowen> That's *great*. Thanks for attending.
15:04:36 <number80> #info mengxd_ joins to help enable RDO on ppc64le
15:04:39 <mengxd_> yes, I want to help the adoption of RDO/CentOS on the Power platforms
15:04:49 <dmsimard> Cool stuff.
15:05:02 <number80> mengxd_: will you be at the openstack summit?
15:05:07 <mengxd_> yes, I will
15:05:10 <kbsingh> i think it would also be good if we can have more openstack components and more people helping with the deps side of things
15:05:23 <number80> mengxd_: then you should join the RDO meetup to meet some of us :)
15:05:38 <mengxd_> Sure, i will.
15:06:01 <mengxd_> I will show up at your RDO booth to demo a PoC we have done before
15:06:08 <number80> excellent
15:06:42 <number80> I suggest that we continue on the list to identify tasks to make RDO/ppc64le happen
15:06:52 <number80> (as an official and supported port I mean)
15:07:31 <number80> next topic?
15:08:21 <number80> #topic OpenStack update
15:08:30 <number80> rbowen, the stage is yours
15:09:07 <rbowen> Well, we announced the Mitaka packages on Tuesday
15:09:08 <rbowen> Yay.
15:09:16 <rbowen> And we have testing going on today.
15:09:26 <rbowen> So we're hopefully ferreting out anything that's broken.
15:09:31 <number80> #info RDO Mitaka GA announced on Tuesday (April 12)
15:09:39 <rbowen> This is the fastest we have ever pushed a release after the upstream release.
15:09:44 <number80> #info RDO test day is today, April 14
15:09:56 <trown> \o/
15:09:56 <rbowen> And we managed to coordinate the release announcement wonderfully with folks from various communities and press people.
15:09:59 <number80> rbowen: 2 hours after upstream GA is the new standard ;)
15:10:01 <rbowen> So I was *really* pleased.
15:10:23 <rbowen> I think we're kind of rolling several agenda items together here, so
15:10:45 <rbowen> I'm doing a series of interviews/podcasts/blog posts with people that worked on Mitaka, and the first one is going out as soon as this meeting is over.
15:10:56 <rbowen> If you want to brag about what you did on Mitaka, ping me, and we'll set up an interview.
15:11:05 <number80> reminds me to free some time next week
15:11:19 <number80> to speak about prod chain
15:11:37 <rbowen> I can do interviews at Summit too, if that works better for anyone.
15:11:47 <number80> #info if you want to brag about your work on Mitaka, ping rbowen and get a free interview :)
15:11:51 <rbowen> It takes about 20 minutes of your time, and the result is about a 5 minute podcast.
15:11:54 <kbsingh> i am going to try and do a couple of blog posts around getting-started alternatives.
15:12:02 <hrw> hi
15:12:05 <kbsingh> (1) getting started, getting to first vm boot, without horizon
15:12:19 <kbsingh> (2) deploying a devstack alternative on CentOS7
15:12:20 <number80> hrw: hi
15:12:34 <rbowen> Anyways, that's all I have. I've been having a lot of fun the last few days.
15:12:48 <kbsingh> I was going to work on those bits this week, but many fires. And I need to be out of town next week, so I was thinking it might be good to do this maybe a week+ after summit?
15:12:53 <kbsingh> sort of stay-fresh?
15:12:53 <number80> #info kbsingh to work on a series of blog posts on getting started w/ openstack on CentOS
15:13:07 <number80> kbsingh: wfm
15:14:18 <number80> ok
15:14:24 <number80> anything else?
15:14:40 <number80> then, let's welcome hrw
15:14:50 <number80> hrw could you introduce yourself to the SIG?
15:14:57 <hrw> yes, I can
15:16:16 <hrw> I am Marcin Juszkiewicz, working at Red Hat in the ARM team. My area is AArch64 architecture porting. I helped port RHEL7 to AArch64, which resulted in the RHELSA release, and did a lot of mangling in Fedora/aarch64 (and bits for other secondary archs).
15:16:43 <hrw> now working as a Red Hat assignee at Linaro, and one of the things for me to work on is OpenStack/CentOS/AArch64.
15:16:58 <kbsingh> woo!
15:17:02 <kbsingh> it's raining arches
15:17:17 <rbowen> Awesome. Welcome, hrw. It's great to have you here.
15:17:21 <kbsingh> welcome
15:17:26 <hrw> building other people's software since 2004
15:17:34 <number80> great
15:17:38 <mengxd_> welcome
15:17:47 <hrw> more on blog: https://marcin.juszkiewicz.com.pl/
15:17:53 <pino|work> kbsingh: is what we talked about earlier part of the current meeting agenda? :>
15:18:20 <kbsingh> pino|work: this one is more openstack/ cloud -infra setup
15:18:27 <pino|work> oki
15:18:55 <kbsingh> on the arch side of things, one thing we should find a process for - and might need alphacc and bstinson for - is to work out how different arches will stay in sync
15:19:27 <kbsingh> we have an aarch64 builder now, ppc64le coming very soon - but given that we already have content in the cloud-sig tags in koji, how would these arches 'catch up'?
15:19:29 <number80> could we schedule a meeting, or will the infra one on Monday do?
15:19:30 <hrw> kbsingh: Arch as 'that other linux distribution called Arch' or is it other meaning?
15:19:59 <number80> hrw: architecture
15:20:29 <kbsingh> hrw: I mean architectures. The x86_64 tags have a boatload of content. Do we then need to find a way to bump spec ver:rel and rebuild for all arches, again? Or is there a way to have a specific architecture 'catch up' without needing a mass build of everything?
15:20:56 <hrw> kbsingh: not played with koji internals
15:21:07 <kbsingh> number80: good point, we should get this question into the buildsys meeting on Monday
15:21:15 <mengxd_> i think many openstack non-arch rpms can be shared across archs
15:21:39 <kbsingh> mengxd_: true.
15:21:52 <hrw> the problem for other archs can be the lack of an official EPEL7 for them.
15:21:53 <mengxd_> the diff is the dependencies part
15:22:27 <mengxd_> such as python runtime libraries, which is not equally supported on different archs
15:22:34 <number80> hrw: we rebuilt the missing EPEL bits in the cloud7-openstack-common-el7-build tag
15:22:42 <hrw> number80: ok.
15:22:44 <kbsingh> hrw: we just build those locally in the cbs.centos.org koji - that's where we'll need to figure out how to do the architecture-specific things
15:23:00 <number80> hrw: the issue is that we don't control updates in EPEL, and we have had broken or incompatible deps
15:23:02 <kbsingh> we dont need a solution right now, but i think we should make sure that some of us are in the buildsys meeting on monday to work it out there
15:23:20 <mengxd_> sounds good
15:23:25 <number80> kbsingh: setting a reminder, could you remind me of the time?
15:23:34 <hrw> kbsingh: 'koji build --arch-override' probably
15:24:03 <kbsingh> #info buildsys meeting is at 14:00 UTC on Mondays
15:24:22 <number80> ack thank you
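hrw's `koji build --arch-override` suggestion can be sketched with the koji Python client. Note that koji only honours arch-override for scratch builds, so this would let a new arch test a single-arch rebuild rather than produce tagged catch-up builds. The hub URL matches cbs.centos.org from the discussion; the source URL and build target below are placeholders, not agreed values.

```python
# Sketch: a scratch build restricted to one architecture, as a way for a
# new arch (aarch64, ppc64le) to try a "catch up" rebuild of a package.
CBS_HUB = "https://cbs.centos.org/kojihub"  # CBS koji hub from the meeting


def scratch_build_opts(arches):
    """koji build opts limiting a scratch build to the given arch list."""
    # arch_override is only honoured by koji when scratch=True
    return {"scratch": True, "arch_override": " ".join(arches)}


if __name__ == "__main__":
    import koji  # the koji client library used against cbs.centos.org

    session = koji.ClientSession(CBS_HUB)
    # session.ssl_login(...) would be required for a real submission
    task_id = session.build(
        "some-package-1.0-1.el7.src.rpm",   # placeholder source
        "cloud7-openstack-common-el7",      # placeholder build target
        opts=scratch_build_opts(["aarch64"]),
    )
    print("submitted scratch build, task", task_id)
```

Whether this answers the "catch up without a mass rebuild" question is exactly what the Monday buildsys meeting would need to settle.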
15:24:46 <number80> then, next topic
15:24:54 <number80> #topic OpenNebula Update
15:24:58 <number80> jmelis, jfontan ?
15:26:54 <number80> next topic,then
15:26:58 <number80> #topic NFV
15:27:14 <number80> I don't see dneary, any news from that side?
15:28:05 <number80> ok, let's move to the CI Cloud topic
15:28:23 <number80> #topic CentOS CI RDO Cloud
15:28:28 <dmsimard> o/
15:28:34 <number80> the stage is yours :)
15:29:10 <kbsingh> dfarrell07: did you have any update re: opendaylight ?
15:29:11 <dmsimard> kbsingh and I have been trying to make use of a generous amount of hardware by turning it into an OpenStack cloud that ci.centos.org users can leverage for different workloads
15:29:23 <kbsingh> dfarrell07's got some good progress on having public visible builds for upstream releases
15:29:59 * kbsingh notes change in subject
15:30:15 <dmsimard> I have successfully deployed a (simple on purpose) multi-node OpenStack deployment, the work of which can be seen here https://github.com/dmsimard/centos-cloud
15:30:21 <number80> ah sorry
15:30:24 <number80> #undo
15:30:24 <centbot> Removing item from minutes: <MeetBot.items.Topic object at 0x4a99d50>
15:30:37 <number80> dmsimard: sorry, let's have kbsingh finish for NFV update
15:30:41 <dmsimard> aye
15:30:42 <kbsingh> I'm done :)
15:31:04 <number80> #info dfarrell07 is having good progress on publicly visible builds for upstream releases
15:31:05 <number80> thanks
15:31:07 <kbsingh> will ask dfarrell07 to email in something
15:31:14 <number80> *nods*
15:31:18 <number80> #topic CentOS CI RDO Cloud
15:31:31 <number80> dmsimard: you can continue and sorry for the disturbance :)
15:31:40 <dmsimard> it's okay
15:32:24 <dmsimard> so basically this new cloud deployment is already tested on top of the official CentOS OpenStack Mitaka release, which I think is pretty cool.
15:32:51 <dmsimard> I did have some questions about how we plan to expose this cloud for workloads
15:32:59 <dmsimard> To internal (and if we want to) external tenants
15:33:02 <trown> dmsimard: is the architecture in that repo a starting point, or the intended finishing point?
15:33:40 <dmsimard> trown: I believe we want to add ceph and cinder but that was it -- the main objective being an intermediary point with low operational cost until the mythical, legendary RDO cloud comes along
15:34:01 <kbsingh> right
15:34:20 <trown> hmm, maybe heat too? pretty please :)
15:34:38 <dfarrell07> kbsingh: hey, sorry I'm in another meeting. We're building our three most recent ODL releases via Packer into Vagrant base boxes and containers. We're also working on consuming those in tutorials. I also need to dig into the mirror tips you sent me (not familiar at all with that space)
15:34:44 <kbsingh> as a baseline we just want cloud workloads for ci.c.o jobs - using as basic a setup as possible, no OVS for example; it's a flat linuxbridge setup without a lot of other backing bits
15:35:10 <number80> #info new CentOS CI RDO Cloud is based on Mitaka GA
15:35:34 <kbsingh> trown: i think if you were to send a PR against that repo, we can add it in :)
15:35:37 <dmsimard> trown: what do you need heat for?
15:36:02 <number80> 3o inception
15:36:02 <trown> k, I will checkout the repo, and we can discuss in PR :)
15:36:12 <trown> ya OVB definitely needs heat
15:36:25 <dmsimard> ok, OVB wasn't in the MVP but we can discuss it :P
15:36:59 <trown> or as I like to call it "the project formerly known as quintupleo"
15:37:02 <dmsimard> I will eventually move that repository somewhere more fitting, I started it there while it was in WIP/POC mode
15:37:16 <kbsingh> for now, i think its fine where it is
15:37:28 <dmsimard> kbsingh: but back to how we will consume that cloud
15:37:31 <kbsingh> what we should do is setup a wiki page under wiki.centos.org/QaWiki/CI
15:37:35 <kbsingh> and explain some of the bits
15:38:08 <dmsimard> kbsingh: I was thinking one tenant per API key, with a network allocation pool (/28? /27?)
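dmsimard's one-tenant-per-API-key idea with a small per-tenant allocation pool could look roughly like this with openstacksdk (a newer client than what existed at the time, shown purely as a sketch; the cloud name, network id, and the example /28 range are all hypothetical):

```python
# Sketch: carve a small allocation pool for one tenant's network.
import ipaddress


def pool_bounds(cidr):
    """First and last usable addresses of the range handed to one tenant."""
    hosts = list(ipaddress.ip_network(cidr).hosts())
    return str(hosts[0]), str(hosts[-1])


if __name__ == "__main__":
    import openstack  # openstacksdk

    conn = openstack.connect(cloud="centos-ci")  # clouds.yaml entry (assumed)
    cidr = "172.19.3.0/28"  # hypothetical per-tenant range
    start, end = pool_bounds(cidr)
    conn.network.create_subnet(
        network_id="<tenant-network-id>",  # placeholder
        ip_version=4,
        cidr=cidr,
        allocation_pools=[{"start": start, "end": end}],
    )
```

A /28 gives 14 usable addresses per tenant, a /27 gives 30; which one fits depends on the capacity question raised later in the meeting.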
15:38:20 <kbsingh> dmsimard: one thing we definitely want is for duffy to be able to do its thing, ie - req a machine, inject a key, hand it over
15:38:44 <dmsimard> you want to abstract openstack into duffy ?
15:38:52 <dmsimard> that's going to be a lot of work, you'd need to interface each and every feature
15:38:52 <kbsingh> dmsimard: beyond that, I think we're ok to hand out tenant IDs for folks who know what they are doing (e.g. the cloud sig, maybe the paas sig, some atomic sig folks)
15:39:18 <kbsingh> dmsimard: no - just the basic thing for duffy. Pre allocated size etc, request a node, get a node.
15:39:25 <dmsimard> ok
15:39:46 <kbsingh> anyone who needs any level of 'feature' like different instance size etc, can just get their own access and DIY
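The minimal "request a node, get a node" duffy flow kbsingh describes might look like this from a consumer's side. The endpoint and query parameters mirror duffy's bare-metal API as commonly documented for ci.centos.org, but treat the URL and field names here as assumptions:

```python
# Sketch: the basic duffy-style request a CI job would make, pointed at
# whatever duffy grows to front the new OpenStack cloud.
import json
import urllib.parse
import urllib.request

DUFFY_BASE = "https://admin.ci.centos.org:8080"  # duffy endpoint (assumed)


def node_request_url(api_key, ver="7", arch="x86_64", count=1):
    """Build the duffy-style 'give me a node' request URL."""
    query = urllib.parse.urlencode(
        {"key": api_key, "ver": ver, "arch": arch, "count": count}
    )
    return f"{DUFFY_BASE}/Node/get?{query}"


if __name__ == "__main__":
    with urllib.request.urlopen(node_request_url("my-api-key")) as resp:
        # expected: hostnames plus a session id, with an ssh key pre-injected
        print(json.load(resp))
```

The point of keeping duffy this thin is exactly what kbsingh says next: anyone needing real OpenStack features would get their own tenant instead.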
15:40:02 <dmsimard> How about public access ? There's horizon baked in but it's in the private network so not much use. Do we want public API access ?
15:40:11 <kbsingh> ideally not
15:40:17 <kbsingh> can you see a use for it ?
15:40:30 <kbsingh> ( btw, we did not intend to have a timeout or a reaper for VMs there )
15:40:52 <dmsimard> The new rpm factory workflow for RDO uses nodepool which can use any openstack cloud
15:41:13 <dmsimard> I don't know how much capacity we have there but at first glance we'll be throwing a LOT of stuff at it and might be resource constrained
15:41:25 <dmsimard> nodepool can be configured to, say, consume no more than 25 VMs at any given time
15:41:51 <kbsingh> let's assume we have no floating IPs that are really public
15:42:03 <kbsingh> so horizon would need to sit behind a proxy / lb
15:42:21 <kbsingh> that bit would be easy; how then would someone actually get access to their instances?
15:42:32 <kbsingh> we might need glance and a few other bits as well
15:42:40 <dmsimard> ah, yeah, true -- we'd need a public subnet
15:42:47 <dmsimard> so I guess that's out of the question -- at least for now
15:43:19 <kbsingh> the only way I can think of is if we assume that it's a user, and we set them up via the jump host, and expect suitable ssh configs etc.
15:43:23 <kbsingh> might get messy
15:43:30 <dmsimard> kbsingh: re: timeout/reaper, it's always a requirement in an environment like that.. jobs can and will leak
15:43:57 <kbsingh> dmsimard: ack, noted.
15:44:06 <kbsingh> should be simpler to do than with baremetal anyway
15:44:19 <dmsimard> we already have a script that does something like that
15:44:26 <dmsimard> for other openstack clouds we run CI on
15:44:36 <dmsimard> I can find it, would just need to be tweaked a bit
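The timeout/reaper dmsimard says is always required could be sketched like this with openstacksdk. The cloud name, the six-hour cutoff, and the all-projects sweep are all assumptions; the existing script dmsimard mentions is not shown here and may work differently:

```python
# Sketch: reap leaked CI instances older than a cutoff.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=6)  # assumed timeout, to be tuned


def is_expired(created_at, now, max_age=MAX_AGE):
    """created_at is the ISO-8601 string nova reports, e.g. '2016-04-14T15:01:07Z'."""
    created = datetime.strptime(created_at, "%Y-%m-%dT%H:%M:%SZ").replace(
        tzinfo=timezone.utc
    )
    return now - created > max_age


if __name__ == "__main__":
    import openstack  # openstacksdk, talking to the Mitaka cloud

    conn = openstack.connect(cloud="centos-ci")  # clouds.yaml entry (assumed)
    now = datetime.now(timezone.utc)
    for server in conn.compute.servers(all_projects=True):
        if is_expired(server.created_at, now):
            conn.compute.delete_server(server)  # reap the leaked VM
```

As kbsingh notes, this is far simpler against an OpenStack API than it would be against bare metal.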
15:45:15 <dmsimard> Sorry but I have to step out, feel free to continue without me -- I'll catch up later
15:45:53 <rbowen> We're almost at the end of the agenda, right?
15:46:02 <kbsingh> i think next step is to get the deployment up
15:46:06 <kbsingh> and have a few people play with it a bit
15:46:23 <kbsingh> I had planned on kicking off the first step (~10 machines) today overnight
15:46:33 <number80> rbowen yes
15:46:42 <kbsingh> will feedback to the list ( ci-users@centos.org : https://lists.centos.org/ ) with details
15:46:52 <kbsingh> hmm, I should have #info'd some of these
15:47:17 <kbsingh> dmsimard thanks for your time and help on this
15:47:23 <rbowen> Would be good to go through the transcript afterwards and write a summary. This has been a very dense meeting.
15:47:33 <hrw> +1 for that
15:47:54 <hrw> I am new here and trying to learn a lot of things in a short time ;D
15:48:05 <mengxd_> +1 here
15:49:17 <mengxd_> I hope I can learn from the team and contribute as soon as possible
15:49:42 <number80> +1
15:50:12 <number80> so should we close the meeting and let our Atomic friends prepare theirs?
15:50:45 <number80> Thank you for joining
15:50:51 <number80> #endmeeting