16:02:38 #startmeeting CentOS Atomic SIG
16:02:38 Meeting started Thu Feb 25 16:02:38 2016 UTC. The chair is jbrooks. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:38 Useful Commands: #action #agreed #help #info #idea #link #topic.
16:02:50 #chair kbsingh
16:02:50 Current chairs: jbrooks kbsingh
16:02:51 kushal: hmm, let me verify
16:03:20 #topic rebuild status
16:04:08 I had an action last week to do some test builds once we got our pkgs squared away, we did, and I did, and here are those test builds: https://ci.centos.org/artifacts/sig-atomic/downstream/images/
16:04:16 kbsingh, Where are we on the next step?
16:04:42 we have the builds done, ready to release - the AMIs are the last piece pending there
16:05:06 we had an infra outage that's taken me out a bit, or we'd have done these last night
16:05:26 Ah, OK, I'll prep the announcements, you think we'll have them out today?
16:05:28 the AMIs
16:05:36 yup
16:05:49 hey all
16:05:57 #chair dustymabe
16:05:57 Current chairs: dustymabe jbrooks kbsingh
16:06:05 Hey Dusty
16:06:08 How do we test these .box files?
16:06:18 kushal, for atomic?
16:06:23 jberkus, Yes
16:06:34 jbrooks, Yes
16:06:37 I run http://www.projectatomic.io/blog/2015/09/clustering-atomic-hosts-with-kubernetes-ansible-and-vagrant/
16:06:38 in vagrant
16:06:55 So vagrant to kube cluster to running atomic app
16:07:19 kbsingh, jbrooks, Okay
16:07:25 #action jbrooks to prep announcement, to be sent out once images are ready
16:07:35 #action kbsingh to prep centos atomic amis
16:07:38 I am just thinking if we can reuse the tests we wrote in Fedora land for the atomic images
16:07:41 kicking that job now
16:07:56 kushal: that would be good
16:08:13 kushal, Yeah, the next thing I'm going to do after release is put my test into a script, I do want to align w/ fedora on testing
16:08:16 jbrooks, and also add the kube tests
16:08:56 kushal, Are these the same tests from the atomic test day?
16:09:04 #chair kushal
16:09:04 Current chairs: dustymabe jbrooks kbsingh kushal
16:09:12 jbrooks, plus many more, from different regression tests etc
16:09:21 Can you paste a link?
16:09:33 jbrooks, yes, looking
16:09:41 jbrooks, A few new tests are yet to be activated
16:10:13 * jbrooks keeps meaning to work on tests and then getting distracted by release stuff
16:10:34 https://apps.fedoraproject.org/autocloud/jobs/1643/output
16:10:36 i am trying to fix that with more automation, but just need that breathing space to make it happen
16:10:44 ( that = release )
16:10:48 * lalatenduM is here
16:10:55 #chair lalatenduM
16:10:55 Current chairs: dustymabe jbrooks kbsingh kushal lalatenduM
16:11:17 OK, anything else on the rebuild?
16:11:23 btw, there were some vagrant tweaks that lalatenduM and theta3 were working on for the distro boxes - do we want those for atomic as well?
16:11:30 at least the timer one looked relevant
16:11:39 kbsingh: yes
16:11:52 maybe jbrooks will take a look
16:11:55 kbsingh, Right, do we have links to that?
16:12:08 I'll take a look at them
16:12:10 * dharmit present, sir. ;-)
16:12:21 jbrooks: https://github.com/CentOS/sig-cloud-instance-build/pull/40
16:12:23 I am downloading https://ci.centos.org/artifacts/sig-atomic/downstream/images/centos-atomic-host-7-vagrant-libvirt.box and will then make sure that we have all tests running on this properly.
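By way of illustration, a minimal local smoke test along the lines kushal and jbrooks describe might look like the following. The box name "centos-atomic-test" is arbitrary, the URL is the CI artifact linked above, and the vagrant-libvirt plugin is assumed to be installed:

    # Fetch the candidate box from CI under an illustrative local name.
    vagrant box add centos-atomic-test \
        https://ci.centos.org/artifacts/sig-atomic/downstream/images/centos-atomic-host-7-vagrant-libvirt.box

    # Boot it with the libvirt provider and run a basic sanity check
    # against the ostree deployment inside the guest.
    vagrant init centos-atomic-test
    vagrant up --provider=libvirt
    vagrant ssh -c "sudo atomic host status"

From there, the kubernetes-ansible-vagrant setup in the blog post linked above takes the boxes the rest of the way to a running kube cluster.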
16:12:27 #chair dharmit
16:12:27 Current chairs: dharmit dustymabe jbrooks kbsingh kushal lalatenduM
16:12:34 jbrooks: and there is #39
16:12:46 dharmit: good to see you :)
16:13:10 #action jbrooks to look at PR 39 and 40 in https://github.com/CentOS/sig-cloud-instance-build
16:13:14 kushal: minutes under https://www.centos.org/minutes/2016/february/
16:13:30 https://www.centos.org/minutes/2016/february/centos-devel.2016-02-25-15.11.html
16:13:30 Arrfab, Thanks for verifying :)
16:13:42 lalatenduM: Likewise. Should happen for real next week when I visit BLR. :)
16:14:06 jbrooks: AMIs building. note that they will be published and promoted from a different account this time
16:14:20 kbsingh, right, you gave me a heads up about that
16:14:46 Ok, next topic
16:14:50 #topic ADB
16:14:57 lalatenduM, Any updates here?
16:15:25 jbrooks: I am going to build sshfs for ADB
16:15:37 also openshift2nulecule
16:15:38 For the sync function?
16:15:56 Ah, do you have a link for that second bit? I haven't heard of it
16:16:07 jbrooks: https://github.com/projectatomic/openshift2nulecule
16:16:29 it has dependencies on some python packages
16:17:09 cool
16:17:18 Do we have those deps?
16:17:40 jbrooks: not sure, we might have to get them from epel
16:18:32 Ok, anything else on the ADB?
16:18:48 On the CI for ADB front, I just got my CentOS CI account this week and am playing around with it. At the moment I'm looking for where to add a GitHub token so that a commit to a repo can trigger a build. I'll sync up with bstinson for my needs. Looking forward to meeting the guys for real next week. :) Specifically I am working on this issue - https://github.com/projectatomic/adb-atomic-developer-bundle/issues/195
16:18:52 kbsingh: I know you guys have discussed epel - any progress on how we don't have to rebuild stuff from epel?
16:19:09 discussed epel at Fosdem*
16:20:00 dharmit: cool, let's sync between you and me tomorrow on this
16:20:07 lalatenduM: ack.
16:20:19 jbrooks: that's it I guess
16:20:34 OK
16:20:39 #topic Open Floor
16:21:05 Any open items? Do we have fcami__?
16:22:12 He was looking at ceph tests for atomic -- I'll follow up w/ him later
16:23:13 jbrooks: thanks for all of your hard work!
16:23:25 lalatenduM: no progress - epel content will need to be rebuilt in cbs to use
16:23:32 dustymabe, :)
16:23:46 kbsingh: ok
16:23:56 lalatenduM: there is a conversation ongoing on the epel lists around where / what they model on - and it might be good to get a voice there for cbs collaboration
16:24:04 kbsingh, Could there perhaps be auto-rebuild magic for epel-cbs?
16:24:42 jbrooks: the aim is to not have that at all, we could potentially fork it - and then offer up all the content for tags
16:24:52 but it would be nicer if epel came along on their own really
16:25:07 The idea is for them to choose to move to the CBS?
16:25:10 kbsingh: will do
16:25:20 dharmit: catch me for the vagrant testing, i've done some work on that in the past - and did a few more things this last week
16:25:58 dharmit: there is a lot of potential for deps there ( in a good way ) to validate changes, but also to validate payloads impacted by the changes and inherited from other changes ( eg. checking your stuff when upstream vagrant evolves )
16:26:10 kbsingh: is there also a gluster driver for atomic storage?
16:26:15 jbrooks: SIGs need a way to not have an external build break stuff really...
16:27:02 jberkus: in what way?
gluster itself is just a filesystem
16:27:20 jberkus: or do you mean libgfapi(!), for the gluster hosted object
16:27:40 not sure, I don't really know much about gluster operationally
16:27:44 jberkus, the current centos atomic host does include glusterfs-client
16:27:53 jbrooks: ok, thanks
16:28:00 kbsingh: Sure thing. Thanks! :-)
16:28:12 for the block device side of things, ceph is in there - and gluster is present only as the filesystem client i believe
16:28:39 tbh, i am not entirely sure how one might use the gluster-client here though. maybe with overlayfs?
16:28:48 I believe, though, that libgfapi lives in the app that uses it, not in a gluster client app
16:29:03 glusterfs-fuse
16:29:07 I hope we're not using overlayfs
16:29:14 The "native" gluster mount type
16:29:35 I think nfs works better than gluster-fuse
16:29:45 Well, that depends on various things
16:30:20 jbrooks: yes, hence my question - what exactly would be the use case for gluster inside the host
16:30:38 kbsingh, the #1 thing is kube persistent volumes
16:30:43 i can see gluster-backed storage to host the vm backing storage, but that should be largely transparent to the host itself ( entirely )
16:30:44 jbrooks, kbsingh I found one issue with the Fedora tests, they are in Python3 :(
16:30:46 kbsingh: mostly you need it in the containers
16:31:01 I guess I can bypass that.
16:31:09 * kushal goes back to his test system.
16:31:16 or more accurately, gluster-backed volumes for the containers
16:32:04 jbrooks: I am interested to try out the use case, do you have a pointer for me?
16:32:19 jberkus: yeah, so there - the filesystem makes sense.
16:33:04 lalatenduM: expanding on what jberkus said: imagine a CentOS Atomic Host running a wordpress app in a container, with /var/lib/www in the container being a glusterd-hosted filesystem
16:33:29 kbsingh: yeah I was guessing that
16:33:31 so the Atomic Host instance would need the gluster client, so that docker can -v mount it inward
16:33:39 kbsingh: or closer to my heart: Postgres/Maria/Redis running in a container, and DB storage being a gluster volume
16:34:04 kbsingh: jbrooks question, have you guys tried an nfs mount for gluster?
16:34:15 lalatenduM, I haven't
16:34:19 nfs mount? augh
16:34:27 I've used it w/ ovirt
16:34:31 gluster's nfs
16:34:33 the nfs bit might actually be faster for db-like ops ( as long as nfs itself didn't get in the way )
16:34:54 the bad thing about nfs is you're tied to one gluster node
16:34:55 I think this pattern needs a bit of airtime to thrash out
16:34:56 jbrooks: I think gluster nfs works better than the fuse mount
16:35:09 also there was integration with nfs-ganesha
16:35:17 jbrooks: does gluster nfs make better guarantees about writes than standard nfs?
16:35:17 which provided pnfs
16:35:28 provide*
16:35:33 lalatenduM, you need to provide for the load balancing on your own, but I haven't used the ganesha stuff yet
16:35:46 because it might actually be better to use a backing block device and create the filesystem locally, then HA or perf tune that backing block device instead ( and not care about the filesystem )
16:36:12 jberkus, I don't know about that, I know that some say the perf is better
16:36:23 I think it depends on many factors
16:36:25 naturally
16:37:10 jbrooks: well, for reference, in standard Linux NFS write ordering is not guaranteed, which is death for databases
16:37:28 And it's not regular kernel nfs, it's gluster's own nfs implementation
16:37:28 jbrooks: ami in us-east-1 done, email coming shortly
16:37:32 cool
16:37:33 jbrooks: running the test cases once
16:38:07 jbrooks: that's right, it's gluster's own nfs implementation
16:38:29 ok
16:38:48 jberkus, I think I've run postgres on gluster, but with postgres in a VM that's hosted from gluster
16:39:01 I mean, I know I've run and am currently running it that way
16:39:03 jbrooks: yeah, wouldn't recommend that
16:39:20 some day in My Ample Spare Time I need to performance test PG on Gluster
16:39:49 Works fine for where I'm using it -- I've actually been using it that way w/ ovirt for the past few years -- it's probably slower than it could be
16:40:00 I'm going to close the meeting
16:40:01 yeah, I'm thinking of performance
16:40:03 #endmeeting
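For reference, a minimal sketch of the pattern kbsingh describes above. All names here are illustrative: a gluster volume "gvol0" served from "gluster1.example.com", mounted on the Atomic Host via the fuse client (the glusterfs-fuse package provides the "glusterfs" mount type) and bind-mounted into a wordpress container with -v:

    # On the Atomic Host: mount the gluster volume with the fuse client.
    mkdir -p /mnt/gvol0
    mount -t glusterfs gluster1.example.com:/gvol0 /mnt/gvol0

    # The alternative discussed above is gluster's own built-in NFS
    # server, which speaks NFSv3 and ties the mount to one node:
    #   mount -t nfs -o vers=3 gluster1.example.com:/gvol0 /mnt/gvol0

    # Bind-mount the gluster-backed path into the container, matching
    # kbsingh's /var/lib/www example (path is illustrative):
    docker run -d -v /mnt/gvol0:/var/lib/www wordpress

The kube persistent volume case jbrooks raises works the same way underneath: the kubelet on the host performs the glusterfs mount for the pod, which is why the host image needs the gluster client bits installed.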