#openstack-placement - 2019-04-18
18:35:05 efried but probably better to fix it in the theme.
18:36:36 mriedem i'm harassing docs people in -dev
18:46:25 efried interestingly, when I build it locally, the table has a border.
18:47:29 melwitt tables without borders
18:47:41 efried aha
18:47:50 efried sphinx version makes the difference.
18:48:07 efried at 1.8.4, borders are there. At 2.0.1, gone.
19:00:17 mriedem prometheanfire: grenade-postgresql in stable/stein (upgrade from rocky to extracted placement) was ok https://review.openstack.org/#/c/653587/
19:00:29 mriedem no errors on the duplicate entry in the placement api when syncing traits,
19:00:38 mriedem and nova-compute on the stein side was able to report the new traits from stein
19:00:56 mriedem grenade uses postgresql-migrate-db.sh so i'm guessing that's where you went wrong - by not using it
19:10:32 mriedem i'll post a docs patch to hopefully clarify
19:14:44 openstackgerrit Matt Riedemann proposed openstack/placement master: Remind people to use postgresql-migrate-db.sh when migrating data https://review.openstack.org/653833
19:31:23 prometheanfire mriedem: ya, saw it
19:32:21 prometheanfire mriedem: I need to look through the tables and see if the other auto-increment stuff needs fixing (probably does)
#openstack-placement - 2019-04-19
00:24:44 prometheanfire mriedem: for the tables with id sequences
00:24:46 prometheanfire SELECT setval('users_id_seq', (select max(id) + 1 from users where id is not NULL), FALSE);
00:25:13 prometheanfire max id will probably still work without the is not null, but paranoid
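The `setval` statement above only fixes the `users` sequence; the same fix-up has to be repeated for every table with an auto-increment id. A minimal sketch of generating those statements, assuming the table list (which is illustrative here, not a complete placement schema listing):

```python
# Sketch: generate the same sequence fix-up prometheanfire posted above
# for each table with an auto-increment id. The table names below are
# illustrative; check the actual placement schema before running them.
TABLES = ["resource_providers", "allocations", "traits", "resource_classes"]

def setval_statements(tables):
    # setval(..., FALSE) means the next nextval() returns this value,
    # i.e. max(id) + 1, so new rows don't collide with migrated ones.
    stmt = ("SELECT setval('{t}_id_seq', "
            "(SELECT max(id) + 1 FROM {t} WHERE id IS NOT NULL), FALSE);")
    return [stmt.format(t=t) for t in tables]

for s in setval_statements(TABLES):
    print(s)
```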
00:47:54 openstackgerrit Merged openstack/placement master: Add links to storyboard worklists to contributing.rst https://review.openstack.org/653719
13:30:12 edleafe So quiet today...
13:35:14 fried_rice apparently some rabbit hid some eggs
13:42:08 edleafe All this fertility imagery is disturbing
18:58:38 mriedem prometheanfire: is extracted placement packaged for gentoo now? i'm assuming yes.
19:11:31 mnaser mriedem: regarding placement extract stuff and tripleo, I saw Emilien this week at a meetup and asked him, and they haven't made progress on that afaik, I think the patch to use extracted placement was in merge conflict
19:12:27 mriedem hmm, i thought they were deploying fresh installs with extracted placement
19:12:49 mriedem lyarwood is out today and gerrit is down so i can't poke
19:22:02 prometheanfire mriedem: it is
19:22:24 mriedem prometheanfire: is there a link to a repo i can put in an etherpad?
19:23:22 mriedem packages.gentoo.org is really slow for me
19:24:06 prometheanfire sure
19:24:36 prometheanfire https://packages.gentoo.org/packages/sys-cluster/placement
19:24:48 prometheanfire not stable yet (stein isn't yet stable)
19:24:54 prometheanfire or marked as such
19:25:34 mriedem cool, thanks
19:26:06 prometheanfire yarp
19:26:50 prometheanfire https://github.com/gentoo/gentoo/blob/master/sys-cluster/placement/placement-1.0.0.ebuild is the actual package
#openstack-placement - 2019-04-20
04:56:38 prometheanfire https://github.com/gentoo/gentoo/blob/master/sys-cluster/placement/placement-1.0.0-r1.ebuild now
#openstack-placement - 2019-04-22
13:30:23 mriedem i'm assuming the placement meeting for today is cancelled yeah?
13:30:27 mriedem chris is out right?
13:42:25 efried mriedem: Chris is out, yes. Do we have a need for a meeting?
13:43:02 mriedem not that i know of
13:45:05 edleafe I was just about to ask the same thing
13:45:44 edleafe If you think there is anything that we could move forward on with a meeting, I'll be happy to run it. Otherwise, I'll be happy to skip it. :)
13:58:47 edleafe Official cancellation of the Placement meeting in 60 seconds unless someone objects...
14:00:00 edleafe OK, no Placement meeting today!
23:12:05 openstackgerrit Ghanshyam Mann proposed openstack/placement master: Dropping the py35 testing https://review.opendev.org/654650
23:13:32 openstackgerrit Ghanshyam Mann proposed openstack/os-traits master: Dropping the py35 testing https://review.opendev.org/654651
23:15:18 openstackgerrit Ghanshyam Mann proposed openstack/osc-placement master: Dropping the py35 testing https://review.opendev.org/654652
23:16:40 openstackgerrit Ghanshyam Mann proposed openstack/os-resource-classes master: Dropping the py35 testing https://review.opendev.org/654653
#openstack-placement - 2019-04-23
06:15:33 openstackgerrit Surya Seetharaman proposed openstack/placement master: [WIP] Spec: Support Consumer Types https://review.opendev.org/654799
08:14:46 tssurya cdent: good morning
08:15:18 tssurya let me know if you have some time, I had a couple of questions for the consumer types spec
10:35:01 sean-k-mooney o/
10:35:17 sean-k-mooney edleafe: efried are either of ye about at the moment?
10:35:28 sean-k-mooney quick question for ye.
10:36:06 sean-k-mooney i was talking to bauzas earlier today and a question came up which bauzas is going to look into later
10:36:49 sean-k-mooney when we have nested resource providers, will placement return multiple allocation candidates for the same host?
10:37:30 sean-k-mooney e.g. if i request resources from both the root rp and a resource that can be provided by either of two child rps, will i get 2 allocation candidates
10:38:24 sean-k-mooney 1 with cpus/ram from the root and the other resource from child rp1, and a second with the other resources from child rp2?
10:39:32 sean-k-mooney this is important for several of the numa/bandwidth related features, but it's also important for the sharing resource providers use case.
10:40:50 sean-k-mooney in the sharing case it's not really about child resource providers, but more "this host is a member of 2 aggregates for different shared disk providers" - do i get an allocation candidate for both possible configurations or just one of them?
10:41:15 sean-k-mooney anyway, if anyone knows the answer to the above ^ let me know
10:47:15 gibi sean-k-mooney: placement returns multiple candidates per compute if possible. See for example the gabbit case https://github.com/openstack/placement/blob/931a9e124251a0322550ff016ae1ad080cd472f3/placement/tests/functional/gabbits/allocation-candidates.yaml#L602
10:48:15 sean-k-mooney gibi: ok, that is good to know. and is there a way to limit the candidates per host without limiting the total set?
10:48:30 gibi sean-k-mooney: I don't think that is possible
10:49:14 sean-k-mooney ok that might be useful as things get more nested
10:49:36 sean-k-mooney if you have a low limit set like cern has then it could be problematic
10:50:48 sean-k-mooney they limit to like 20 or 50 allocation candidates, but if they get 5 allocations per host then that's only 10 hosts instead of 50
10:50:57 sean-k-mooney or 4 i guess
10:52:31 gibi sean-k-mooney: they will know how nested their deployment will be, so they can adjust the limit accordingly
10:52:48 sean-k-mooney not really
10:52:57 sean-k-mooney they set the limit for performance reasons
10:53:37 sean-k-mooney so they can't increase it, but we could ask placement to only return 1 candidate per host, or 3, instead of all possible combinations
10:53:57 gibi OK, I see
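Placement itself offers no per-host limit (only the global `limit` query parameter), so the effect sean-k-mooney is asking for can only be approximated client-side today. A minimal sketch, assuming candidates have already been paired with their root/host provider (e.g. derived from `provider_summaries`; the function name and data shape are hypothetical):

```python
from collections import defaultdict

# Sketch of a client-side workaround: keep at most max_per_host
# candidates for any one host, so a small global limit is spread
# across more distinct hosts instead of being spent on many
# candidates for the same host.
def cap_per_host(candidates, max_per_host):
    seen = defaultdict(int)   # host name -> candidates kept so far
    kept = []
    for host, cand in candidates:
        if seen[host] < max_per_host:
            seen[host] += 1
            kept.append((host, cand))
    return kept
```

With a limit of 20 and roughly 5 candidates per host, capping at 1 per host turns 4 distinct hosts into 20, which is the scenario discussed above.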
10:54:14 sean-k-mooney just taking the bandwidth case: if i install 2 4x10G nics in a host, that gives me 8 pfs
10:54:40 gibi alternatively we can change placement to return one candidate for each possible host before returning the second candidate for the first host (e.g. order the candidates differently)
10:54:40 sean-k-mooney assuming all were on the same physnet and had capacity left, that would result in 8 allocation candidates
10:55:04 sean-k-mooney gibi: perhaps, though that has other issues too
10:55:19 sean-k-mooney namely numa affinity
10:55:46 sean-k-mooney i think this is a larger topic that is worth discussing with more people
10:55:49 gibi OK, so probably one ordering will not be good for all cases
10:56:02 gibi sean-k-mooney: agree about discussing it with others
10:56:28 sean-k-mooney ya, i think this is just another pair of axes that we need to consider in the wider "richer request syntax" discussion
10:57:40 gibi sean-k-mooney: as an extra problem, nova only considers the first candidate per host
10:57:53 sean-k-mooney gibi: yes today.
10:58:32 sean-k-mooney if we want to do weighing/filtering on allocation candidates instead of hosts in the future, that would change
10:58:55 gibi sean-k-mooney: I agree
10:59:51 sean-k-mooney anyway, i realised while talking to bauzas that we had assumed there would be multiple allocation candidates for the same host, but i had never checked
10:59:59 sean-k-mooney thanks for pointing out that test
11:00:09 sean-k-mooney i'm not sure if it's also the case for nested
11:00:33 sean-k-mooney it's definitely asserting the behavior for local and sharing providers
11:00:45 gibi sean-k-mooney: there are nested cases in the same file
11:00:46 sean-k-mooney so in theory nova should already be handling it
11:01:14 gibi sean-k-mooney: nova handles multiple candidates per host by taking the first candidate
11:02:03 sean-k-mooney ah yes like these https://github.com/openstack/placement/blob/931a9e124251a0322550ff016ae1ad080cd472f3/placement/tests/functional/gabbits/allocation-candidates.yaml#L538-L557
11:03:11 gibi sean-k-mooney: yes
11:03:54 sean-k-mooney gibi: ya taking the first is fine until we start considering numa
11:04:36 gibi sean-k-mooney: yes, for numa affinity the nova scheduler should start understanding allocation candidates
11:05:00 sean-k-mooney actually, we could have issues with taking the first one for sharing providers also, right?
11:05:23 sean-k-mooney actually maybe not
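The alternative ordering gibi floated above (return one candidate for each host before any host's second candidate) amounts to a round-robin interleave across hosts. A minimal sketch of that idea, purely illustrative; this is not what placement actually does today, and the (host, candidate) pairing is assumed to come from the caller:

```python
# Sketch of the suggested ordering: emit one candidate per host before
# emitting any host's second candidate, so a truncated list still spans
# as many distinct hosts as possible. Hypothetical, not current
# placement behavior.
def interleave_by_host(candidates):
    by_host = {}          # insertion-ordered in Python 3.7+
    for host, cand in candidates:
        by_host.setdefault(host, []).append(cand)
    out = []
    round_ = 0
    while True:
        emitted = False
        for host, cands in by_host.items():
            if round_ < len(cands):
                out.append((host, cands[round_]))
                emitted = True
        if not emitted:
            return out
        round_ += 1
```

Under this ordering, a scheduler that only looks at the first N results (or nova's "take the first candidate per host" behavior) sees the widest possible set of hosts first.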
