Posted | Nick | Remark | |
---|---|---|---|
#openstack-nova - 2018-10-03 | |||
17:06:47 | mriedem | yup, bug 1558880 | |
17:07:18 | melwitt | sean-k-mooney: ok, I don't know anything about that. if that spec scope is actually incomplete, then we need to decide how we deal with it. open another spec for this cycle to finish it or treat them as bugs | |
17:08:20 | sean-k-mooney | melwitt: probably repurposing the spec is the best way; just add the flavor extra specs and image metadata values that were originally proposed | |
17:08:55 | sean-k-mooney | melwitt: unless you think we can backport them in which case it could be a bug | |
17:09:47 | sean-k-mooney | backporting would be the only reason to make it a bug in my mind, but it's also adding new functionality, e.g. turning off numa affinity for pci devices | |
17:15:26 | sean-k-mooney | melwitt: i or stephen will repropose the spec | |
17:15:56 | sean-k-mooney | melwitt: stephenfin is heading home so i will likely do it later today | |
17:17:15 | melwitt | sean-k-mooney: I'd run the idea by mriedem too, in case he has another opinion on how to handle this | |
17:17:34 | openstackgerrit | Sylvain Bauza proposed openstack/nova master: libvirt: implement reshaper for vgpu https://review.openstack.org/599208 | |
17:17:54 | sean-k-mooney | melwitt: sure, it just felt a little cheeky to sneak it in as a bug fix :) | |
17:18:56 | bauzas | dansmith: mriedem: I tested the vgpu reshape for allocations too and good news: it works! I just fixed a few things that I discovered when testing ^ | |
17:19:06 | melwitt | sean-k-mooney: yeah, you're probably right. I didn't think much about it | |
17:19:13 | bauzas | now, call it a day | |
17:23:58 | sean-k-mooney | spatel: were you able to use a multi numa node guest to spawn the instance? that functionality definitely works | |
17:24:54 | spatel | Testing it now..should i add "hw:cpu_policy='dedicated'" too for pinning ? | |
17:25:02 | dansmith | bauzas: ack | |
17:25:17 | sean-k-mooney | spatel: yes if you want cpu pinning then add hw:cpu_policy='dedicated' | |
17:25:38 | spatel | doing it.. hold on.. will report back soon | |
17:27:26 | sean-k-mooney | no rush | |
17:35:38 | spatel | sean-k-mooney: i am able to launch two VMs with 10 vCPUs each (i have a 32-core compute node with 16+16 numa) but it looks like it didn't pin the CPUs | |
17:35:41 | spatel | check this out http://paste.openstack.org/show/731420/ | |
17:35:55 | spatel | I can see it pinned CPUs across numa nodes | |
17:37:00 | sean-k-mooney | spatel: can you run virsh dumpxml <instance> | |
17:37:27 | sean-k-mooney | spatel: i think it pinned everything correctly | |
17:37:57 | spatel | http://paste.openstack.org/show/731423/ | |
17:37:58 | sean-k-mooney | spatel: it looks like each | |
17:38:27 | spatel | I thought it should pin all vCPU core with same NUMA node CPU right? | |
17:38:40 | sean-k-mooney | no | |
17:38:44 | spatel | hmmmm? | |
17:39:10 | sean-k-mooney | by setting hw:numa_nodes=2 you will have half the cpus on one numa node and half on the other | |
17:39:23 | sean-k-mooney | memory will also be equally split | |
17:40:13 | sean-k-mooney | provided there is a pci device free on at least one of the 2 numa nodes associated with the vm's vcpus, we will allow the vm to boot | |
17:40:26 | spatel | if i remove hw:numa_nodes=2 then it will pin all vCPUs on the same node, right? | |
17:40:57 | sean-k-mooney | correct, adding hw:cpu_policy=dedicated implicitly adds hw:numa_nodes=1 | |
17:41:25 | spatel | hmm! interesting.. | |
17:41:53 | spatel | using hw:numa_nodes=2 will have some performance issues, right? | |
17:41:53 | sean-k-mooney | by explicitly setting hw:numa_nodes=2 it will allow both numa nodes on the host to be used, but it will also limit the vm to hosts with 2+ numa nodes | |
17:42:25 | sean-k-mooney | spatel: it can if the application in the guest itself does not understand numa affinity | |
17:43:02 | sean-k-mooney | it can also improve performance, as it doubles your memory bandwidth: the vm will now use memory from 2 host numa nodes/memory controllers | |
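The extra specs discussed above are set on a flavor; a minimal sketch of the two configurations, assuming a flavor named `voip.large` (the name is hypothetical):

```shell
# Pin all vCPUs; hw:numa_nodes defaults to 1 for pinned guests,
# so everything lands on a single host NUMA node.
openstack flavor set voip.large --property hw:cpu_policy=dedicated

# Or explicitly spread the guest across two host NUMA nodes
# (vCPUs and memory are split evenly between them).
openstack flavor set voip.large \
  --property hw:cpu_policy=dedicated \
  --property hw:numa_nodes=2
```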
17:43:26 | spatel | I think time to run some test... | |
17:43:38 | spatel | We are a media company using a VoIP-based application | |
17:43:57 | sean-k-mooney | testing is always a good idea :) | |
17:44:19 | spatel | First i built openstack without SR-IOV and found performance was horrible (PPS rate was only 50k; after that it started dropping packets) | |
17:44:20 | sean-k-mooney | as a community we have done a lot of work to improve numa affinity over the years | |
17:45:14 | spatel | I have just started learning numa stuff so i am new but it looks interesting.. | |
17:45:18 | sean-k-mooney | spatel: the strict pci numa affinity was added for telco use cases where they could not tolerate cross-numa pci/sriov | |
17:45:56 | sean-k-mooney | spatel: it certainly is .... interesting. it's also a pain in the ass but gives better performance when you get it right | |
17:46:11 | spatel | I have some legacy hardware and i have to stick to them | |
17:46:27 | spatel | on the other side i am planning to test DPDK to see if it's better | |
17:46:31 | sean-k-mooney | numa is not going away, in fact it's becoming more common | |
17:46:44 | sean-k-mooney | dpdk is much better than kernel ovs | |
17:46:51 | sean-k-mooney | but it's more complicated too | |
17:47:05 | spatel | but at least it doesn't have a hardware dependency | |
17:47:37 | spatel | I spent thousands of $$$$ to get SR-IOV supported cards | |
17:47:38 | sean-k-mooney | spatel: not in the same way; it requires that the guests use hugepages and that there is a dpdk driver for your nic | |
17:48:19 | spatel | Does it perform like SR-IOV ? | |
17:48:22 | sean-k-mooney | spatel: ya dpdk will be cheaper in that sense but you will have to dedicate 1-2 cores to handle traffic for ovs-dpdk | |
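The hugepage requirement mentioned above is also expressed as a flavor extra spec; a sketch, with the flavor name made up for illustration:

```shell
# Guests attached to ovs-dpdk vhost-user ports must be backed by
# hugepages; hw:mem_page_size=large requests them from the host.
openstack flavor set dpdk.guest --property hw:mem_page_size=large
```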
17:48:50 | sean-k-mooney | spatel: in some cases yes. in general not quite | |
17:49:14 | sean-k-mooney | what data rates / traffic profiles are you targeting? | |
17:49:31 | spatel | currently i am deploying a VoIP application on 1U servers with 32 cores / 32G memory, and i have 1000 servers... | |
17:49:34 | sean-k-mooney | 10G small packets? 40G jumbo frames? a mix? | |
17:50:01 | spatel | my peak in production is a 200 to 230 kpps UDP packet rate | |
17:50:21 | sean-k-mooney | oh, well dpdk can handle that easily | |
17:50:30 | spatel | really??? | |
17:50:46 | spatel | if that is the case then it will be win win solution | |
17:51:09 | sean-k-mooney | ya dpdk was designed to hit 10G line rate with 64-byte packets, which is 14mpps | |
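As a sanity check on that 14mpps figure: every Ethernet frame also carries 20 bytes of on-wire overhead (7-byte preamble, 1-byte start delimiter, 12-byte inter-frame gap) on top of the 64-byte minimum frame, so 10G line rate works out to roughly 14.88 Mpps:

```shell
# 10 Gbit/s divided by the bits per 64-byte frame on the wire:
# 64 bytes of frame + 20 bytes preamble/SFD/IFG = 84 bytes = 672 bits
pps=$((10000000000 / (84 * 8)))
echo "${pps} pps"   # 14880952 pps, i.e. ~14.88 Mpps
```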
17:51:26 | spatel | we have lots of servers in AWS (with sr-iov support) | |
17:51:40 | spatel | that is really cool! | |
17:51:46 | sean-k-mooney | with the right hardware it can hit 32mpps on a single core but in general you will see more like 6mpps | |
17:52:01 | spatel | we are using LinuxBridge + VLAN so i need to upgrade to OVS | |
17:52:18 | sean-k-mooney | it's more or less like this | |
17:52:43 | sean-k-mooney | lb<ovs<sriov+macvtap<ovs-dpdk<sriov direct | |
17:53:08 | spatel | I tried macvtap but that didn't work either | |
17:54:39 | sean-k-mooney | spatel: checkout https://dpdksummit.com/Archive/pdf/2016USA/Day02-Session04-ThomasHerbert-DPDKUSASummit2016.pdf slides 16-19 | |
17:55:21 | spatel | reading.. | |
17:56:04 | sean-k-mooney | spatel: i'm a little biased as i'm one of the people that added ovs-dpdk support to openstack, but for your data rate i think it would work quite well | |
17:56:42 | spatel | I need to find out how to migrate LinuxBridge to OVS | |
17:57:15 | sean-k-mooney | spatel: today cold migrate works. i'm working on fixing live migrate | |
17:57:31 | spatel | cool! | |
17:57:40 | spatel | in SR-IOV i am not able to get that function either | |
17:57:48 | spatel | even bonding isn't supported | |
17:57:55 | sean-k-mooney | live migrate almost works; we just don't update the bridge name correctly. i'm hoping to backport that | |
17:58:27 | spatel | nice! if that works | |
17:58:49 | openstackgerrit | Surya Seetharaman proposed openstack/nova master: Add scatter-gather-single-cell utility https://review.openstack.org/594947 | |
17:58:50 | openstackgerrit | Surya Seetharaman proposed openstack/nova master: Return a minimal construct for nova list when a cell is down https://review.openstack.org/567785 | |
17:58:50 | openstackgerrit | Surya Seetharaman proposed openstack/nova master: Modify get_by_cell_and_project() to get_not_qfd_by_cell_and_project() https://review.openstack.org/607663 | |
17:58:52 | sean-k-mooney | spatel: haha i think i'm working on all your missing features :) https://review.openstack.org/#/c/605116/ | |
17:59:14 | openstackgerrit | Merged openstack/nova stable/pike: nova-manage - fix online_data_migrations counts https://review.openstack.org/605840 | |
17:59:25 | spatel | :) | |
17:59:51 | spatel | i have lots of requirements :) this is just the start | |
18:00:02 | spatel | sean-k-mooney: thanks for help!!! | |
18:00:02 | sean-k-mooney | my main focus this release, at least initially, is live migration hardening, e.g. fixing edge cases like lb->ovs or sriov | |
18:00:25 | nicolasbock | <freenode_sea "nicolasbock: if you are still ar"> Thanks for the tip! | |
18:00:26 | spatel | i didn't know freenode would be so helpful.. the last 2 days i've been chasing google | |
18:01:12 | spatel | what do you use to deploy your openstack? I am using openstack-ansible | |
18:01:14 | sean-k-mooney | spatel: no worries. i'm usually here so feel free to ping me if you have issues | |
18:01:44 | spatel | I am going to spend the next 6 months here :) until my cloud is ready!! | |
18:01:53 | sean-k-mooney | spatel: for development, devstack. i used to use kolla-ansible but recently joined redhat so i probably should suggest OSP | |
18:02:07 | spatel | we spent a million dollars last year in AWS so my boss wants to build our own AWS :) | |
18:02:39 | sean-k-mooney | spatel: that is how a lot of companies end up running openstack clouds, yes | |