Getting Actual RAM Use on KVM Node


SUBMITTED BY: Guest

DATE: Sept. 25, 2019, 8:58 a.m.

FORMAT: Text only


I decided to consolidate all my VPS, as I had them scattered across a bunch of different providers. I grabbed a dedicated server and set it up with Virtualizor and KVM.
There are currently 37 VPS running on it, the majority of them with 512 MB RAM and 1024 MB swap. Virtualizor reports 98% RAM and 65% swap used, which alarmed me, so I did some checking.
free -m reports
Code:
[root@raptor ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:          31698       30314         539         114         844         755
Swap:         11996        7283        4713
But if I open up top, take a snapshot of all the VMs running, and add up what they are using for RAM, it comes to around 60%.
https://imgur.com/a/PdFEQLj
I still have more VPS that I need to transfer onto this server, so I am hoping that what I am seeing is the OS reporting the RAM allocated to all the VPS rather than the amount actually being used. If so, is there a better way to get a reliable snapshot of the actual RAM in use? I don't want to overload this server and have issues with it.
Thanks!
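One way to cross-check the "add it up in top" approach is to sum the resident set size (RSS) of the qemu processes on the host. A minimal sketch, assuming the guest processes are named qemu-kvm (on some setups they show up as qemu-system-x86_64 instead):
Code:
# Sum the resident memory (RSS) of all qemu guest processes, in MiB.
# Adjust the process name if your distro uses qemu-system-x86_64.
ps -C qemu-kvm -o rss= | awk '{sum += $1} END {printf "%.0f MiB resident in guests\n", sum/1024}'
RSS only counts pages a guest has actually touched, so the sum is usually well below the provisioned total; because shared pages are counted once per process, it can also slightly overstate real usage.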
Adding up all the KVM memory I get 85% used. Then add in whatever is used by the OS and I think 98% used is about right.
In case it wasn't known, swap space assigned to the KVM guests has nothing to do with the swap numbers in the host OS's "top" command. But if the host's swap is being used a good bit, you need more RAM.
I think you're 1 or 2 busy VMs away from full swap space and then a crash.
Thanks for the information.
I can't move any VMs off of this machine, so for now I will need to increase swap. Any tips for doing this on a live server like this?
Yeah, you can add swap space live via a swap file. Just create a file with dd, run mkswap on it, then use the swapon command to add it.
Here's some good info on how:
https://support.rackspace.com/how-to...nux-swap-file/
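For reference, the usual sequence looks roughly like the sketch below; the 4 GiB size and the /swapfile2 path are placeholders, and this assumes the file lives on a local filesystem rather than a network mount:
Code:
# Create a 4 GiB file (dd from /dev/zero writes every block, so the file has no holes).
dd if=/dev/zero of=/swapfile2 bs=1M count=4096
chmod 600 /swapfile2          # swap files should not be world-readable
mkswap /swapfile2             # write the swap signature
swapon /swapfile2             # activate immediately, no reboot needed
swapon -s                     # confirm both the old and new swap areas are active
echo '/swapfile2 none swap defaults 0 0' >> /etc/fstab   # persist across reboots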
Upgrading to 64 GB of RAM would be best, as swap requires disk I/O, so it performs slowly and slows down other disk I/O, even on SSDs.
I know how to create a swap file; there is already one active. What I am not sure about is how to unmount that, create a larger swap file, and mount it without the server blowing up.
Quote: Upgrading to 64 GB of RAM would be best, as swap requires disk I/O, so it performs slowly and slows down other disk I/O, even on SSDs.
Unfortunately that is not an option, so I will have to make do as best I can with what I have.
Quote: I know how to create a swap file; there is already one active. What I am not sure about is how to unmount that, create a larger swap file, and mount it without the server blowing up.
1. You probably have a swap partition if it's the one created when installing the OS.
2. You don't unmount anything. Just add the new swap file and the system will use both swap areas. You can even have multiple swap files or partitions across multiple disks to spread the load.
Or, simpler: if you have disks where you can create new partitions, create the partition, format it as swap, then swapon them all (rough sketch below).
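A rough sketch of that, assuming a spare partition /dev/sdb3 exists (the device name is only an example):
Code:
mkswap /dev/sdb3                      # format the spare partition as swap
swapon -p 10 /dev/sdb3                # activate it; higher-priority areas are used first
swapon -s                             # list every active swap area and its priority
# To persist it, add a line like this to /etc/fstab:
#   /dev/sdb3  none  swap  pri=10  0 0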
Quote: Thanks for the information. I can't move any VMs off of this machine, so for now I will need to increase swap. Any tips for doing this on a live server like this?
No. No. No. A thousand times - no.
Adding swap is not going to solve any problems, because the problem is that you're using swap space at all.
You should never see more than about 5 to 10% swap usage, and then only very briefly. More than that means you have memory management (a memory leak somewhere) or memory starvation problems that need to be fixed.
Add RAM. You cannot fix problems by adding swap space, because swap should never ever ever ever ever be used for more than a few split seconds at a time.
Seeing 98% RAM usage is fine. Normal. The Linux kernel's memory policy is that unused memory is wasted memory, so it's always used - that's what the buff/cache column is showing in your free -m output.
Seeing 65% swap usage, however... that should be terrifying, because there are only two reasons a machine starts swapping and both of them are Bad News.
Add RAM. If you cannot add RAM, then you need to reduce the number of virts, or the amount of RAM they are provisioned with.
Do not bother to add swap space. Having more swap space is not going to fix the problem, because using swap space is the problem.
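For what it's worth, the quickest way to see how much of that 98% is really reclaimable cache versus memory the guests are pinning is the "available" figure, which this kernel already exposes (it is the last column in the free -m output quoted earlier):
Code:
free -m                                   # the "available" column estimates RAM that could still be
                                          # handed to new processes without pushing anything to swap
grep -E 'MemAvailable|SwapTotal|SwapFree' /proc/meminfo   # same data, straight from the kernel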
What are the 2 reasons? I would assume one is that the system is actually running out of RAM?
Mentioned in the post:
1) A memory management problem, which usually means a memory leak of some kind (Postgres, I'm looking very sternly your way, buddy). Memory that gets "used" by a process and then never returned to the kernel. Finding and fixing memory leaks is a major PITA and beyond the scope of a message board, since what you're looking for as a culprit could literally be anything.
2) Memory starvation - this one is simple: you've just got Too Much Stuff running and something has to give.
I cannot stress enough that swap space is not for regular use. It is a safety net, or more accurately a pressure release valve. It's there so the kernel has a way of shuffling pages of memory around even if the machine is otherwise exhausted of RAM. (That's why it's called swap in the first place, in case you were wondering.)
When swap usage starts to increase, you've got problems, and increasing the amount of swap space is not going to fix that problem, because it shouldn't be using it in the first place.
Edit: Keep in mind - for a KVM host node I would expect to see some swap usage, since it's a built-in way for the kernel to move large chunks of memory in and out of RAM as needed.
But 65%? That's way too much. Something's broken - go fix it.
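If you want to see which processes those swapped pages actually belong to (one leaky daemon versus the guests themselves), each process reports a VmSwap figure in /proc on this kernel. A minimal sketch:
Code:
# List the top 10 swap users by reading VmSwap per process (values are in kB).
for d in /proc/[0-9]*; do
    awk '/^Name:/ {name=$2} /^VmSwap:/ {print $2 " kB", name}' "$d/status" 2>/dev/null
done | sort -rn | head -10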
Thanks everybody for the help and suggestions. I have found out a few things:
1.) I had forgotten to update the OS after installing Virtualizor and there were a ton of updates (499), many of them for kvm / libvirt. After installing these and rebooting (which went bad, see below), swap has remained near 0 and RAM use has also dropped a fair bit.
2.) Virtualizor support has shown me how to see just how much actual RAM is available on the system, as the GUI, free -m, and top do not reflect correct values for this.
I rebooted and the server did not come back, so into IPMI I go and find "grub" waiting for me with open arms; it seems it could not find grub.cfg (I think). The following commands finally allowed it to boot:
Code:
insmod mdraid09
set root=(md/md2)
linuxefi /boot/vmlinuz-3.10.0-957.1.3.el7.x86_64 root=/dev/md2 rd.auto=1 net.ifnames=0 biosdevname=0
initrdefi /boot/initramfs-3.10.0-957.1.3.el7.x86_64.img
boot
That booted it into the latest kernel. I have had a bunch of servers for years and this is the first time a yum update has broken a server, so now I am not sure what will happen on the next reboot - maybe you guys could help with that. /boot/grub2/grub.cfg in some places reflects the new kernel and in some places it does not:
Code:
### BEGIN /etc/grub.d/10_linux ###
menuentry 'CentOS Linux (3.10.0-957.1.3.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-862.14.4.el7.x86_64-advanced-f09961d9-5455-4567-b7ac-72cf6f7c4ee0' {
	load_video
	set gfxpayload=keep
	insmod gzio
	insmod part_gpt
	insmod part_gpt
	insmod part_gpt
	insmod diskfilter
	insmod mdraid09
	insmod ext2
	set root='mduuid/3b55eb45923546c6a4d2adc226fd5302'
	if [ x$feature_platform_search_hint = xy ]; then
	  search --no-floppy --fs-uuid --set=root --hint='mduuid/3b55eb45923546c6a4d2adc226fd5302' f09961d9-5455-4567-b7ac-72cf6f7c4ee0
	else
	  search --no-floppy --fs-uuid --set=root f09961d9-5455-4567-b7ac-72cf6f7c4ee0
	fi
	linuxefi /boot/vmlinuz-3.10.0-957.1.3.el7.x86_64 root=/dev/md2 ro crashkernel=auto rhgb quiet vga=normal nomodeset rd.auto=1 rd.md.uuid=3b55eb45:923546c6:a4d2adc2:26fd5302 rootdelay=10 rootdelay=10 noquiet nosplash net.ifnames=0 biosdevname=0 LANG=en_US.UTF-8
	initrdefi /boot/initramfs-3.10.0-957.1.3.el7.x86_64.img
}
menuentry 'CentOS Linux (3.10.0-862.14.4.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-862.14.4.el7.x86_64-advanced-f09961d9-5455-4567-b7ac-72cf6f7c4ee0' {
	load_video
	set gfxpayload=keep
	insmod gzio
	insmod part_gpt
	insmod part_gpt
	insmod part_gpt
	insmod diskfilter
	insmod mdraid09
	insmod ext2
	set root='mduuid/3b55eb45923546c6a4d2adc226fd5302'
	if [ x$feature_platform_search_hint = xy ]; then
	  search --no-floppy --fs-uuid --set=root --hint='mduuid/3b55eb45923546c6a4d2adc226fd5302' f09961d9-5455-4567-b7ac-72cf6f7c4ee0
	else
	  search --no-floppy --fs-uuid --set=root f09961d9-5455-4567-b7ac-72cf6f7c4ee0
	fi
	linuxefi /boot/vmlinuz-3.10.0-862.14.4.el7.x86_64 root=/dev/md2 ro crashkernel=auto rhgb quiet vga=normal nomodeset rd.auto=1 rd.md.uuid=3b55eb45:923546c6:a4d2adc2:26fd5302 rootdelay=10 rootdelay=10 noquiet nosplash net.ifnames=0 biosdevname=0
	initrdefi /boot/initramfs-3.10.0-862.14.4.el7.x86_64.img
}
Do I need to be changing instances of "vmlinuz-3.10.0-862.14.4.el7.x86_64" to "vmlinuz-3.10.0-957.1.3.el7.x86_64"?
I had a look through the logs as well to try and figure out what exactly happened, but I did not find anything.
Quote: Virtualizor support has shown me how to see just how much actual RAM is available on the system, as the GUI, free -m, and top do not reflect correct values for this.
Actually they do - you just need to know how to interpret what they're showing you.
Just remember that as far as the Linux kernel is concerned, any unused RAM is wasted RAM, which is why top, free, etc. all show nearly 100% RAM usage all the time. That's perfectly normal. You need to look at your buff/cache usage to see how much RAM the kernel is ready to make available to processes should any of them ask for it. If they're not asking for that RAM, though, the kernel is going to use it for its own jobs.
Note also that the greatly diminished swap usage after the reboot could be improved memory management from the upgrades...
... or it could mean that by rebooting you released the memory that some process somewhere had been hogging all along due to a memory leak. Only time will really tell, though.
For the grub configuration, you could use:
grub2-mkconfig --output=/boot/grub2/grub.cfg
That should make a new grub configuration with the installed kernels. Check the grub configuration before rebooting, though.
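A rough sketch of that workflow, with one caution as an assumption on my part: the linuxefi lines above suggest an EFI install, where the config grub actually reads may be /boot/efi/EFI/centos/grub.cfg rather than /boot/grub2/grub.cfg, so check which file your firmware boots before overwriting anything:
Code:
grub2-mkconfig -o /boot/grub2/grub.cfg              # regenerate entries for every installed kernel
# On a pure EFI layout the target is usually:
#   grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
grubby --default-kernel                             # show the kernel that will boot by default
grep ^menuentry /boot/grub2/grub.cfg                # sanity-check the generated entries before rebooting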
Wow, 37 VPS and more VPS to be created? What kind of system do you have?
Rebooting will refresh the RAM, but if each VPS starts using those apps again, you will surely see RAM usage increase again.
Quote: For the grub configuration, you could use grub2-mkconfig --output=/boot/grub2/grub.cfg. That should make a new grub configuration with the installed kernels. Check the grub configuration before rebooting, though.
Thanks for that, it gave me a boot in the right direction. I did a bunch of reading and it is not as complicated as I thought.
Quote: Wow, 37 VPS and more VPS to be created? What kind of system do you have? Rebooting will refresh the RAM, but if each VPS starts using those apps again, you will surely see RAM usage increase again.
RAM use has stayed steady since the reboot; swap is used a little, 13%. I don't see why 37 VPS would be an issue when most of them are provisioned with 512 MB of RAM and use only a small piece of it. The system has 32 GB; I realize there is some overhead, but is it that much?
On your new server, if you set 1024 MB of RAM for every VPS, each one is cut from the total 32 GB of RAM, so you could only create 32 VPS with 1024 MB each - and this also depends on the amount of HDD or SSD you have with your new server.
Yes, but you can overcommit. Let's say I have 30 VPS at 1 GB each but on average they use only 512 MB; that leaves 15 GB committed but not used, so I see no reason not to use it. You just need to leave enough headroom in case some of them get busy and start using more than their 512 MB average.
My advice: if you have some VPS that don't use much RAM, you can reduce their allowed RAM (or recreate them after backing them up), and increase the RAM again once needed!
I think this is fair, as KVM control panels such as Virtualizor or SolusVM don't let you reduce a guest's memory if you use KVM.
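If the panel won't do it, the underlying libvirt tooling can usually change the provisioned figure for a guest's next boot. A hedged sketch, where vps101 is a placeholder domain name, and with the caveat that panels like Virtualizor may overwrite or ignore changes made directly with virsh:
Code:
virsh dominfo vps101                       # current Max memory / Used memory for the guest
virsh setmem vps101 512M --config          # lower the allocation in the persistent config
virsh setmaxmem vps101 512M --config       # lower the ceiling to match
virsh shutdown vps101                      # the new figures apply when the guest is started again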
Quote: Yes, but you can overcommit. Let's say I have 30 VPS at 1 GB each but on average they use only 512 MB; that leaves 15 GB committed but not used, so I see no reason not to use it. You just need to leave enough headroom in case some of them get busy and start using more than their 512 MB average.
And now we know why your swap usage was so high.
This is where the Delicate Crystal Goblet of Theory meets the Raging Doom Hammer of Practice.
If you overcommit resources, you will run out of resources. There is no "in case" - it is guaranteed to happen. This means you will eventually start to see more and more swap usage, and consequently severely degraded performance as a result.
Quote: RAM use has stayed steady since the reboot; swap is used a little, 13%. I don't see why 37 VPS would be an issue when most of them are provisioned with 512 MB of RAM and use only a small piece of it. The system has 32 GB; I realize there is some overhead, but is it that much?
18.5 GB allocated to VMs and 1.5 GB available leaves ~12 GB unaccounted for, unless there are some VMs with more allocated. I think that is a bit high for general overhead, so it is probably worth finding out where it went. On another note, given that most of them barely use their allocated resources, this might be a good candidate for KSM (see the sketch after the link).
https://www.kernel.org/doc/Documentation/vm/ksm.txt
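A minimal sketch of turning KSM on and checking whether it actually merges anything, using the standard sysfs knobs from the document linked above; qemu normally marks guest memory as mergeable by default, though that is an assumption about this particular setup:
Code:
echo 1 > /sys/kernel/mm/ksm/run                     # start the KSM merge daemon
echo 1000 > /sys/kernel/mm/ksm/pages_to_scan        # scan more pages per wake-up (tunable)
grep . /sys/kernel/mm/ksm/pages_shared \
       /sys/kernel/mm/ksm/pages_sharing             # non-zero values mean pages are being merged
# On CentOS 7 the ksm/ksmtuned services manage these knobs for you:
#   systemctl start ksm ksmtuned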
Quote: If you overcommit resources, you will run out of resources. There is no "in case" - it is guaranteed to happen. This means you will eventually start to see more and more swap usage, and consequently severely degraded performance as a result.
So you are telling me that most VPS providers will only put a maximum of 32 1 GB VPS on a server with 32 GB of RAM? Hardly seems worth the effort if that is the case.
With good memory management and burst limiting, so that the host can limit resource starvation, it can be done.
You appear to be trusting more to luck to avoid running out of RAM, though, and that's not going to work. If you provision all your virts to be able to run the box out of RAM, then I absolutely guarantee you that the box will run out of RAM.
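For the monitoring half of that, libvirt exposes per-guest balloon statistics, which is a more honest picture than eyeballing top. A sketch, with the caveat that the actual and rss figures are only fully populated when the guest has the balloon driver loaded:
Code:
# For every running guest, show the balloon size (actual) and host-resident memory (rss), in kB.
for dom in $(virsh list --name); do
    echo "== $dom =="
    virsh dommemstat "$dom" | grep -E 'actual|rss'
done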
I would not say I am trusting to luck; I know what every VM on the server is using on average, as well as what to expect for spikes. Maybe I'm walking the edge a little, but since the updates it has been much better. Guess we will see in a few more days.
