LXC and Host Crashes

We had set up a bunch of LXC containers on two servers, each with a 16-core CPU and 64 GB RAM (for reliability and load balancing). Both servers are on the same VLAN. Each server needs at least one of its network interfaces in promiscuous mode so that it forwards all packets on the VLAN to the bridge (http://blogs.eskratch.com/2012/10/create-your-own-vms-i.html), which takes care of routing to the containers. If a packet is not addressed to any of the containers, the bridge drops it.
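For the curious, here is a minimal sketch of what putting an interface into promiscuous mode amounts to at the syscall level; the interface name "eth0" is an assumption, and `ip link set eth0 promisc on` achieves the same thing from the shell:

```c
/* Minimal sketch: flip an interface into promiscuous mode via ioctl,
 * so it accepts frames not addressed to it (what the bridge needs). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(void)
{
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* assumed interface name */

    if (ioctl(fd, SIOCGIFFLAGS, &ifr) < 0) { perror("SIOCGIFFLAGS"); return 1; }
    ifr.ifr_flags |= IFF_PROMISC;                 /* accept all frames on the wire */
    if (ioctl(fd, SIOCSIFFLAGS, &ifr) < 0) { perror("SIOCSIFFLAGS"); return 1; }

    close(fd);
    return 0;
}
```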

With this setup in place, we moved all our platform maintenance services into these containers. They are fault tolerant because we used two host machines, each holding a replica of the other's containers. The probability of both servers crashing at the same time due to some hardware/software failure should be low. But to my surprise, both servers kept crashing at exactly the same time, with a mean lifetime of 20 days. We had to wake up late nights (early mornings) to fix the stuff that went down.

The detective work started with the kernel dumps, which read like Latin to us. They proved futile. Our hardware engineers upgraded the BIOS. On the software side, I applied patches so that the running applications could not cause memory leaks or unhandled exceptions. None of it helped, and the crashes continued.

I came across this blog http://codeascraft.etsy.com/2012/03/30/kernel-debugging-101/ which gave some insight. Network interfaces have a mechanism called NAPI (New API). Usually, when data reaches the physical interface, an interrupt is sent up the network stack for each packet. NAPI kicks in when a flood of packets hits the interface: the upper layers then poll the interface periodically for packets instead of taking per-packet interrupts. Since both boxes are in promiscuous mode on the same VLAN, they switch to NAPI mode together whenever the VLAN traffic is high.
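To make the interrupt-vs-poll switch concrete, here is a simplified sketch of the canonical NAPI pattern in a driver. The my_* names are hypothetical placeholders, not tg3 code, but real drivers of that era follow the same shape:

```c
#include <linux/interrupt.h>
#include <linux/netdevice.h>

struct my_adapter {
    struct napi_struct napi;
    struct net_device *dev;
    /* ... hardware state ... */
};

/* Hardware interrupt handler: mask further RX interrupts and let
 * NAPI take over, instead of climbing the stack once per packet. */
static irqreturn_t my_intr(int irq, void *data)
{
    struct my_adapter *ad = data;

    my_disable_rx_irq(ad);      /* hypothetical helper */
    napi_schedule(&ad->napi);   /* requires an initialized napi struct */
    return IRQ_HANDLED;
}

/* Poll callback: the kernel keeps calling this while traffic is high. */
static int my_poll(struct napi_struct *napi, int budget)
{
    struct my_adapter *ad = container_of(napi, struct my_adapter, napi);
    int done = my_clean_rx(ad, budget);  /* hypothetical: handle <= budget packets */

    if (done < budget) {        /* backlog drained: fall back to interrupts */
        napi_complete(napi);
        my_enable_rx_irq(ad);   /* hypothetical helper */
    }
    return done;
}
```

Under light load the driver never leaves interrupt mode; under a packet flood, napi_schedule() fires once and the stack keeps invoking the poll callback until the backlog drains.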

NAPI needs a data structure to be initialized before polling kicks off, or else there is a kernel panic. The interface driver is responsible for creating that data structure. Our boxes run the tg3 driver (version 3.119), which supposedly has this bug, so we upgraded to tg3 3.122, which claims to have it fixed.
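The ordering that matters looks roughly like the hypothetical probe function below (continuing the my_* sketch above): the napi struct must be registered before any interrupt path can call napi_schedule().

```c
/* Hypothetical probe path. If the RX interrupt can fire before
 * netif_napi_add()/napi_enable() have run, the very first
 * napi_schedule() touches uninitialized state and the kernel
 * panics. 64 is the conventional poll weight of that era. */
static int my_probe(struct my_adapter *ad, struct net_device *dev)
{
    netif_napi_add(dev, &ad->napi, my_poll, 64);
    napi_enable(&ad->napi);

    /* Only now is it safe to unmask RX interrupts. */
    my_enable_rx_irq(ad);
    return 0;
}
```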

Today it's a month since the last crash (longer than the mean lifetime). But I'm still sleeping with one eye open.
