

Ptrace is a nice facility (some people call it a dirty hack) on Linux to debug running processes. The ptrace() call declared in sys/ptrace.h is what strace and gdb are built on. To trace a child process, the child first calls ptrace with PTRACE_TRACEME. The kernel then stops the child at each system call (or at each instruction, if single-stepping was requested) and delivers a SIGTRAP; the parent, sitting in wait(), gets notified of the stop. While the child is stopped, the parent can read its memory using PTRACE_PEEKDATA and its registers using PTRACE_GETREGS, and alter them using PTRACE_POKEDATA and PTRACE_SETREGS. Once the required job is done, the parent lets the child run again with PTRACE_CONT. Since one can access the registers, the next instruction to be executed can easily be found from the instruction pointer; this comes in handy when we need to set breakpoints while debugging. The entire code of the traced process can also be rewritten using ptrace.

PTRACE_ATTACH attaches to an already running process. It does some hack to make the tracer a temporary parent of the process (though the PPID of the process still points to the original parent). This is what lets us run strace on any process given just its pid.
A comprehensive tutorial on ptrace is available online.

Ptrace causes a big performance hit, since every trace stop suspends the child and context-switches to the tracer and back.
Since Ubuntu 10.10, some restrictions are put on PTRACE_ATTACH (via the Yama security module): a non-privileged user can't attach to an arbitrary process even if it is running with the same uid as his, only to his own descendants. The file /etc/sysctl.d/10-ptrace.conf (the file is self explanatory) has to be edited appropriately if PTRACE_ATTACH is to be used by non-privileged users.
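For reference, the relevant knob in that file is the kernel.yama.ptrace_scope sysctl; setting it back to 0 restores the classic same-uid behaviour (at a security cost):

```
# /etc/sysctl.d/10-ptrace.conf
# 1 = a process may only be ptraced by its ancestors (Yama default on Ubuntu)
# 0 = classic behaviour: any process running under the same uid may attach
kernel.yama.ptrace_scope = 0
```

The change takes effect after reloading sysctl settings or writing the value to /proc/sys/kernel/yama/ptrace_scope.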

