<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Performance on Akshay Deshpande</title>
    <link>https://akshayd-dev.pages.dev/categories/performance/</link>
    <description>Recent content in Performance on Akshay Deshpande</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Sat, 08 Feb 2025 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://akshayd-dev.pages.dev/categories/performance/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Memory management : Java containers on K8s</title>
      <link>https://akshayd-dev.pages.dev/posts/memory-management-java-containers-on-k8s/</link>
      <pubDate>Sat, 08 Feb 2025 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/memory-management-java-containers-on-k8s/</guid>
      <description>&lt;p&gt;This page documents a few aspects of memory management for Java containers on K8s clusters.&lt;/p&gt;
&lt;p&gt;For Java containers, memory management on K8s depends on several factors:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;-Xmx and -Xms limits managed by the JVM&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Request/limit values for the container&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;HPA policies used for scaling the number of pods&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Misconfiguration or misunderstanding of any of these parameters leads to OOM kills of Java containers on K8s clusters.&lt;/p&gt;
&lt;h3 id=&#34;memory-management-on-java-containers&#34;&gt;Memory management on Java containers:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;-XX:+UseContainerSupport&lt;/code&gt; is enabled by default from Java 10+&lt;/p&gt;</description>
    </item>
    <item>
      <title>[Kubernetes]: CPU and Memory Request/Limits for Pods</title>
      <link>https://akshayd-dev.pages.dev/posts/kubernetes-cpu-and-memory-request-limits-for-pods/</link>
      <pubDate>Sun, 14 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/kubernetes-cpu-and-memory-request-limits-for-pods/</guid>
      <description>&lt;p&gt;In this write-up, we will explore how to make the most of the resources in a K8s cluster for the Pods running on it.&lt;/p&gt;
&lt;h2 id=&#34;resource-types&#34;&gt;Resource Types:&lt;/h2&gt;
&lt;p&gt;When it comes to resources on a Kubernetes cluster, they can be fairly divided into two categories:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;compressible&lt;/code&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;If the usage of this resource by an application goes beyond the max, it can be throttled without directly killing the application/process.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;example: CPU. If a container consumes too much of a compressible resource, it is throttled&lt;/p&gt;</description>
    </item>
    <item>
      <title>[Memory-metrics]: Linux /proc interface</title>
      <link>https://akshayd-dev.pages.dev/posts/memory-metrics-linux-proc-interface/</link>
      <pubDate>Mon, 22 Aug 2022 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/memory-metrics-linux-proc-interface/</guid>
      <description>&lt;p&gt;This write-up is a demo to showcase the power of the &amp;ldquo;&lt;code&gt;proc&lt;/code&gt;&amp;rdquo; (process information pseudo-filesystem) interface in Linux for getting the memory details of a process.&lt;/p&gt;
&lt;p&gt;In the current trend of building &lt;em&gt;abstraction over abstractions&lt;/em&gt; in software/tooling, very few tend to care about the source of truth of a metric. There are various APM/monitoring tools to get the memory details of a process on a Linux system, but when the need arises, I believe one must know how to get closer to the source of truth on a Linux system and verify things.&lt;/p&gt;</description>
    </item>
    <item>
      <title>[Performance] : Understanding CPU Time</title>
      <link>https://akshayd-dev.pages.dev/posts/performance-understanding-cpu-time/</link>
      <pubDate>Sat, 24 Jul 2021 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/performance-understanding-cpu-time/</guid>
      <description>&lt;p&gt;As a Performance Engineer, time and again you will come across a situation where you want to profile the CPU of a system. The reasons vary: CPU usage might be high, you may want to trace a method to see its CPU cost, or you may suspect the CPU time of a slow transaction.&lt;/p&gt;
&lt;p&gt;You might use one of the various profilers out there to do this. (I use &lt;a href=&#34;https://www.yourkit.com/docs/kb/&#34;&gt;yourkit&lt;/a&gt; and &lt;a href=&#34;https://www.linkedin.com/pulse/jprofiler-cpu-profiling-akshay-deshpande/&#34;&gt;Jprofiler&lt;/a&gt;.) All these profilers report CPU costs in terms of CPU time. &lt;em&gt;This time is not the equivalent of wall-clock time.&lt;/em&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>Elastic Search Best practices</title>
      <link>https://akshayd-dev.pages.dev/posts/elastic-search-best-practices/</link>
      <pubDate>Mon, 14 Jun 2021 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/elastic-search-best-practices/</guid>
      <description>&lt;p&gt;These are self-notes from managing a 100+ node ES cluster, reading through various resources, and a lot of production incidents due to an unhealthy ES.&lt;/p&gt;
&lt;h2 id=&#34;memory&#34;&gt;&lt;strong&gt;Memory&lt;/strong&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Always set ES_HEAP_SIZE to 50% of the total available memory. Sorting and aggregations can both be memory hungry, so enough heap space to accommodate them is required. This property is set inside the /etc/init.d/elasticsearch file.&lt;/li&gt;
&lt;li&gt;A machine with 64 GB of RAM is ideal; however, 32 GB and 16 GB machines are also common. Less than 8 GB tends to be counterproductive (you end up needing many small machines), and greater than 64 GB causes problems with pointer compression.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;cpu&#34;&gt;&lt;strong&gt;CPU&lt;/strong&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Choose a modern processor with multiple cores. If you need to choose between faster CPUs or more cores, choose more cores. The extra concurrency that multiple cores offer will far outweigh a slightly faster clock speed. The number of threads is dependent on the number of cores. The more cores you have, the more threads you get for indexing, searching, merging, bulk, or other operations.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;disks&#34;&gt;&lt;strong&gt;Disks&lt;/strong&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;If you can afford SSDs, they are far superior to any spinning media. SSD-backed nodes see boosts in both querying and indexing performance.&lt;/li&gt;
&lt;li&gt;Avoid network-attached storage (NAS) to store data.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;network&#34;&gt;&lt;strong&gt;Network&lt;/strong&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;The faster the network you have, the more performance you will get in a distributed system. Low latency helps to ensure that nodes communicate easily, while a high bandwidth helps in shard movement and recovery.&lt;/li&gt;
&lt;li&gt;Avoid clusters that span multiple data centers even if the data centers are collocated in close proximity. Definitely avoid clusters that span large geographic distances.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;general-consideration&#34;&gt;&lt;strong&gt;General consideration&lt;/strong&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Prefer medium-to-large boxes. Avoid small machines, because you don&amp;rsquo;t want to manage a cluster with a thousand nodes, and the overhead of simply running Elasticsearch is more apparent on such small boxes.&lt;/li&gt;
&lt;li&gt;Always use a Java version greater than JDK 1.7 Update 55 from Oracle and avoid using OpenJDK.&lt;/li&gt;
&lt;li&gt;A master node does not require many resources. In a cluster with 2 TB of data across hundreds of indices, 2 GB of RAM, 1 CPU core, and 10 GB of disk space are good enough for the master nodes. In the same scenario, client nodes with 8 GB of RAM and 2 CPU cores each are a very good configuration to handle millions of requests. The configuration of data nodes depends entirely on the speed of indexing and the type of queries and aggregations; however, they usually need very high configurations, such as 64 GB of RAM and 8 CPU cores.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Some other important configuration changes&lt;/strong&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>[Performance] : What does CPU% usage tell us ?</title>
      <link>https://akshayd-dev.pages.dev/posts/performance-what-does-cpu-usage-tell-us/</link>
      <pubDate>Fri, 28 May 2021 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/performance-what-does-cpu-usage-tell-us/</guid>
      <description>&lt;p&gt;When you come across a system which is misbehaving, the first metric we look at is usually CPU usage. But do we really understand what the CPU usage of a system tells us? In this article, let us try to understand what X% usage of a system really means.&lt;/p&gt;
&lt;p&gt;One of the easy ways to check on the CPU is the &amp;ldquo;top&amp;rdquo; command.&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://akshayd-dev.pages.dev/posts/performance-what-does-cpu-usage-tell-us/images/image-3.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;The &amp;ldquo;%Cpu(s)&amp;rdquo; metric seen above is a combination of different components.&lt;/p&gt;</description>
    </item>
    <item>
      <title>[Performance] : Using iperf3 tool for Network throughput test</title>
      <link>https://akshayd-dev.pages.dev/posts/performance-using-iperf3-tool-for-network-throughput-test/</link>
      <pubDate>Sun, 31 Jan 2021 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/performance-using-iperf3-tool-for-network-throughput-test/</guid>
      <description>&lt;p&gt;In this world of microservices and distributed systems, a single request (generally) hops through multiple servers before being served. More often than not, these hops are also across network cards, making network performance a source of slowness in the application.&lt;br&gt;
This makes measuring network performance between servers/systems critical for benchmarking and debugging.&lt;/p&gt;
&lt;p&gt;iperf3 is one of the open-source tools which can be used for network throughput measurement. Below are some of its features.&lt;/p&gt;</description>
    </item>
    <item>
      <title>[Performance] : Java Thread Dumps - Part2</title>
      <link>https://akshayd-dev.pages.dev/posts/performance-java-thread-dumps-part2/</link>
      <pubDate>Wed, 30 Dec 2020 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/performance-java-thread-dumps-part2/</guid>
      <description>&lt;p&gt;In the previous article about Java thread dumps (link &lt;a href=&#34;https://performanceengineeringin.wordpress.com/2020/10/22/performance-java-thread-dumps-part1/&#34;&gt;here&lt;/a&gt;) we looked into a few basics of thread dumps (when to take them, how to take them, sneak peeks, etc.)&lt;/p&gt;
&lt;p&gt;In this write-up, I want to mention a few tools which can ease the process of collecting and analyzing thread dumps.&lt;/p&gt;
&lt;h2 id=&#34;collecting-multiple-thread-dumps&#34;&gt;Collecting multiple thread dumps:&lt;/h2&gt;
&lt;p&gt;I prefer the command line over any APM tool for taking thread dumps. The best way to analyze threads is to collect a few thread dumps (5 to 10) and look through the transitions in the state of the threads.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Thinking in-terms of Performance</title>
      <link>https://akshayd-dev.pages.dev/posts/thinking-in-terms-of-performance/</link>
      <pubDate>Fri, 20 Nov 2020 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/thinking-in-terms-of-performance/</guid>
      <description>&lt;p&gt;A few short thoughts/ideas on building a performance-centric product.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In this world of infinite scaling of compute, pay close attention to common choke points, like the DB and storage(s), which are shared by all the compute nodes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The majority of reads and writes should happen as bulk operations and NOT as single reads/writes, especially when there are 100s to 1000s of reads/writes/deletes on storage.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;Threads&lt;/em&gt;. Pay close attention to which part of the entire flow is multi-threaded. Sometimes only a small part of the flow is multi-threaded, but the entire application is called multi-threaded, which is misleading.&lt;/p&gt;</description>
    </item>
    <item>
      <title>[Performance] : Java Thread Dumps - Part1</title>
      <link>https://akshayd-dev.pages.dev/posts/performance-java-thread-dumps-part1/</link>
      <pubDate>Thu, 22 Oct 2020 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/performance-java-thread-dumps-part1/</guid>
      <description>&lt;p&gt;This is the first of a two-part article which talks about:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What are thread dumps?&lt;/li&gt;
&lt;li&gt;When to take thread dumps?&lt;/li&gt;
&lt;li&gt;How to take thread dumps?&lt;/li&gt;
&lt;li&gt;What is inside a thread dump?&lt;/li&gt;
&lt;li&gt;What to look for in a thread dump?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The majority of systems today are multicore and hyper-threaded. Threading at the software level allows us to take advantage of a system&amp;rsquo;s multiple cores to achieve the desired pace and efficiency of application operations. Along with pace and efficiency, multi-threading brings its own set of problems, such as thread contention, race conditions, high CPU usage, etc. In this write-up we will see how to debug these problems by taking thread dumps of Java applications.&lt;/p&gt;</description>
    </item>
    <item>
      <title>[Performance] : Flame Graphs</title>
      <link>https://akshayd-dev.pages.dev/posts/performance-flame-graphs/</link>
      <pubDate>Sun, 26 Apr 2020 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/performance-flame-graphs/</guid>
      <description>&lt;p&gt;In the &lt;a href=&#34;https://performanceengineeringin.wordpress.com/2020/03/15/performance-profiling-with-linux-perf-command-line-tool/&#34;&gt;previous article&lt;/a&gt; we explored the basic capabilities of the Linux perf tool.&lt;br&gt;
In this write-up I am trying to extend those capabilities and show how to generate and read &lt;strong&gt;&lt;em&gt;Flame Graphs&lt;/em&gt;&lt;/strong&gt; for analyzing the profiles generated with the perf tool.&lt;/p&gt;
&lt;h2 id=&#34;how-to-generate-flame-graphs-&#34;&gt;How to generate Flame Graphs ?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;To start with, we will need the perf Linux profiler to capture the profile. Follow the steps under &amp;ldquo;&lt;em&gt;How to setup perf tool?&lt;/em&gt;&amp;rdquo; in the &lt;a href=&#34;https://performanceengineeringin.wordpress.com/2020/03/15/performance-profiling-with-linux-perf-command-line-tool/&#34;&gt;previous article&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Now, if you collect a CPU profile using the &lt;em&gt;perf&lt;/em&gt; setup from the above step, there is a possibility that you might see a lot of unresolved &lt;em&gt;symbol values&lt;/em&gt; in place of function names.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://akshayd-dev.pages.dev/posts/performance-flame-graphs/images/image.png&#34;&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>[Performance] : Profiling with linux Perf command-line tool</title>
      <link>https://akshayd-dev.pages.dev/posts/performance-profiling-with-linux-perf-command-line-tool/</link>
      <pubDate>Sun, 15 Mar 2020 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/performance-profiling-with-linux-perf-command-line-tool/</guid>
      <description>&lt;p&gt;Most Performance Engineers use some sort of profiling tool like YourKit or JProfiler, or APM tools like New Relic, Datadog, AppDynamics, etc. Although these tools are easy to use out of the box and help with observability, on occasion they don&amp;rsquo;t give a complete picture of a performance problem.&lt;br&gt;
This is where the &lt;em&gt;perf&lt;/em&gt; Linux profiler comes in handy.&lt;/p&gt;
&lt;p&gt;This write-up is an attempt to explain:&lt;br&gt;
- What is the &lt;em&gt;perf&lt;/em&gt; Linux profiler?&lt;br&gt;
- How to set it up?&lt;br&gt;
- What are its capabilities?&lt;/p&gt;</description>
    </item>
    <item>
      <title>[Performance] : Java&#39;s built in diagnostic tool - Jstat</title>
      <link>https://akshayd-dev.pages.dev/posts/performance-javas-built-in-diagnostic-tool-jstat/</link>
      <pubDate>Wed, 22 Jan 2020 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/performance-javas-built-in-diagnostic-tool-jstat/</guid>
      <description>&lt;p&gt;When it comes to performance monitoring and analysis, we tend to think of full-fledged licensed tools like Dynatrace, New Relic, AppDynamics, YourKit, etc. However, if it is a Java application which is under diagnosis, Java&amp;rsquo;s built-in tools are a good place to start.&lt;/p&gt;
&lt;p&gt;Java comes with a set of built-in diagnostic tools: jconsole, jcmd, jstat, jmap, jstack, jvisualvm, jfr, and many more. Each of them helps in tackling a kind of problem. For the scope of this article, let&amp;rsquo;s look into how jstat is useful as a diagnostic tool.&lt;/p&gt;</description>
    </item>
    <item>
      <title>[Monitoring] - Paging and Swapping in Memory</title>
      <link>https://akshayd-dev.pages.dev/posts/monitoring-paging-and-swapping-in-memory/</link>
      <pubDate>Mon, 09 Dec 2019 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/monitoring-paging-and-swapping-in-memory/</guid>
      <description>&lt;p&gt;As a continuation after understanding Virtual Memory in the &lt;a href=&#34;https://performanceengineeringin.wordpress.com/2019/11/04/understanding-virtual-memory/&#34;&gt;previous article&lt;/a&gt;, this article tries to explain the ways to monitor it.&lt;/p&gt;
&lt;p&gt;Memory can be looked at from two perspectives: &lt;em&gt;Utilization&lt;/em&gt; and &lt;em&gt;Saturation.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;Utilization&lt;/em&gt; tells us the memory usage. Checking free/used memory reflects utilization.&lt;br&gt;
&lt;em&gt;Saturation&lt;/em&gt; tells us whether memory is used at its full capacity and how the system is using &lt;strong&gt;virtual memory&lt;/strong&gt; to deal with a memory crunch.&lt;br&gt;
In other words, if the demand for memory exceeds the amount of main memory, main memory becomes &lt;em&gt;saturated&lt;/em&gt;. The operating system may then free memory by employing &lt;em&gt;paging&lt;/em&gt;, &lt;em&gt;swapping&lt;/em&gt;, and, on Linux, the OOM killer. Any of these activities is an indicator of main memory saturation.&lt;br&gt;
Also, it is important to understand that paging and swapping are two different things. More details on this in the &lt;a href=&#34;https://performanceengineeringin.wordpress.com/2019/11/04/understanding-virtual-memory/&#34;&gt;previous article&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>[Understanding] : Virtual Memory</title>
      <link>https://akshayd-dev.pages.dev/posts/understanding-virtual-memory/</link>
      <pubDate>Mon, 04 Nov 2019 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/understanding-virtual-memory/</guid>
      <description>&lt;p&gt;As a Performance Engineer you will come across virtual memory very often, especially when monitoring or debugging memory issues. Virtual memory, along with the below-mentioned terminologies, is used very loosely across the industry.&lt;br&gt;
- Page, page frame, page fault, minor/major fault, paging, swapping, etc.&lt;/p&gt;
&lt;p&gt;This article is an attempt to understand Virtual memory in detail theoretically, in the context of Computer Architecture.&lt;/p&gt;
&lt;h3 id=&#34;what-is-virtual-memory&#34;&gt;&lt;strong&gt;What is virtual memory?&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Virtual memory is not real memory. It is an abstraction layer provided to each process, meant to simplify software development by leaving physical memory placement to the operating system.&lt;br&gt;
To put it in very simple terms, the purpose of virtual memory is &lt;em&gt;to use the hard disk as an extension of RAM&lt;/em&gt;, thus increasing the available address space a process can use. Using virtual memory, a system can address more memory than it actually has, and it uses the hard drive to hold the excess. This area on the hard drive is called a &lt;strong&gt;&lt;em&gt;page file&lt;/em&gt;&lt;/strong&gt;, because it holds chunks of main memory on the hard drive.&lt;/p&gt;</description>
    </item>
    <item>
      <title>[Understanding] : How to read a G1GC log file.</title>
      <link>https://akshayd-dev.pages.dev/posts/understanding-how-to-read-a-g1gc-log-file/</link>
      <pubDate>Fri, 11 Oct 2019 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/understanding-how-to-read-a-g1gc-log-file/</guid>
      <description>&lt;p&gt;As a &lt;em&gt;Performance Engineer&lt;/em&gt;, time and again you will need to look into GC logs to see how the JVM is handling garbage collection.&lt;br&gt;
With G1GC being the default GC for Java versions 9 &amp;amp; above, one needs to know what a G1GC log actually reads like.&lt;/p&gt;
&lt;p&gt;To get an understanding of G1GC, here is in-depth material on it from the Oracle tutorials: &lt;a href=&#34;https://www.oracle.com/technetwork/tutorials/tutorials-1876574.html#FreeCSet&#34;&gt;link&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;To begin with, there are 50+ JVM parameters for selecting different GC algorithms and customizing them as per requirements.&lt;br&gt;
The link below has a cheat sheet of all the JVM parameters you can select from: &lt;em&gt;&lt;a href=&#34;https://raw.githubusercontent.com/aragozin/sketchbook/download/Java%208%20-%20GC%20cheatsheet.pdf&#34;&gt;PDF&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>[Performance Debugging] : Root causing &#34;Too many open files&#34; issue</title>
      <link>https://akshayd-dev.pages.dev/posts/performance-debugging-root-causing-too-many-open-files-issue/</link>
      <pubDate>Fri, 28 Jun 2019 00:00:00 +0000</pubDate>
      <guid>https://akshayd-dev.pages.dev/posts/performance-debugging-root-causing-too-many-open-files-issue/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Operating system&lt;/strong&gt; : Linux&lt;/p&gt;
&lt;p&gt;This is a very straightforward write-up on how to root-cause the &amp;ldquo;Too many open files&amp;rdquo; error seen during high-load performance testing.&lt;/p&gt;
&lt;p&gt;This article talks about:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The ulimit parameter &amp;ldquo;open files&amp;rdquo;&lt;/li&gt;
&lt;li&gt;Soft and hard ulimits&lt;/li&gt;
&lt;li&gt;What happens when a process exceeds the upper limit&lt;/li&gt;
&lt;li&gt;How to root-cause the source of a file-reference leak&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h4 id=&#34;scenario-&#34;&gt;Scenario:&lt;/h4&gt;
&lt;p&gt;During a load test, as the load increased, I was seeing transaction failures with the error &amp;ldquo;Too many open files&amp;rdquo;.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
