As of December 2022, Opsian's commercial services are no longer available. However, we have left some of our blog content online in case it is helpful to anyone.


All posts (1 / 3)

Profiling can be about time.. but which time? - Profiling In Production Series #2

In the last post in our Profiling in Production series we covered instrumentation and sampling profilers. We talked about “resources of interest”, but which ones are the most useful for diagnosing performance bottlenecks? The most commonly used resource is time. We’ve been deliberately vague about the concept of timing so far because, when it comes to computers, there are actually two types of timing…

What is profiling and how does it work? - Profiling In Production Series #1

In this blog post we’re going to look at performance profiling: what it is and how it works. This is the first post in our series on Profiling in Production. By the end of the series you’ll have an understanding of how profilers work, which ones are safe and efficient in production, how to read and interpret the data they produce and, finally, how to use this knowledge to keep on top of performance.

How TransferWise fixed Elusive Performance Problems with Opsian

When TransferWise first started using Opsian’s Continuous Profiling platform they had a service with a peculiar and elusive performance problem, one that had stumped existing performance monitoring tools. This post covers the symptoms of the mysterious problem and how TransferWise used Opsian’s tools to track down and solve it.

Announcing Continuous Allocation profiling for the Java Virtual Machine

If you’ve ever struggled with optimising Java’s Garbage Collection performance in production then our newest feature, Continuous Allocation Profiling, is the tool for you. It breaks down allocations by type and by line of allocating code, live from your production environment, giving you the information you need to optimise the parts of your application with excessive allocation behaviour.

Why Java's TLABs are so important and why write contention is a performance killer in multicore environments

The JVM’s garbage collectors make use of Thread-Local Allocation Buffers (TLABs) to improve allocation performance. In this article we’re going to understand what TLABs are, how they affect the code the JIT generates for allocation, and what the resulting effect on performance is. Along the way we’ll learn about the organisation of CPU caches, how multiple CPU cores keep their contents coherent, and how you can use this knowledge to avoid multithreaded scalability bottlenecks.
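To give a taste of why TLABs matter: because each thread allocates from its own buffer, an allocation is normally just a thread-local pointer bump, so many threads can allocate concurrently without contending on a shared heap pointer. A minimal sketch (class name and counts are our own; on JDK 9+ you can run it with `-Xlog:gc+tlab` to see TLAB activity in the GC log):

```java
public class TlabDemo {
    static final int THREADS = 4;
    static final int PER_THREAD = 1_000_000;

    public static void main(String[] args) throws InterruptedException {
        long[] counts = new long[THREADS];
        Thread[] workers = new Thread[THREADS];
        for (int i = 0; i < THREADS; i++) {
            final int id = i;
            workers[i] = new Thread(() -> {
                // Each new Object() is normally served from this thread's own
                // TLAB: a pointer bump, with no cross-core synchronisation.
                for (int j = 0; j < PER_THREAD; j++) {
                    Object o = new Object();
                    if (o != null) {
                        counts[id]++; // each thread writes only its own slot
                    }
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        long total = 0;
        for (long c : counts) {
            total += c;
        }
        System.out.println("allocated " + total + " objects");
    }
}
```

Note that the four threads never block each other while allocating; only when a TLAB is exhausted does a thread take a (briefly synchronised) trip to the shared heap to get a fresh buffer.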

Continuous Profiling of a JVM application in Kubernetes

The use of Docker and Kubernetes has been growing in the JVM community over the past couple of years. One downside to Kubernetes from a performance optimisation perspective is that it can be harder for developers to get profiling data from their production environment. This blog post aims to show you how to make that easier by continuously profiling using Docker and Kubernetes.

The JVM's mysterious AllocatePrefetch options: what do they actually do?

The HotSpot JVM comes with a range of non-standard -XX: options, many of which have an impact on performance. One such set is the family of so-called AllocatePrefetch options, comprising: -XX:AllocatePrefetchStyle, -XX:AllocatePrefetchStepSize, -XX:AllocatePrefetchLines, -XX:AllocatePrefetchInstr, -XX:AllocatePrefetchDistance and -XX:AllocateInstancePrefetchLines. In this blog post you’ll learn the background behind why AllocatePrefetch is necessary and how it can help performance.
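If you want to see what values your own JVM is running with, the current settings of -XX: product flags can be read at runtime via the HotSpotDiagnosticMXBean. A minimal sketch (HotSpot-based JVMs only; defaults vary by version and platform, and some of these flags only exist on certain architectures):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class PrefetchFlags {
    public static void main(String[] args) {
        // HotSpotDiagnosticMXBean exposes the live values of -XX: flags.
        HotSpotDiagnosticMXBean hotspot =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        String[] flags = {
            "AllocatePrefetchStyle", "AllocatePrefetchStepSize",
            "AllocatePrefetchLines", "AllocatePrefetchInstr",
            "AllocatePrefetchDistance", "AllocateInstancePrefetchLines"
        };
        for (String flag : flags) {
            try {
                System.out.println(flag + " = "
                    + hotspot.getVMOption(flag).getValue());
            } catch (IllegalArgumentException notPresent) {
                // Some of these flags are platform-specific, e.g.
                // AllocatePrefetchInstr is defined for x86.
                System.out.println(flag + " (not present on this JVM)");
            }
        }
    }
}
```

The same information is available from the command line with `java -XX:+PrintFlagsFinal -version`, filtered for "AllocatePrefetch".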

Performance Testing Spring Boot with Gatling

Performance testing is hard and there is no single established practice for doing it. In this article you will see a simple Spring Boot REST endpoint that exhibits a performance issue, how to load test it with Gatling, and how to use profiling to identify and resolve the bottleneck.

Bringing Opsian's Continuous Profiling to GraalVM

We’re pleased to announce that Opsian now supports Continuous Profiling on the GraalVM JVM - this means that GraalVM users now have access to the same always-on low-overhead performance data as users of OpenJDK-based builds.
