This article is based on the GitHub repo 'bloomberg/memray'. All credit for this research goes to the developers of this tool.
All developers will agree that memory is an essential resource in computing. Because of its importance, we need to know where memory is consumed and how its use can be optimized. Tuning a program blindly, without any investigation, can just as easily slow an application down, and this is where memory profiling comes to the rescue. Memory profiling shows how an application allocates memory, which helps us detect memory leaks and determine which parts of the program consume the most memory.
Memray is a memory profiler developed at Bloomberg. Now open source, it can track memory allocations in Python code, in native extensions, and in the interpreter itself. Unlike sampling profilers such as py-spy, Memray has features that set it apart. Some of them are:
- It traces every function call, so it can accurately represent the call stack, unlike sampling profilers.
- It also handles native calls in C/C++ libraries, so the entire call stack appears in the results.
- It does not significantly slow down the application when profiling interpreted code.
- Profiling native code is slower, so it must be explicitly enabled.
- It can generate various reports from the collected memory usage data, including flame graphs.
- It works with both Python threads and native threads (e.g., C++ threads in C extensions).
Memray can track memory allocations in an application to discover the cause of high memory usage, find memory leaks, and identify hot spots in code that cause many allocations. It can generate memory usage reports in the form of flame graphs that help identify bottlenecks in a program. As of now, Memray only works on Linux and cannot be installed on other platforms.
You can install the latest stable version from PyPI using the following pip command.
python3 -m pip install memray
You can use Memray in several ways. The easiest way is to use the CLI to run your application.
python3 -m memray run my_script.py
This generates a .bin file that can be rendered as a flame graph using the memray flamegraph command.
memray flamegraph my_script.2369.bin
This generates an HTML file with the flame graph, which you can inspect in your web browser. Here is an example.
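Besides the CLI, Memray also exposes a Python API: the memray.Tracker context manager records allocations to a capture file while its block runs. Below is a minimal sketch; the output file name and the workload are placeholders, and the import is guarded because Memray can only be installed on Linux.

```python
# Sketch: profiling a specific block of code with Memray's Python API.
try:
    import memray  # Linux-only; pip install memray
except ImportError:
    memray = None  # fall through so the demo still runs elsewhere

def allocate():
    # Placeholder workload: ~1 MiB of small allocations.
    return [bytes(1024) for _ in range(1000)]

if memray is not None:
    # Records every allocation made inside the block to output.bin,
    # which can then be rendered with `memray flamegraph output.bin`.
    with memray.Tracker("output.bin"):
        data = allocate()
else:
    data = allocate()
```

The resulting capture file is the same format the CLI produces, so all the reporting commands apply to it as well.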
The best part is that this memory profiler is open source, so you can contribute to it or modify it to suit your needs. Memory profiling is essential for proper resource usage: it lets us identify bottlenecks in our code, and resolving them can lead to significant optimization and, ultimately, reduced cost to the business.