On the Importance of Understanding Memory Handling
27. 10. 2020
6 min.
Cryptographic researcher and implementer
One concept that can leave developers really scratching their heads is memory, and how programming languages interact with it. The programs we write constantly allocate portions of memory on our behalf, yet we often struggle to grasp how this actually happens.
What is memory?
Memory can be described as the place where information is stored for later retrieval, either permanently (until it is manually deleted) or temporarily (the machine deletes it for you). This stored information covers virtually every action we carry out while interacting with the machine. When a program such as an Internet browser is opened, for example, it is loaded from permanent storage (the hard drive) into volatile storage (the RAM).
Main memory, also known as RAM, is the internal memory that a machine uses, as opposed to external storage such as USB sticks or disk drives. This is the memory the machine works with directly, and into which all programs are loaded for their execution. Sometimes, a whole program is loaded into memory; other times, only a certain routine from it is loaded when needed. This mechanism is referred to as dynamic loading, and if the loaded program depends on another program, the mechanism for linking the dependent programs into the main loaded one is called dynamic linking.
Because memory touches every process the machine runs, managing it properly is vital, and modern operating systems have complex mechanisms for doing so. This is referred to as memory management: the process by which memory is controlled and coordinated at various levels (the hardware, the operating system, and programs).
Here, we will focus on how memory is handled at the operating system (OS) and programming levels. At the OS level, memory management involves allocating specific blocks, which can be understood as spaces or locations, to individual requests made by programs. At the programming level, it involves sending requests for memory space to the OS and ensuring that the objects and data structures defined by the program have adequate memory available (proper allocation, reallocation, and freeing). When a program requests a block of memory, an “allocator” assigns that block; when the program no longer needs it, the block is freed for reassignment. This can be done manually or automatically, depending on the programming language chosen, the access to certain features of that language, and the careful use of the language’s capabilities by the programmer.
Manual memory management can be defined as the programmer explicitly writing the instructions that allocate and free blocks of memory. Famously, the C programming language employs this technique through its dynamic memory allocation facilities. However, the majority of today’s popular languages use automatic memory handling, in the form of garbage collectors or Automatic Reference Counting (ARC), the latter widely popularized by Objective-C and Swift.
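To make this concrete, here is a minimal C sketch of the manual approach, in which the program requests a block from the allocator with malloc and hands it back with free:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Ask the allocator for a block big enough for 4 integers. */
    int *numbers = malloc(4 * sizeof *numbers);
    if (numbers == NULL) {
        return 1; /* The allocator could not provide the block. */
    }

    for (int i = 0; i < 4; i++) {
        numbers[i] = i * i;
    }
    printf("%d\n", numbers[3]); /* Prints 9. */

    /* Hand the block back so it can be reassigned. */
    free(numbers);
    return 0;
}
```

Every malloc must eventually be matched by exactly one free; the sections below show what can happen when that contract is broken.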
What can go wrong?
If memory is not handled correctly, things can go really wrong, as the process of allocating and freeing blocks can become corrupted. Looking at the bigger picture, this might not seem such a terrible scenario, since the normal state of the memory will eventually be restored. But machines run hundreds of processes at the same time and cannot afford to wait until a normal state is reached again. Eventually, if memory is not correctly handled, programs will run out of blocks in which to store the information they need to function. Moreover, if the stored information contains sensitive data, such as passwords, keys, or other private details, attackers can try to steal that data from memory that is incorrectly released or allocated.
Below are the most common problems that can arise from incorrect memory handling:
Arithmetic or integer overflows
These are arithmetic calculations whose result falls outside the range the allocated storage was defined for. For example, a program might specify that a number will occupy 8 bits of memory, allowing only values from -128 to +127. If the programmer assigns it the value 127 and later tries to add 1, the operation misbehaves, as 8 bits cannot represent the number 128.
The bug was defined by Brumley, Chiueh and Johnson in 2012, who described it as occurring “when a variable value goes out of the range of the machine word used to materialize it.” This can happen for various reasons—overflows, underflows, truncations, or signedness errors—mainly because the semantics of integer operations are not clearly defined in the programming language, so programmers have a hard time reasoning about them. Languages take different approaches to the problem: Smalltalk and Scheme, for example, automatically promote integers to a larger representation, while others leave correcting it to the programmers themselves.
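Here is a minimal C sketch of the 8-bit example above. Strictly speaking, converting an out-of-range value back into a signed 8-bit integer is implementation-defined, but on typical two’s-complement machines it wraps around:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int8_t counter = 127;    /* INT8_MAX: the largest value 8 signed bits can hold. */
    counter = counter + 1;   /* 128 does not fit back into 8 bits; on typical
                                two's-complement machines the value wraps to -128. */
    printf("%d\n", counter); /* Prints -128, not the 128 the programmer expected. */
    return 0;
}
```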
Memory leaks
When a program requests memory from the OS and never releases it—that is, never tells the OS that the memory is free to be reused—a memory leak occurs, and the program will eventually run out of memory to use. The same incorrect behavior arises when an object is still stored in memory but can no longer be reached by the running code.
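A minimal C sketch of a leak (the helper name leak_once is made up for illustration): the only pointer to each block is lost when the function returns, so the blocks can never be freed:

```c
#include <stdlib.h>

/* Allocates a block and then loses the only pointer to it. */
void leak_once(void) {
    int *block = malloc(1024 * sizeof *block);
    if (block == NULL) return;
    block[0] = 42;
    /* Missing free(block): the OS is never told this block is reusable. */
}

int main(void) {
    for (int i = 0; i < 1000000; i++) {
        leak_once(); /* Each call strands another ~4 KB block. */
    }
    return 0;
}
```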
Segmentation faults
Segmentation faults happen when a program tries to access memory it does not have permission to access—for instance, memory that has already been allocated for other purposes—or when it attempts to access a block of memory in a way that is not allowed, such as trying to write to a read-only location. This results in the program hanging, crashing, or shutting down.
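A classic C sketch of the read-only case: string literals are typically placed in a read-only segment, so writing to one is undefined behavior and crashes with a segmentation fault on most platforms:

```c
#include <stdio.h>

int main(void) {
    char *message = "hello"; /* String literals usually live in read-only memory. */
    message[0] = 'H';        /* Writing to a read-only page: segmentation fault. */
    printf("%s\n", message); /* Never reached on most systems. */
    return 0;
}
```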
Buffer overflows
These arise when a program writes data past the end of its allocated space, spilling over into memory that has been allocated for other purposes or into locations it is not permitted to write to. Buffer overflows also result in programs hanging, crashing, or shutting down, and they can cause security breaches.
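A minimal C sketch of the classic case: copying a string into a buffer that is too small writes past the buffer’s end:

```c
#include <string.h>

int main(void) {
    char name[8]; /* Room for 7 characters plus the terminating '\0'. */
    strcpy(name, "this string is far too long"); /* Writes past the end of name. */
    /* A safer habit is to bound the copy to what actually fits:
       strncpy(name, "this string is far too long", sizeof name - 1);
       name[sizeof name - 1] = '\0'; */
    return 0;
}
```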
Double delete
This occurs when a program frees memory that has already been freed, and it can result in heap corruption or segmentation faults. It can be considered a subset of the segmentation faults problem.
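A minimal C sketch of a double free—undefined behavior that allocators often detect as heap corruption:

```c
#include <stdlib.h>

int main(void) {
    int *data = malloc(16 * sizeof *data);
    if (data == NULL) return 1;

    free(data);
    free(data); /* Double free: undefined behavior, often heap corruption or an abort. */

    /* A defensive habit is to null the pointer after freeing,
       since free(NULL) is guaranteed to be a no-op:
       free(data); data = NULL; */
    return 0;
}
```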
Manual vs automatic memory handling
The usual question programmers need to answer when dealing with memory is how it will be handled: given that the OS has memory available to allocate to programs, does the language chosen for the software take a manual or an automatic approach? And, more importantly, what will that entail?
Manual memory management refers to the actions programmers must take themselves to handle memory when using a specific language. In contrast, automatic memory management means that the programmer has to take little to no action when dealing with memory. When we talk about “handling” or “dealing with” memory here, we mean either allocating or reallocating needed portions, or freeing up memory that is considered “garbage.” Up until the mid-1990s, the majority of languages supported manual memory management, and even today many languages expose it (through keywords such as “new” and “alloc”).

This persists because object creation—allocating memory for an object—is easy: the programmer knows the size, name, and initialization needs of the object at the time it is created. Object destruction is harder, as the programmer might no longer know the size of the object, since the destruction code is written long after the object’s creation. Furthermore, they might be unaware of exactly when the object should be destroyed, as other routines that depend on the object may still be running. Failing to correctly initialize or destroy an object can lead to the kinds of memory mishandling discussed earlier. What happens then depends on the language specification: most of the time, the result is “undefined behavior”—in other words, behavior that is unpredictable. (Note that precise use of manual memory management is deterministic, as the programmer always knows when the object was created and when it was freed.)
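A short C sketch of the destruction problem: one part of the program decides an object is no longer needed and frees it, while another part still holds a pointer to it—a use after free, which is undefined behavior:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *shared = malloc(sizeof *shared);
    if (shared == NULL) return 1;
    *shared = 7;

    free(shared);            /* One routine decides the object is done with... */
    printf("%d\n", *shared); /* ...while another still reads it: use after free. */
    return 0;
}
```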
In 1959, a new idea for handling memory—garbage collection—was introduced in the programming language Lisp. The best-known example of automatic memory management, this method finds objects that cannot be used in the future and frees them, making their memory available for reuse. The technique often results in better memory handling, as the number of bugs is reduced. Strategies for implementing garbage collection include tracing, reference counting, and timestamp and heartbeat.
Other ways to achieve automatic memory handling include using stack-based memory allocation, region-based memory management, and ARC. However, these all have some performance problems and create a non-deterministic state, as the programmer is unaware of exactly when an object is released.
Of course, both manual and automatic memory management are still widely used by programming languages today: the former mostly by the C family, the latter by Lisp, Java, and many others. In addition, languages often mix the two techniques: as mentioned, many require manual allocation of blocks but expect the garbage collector to free them.
Conclusion
As we have seen, computers give programmers the ability to feel like “kings of the universe,” thanks to the way we use machines to solve complex problems. However, as we have also noted, this universe is a bounded one, constrained by its limits, enclosed by its restrictions. And one of those limits is the amount of memory available. But as Hamlet says, we, as programmers, “could be bounded in a nutshell” and still count ourselves kings of infinite space.
This article is part of Behind the Code, the media for developers, by developers. Discover more articles and videos by visiting Behind the Code!
Illustration by Blok