As a C++ beginner coming from Java, I have become increasingly confused about memory management and how to avoid memory leaks. Is the code below risking a memory leak that I’m not currently aware of? Any help or constructive feedback would be greatly appreciated.
#pragma once

template <class T>
class DynamicArray {
private:
    T *m_arr;
    int m_length;   // amount of elements currently being stored in the array
    int m_capacity; // actual size of the array

public:
    DynamicArray();
    ~DynamicArray();
    T get(int index);              // O(1)
    void add(T obj);               // no need to push any objects forward, O(1)
    void insert(int index, T obj); // pushes forward all objects in front of the given index, then sets the obj at the given index, O(n)
    void set(int index, T obj);    // sets the given index of m_arr as obj, O(1)
    void remove(int index);        // removes the object at the given index and pushes all the array contents back, O(n)
    int size();                    // O(1)
    void print();
};
#include <iostream>
#include "Header.h"

template <class T>
DynamicArray<T>::DynamicArray() : m_arr(new T[1]), m_length(0), m_capacity(1) {}

template <class T>
DynamicArray<T>::~DynamicArray() {
    delete[] m_arr;
}

template <class T>
T DynamicArray<T>::get(int index) {
    if (index < m_length && index >= 0)
        return m_arr[index];
    else
        throw ("Index out of bounds!");
}

template <class T>
void DynamicArray<T>::set(int index, T obj) {
    if (index < m_length && index >= 0) {
        m_arr[index] = obj;
    }
    else
        throw ("Index out of bounds!");
}

template <class T>
void DynamicArray<T>::add(T obj) {
    if (m_length == m_capacity) {
        T *new_arr = new T[m_length * 2];
        for (int i = 0; i < m_length; i++) {
            new_arr[i] = m_arr[i];
        }
        delete[] m_arr;
        m_arr = new_arr;
        m_capacity = m_capacity * 2;
    }
    m_arr[m_length] = obj;
    m_length++;
}

template <class T>
void DynamicArray<T>::insert(int index, T obj) {
    if (index < m_length && index >= 0) {
        int size;
        if (m_length == m_capacity)
            size = m_length * 2;
        else
            size = m_capacity;
        T *new_arr = new T[size];
        for (int i = 0, j = 0; i < m_length; i++, j++) {
            if (i == index) {
                new_arr[j] = obj;
                j++;
            }
            new_arr[j] = m_arr[i];
        }
        delete[] m_arr;
        m_arr = new_arr;
        m_capacity = m_capacity * 2;
        m_length++;
    }
    else
        throw ("Index out of bounds!");
}

template <class T>
void DynamicArray<T>::remove(int index) {
    if (index < m_length && index >= 0) {
        T *new_arr = new T[m_capacity];
        for (int i = 0, j = 0; i < m_length; i++, j++) {
            if (i == index) i++;
            if (i < m_length)
                new_arr[j] = m_arr[i];
        }
        delete[] m_arr;
        m_arr = new_arr;
        m_capacity = m_capacity * 2;
        m_length--;
    }
    else
        throw ("Index out of bounds!");
}

template <class T>
int DynamicArray<T>::size() {
    return m_length;
}

template <class T>
void DynamicArray<T>::print() {
    std::cout << m_arr[0];
    for (int i = 1; i < m_length; i++) {
        std::cout << ", " << m_arr[i];
    }
}
Answer
Welcome to C++, and welcome to Code Review.
C++ memory management is, as you probably have realized, tough and
error-prone. There are many things that can easily go wrong.
Assuming that no exception is thrown, I don’t see obvious memory leaks
in your code; however, there are still some issues worth discussing.
You can take a look at my implementation of a non-resizable dynamic
array or a stack-based full-fledged vector for some
inspiration.
Special member functions
You have not defined a copy constructor or a copy assignment operator (nor any
move operations), so the compiler will synthesize copy operations that simply
copy all the members, which is completely wrong: the two dynamic arrays will
then point to the same memory. Not only are the elements shared between the
copies, so that modifying one array affects the other, but both copies will
attempt to free the same memory upon destruction, leading to a double-free
error, which is far more serious than a memory leak.
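For illustration, here is a rough sketch of what user-defined copy and move operations might look like inside the class (member names taken from your code; treat this as a starting point rather than a finished implementation):

// Copy constructor: allocate a separate buffer and deep-copy the elements,
// instead of sharing the pointer.
DynamicArray(const DynamicArray& other)
    : m_arr(new T[other.m_capacity]),
      m_length(other.m_length),
      m_capacity(other.m_capacity) {
    for (int i = 0; i < m_length; i++)
        m_arr[i] = other.m_arr[i];
}

// Copy assignment: allocate and fill the new buffer before freeing the old
// one, so the array is left untouched if the allocation throws.
DynamicArray& operator=(const DynamicArray& other) {
    if (this != &other) {
        T* new_arr = new T[other.m_capacity];
        for (int i = 0; i < other.m_length; i++)
            new_arr[i] = other.m_arr[i];
        delete[] m_arr;
        m_arr = new_arr;
        m_length = other.m_length;
        m_capacity = other.m_capacity;
    }
    return *this;
}

// Move constructor: steal the buffer and leave the source empty but safe
// to destroy.
DynamicArray(DynamicArray&& other) noexcept
    : m_arr(other.m_arr), m_length(other.m_length), m_capacity(other.m_capacity) {
    other.m_arr = nullptr;
    other.m_length = 0;
    other.m_capacity = 0;
}

// Move assignment is analogous: free the current buffer, take over the
// source's buffer, and null out the source.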
Initialization semantics
It is generally expected that the constructor of the element type is
called n times if n elements are pushed into the dynamic
array. In your code, however, this is not the case: the number of
constructor calls is determined by the capacity of the dynamic
array. Elements are first default-initialized and then copy-assigned
to.
The correct way to solve this problem requires allocating an
uninitialized buffer, and using placement new (or equivalent features)
to construct the elements, which is another can of worms.
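As a minimal sketch of that approach, independent of your class and using a hypothetical RawBuffer name, allocating raw storage and constructing elements with placement new could look like this:

#include <cstddef>
#include <new>

template <class T>
class RawBuffer {
public:
    explicit RawBuffer(std::size_t capacity)
        : m_data(static_cast<T*>(::operator new(capacity * sizeof(T)))),
          m_capacity(capacity),
          m_size(0) {}

    ~RawBuffer() {
        for (std::size_t i = 0; i < m_size; ++i)
            m_data[i].~T();            // destroy only the elements that were constructed
        ::operator delete(m_data);     // release the raw storage
    }

    // Assumes m_size < m_capacity; growth is omitted to keep the sketch short.
    void push_back(const T& value) {
        ::new (static_cast<void*>(m_data + m_size)) T(value);   // placement new
        ++m_size;
    }

    RawBuffer(const RawBuffer&) = delete;             // copying omitted in this sketch
    RawBuffer& operator=(const RawBuffer&) = delete;

private:
    T* m_data;
    std::size_t m_capacity;
    std::size_t m_size;
};

With this scheme, exactly one constructor of T runs per stored element, and the destructor runs only for elements that were actually constructed.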
Exception safety
Think about what happens when the construction of an element throws an
exception: your code will stop partway through, and the newly allocated
buffer will be leaked. Resolving this problem would require a manual try
block, or standard library facilities like
std::uninitialized_copy
(which essentially do the same thing under
the hood) if you switched to uninitialized buffers and manual
construction.
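A sketch of what an exception-safe reallocation could look like with that approach (a free function over raw storage, not a drop-in replacement for your current new T[] buffers):

#include <cstddef>
#include <memory>    // std::uninitialized_copy
#include <new>       // ::operator new / ::operator delete

// Copies `size` elements from `data` into a larger raw buffer. If a copy
// constructor throws, std::uninitialized_copy destroys whatever it had
// already constructed, so we only have to free the raw storage.
template <class T>
void grow(T*& data, std::size_t size, std::size_t& capacity) {
    const std::size_t new_capacity = capacity * 2;
    T* new_data = static_cast<T*>(::operator new(new_capacity * sizeof(T)));
    try {
        std::uninitialized_copy(data, data + size, new_data);
    } catch (...) {
        ::operator delete(new_data);
        throw;                         // propagate; the original array is still intact
    }
    for (std::size_t i = 0; i < size; ++i)
        data[i].~T();                  // destroy the old elements
    ::operator delete(data);
    data = new_data;
    capacity = new_capacity;
}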
Move semantics
All of the elements are copied every time, which is wasteful. Make
good use of move semantics when appropriate.
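For example, when relocating elements into a new buffer, they can be moved instead of copied. A sketch using std::move_if_noexcept, which falls back to copying if the element's move constructor might throw, so exception safety is preserved:

#include <new>        // placement new
#include <utility>    // std::move_if_noexcept

// Construct elements in `dest` by moving from [first, last) where that is
// safe, copying otherwise. C++17's std::uninitialized_move is similar,
// except that it always moves.
template <class T>
void relocate(T* first, T* last, T* dest) {
    for (; first != last; ++first, ++dest)
        ::new (static_cast<void*>(dest)) T(std::move_if_noexcept(*first));
}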
Miscellaneous
Use std::size_t instead of int to store sizes and indexes.¹
get, size, and print should be const. Moreover, get should return a const T&. In fact, get and set would idiomatically be replaced by operator[].
Don’t throw a const char*. Use a dedicated exception class like std::out_of_range instead.
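A rough sketch combining the last two points, inside the class and reusing your member names: const-correct operator[] overloads plus a checked at that throws std::out_of_range, as std::vector does.

#include <cstddef>     // std::size_t
#include <stdexcept>   // std::out_of_range (include these in the header)

// Unchecked element access, const and non-const overloads.
T& operator[](std::size_t index) { return m_arr[index]; }
const T& operator[](std::size_t index) const { return m_arr[index]; }

// Checked access that throws a proper exception type.
const T& at(std::size_t index) const {
    if (index >= static_cast<std::size_t>(m_length))
        throw std::out_of_range("DynamicArray: index out of bounds");
    return m_arr[index];
}

std::size_t size() const { return static_cast<std::size_t>(m_length); }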
Manual loops like

for (int i = 0; i < m_length; i++) {
    new_arr[i] = m_arr[i];
}

are better replaced with calls to std::copy (or std::move).
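For example, with <algorithm> included, the copy loop in add collapses to a single call:

std::copy(m_arr, m_arr + m_length, new_arr);
// or, since the old buffer is about to be discarded anyway:
std::move(m_arr, m_arr + m_length, new_arr);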
Re-allocating every time insert is called doesn’t seem like a good idea. A better trade-off might be to append the element and then std::rotate it into the correct position (assuming rotation doesn’t throw).
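A sketch of insert written that way, reusing add for the growth logic (note that this version also accepts index == m_length, i.e. inserting at the end, unlike your current one):

#include <algorithm>   // std::rotate
#include <stdexcept>   // std::out_of_range

template <class T>
void DynamicArray<T>::insert(int index, T obj) {
    if (index < 0 || index > m_length)
        throw std::out_of_range("DynamicArray: index out of bounds");
    add(obj);   // append at the end, growing the buffer only if necessary
    // Rotate the newly appended element into place; everything from `index`
    // onwards shifts one slot to the right.
    std::rotate(m_arr + index, m_arr + m_length - 1, m_arr + m_length);
}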
Also, print might take a std::ostream& (or perhaps std::basic_ostream<Char, Traits>&) argument for extra flexibility.
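For instance (the declaration in the header would need to change accordingly; this sketch also returns early when the array is empty instead of printing m_arr[0]):

#include <ostream>

template <class T>
void DynamicArray<T>::print(std::ostream& os) const {
    if (m_length == 0)
        return;                        // nothing to print
    os << m_arr[0];
    for (int i = 1; i < m_length; i++)
        os << ", " << m_arr[i];
}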
¹ As Andreas H. pointed out in the comments, this recommendation is subject to debate, since unsigned arithmetic has its pitfalls. An alternative is to use std::ptrdiff_t and std::ssize (C++20) instead. You can write your own version of ssize as shown on the cppreference page if C++20 is not available to you.
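If you go that route before C++20, a hand-rolled ssize along the lines of the cppreference possible implementation looks roughly like this:

#include <cstddef>       // std::ptrdiff_t
#include <type_traits>   // std::common_type_t, std::make_signed_t

// Returns the size of a container as a signed value.
template <class C>
constexpr auto ssize(const C& c)
    -> std::common_type_t<std::ptrdiff_t,
                          std::make_signed_t<decltype(c.size())>> {
    using R = std::common_type_t<std::ptrdiff_t,
                                 std::make_signed_t<decltype(c.size())>>;
    return static_cast<R>(c.size());
}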
Attribution
Source: Link, Question Author: ethan warco, Answer Author: L. F.