
Reimplement all JavaScript Array Functions with while loops only

· 33 min read

Last month, out of curiosity, I reimplemented all 38 JavaScript Array functions (e.g. .forEach(), .map(), .sort(), etc.) with while loops only. As always, I learned something as I worked through all the functions: features, edge cases, and considerations of JavaScript that I previously did not know about.

Other than that, I discovered some less-known functions, or at least functions I had never used before, like .copyWithin(), .splice() and .with().

To be honest, most of the implementation is tedious and repetitive. A lot of effort goes into handling edge cases, which is where the interesting lessons came from. The implementation itself is not as interesting as the random things I learned, so I will share the lessons first. But if you really want to, you can skip to the implementation considerations, where I explain the setup and how some functions are implemented, especially interesting ones like .flat() and .sort().

The full code is available in this repo.

What I learned

Sparse Arrays

The first thing I discovered, and the major source of pain in this experiment, is sparse arrays. A sparse array can have empty slots in between its elements. Empty slots are different from undefined or null: an undefined value in an array is still a value, while an empty slot holds nothing at all. You can define a sparse array like this:

let a = [1, , 3];
console.log(a);
// [1, <empty>, 3]

Or if you set the length of an array to be larger than its original length, the new items are all empty.

let a = [1];
a.length = 2;
console.log(a);
// [1, <empty>]

But if you access an empty slot with square brackets, it returns undefined.

let a = [1, , undefined];
console.log(a[0]); // 1
console.log(a[1]); // undefined
console.log(a[2]); // undefined

So the only way to know whether an index holds an empty slot is the in operator. If it returns false, the slot is empty.

let a = [1, , undefined];
console.log(0 in a); // true
console.log(1 in a); // false
console.log(2 in a); // true

To make an item empty, you cannot reassign "an empty value". You have to delete it from the array.

let a = [1];
console.log(a); // [1]

delete a[0];
console.log(a); // [<empty>]

There are many more tricky things that come with sparse arrays. For example, most of the functions that take a callback, like .forEach(), .filter(), and .reduce(), skip empty items in the array. But although they skip invoking the callback, functions like .map(), .sort() and .reverse() preserve the empty items in their output.

const out = [1, , 3].map((e) => {
  console.log(e);
  return e ** e;
});
console.log(out);
// 1
// 3
// [1, <empty>, 27]

Functions like .at(), on the other hand, treat empty slots as undefined. And newer functions that create a copy of the original array, like .toSorted() and .toReversed(), convert empty items to undefined. Yes, it is pretty inconsistent.

console.log([1, , 3].at(1));
// undefined

console.log([1, , 3].toReversed());
// [3, undefined, 1]

Also, .sort() takes a compare function to sort the elements. But the compare function is never invoked for empty items or undefined. All undefined values and empty items are sorted to the end of the array, with undefined coming first and empty items last.

console.log([1, , 3, undefined, 2].sort());
// [1, 2, 3, undefined, <empty>]

I did not know that arrays can have empty slots that are different from undefined; I previously thought undefined meant empty. I cannot think of a use case for sparse arrays, except maybe a marginal gain in memory. Anything a sparse array is useful for can be done with undefined as the value. In addition, the delete and in operations are not what developers are used to when dealing with arrays. I do not think using sparse arrays is a good practice.

Arrays are Objects

The reason you can use delete and in to check whether an item is empty in an array is that arrays are just objects. Each index and value in an array is a key-value pair, and the array object implements helpers on top of this to support the array functionality. In fact, functions and sets are also objects, and they can inherit methods and properties too.

let a = [1, 2, 3];
console.log(typeof a);
// 'object'

let f = () => {};
console.log(typeof f);
// 'function' (functions get their own typeof result, but they are still objects)

Therefore, when the length of the array is modified to be larger than the original, the new keys do not exist yet, hence the values are empty and the array is sparse. On the other hand, when the length is shortened, the array object automatically cleans up keys that are equal to or larger than the new length.
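For example (my own illustration, not from the repo): shrinking the length deletes the trailing keys, and growing it back does not recreate them.

```javascript
// Shrinking length removes keys; growing it creates empty slots
let a = [1, 2, 3];
a.length = 2;
console.log(a); // [1, 2]
console.log(2 in a); // false — the key "2" was removed

a.length = 3;
console.log(2 in a); // false — the new slot is empty, not undefined
console.log(a[2]); // undefined — but only when accessed
```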

One consequence of arrays being objects is the time complexity of these functions. For example, accessing any value of an array by index is O(1) because it is just a key-value store under the hood. This is different from lists in some languages that are linked lists, where accessing an item by index usually requires traversing the list from the start.

Another interesting consequence is that .pop() is O(1) while .shift() is O(n). .pop() removes the last item in the array; it only needs to delete that item and decrement the length by 1. .shift() removes the first item, and every other item then needs to be shifted one index earlier.
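As an illustration (my own sketch, not the repo's implementation, and ignoring sparse-array handling), a while-loop .shift() has to move every remaining element:

```javascript
// A while-loop sketch of .shift(): O(n) because every remaining
// element moves one index earlier (sparse arrays ignored here)
const shift = (array) => {
  if (array.length === 0) return undefined;
  const first = array[0];
  let i = 1;
  while (i < array.length) {
    array[i - 1] = array[i];
    i++;
  }
  array.length--; // drop the now-duplicated last slot
  return first;
};
```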

Copy of a Reference

In JavaScript, objects are neither passed by value nor passed by reference; they are passed by "copy of a reference". I knew there was something different but was never able to remember it until now.

First, in languages with pointers, like C++, pass by value means the variable is copied to another place in memory. Modifying it in the function does not change the original variable that was passed in. On the other hand, pass by reference means the variable is not copied; a reference to the variable is passed in, and modifying it in the function changes the original variable outside the scope of the function.

For JavaScript, it is a bit different. It is called copy of a reference. Kind of like a mixture of the two. When you pass an object to a function, you are passing a copy of the reference to the object. This means if you change the properties of the object in the function, it will also change the object outside the function. But if you reassign the object to a new object, it will not change the object outside the function.

let a = [1, 2, 3];

console.log(a); // [1, 2, 3]

let foo = (arr) => {
  arr[0] = 4;
  console.log(arr); // [4, 2, 3]

  arr = [5, 6, 7];
  console.log(arr); // [5, 6, 7]
};

foo(a);

console.log(a); // [4, 2, 3]

I knew this before, but now I finally can remember it. This is important because some functions like .map() and .filter() will return a new array and do not modify the original array. But some functions like .sort() and .reverse() modify the original array.

Iterator and Generator

I learned about iterators and generators when I had to implement .entries(), .keys(), and .values(). I had never used them before, but now I know what they are and what they mean.

An iterator is an object with a next() method that returns an object with value and done properties. value is the next value in the iteration, and done is a boolean that indicates whether the iteration is finished.

  • .entries(): iterator that returns an array with the index and value of each item
  • .keys(): iterator that returns the index of each item
  • .values(): iterator that returns the value of each item

For example,

let a = ["a", "b", "c"];

let entries = a.entries();
console.log(entries.next()); // { value: [0, 'a'], done: false }
console.log(entries.next()); // { value: [1, 'b'], done: false }
console.log(entries.next()); // { value: [2, 'c'], done: false }
console.log(entries.next()); // { value: undefined, done: true }

let keys = a.keys();
console.log(keys.next()); // { value: 0, done: false }
console.log(keys.next()); // { value: 1, done: false }
console.log(keys.next()); // { value: 2, done: false }
console.log(keys.next()); // { value: undefined, done: true }

let values = a.values();
console.log(values.next()); // { value: 'a', done: false }
console.log(values.next()); // { value: 'b', done: false }
console.log(values.next()); // { value: 'c', done: false }
console.log(values.next()); // { value: undefined, done: true }

A generator, on the other hand, is a function that returns an iterator. It is defined with an asterisk: function*. Inside a generator, you can use the yield keyword to return a value for each next() call on the iterator. The function pauses at the yield keyword and resumes when next() is called again. It took me a while to understand how it works, but it is a powerful tool for creating custom iterators. I used generators to implement all three functions.

For example, something that works like .values() is just:

function* values(array) {
  let i = 0;
  while (i < array.length) {
    yield array[i++];
  }
}

Generators are a very simple and neat way to create iterators. One tricky thing when implementing these three iterators is that empty slots in sparse arrays are treated as undefined. So .keys() returns the index of each empty item, and .values() returns undefined for the empty items.
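For example, spreading the built-in iterators of a sparse array:

```javascript
// .keys() still yields the empty index; .values() yields undefined for it
const sparse = [1, , 3];
console.log([...sparse.keys()]);   // [0, 1, 2]
console.log([...sparse.values()]); // [1, undefined, 3]
```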

Using .call()

While implementing functions like .map() that take a callback function, I realized they also take a second argument called thisArg. For example, given a function foo used in array.map(foo), if you reference this inside the callback foo, the value of this is undefined. And in a non-strict environment, undefined is replaced by globalThis. You may read more about it in the MDN docs.

I do not care that much about the correctness of the this value here, but I did want to know how to specify a this value when calling a function. I discovered three methods provided by the Function prototype: .call(), .apply() and .bind(). All functions in JavaScript inherit these three methods. Out of all the tutorials I read, I think W3Schools has the clearest and most concise explanation:

  • .call(): Calls a function with a given this value and arguments (W3Schools)
  • .apply(): Calls a function with a given this value and arguments as an array (W3Schools)
  • .bind(): Returns a new function with a given this value (W3Schools)
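A quick illustration of the three (with a made-up greet function and user object of my own; this is not from the repo):

```javascript
// greet relies on `this`, so it must be a regular function, not an arrow
function greet(greeting, punctuation) {
  return `${greeting}, ${this.name}${punctuation}`;
}

const user = { name: "Ada" };

console.log(greet.call(user, "Hello", "!"));    // 'Hello, Ada!'
console.log(greet.apply(user, ["Hello", "!"])); // 'Hello, Ada!'

const bound = greet.bind(user, "Hi"); // returns a new function with this fixed
console.log(bound("?"));              // 'Hi, Ada?'
```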

And this is how I make use of .call() in my implementation:

// For example .map()
(array, callback, thisArg) => {
  let output = [];

  // loop the array
  let i = 0;
  while (i < array.length) {
    // Skip invoking on an empty element, but still create an empty element
    if (i in array) {
      // Using call to specify thisArg
      // instead of just callback(array[i], i, array)
      output[output.length] = callback.call(thisArg, array[i], i, array);
    } else {
      output.length++;
    }
    i++;
  }

  return output;
};

I did not test this thisArg handling, and I doubt it actually works, because I switched my implementation to a module of functions rather than methods on a class (I will explain below), so the this object might not apply in that case. But nevertheless, I learned about .call(), .apply() and .bind().

Equality

As you might already know, what counts as "equal" in JavaScript is tricky. There is loose equality (==) and strict equality (===). Among these Array functions, there are also subtle differences in what is "equal" to what. One example is using .indexOf() and .includes() to find an element in an array.

There are a few ways to determine whether an element is in an array. You can use .find() or .findIndex(), which take a callback testing function, so you control how the equality comparison works. But if you do not want to write a callback, you can use .indexOf() to find the index of an element equal to the provided value (-1 if not found), or .includes() to check whether the array includes the provided value (false if not found). However, there is a subtle difference between the equality used by .indexOf() and .includes().

.indexOf() uses strict equality (===), which, unlike loose equality (==), does not coerce types: values of different types are never equal. Additionally, unlike null or undefined, NaN is not strictly equal to itself, so searching for NaN in an array with .indexOf() always returns -1. This design, where NaN is not equal to itself, is consistent with many other languages and follows the IEEE 754 standard.

null === null; // true
undefined === undefined; // true
NaN === NaN; // false

[NaN].indexOf(NaN); // -1

However, .includes() uses same-value-zero equality, where all zero values are considered equal (0 equals -0) and NaN equals NaN. I referenced the implementation of sameValueZero from the MDN docs linked above.

sameValueZero(0, 0); // true
sameValueZero(NaN, NaN); // true

[NaN].includes(NaN); // true
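For reference, the sameValueZero helper used above can be sketched like this (my paraphrase of the MDN description, not the exact code from MDN or the repo):

```javascript
// Same-value-zero: strict equality, except NaN equals NaN
// (0 and -0 are already === to each other, so no extra handling needed)
function sameValueZero(x, y) {
  return x === y || (Number.isNaN(x) && Number.isNaN(y));
}
```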

So if you need to check whether there is a NaN in an array, you should use .includes() instead of .indexOf(). If you want to read more about equality in JavaScript, this MDN page explains it well, with a table listing what is or is not equal under each way of evaluating equality.

By the way, another subtle difference is that .indexOf() skips empty slots in sparse arrays while .includes() treats empty slots as undefined. And for all the find... methods like .find() and .findIndex(), the callback function is invoked for every index, unlike the rest of the iterative methods like .map() and .forEach() that skip empty slots. Yet another edge case with sparse arrays.

[,].indexOf(undefined); // -1
[,].includes(undefined); // true

Negative Indexes

Another thing I learned is that most of the methods that take an index as input support negative indexes. For example, you can do array.at(-1) to get the last element, but you cannot do array[-1], which is equivalent to array["-1"] (and most likely undefined).

Each function also has special handling for indexes less than -array.length or greater than array.length. For example, .at() returns undefined if the index is outside the range from -array.length to array.length - 1.

There are more complicated examples like .slice(), which takes a start index and an end index and slices the array. If the start index is negative, it is converted to count from the end of the array; but if it is less than -array.length, 0 is used as the start index. And vice versa for the end index: if it is greater than array.length, array.length is used as the end index. For these functions that take two indexes, there is also special handling for the order of the indexes: for .slice(), if the end index comes before the start index after the conversion, an empty array is returned. If this sounds confusing, I agree. I suggest reading it again in point form in the MDN docs.
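Putting those clamping rules together:

```javascript
const letters = ["a", "b", "c", "d", "e"];

console.log(letters.slice(-2));    // ['d', 'e'] — negative start counts from the end
console.log(letters.slice(-10));   // ['a', 'b', 'c', 'd', 'e'] — start clamped to 0
console.log(letters.slice(1, 99)); // ['b', 'c', 'd', 'e'] — end clamped to length
console.log(letters.slice(3, 1));  // [] — end before start after conversion
```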

Locales

I also learned about locales while reading about .toLocaleString(). It helps convert numbers, dates, letter formats, etc. This is probably going too far from Arrays, but still, I didn't know JavaScript supported this many variations.

[1, 2, 3].toLocaleString("en-US", { style: "currency", currency: "USD" });
// '$1.00,$2.00,$3.00'
[1, 2, 3].toLocaleString("en-US", { style: "currency", currency: "AUD" });
// 'A$1.00,A$2.00,A$3.00'
[1, 2, 3].toLocaleString("en-US", { style: "currency", currency: "EUR" });
// '€1.00,€2.00,€3.00'
[1, 2, 3].toLocaleString("en-US", { style: "currency", currency: "JPY" });
// '¥1,¥2,¥3'

For more you can read, again, MDN docs on the Intl global object.

Less known functions

There are a few functions that I think are less known. At least I had not used them before and would like to share them.

.copyWithin()

copyWithin(target, start);
copyWithin(target, start, end);
  • It is used to copy elements within the array by specifying indexes
  • It takes three indexes as arguments: target, start and end
  • It copies the elements from start to end to the target index
  • It modifies the array and also returns the resulting array
["a", "b", "c", "d", "e"].copyWithin(1, 3);
// Copy the elements from index 3 to the end to index 1
// ["a", "d", "e", "d", "e"]

["a", "b", "c", "d", "e"].copyWithin(1, 2, 5);
// Copy the elements from index 2 to 5 (exclusive) to index 1
// ["a", "c", "d", "e", "e"]

I am not sure when it is useful. I guess it helps when you need to rearrange elements within an array in such a way.

.splice()

splice(start);
splice(start, deleteCount);
splice(start, deleteCount, ...items);
  • It is used to delete some elements and insert new items where the elements were deleted
  • It is like a combination of .slice() and .concat()
  • It takes three arguments: the starting index, the delete count and a list of new items
  • It modifies the array and returns the deleted elements
  • If the delete count is not specified, it deletes until the end
const a = ["a", "b", "c", "d", "e"];
// Delete from index 1 to the end
console.log(a.splice(1)); // ['b', 'c', 'd', 'e']
console.log(a); // ['a']

const b = ["a", "b", "c", "d", "e"];
// Delete 2 elements starting at index 1
console.log(b.splice(1, 2)); // ['b', 'c']
console.log(b); // ['a', 'd', 'e']

const c = ["a", "b", "c", "d", "e"];
// Delete 2 elements starting at index 1
// And insert "i", "j" and "k" where the elements were deleted
console.log(c.splice(1, 2, "i", "j", "k")); // ['b', 'c']
console.log(c); // ['a', 'i', 'j', 'k', 'd', 'e']

This is a useful and efficient way to remove or insert items in the middle of an array, even though its functionality is not obvious from the name.

.shift() and .unshift()

  • .shift() is like .pop() but removes the first element instead of the last element
  • .unshift() is like .push() but inserts the elements at the beginning of the array
  • I also learned that both .unshift() and .push() can take more than one element
  • And that both return the new length of the array
const a = [1, 2, 3, 4, 5];
console.log(a.pop()); // 5
console.log(a); // [1, 2, 3, 4]

const b = [1, 2, 3, 4, 5];
console.log(b.shift()); // 1
console.log(b); // [2, 3, 4, 5]

const c = [1, 2, 3, 4, 5];
console.log(c.push(6, 7)); // 7
console.log(c); // [1, 2, 3, 4, 5, 6, 7]

const d = [1, 2, 3, 4, 5];
console.log(d.unshift(-1, 0)); // 7
console.log(d); // [-1, 0, 1, 2, 3, 4, 5]

And because of how arrays are objects behind the scenes, .pop() is O(1) but .shift() is O(n), as explained above.

.toReversed(), .toSorted() and .toSpliced()

There are .reverse(), .sort(), and .splice() that modify the array in place. There are also .toReversed(), .toSorted() and .toSpliced() that are functionally the same but create a new array instead of modifying the original one.

There are also subtle differences in the to... versions: these methods never produce sparse arrays. They convert empty slots into undefined, while the original .reverse(), .sort() and .splice() do not. I really wanted to know why this difference exists, so I went back to the original TC39 array by copy proposal and eventually found this answer. In summary, these functions could either be consistent with their counterparts, which treat empty slots as empty slots, or be consistent with all other features since ES6, which do not produce sparse arrays. Either way breaks consistency somewhere, and they decided to stay consistent with the ES6 standard.

[1, 2, 3, ,].reverse(); // [<empty>, 3, 2, 1]
[1, 2, 3, ,].toReversed(); // [undefined, 3, 2, 1]

There are repeated discussions about these inconsistencies in the proposal; it's an interesting read. It shows how many problems sparse arrays create for the JavaScript language designers.

.with()

with(index, value)
  • .with() creates a new array with one element changed
  • It is like the copying version of the bracket assignment notation array[3] = 'a'
  • It was added as part of the same array by copy proposal, but does not use the same to... naming pattern, and people did ask why
const a = [1, 2, 3];
const b = a.with(1, 0);
console.log(a); // [1, 2, 3]
console.log(b); // [1, 0, 3]

Implementation Considerations

So that is most of what I learned while working on this little challenge. If you still find it interesting, here is how I actually did it, and how I tested it to have at least some certainty that the majority of the functionality is correct.

Here is the link to the repo if you are interested.

The setup

The major change I made is that I wrote an object (MyArray) whose properties are functions that take the array as the first argument (e.g. MyArray.at(array, 0)), rather than defining those functions as methods of an object (e.g. array.at(0)). Therefore, the first argument of all the functions is always the input array. With that, I do not need to override the original methods. The built-in Array class is still used to construct the arrays, but none of the prototype functions are modified or used.

const array = ["a", "b", "c"];

// original function
const expected = array.join();

// my function
const actual = MyArray.join(array);

This makes it easier to test too. The actual value can be directly compared with the expected value.

For functions that modify the array instance, I can use Array.from() to create copies and compare.

const array = ["c", "b", "a"];
const expected = Array.from(array);
const actual = Array.from(array);

expected.sort();
MyArray.sort(actual);

I used bun to run and test the code. It is fast, simple to use, supports TypeScript, and provides a test runner out of the box.

In the project, I had to rely on some basic properties of the Array object. I need to get and set the .length property; as far as I can tell, that is the only way to read and manipulate the length. I use square brackets [] to access and assign items in the array, along with the in and delete operators to check whether an item exists and to remove items, respectively. And last but not least, while to loop through the array.

At first, I thought of using Array.reduce to replicate the functionality of all the other functions. This was inspired by functional programming languages, where a reduce (fold) can replicate any operation done recursively on a list. But once I learned about sparse arrays and copy of a reference, I realized it is near impossible, if not actually impossible, to do so. So I changed my mind and used a while loop instead. Why not a for loop, you may ask? Because then there is the for...of loop, which uses the iterator behind the scenes without the need for an index. For the implementation to be more simple and raw, I decided to iterate with an index instead, using a single keyword: while. And thus, hopefully, a catchier blog title :P

As I said, most of the implementations are just tedious. They are especially tedious when multiple functions share similar functionality.

Similar functions

.find(), .findIndex(), .findLast(), and .findLastIndex() are all very similar. The only differences are whether the loop runs from the start or the end, and whether the element or the index is returned once the callback returns true.

{
  find: <T>(
    array: Array<T>,
    callback: (element: T, index: number, array: Array<T>) => boolean,
    thisArg?: any
  ) => {
    let i = 0;

    // loop the array
    // find does not skip empty values, but treats them the same as undefined
    while (i < array.length) {
      const result = callback.call(thisArg, array[i], i, array);

      if (result) {
        return array[i];
      }
      i++;
    }

    return undefined;
  },
}

Once I learned about iterators and generators, .entries(), .keys(), and .values() were very similar too. The only difference is the return value.

{
  entries: function* entries<T>(array: Array<T>): ArrayIterator<[number, T]> {
    let i = 0;
    while (i < array.length) {
      yield [i, array[i++]];
    }
  },
}

.some() and .every() are like opposites of each other. They both skip empty elements in sparse arrays. One key thing is that they both return early: .some() when it finds true, and .every() when it finds false.

{
  every: <T>(
    array: Array<T>,
    callback: (element: T, index: number, array: Array<T>) => boolean,
    thisArg?: any
  ) => {
    let i = 0;
    let every = true;

    // loop the array
    while (i < array.length) {
      // Skip empty element
      if (i in array) {
        const result = callback.call(thisArg, array[i], i, array);

        if (!result) {
          // If one is false, stop looping
          every = false;
          break;
        }
      }
      i++;
    }

    return every;
  },
}

Despite some differences in what is returned, iterative functions like .filter(), .forEach(), .map(), .reduce(), and .reduceRight() are all similar. They all skip empty elements in sparse arrays, and use callback.call() to invoke the callback. The only difference is how the output is handled.

{
  filter: <T>(
    array: Array<T>,
    callback: (element: T, index: number, array: Array<T>) => boolean,
    thisArg?: any
  ) => {
    let i = 0;
    let output: Array<T> = [];

    // loop the array
    while (i < array.length) {
      // Skip empty element
      if (i in array) {
        const result = callback.call(thisArg, array[i], i, array);

        if (result) {
          output[output.length] = array[i];
        }
      }
      i++;
    }

    return output;
  },
}

But out of all the functions, I found .flat() and .sort() particularly interesting to tackle.

.flat()

.flat() is interesting because it is a very good application of recursion.

The .flat() function takes an optional depth argument that defaults to 1, specifying how many layers to flatten recursively.
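For reference, the built-in behaves like this:

```javascript
const nested = [1, [2, [3, [4]]]];

console.log(nested.flat());         // [1, 2, [3, [4]]] — default depth 1
console.log(nested.flat(2));        // [1, 2, 3, [4]]
console.log(nested.flat(Infinity)); // [1, 2, 3, 4] — flatten everything
```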

{
  flat: <T>(array: Array<T>, depth: number = 1): Array<T> => {
    return doFlat([], array, depth);
  },
}

I first created a doFlat helper function that takes two arrays and the depth as arguments. The idea is that the first array is the output array, the second array is the input array being flattened, and the depth is the remaining depth.

The doFlat function works like this:

  1. Loop through the array
  2. If the element is an array, and the depth is not 0 yet, recursively call doFlat with the same output array, the current element as the input array, and one less depth
  3. If the element is not an array, push it to the output
const doFlat = (output: Array<any>, elements: Array<any>, depth: number) => {
  let i = 0;
  while (i < elements.length) {
    if (i in elements) {
      if (Array.isArray(elements[i]) && depth > 0) {
        doFlat(output, elements[i], depth - 1);
      } else {
        output[output.length] = elements[i];
      }
    }
    i++;
  }

  return output;
};

If you think of the nested arrays as a tree, this is like a depth-first search. Every array is flattened until the specified depth, appending to the output along the way. I personally find this solution quite simple and elegant.

.sort()

.sort() sorts the array in place, i.e. it modifies the original array. So quicksort is a very good algorithm for this: it does not require extra memory, only swapping elements within the array.

The sort function has a default compare function that is based on the Unicode order of the stringified elements. So 7 comes before 80, but 80 actually comes before 9 by default, because when the numbers are converted to strings, "80" sorts before "9". I first needed to recreate this default compare function for when no compare function is specified. Fortunately, I could reuse the String.prototype.localeCompare() method.
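For example:

```javascript
console.log([9, 80, 7].sort());                // [7, 80, 9] — compared as strings
console.log([9, 80, 7].sort((a, b) => a - b)); // [7, 9, 80] — compared as numbers
```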

Then for sorting, I also used a helper function doSort to recursively sort the array. It takes four arguments: the array to sort, the compare function, and the starting and ending indexes of the range within the array to sort.

{
  sort: <T>(array: Array<T>, compareFn?: (a: T, b: T) => number): Array<T> => {
    if (!compareFn) {
      compareFn = (a: T, b: T) => {
        const aStr = `${a}`;
        const bStr = `${b}`;

        return aStr.localeCompare(bStr);
      };
    }
    doSort(array, compareFn, 0, array.length - 1);
    return array;
  },
}

If you want to know more about quicksort, there are plenty of resources and examples on the Internet. The high-level idea is to pick a pivot element, then partition the current range into two, with one partition containing the elements lower than the pivot and the other containing the elements higher. Then recursively apply the same algorithm to each of the two partitions. The cool thing is that this can be done without creating extra arrays; it only swaps elements around within the array while keeping track of the indexes.

I did not do much optimization, so this is certainly nowhere near the best implementation. The only thing I did was pick a random pivot. This lowers the chance of the two partitions having very different sizes; unevenly sized partitions can lead to poorer performance because more comparisons and swaps are needed.

Before the algorithm starts, the pivot element is moved to the end of the range. Then each element in the range is compared to the pivot. If the compare function returns a value smaller than 0, meaning the element is smaller than the pivot, we swap the current element with the one at the pivot index and increment the pivot index. This brings the current element into the earlier partition. Once we finish looping, we swap the pivot element (at the end of the range) into the pivot index to place it between the two partitions. Then we recursively sort the two partitions. You may find it easier to understand by watching animations or reading the code below.

There is one more tricky thing to handle, briefly mentioned above: the compare function is not invoked for elements that are undefined or empty. They are always sorted to the end of the array, with all undefined values first, then all empty slots at the very end. So the first four big ifs in the doSort function handle undefined and empty items. Only when neither the element being compared nor the pivot element is undefined or empty is the compare function invoked.

const doSort = <T>(
  array: Array<T>,
  compareFn: (a: T, b: T) => number,
  startIndex: number,
  endIndex: number
) => {
  const length = endIndex - startIndex + 1;
  if (length <= 1) {
    return;
  }

  const randomIndex = Math.floor(Math.random() * length) + startIndex;
  swap(array, randomIndex, endIndex);

  let currentIndex = startIndex;
  let pivotIndex = startIndex;
  while (currentIndex < endIndex) {
    // pivot is empty, current in front, so swap
    if (!(endIndex in array)) {
      swap(array, currentIndex, pivotIndex);
      pivotIndex++;
      currentIndex++;
      continue;
    }

    // current is empty but pivot is not, do nothing
    if (!(currentIndex in array)) {
      currentIndex++;
      continue;
    }

    // pivot is undefined but current is defined, current in front, so swap
    if (array[endIndex] === undefined) {
      swap(array, currentIndex, pivotIndex);
      pivotIndex++;
      currentIndex++;
      continue;
    }

    // current is undefined but pivot is not
    if (array[currentIndex] === undefined || array[endIndex] === undefined) {
      currentIndex++;
      continue;
    }

    // neither is empty or undefined
    const compare = compareFn(array[currentIndex], array[endIndex]);

    if (compare < 0) {
      swap(array, currentIndex, pivotIndex);
      pivotIndex++;
    }

    currentIndex++;
  }

  swap(array, pivotIndex, endIndex);

  doSort(array, compareFn, startIndex, pivotIndex - 1);
  doSort(array, compareFn, pivotIndex + 1, endIndex);
};

And swap is another helper function I created to exchange two elements in the array, with special cases to handle swapping empty items. This swap function is also used in .reverse() to reverse the array by swapping elements symmetrically.

const swap = <T>(array: Array<T>, i: number, j: number) => {
  if (i === j) {
    return;
  }

  const firstTemp = array[i];
  const isFirstInArray = i in array;
  const isSecondInArray = j in array;

  if (isSecondInArray) {
    array[i] = array[j];
  } else {
    delete array[i];
  }

  if (isFirstInArray) {
    array[j] = firstTemp;
  } else {
    delete array[j];
  }

  return array;
};

For the rest, you can check the repository for the implementations. Most of them are simple copying and length manipulation, or some variation of the above. The most complicated one is probably .splice(), which requires some more thought about copying what is added and deleted and moving the items around in the array.

Just a disclaimer, this is just a fun challenge I set for myself. I am not confident that the implementation is absolutely correct, and I am sure you could find improvements in both time and memory complexity if you wanted to.

Final thoughts

Despite being a bit tedious, I think this exercise is worth the effort. Especially with the rise of AI tools and auto-completion, knowing the fundamentals of a language is important. I doubt sparse arrays will ever be useful to me, but knowing that they exist and why they exist is helpful.

Building something from scratch, without the help of AI autocompletion, felt like retraining my brain in designing algorithms and debugging logic errors. It brought me back to when I first learned programming, when off-by-one errors and infinite loops were my most frequent mistakes. I enjoyed the iterative process of running and testing my code to validate the algorithms.

People often say the JavaScript language is unreasonable and poorly designed, and I agree. JavaScript's design contains inconsistencies, for example, all the edge cases around empty slots and sparse arrays. Inconsistencies cause surprises, which make code harder to reason about, maintain, and debug. To me, however, this exercise provided an opportunity to experience those inconsistencies firsthand. I am not reading about them on forums or in memes; I actually experienced them, and I vividly remember how they are handled. That is the reward for all those tedious in checks.
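To give a concrete taste of those inconsistencies, the array methods cannot even agree on how to treat a single empty slot:

```typescript
// One sparse array, several different treatments of its hole
const sparse = [1, , 3];

const visited: Array<number | undefined> = [];
sparse.forEach((v) => visited.push(v));
console.log(visited); // [1, 3] (forEach skips the hole entirely)

const doubled = sparse.map((v) => v! * 2);
console.log(1 in doubled); // false (map keeps the hole and never calls the callback)

console.log(sparse.includes(undefined)); // true (includes treats the hole as undefined)
console.log(sparse.indexOf(undefined));  // -1 (but indexOf skips holes)

const spread = [...sparse];
console.log(1 in spread); // true (spreading converts holes into real undefined values)
```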

Beyond the inconsistencies, I noticed one major theme in JavaScript: the language assumes the user is correct and avoids throwing errors. Throughout all the implementations, there are only two cases that throw. The first is when .reduce() or .reduceRight() is called on an empty array with no initial value; a TypeError is thrown because there is nothing to reduce or return. The second is when .with() is called with an index outside the -array.length to array.length - 1 range; a RangeError is thrown. I think .with() throwing a RangeError is a better design choice compared to the other functions, which try to mask the error and return a default like an empty array or undefined. If something unexpected happens, like accessing an item outside the valid range of indexes, the user clearly made a mistake. In my opinion, throwing an error is better than letting the program continue with hard-coded default values. I understand the flexibility and potential gain in user experience for client-side applications, but that just masks the problem and kicks the can down the road.

I hope you learned something new about JavaScript arrays by reading this. And maybe this inspires you to try something out even if it seems trivial. You might learn something new along the way!

References

Docs

Time complexity of all Array functions

Maybe this will be useful in the future, while I still remember.

  • n denotes the length of the original array
  • k usually denotes the input array length
| Function | Time Complexity |
| --- | --- |
| at | O(1) |
| concat | O(n), where n is the length of the second array |
| copyWithin | O(n) or less, where n is the number of items to copy |
| entries | O(1), with each iteration O(1) |
| every | O(n) or less |
| fill | O(n) or less |
| filter | O(n) if the callback is O(1) |
| find | O(n) if the callback is O(1) |
| findIndex | O(n) if the callback is O(1) |
| findLast | O(n) if the callback is O(1) |
| findLastIndex | O(n) if the callback is O(1) |
| flat | O(n^d) in the worst case, where d is the depth |
| flatMap | O(n) if the callback is O(1) |
| forEach | O(n) if the callback is O(1) |
| includes | O(n) or less |
| indexOf | O(n) or less |
| join | O(n) |
| keys | O(1), with each iteration O(1) |
| lastIndexOf | O(n) or less |
| map | O(n) if the callback is O(1) |
| pop | O(1) |
| push | O(k) |
| reduce | O(n) if the callback is O(1) |
| reduceRight | O(n) if the callback is O(1) |
| reverse | O(n) |
| shift | O(n) |
| slice | O(n) or less |
| some | O(n) or less |
| sort | O(n log n) on average |
| splice | O(n + k) or less |
| toLocaleString | O(n) |
| toReversed | O(n) |
| toSorted | O(n log n) |
| toSpliced | O(n + k) |
| toString | O(n) |
| unshift | O(n + k) |
| values | O(1), with each iteration O(1) |
| with | O(1) |
  • For flat, that is the absolute worst-case scenario, where n is the average length of every array and each array contains n nested arrays, like an exponential tree
  • push, splice, toSpliced and unshift depend on the input size k of the elements to be added to the array
  • For toLocaleString and toString, I am not sure, but I assume they work the same as join, accessing each element and concatenating it to a string, so O(n)