# HackerRank Implement Queue using two stacks Solution

Challenge

Problem Statement – Implement a Queue using two Stacks

I implemented dequeue() in $$\mathcal O(1)$$ at the cost of enqueue() in $$\mathcal O(n)$$.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>
#include <iostream>
#include <algorithm>
#include <stack>
using namespace std;

struct Queue{
    stack<int> s1,s2;

    int dequeue(){
        int x = s1.top();
        s1.pop();
        return x;
    }

    void peek(){
        cout<< s1.top()<<"\n";
    }

    void enqueue(int data){
        while(!s1.empty()){
            s2.push(s1.top());
            s1.pop();
        }

        s1.push(data);

        while(!s2.empty()){
            s1.push(s2.top());
            s2.pop();
        }
    }
};

int main() {
    int t; cin>>t;
    Queue q;

    while(t--){
        int k; cin>>k;
        switch(k){
        case 1:
            int x; cin>>x;
            q.enqueue(x);
            break;
        case 2:
            q.dequeue();
            break;
        case 3:
            q.peek();
        }
    }
    return 0;
}
```

There are 15 test cases. This code runs properly for the first 5 test cases, then I get a ‘Time Limit Exceeded’ message for the rest of the test cases.

This means the complexity of my program is too high.

So I tried reversing it: I made enqueue() $$\mathcal O(1)$$ and dequeue() $$\mathcal O(n)$$, but even that did not change anything.

What should I do to reduce the time complexity of this code?

# Algorithmic complexity for combined operations

Containers are interesting elements in most programming languages: they have an internal state, namely their elements and therefore their size.

This introduces an additional state compared to the usual asymptotic complexity analysis of algorithms. For example, a naively implemented std::vector::push_back will yield $$\mathcal O(n^2)$$ complexity if push_back increases the capacity only by one:

```cpp
// pseudo-code, also very bad performance and no error handling, do not use!
void vector::push_back(T value){
    if(_size == _capacity) {
        // has to move all elements and is therefore O(n)
        reserve(_capacity + 1);
    }
    _data[_size++] = value;
}
```

While trivial, this code has a severe problem: we have to copy all elements in every iteration. If we use push_back in a loop, we end up with quadratic complexity.

The real push_back therefore has some tricks up its sleeve, as the standard dictates that it must have $$\mathcal O(1)$$ amortized complexity. More about that later in this review.
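One such trick is geometric growth: double the capacity whenever it runs out, so the expensive $$\mathcal O(n)$$ copy happens only rarely. Here is a minimal sketch of that idea; `TinyVec` and its members `_data`, `_size` and `_capacity` are illustrative names of mine, not the real std::vector internals:

```cpp
#include <cassert>
#include <cstddef>

// Sketch of geometric growth, not a real std::vector. Doubling the capacity
// means a reallocation that copies n elements happens only after roughly n
// cheap pushes, so the copy cost averages out to O(1) amortized per push.
struct TinyVec {
    int*        _data     = nullptr;
    std::size_t _size     = 0;
    std::size_t _capacity = 0;

    void push_back(int value) {
        if (_size == _capacity) {
            std::size_t newCap = _capacity == 0 ? 1 : 2 * _capacity; // grow geometrically
            int* newData = new int[newCap];
            for (std::size_t i = 0; i < _size; ++i) // the O(n) copy, but rare
                newData[i] = _data[i];
            delete[] _data;
            _data = newData;
            _capacity = newCap;
        }
        _data[_size++] = value;
    }

    ~TinyVec() { delete[] _data; }
};
```

With this strategy, 100 pushes trigger reallocations only at sizes 1, 2, 4, …, 64, instead of on every single push.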

However, this small introduction should give you a hint about the central flaw in your current approach.

## A hidden quadratic term

> I implemented dequeue() in $$\mathcal O(1)$$ at the cost of enqueue() in $$\mathcal O(n)$$

For this task, it’s much more important to see the cost of both functions across $$k$$ enqueued elements, not for a single one.

So let’s envision the queue {1,2,3,4}. How many steps do we need?

- to enqueue 1:
  1. put 1 in s1
- to enqueue 2:
  1. move 1 from s1 to s2
  2. put 2 in s1
  3. move 1 from s2 to s1
- to enqueue 3:
  1. move 1 from s1 to s2
  2. move 2 from s1 to s2
  3. put 3 in s1
  4. move 2 from s2 to s1
  5. move 1 from s2 to s1
- to enqueue 4:
  1. move 1 from s1 to s2
  2. move 2 from s1 to s2
  3. move 3 from s1 to s2
  4. put 4 in s1
  5. move 3 from s2 to s1
  6. move 2 from s2 to s1
  7. move 1 from s2 to s1

We see that the $$n$$th element will take $$2(n-1)$$ moves. Therefore, if we insert a total of $$n$$ elements into our queue, we end up with $$\mathcal O(n^2)$$ to complete all enqueues.
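We can check the $$2(n-1)$$ count directly with an instrumented copy of the original enqueue; the `moves` counter is added purely for illustration and is not part of the queue:

```cpp
#include <cassert>
#include <stack>

// The original enqueue, instrumented with a counter to verify the analysis.
struct CountingQueue {
    std::stack<int> s1, s2;
    long moves = 0;  // counts every element moved between the two stacks

    void enqueue(int data) {
        while (!s1.empty()) { s2.push(s1.top()); s1.pop(); ++moves; }
        s1.push(data);
        while (!s2.empty()) { s1.push(s2.top()); s2.pop(); ++moves; }
    }
};
```

Enqueuing 100 elements this way performs $$2 \cdot (0 + 1 + \dots + 99) = 9900$$ moves: the quadratic term in action.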

We need to do better.

## A better queue with two stacks

So let’s get back to the drawing board. What do we need?

1. We need to enqueue.
2. We need to dequeue.
3. We need to peek.

None of those terms indicates that we need to have all data in one stack. So first of all, we need to use better names, because s1 and s2 are pretty bland and non-descriptive:

```cpp
class Queue{
    stack<int> in;
    stack<int> out;

    void flip();   // < new

public:
    void enqueue(int);
    int  dequeue();
    int  peek();
};
```

I’ll implement the methods outside of the class declaration to keep the code segments short, but you’re free to place them inline again. Note that I switched from a struct to a class, because it is Queue’s job to make sure that in and out are handled correctly; no one else should be able to change them.

What will we use in and out for? Well, as long as we have elements in out, we will use them for dequeue and peek. And whatever gets enqueued gets pushed right on top of in, no questions asked.

The critical part is how to get an element from in to out, right? I’ll let you think about that for a few paragraphs, but you may also stop here and try it yourself; the declaration above contains a clue.

So, let’s have a look at my proposal for the definitions of enqueue(), peek() and dequeue():

```cpp
void Queue::enqueue(int value) {
    in.push(value);
}

int Queue::peek() {
    if (out.empty()) {
        flip();
    }
    return out.top();
}

int Queue::dequeue() {
    if (out.empty()) {
        flip();
    }
    int value = out.top();
    out.pop();
    return value;
}
```

Note that the additional private method flip is still missing. However, I will already state the envisioned complexity:

• enqueue is $$\mathcal O(1)$$ (amortized)
• dequeue is $$\mathcal O(1)$$ (amortized)
• peek is $$\mathcal O(1)$$ (amortized)

Intrigued? Great. So let’s check flip:

```cpp
void Queue::flip() {
    while(not in.empty()) {
        out.push(in.top());
        in.pop();
    }
}
```

flip takes all elements in in and moves them to out. It thereby reverses the order of the elements, so that the top of in will be the last to get moved out of out. Note that we call flip only when out is empty.
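Assembled into one self-contained sketch (everything placed inline for brevity), the whole class looks like this:

```cpp
#include <cassert>
#include <stack>

class Queue {
    std::stack<int> in;   // receives every enqueued element
    std::stack<int> out;  // serves peek/dequeue in FIFO order

    // Move everything from in to out, reversing the order once.
    void flip() {
        while (!in.empty()) {
            out.push(in.top());
            in.pop();
        }
    }

public:
    void enqueue(int value) { in.push(value); }

    int peek() {
        if (out.empty()) flip();
        return out.top();
    }

    int dequeue() {
        if (out.empty()) flip();
        int value = out.top();
        out.pop();
        return value;
    }
};
```

Note that interleaving enqueues and dequeues is fine: new elements accumulate in in while out still serves the older ones in FIFO order.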

## Amortized analysis

*”Wait a second! That’s $$\mathcal O(n)$$!”* I hear you say. And that’s completely correct. However, how often do we need to call flip? Or, more importantly, how often do elements get moved?

The answer to the latter question is: exactly once, from in to out. At no point will they move back. The number of flip calls is much trickier, but it doesn’t really matter. At worst, flip may need to move all elements, for example if we use it as follows:

```cpp
for(int i = 0; i < 100; ++i) {
    queue.enqueue(i);
}
for(int i = 0; i < 100; ++i) {
    queue.dequeue(); // first dequeue will flip
}
```

However, only the first dequeue will flip. All others will yield the element immediately. Therefore, if we enqueue $$n$$ elements and then dequeue all of them, we end up with dequeue()‘s amortized complexity as:

$$\frac{n \cdot \mathcal O(1) + \mathcal O(n)}{n} = \mathcal O(1).$$

We can compare that with your original variant:
$$\frac{n \cdot \mathcal O(n)}{n} = \mathcal O(n)$$

This is why your code exceeded the time limit.
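The “each element moves exactly once” argument can also be checked with a counter; as before, the `moves` member is my own instrumentation for illustration, not part of the queue:

```cpp
#include <cassert>
#include <stack>

// The two-stack queue with a counter on the flip loop, to verify that
// n enqueues followed by n dequeues move each element exactly once.
class CountedQueue {
    std::stack<int> in, out;

public:
    long moves = 0;  // illustration only: elements moved from in to out

    void enqueue(int value) { in.push(value); }

    int dequeue() {
        if (out.empty()) {
            while (!in.empty()) {  // the flip, counting each moved element
                out.push(in.top());
                in.pop();
                ++moves;
            }
        }
        int value = out.top();
        out.pop();
        return value;
    }
};
```

For 100 enqueues and 100 dequeues this performs 100 moves in total, versus the 9900 of the original design.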

# Further review on code

There are some other findings that don’t need as much detail as the algorithm, but nonetheless can be improved:

• there are some unused includes (<cmath>, <cstdio>, <vector>, <algorithm>)
• using namespace std is considered bad practice
• the names are misleading, or at least not self-descriptive
• peek() usually returns a value and does not print
• the struct Queue should have been a class Queue, because the queue relies on the invariant that its elements are always in the right order, and only the class itself should be able to uphold that.