Given sequence: 20, 24, 30, 35, 50

Ascending order: 20, 24, 30, 35, 50

Merge(20, 24) → new size (44)

Number of comparisons = 20 + 24 – 1 = 43

Sequence: 30, 35, 44, 50

Merge(30, 35) → new size (65)

Number of comparisons = 30 + 35 – 1 = 64

Sequence: 44, 50, 65

Merge(44, 50) → new size (94)

Number of comparisons = 44 + 50 – 1 = 93

Sequence: 65, 94

Merge(65, 94) → new size (159)

Number of comparisons = 65 + 94 – 1 = 158
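This greedy two-way merge (always merge the two smallest remaining lists, at a worst-case cost of a + b − 1 comparisons per merge) can be sketched as follows; the function name is illustrative:

```python
import heapq

def optimal_merge_comparisons(sizes):
    """Greedy two-way merge: repeatedly merge the two smallest lists.
    Merging lists of sizes a and b costs at most a + b - 1 comparisons."""
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b - 1            # comparisons for this merge
        heapq.heappush(heap, a + b)   # merged list goes back into the pool
    return total

total = optimal_merge_comparisons([20, 24, 30, 35, 50])  # 43 + 64 + 93 + 158
```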

Therefore, total number of comparisons = 43 + 64 + 93 + 158 = 358

Option 3 : t_{1} > t_{2}

__Concept:__

If the array is sorted in increasing or decreasing order, choosing either the first or the last element as the pivot gives the worst-case time complexity in Quick Sort.

__Explanation:__

In every step of quick sort, numbers are divided as per the following recurrence.

T(n) = T(n-1) + O(n)

Average case time complexity (t2) = O(n log n)

Worst case time complexity (t1) = O(n^{2})

Since t_{1} corresponds to the worst case and n^{2} grows faster than n log n, ∴ t_{1} > t_{2}
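As a sketch (the function name and pivot choice are illustrative), quicksort with a last-element pivot on an already sorted array of n elements performs n(n−1)/2 comparisons, matching the T(n) = T(n−1) + O(n) recurrence:

```python
def quicksort_comparisons(a):
    """Quicksort with the last element as pivot; returns the number of
    element comparisons made while sorting a in place."""
    count = 0
    def sort(lo, hi):
        nonlocal count
        if lo >= hi:
            return
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):
            count += 1                    # one comparison per scanned element
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        sort(lo, i - 1)                   # on sorted input, one side is empty
        sort(i + 1, hi)
    sort(0, len(a) - 1)
    return count

comparisons = quicksort_comparisons(list(range(1, 11)))  # sorted input, n = 10
```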

__Important Point:__

Number of comparisons will vary based on implementation

In an already ascending or descending array (the worst case), the number of comparisons is higher.

Since the given array is sorted:

Let the given array be 1, 2, 3, 4, 5, 6, 7, …, 25

Probability of choosing each element is \(\frac{1}{25}\)

The worst case for quick sort will only arise if 1 or 25 is chosen as the pivot element.

Assume 1 is chosen as a pivot element

After the first round of partitioning, the pivot will remain in the same position (1^{st} position); this gives rise to the worst case in quick sort, since the complexity will be O(n^{2}).

Assume 25 is chosen as a pivot element

After the first round of partitioning, the pivot will remain in the same position (last position); this gives rise to the worst case in quick sort, since the complexity will be O(n^{2}).

P(X = 1) =\(\frac{1}{25}\) = 0.04

and P(X = 25) = \(\frac{1}{25}\) = 0.04

P(X = 1 or X = 25)

= 0.04 + 0.04

**= 0.08**
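The arithmetic can be checked directly (a trivial sketch; the two pivot choices are disjoint events, so their probabilities add):

```python
n = 25
p_min = 1 / n               # probability the pivot is the minimum (1)
p_max = 1 / n               # probability the pivot is the maximum (25)
p_worst = p_min + p_max     # disjoint events, so probabilities add
```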

Option 1 : Ω (log n)

Consider the complete binary tree below, where the left and right subtrees of the root are max-heaps.

To convert the tree into a heap, we call MAX-HEAPIFY at the root. The MAX-HEAPIFY operation takes time proportional to the height of the tree; if there are n elements in the tree, the height is log(n).

**Step 1:** Swap 10 and 40

**Step 2:** Swap 10 and 25

The above tree is a MAX-HEAP

To convert this into a max-heap takes only 2 swaps, which is nothing but the height of the tree. So, log(n) time is required to convert the tree into a heap.
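Since the original figure is unavailable, here is a sketch of MAX-HEAPIFY with hypothetical array values chosen to reproduce the two swaps described above (10 ↔ 40, then 10 ↔ 25):

```python
def max_heapify(a, i):
    """Sift a[i] down until the subtree rooted at i is a max-heap.
    Assumes both child subtrees are already max-heaps."""
    n = len(a)
    while True:
        l, r = 2 * i + 1, 2 * i + 2
        largest = i
        if l < n and a[l] > a[largest]:
            largest = l
        if r < n and a[r] > a[largest]:
            largest = r
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]   # swap with the larger child
        i = largest

heap = [10, 40, 30, 25, 20]   # hypothetical values matching the two swaps
max_heapify(heap, 0)          # swaps 10 <-> 40, then 10 <-> 25
```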

The number of swappings needed to sort the numbers 8, 22, 7, 9, 31, 5, 13 in ascending order, using bubble sort is

Option 4 : 10

Bubble sort repeatedly steps through the list to be sorted, compares each pair of adjacent items and swaps them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted.

Array elements: 8, 22, 7, 9, 31, 5, 13

1^{st} pass = 8, **7**, **9**, **22**, 5, **13**, 31

4 swaps

2^{nd} pass = **7**, 8, 9, **5**, **13**, 22, 31

3 swaps

3^{rd} pass = 7, 8, **5**, 9, 13, 22, 31

1 swap

4^{th} pass = 7, **5**, 8, 9, 13, 22, 31

1 swap

5^{th} pass = **5**, 7, 8, 9, 13, 22, 31

1 swap

Since array is sorted after 5^{th} pass

∴ no further swaps possible

Total number of swaps = 4 + 3 + 1 + 1 + 1 = 10
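The pass-by-pass count above can be sketched as follows (the swap count equals the number of inversions in the input):

```python
def bubble_sort_swaps(a):
    """Bubble sort; returns the number of swaps (one per inversion removed)."""
    a = list(a)
    swaps = 0
    for _ in range(len(a) - 1):
        swapped = False
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swaps += 1
                swapped = True
        if not swapped:        # a pass with no swaps: the array is sorted
            break
    return swaps

swaps = bubble_sort_swaps([8, 22, 7, 9, 31, 5, 13])  # 4 + 3 + 1 + 1 + 1
```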

Consider a hash table with 9 slots. The hash function is ℎ(k) = k mod 9. The collisions are resolved by chaining. The following 9 keys are inserted in the order: 5, 28, 19, 15, 20, 33, 12, 17, 10. The maximum, minimum, and average chain lengths in the hash table, respectively, are

Option 1 : 3, 0, and 1

**Concept:**

Keys: 5, 28, 19, 15, 20, 33, 12, 17, 10

ℎ(k) = k mod 9

Chaining

Maximum chain length = 3 (28 -> 19 -> 10)

Minimum chain length = 0 (slots 0, 4, and 7 don’t have any element)

\({\rm{Average\;chain\;length\;}} = \frac{{0 + 3 + 1 + 1 + 0 + 1 + 2 + 0 + 1}}{9} = \frac{9}{9} = 1\)

Hence option 1 is the correct answer.
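These chain statistics can be reproduced with a short sketch (the helper name is illustrative):

```python
def chain_lengths(keys, slots):
    """Build a chained hash table with h(k) = k mod slots and
    return (max, min, average) chain lengths."""
    table = [[] for _ in range(slots)]
    for k in keys:
        table[k % slots].append(k)      # collisions are resolved by chaining
    lengths = [len(chain) for chain in table]
    return max(lengths), min(lengths), sum(lengths) / slots

stats = chain_lengths([5, 28, 19, 15, 20, 33, 12, 17, 10], 9)
```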

**Concept:**

A min-heap is a binary tree such that the data contained in each node is less than or equal to the data in that node’s children.

We have to consider the integers in the range [1, 1023].

It is also given that root is at depth 0. As it is a min heap so integer 1 has to be at root only.

Tree will become:

Now, once we place 1 – 9 then remaining elements can be placed easily to fill the min-heap.

Here, the maximum depth at which integer 9 can appear is 8.

Option 3 : Double hashing

**Primary clustering:**

- It is one of two major failure modes of open addressing based hash tables, especially those using linear probing.
- It occurs after a hash collision causes two of the records in the hash table to hash to the same position, and causes one of the records to be moved to the next location in its probe sequence.

**Secondary clustering:**

Secondary clustering occurs more generally in open addressing schemes, including linear probing and quadratic probing, in which the probe sequence (apart from its starting point) is independent of the key: keys that first collide at the same slot follow the same subsequent probe sequence.

**Double hashing:**

Double hashing is a computer programming technique used in conjunction with open-addressing in hash tables to resolve hash collisions, by using a secondary hash of the key as an offset when a collision occurs.

**Double hashing technique is free from Clustering problems**
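A sketch of double hashing with open addressing; the table size and the second hash h2(k) = 1 + k mod (m − 1) are illustrative choices, not taken from the question. Because colliding keys generally get different offsets, probe sequences diverge and clustering is avoided:

```python
def dh_insert(table, key):
    """Insert key using double hashing: probe (h1 + i*h2) mod m."""
    m = len(table)
    h1 = key % m
    h2 = 1 + key % (m - 1)        # second hash is never 0, so probing advances
    for i in range(m):
        slot = (h1 + i * h2) % m
        if table[slot] is None:
            table[slot] = key
            return slot
    raise RuntimeError("table full")

table = [None] * 7
dh_insert(table, 5)               # 5 % 7 = 5: goes to slot 5
slot = dh_insert(table, 12)       # 12 % 7 = 5 collides; offset h2(12) = 1
```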

Consider a hash table of size 7, with hash function H (k) = k % 7, and pseudo random i = (i + 5) % 7. We want to insert the following keys one by one from left to right.

15, 11, 25, 16, 9, 8, 12

What will be the position of the key 25, if we use random probing?

Option 4 : 2

Since we are using random probing:

__Insert 15:__

(15)%7 = 1

__Insert 11: __

(11)%7 = 4

__Insert 25: __

(25)%7 = 4 // collision:

i = 4

i = (i + 5) % 7 // using random function

i = (4 + 5)%7 = 2

Hence the position of key 25 is 2.
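The insertion with the given probe function i = (i + 5) % 7 can be sketched as follows (the helper name is illustrative):

```python
def rp_insert(table, key):
    """Insert with the question's pseudo-random probe i = (i + 5) % 7."""
    i = key % 7
    while table[i] is not None:
        i = (i + 5) % 7           # probe the next pseudo-random slot
    table[i] = key
    return i

table = [None] * 7
rp_insert(table, 15)              # 15 % 7 = 1 -> slot 1
rp_insert(table, 11)              # 11 % 7 = 4 -> slot 4
pos = rp_insert(table, 25)        # 25 % 7 = 4 collides -> (4 + 5) % 7 = 2
```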

Option 2 : 4, 7, 10, 23, 67, 12, 5

**Concept:**

Insertion sort is the sorting mechanism where the sorted array is built one item at a time. The analogy can be understood from the way we arrange a deck of cards. This sort works on the principle of inserting an element at a particular position, hence the name Insertion Sort.

Insertion Sort works as follows:

- The first step involves comparing the element in question with its adjacent element.
- If a comparison reveals that the element in question can be inserted at a particular position, space is created for it by shifting the other elements one position to the right and inserting the element at the suitable position.
- The above procedure is repeated until all the elements in the array are at their apt positions.

**Explanation: **

Array: 10, 4, 7, 23, 67, 12, 5

After 1^{st} pass: 4, 10, 7, 23, 67, 12, 5 – 4 and 10 got swapped or 4 got inserted at its apt position

After 2^{nd} pass: 4, 7, 10, 23, 67, 12, 5 – 7 got inserted at its apt position
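The first two passes can be sketched as follows (the function name is illustrative):

```python
def insertion_sort_passes(a, passes):
    """Run only the first `passes` insertions and return the array state."""
    a = list(a)
    for i in range(1, passes + 1):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # shift larger elements one position right
            j -= 1
        a[j + 1] = key        # insert the element at its apt position
    return a

state = insertion_sort_passes([10, 4, 7, 23, 67, 12, 5], 2)
```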

Assume that a mergesort algorithm in the worst case takes 30 seconds for an input of size 64.

Which of the following most closely approximates the maximum input size of a problem that can be solved in 6 minutes?

Option 2 : 512

**Concept:**

Merge sort algorithm worst case time complexity is O (nlogn)

**Data:**

T_{1} (n) =30 seconds, n_{1} = 64

T_{2} (n) = 6 minutes

**Formula:**

T(n) = nlog_{2}n

**Calculation:**

30 = c × 64 log_{2}(64)

30 = c × 64 log_{2}(2^{6})

30 = c × 64 × 6

\(c = \frac{5}{{64}}\)

6 × 60 = c n_{2} log_{2}n_{2}

\(6 \times 60 = \frac{5}{{64}} \times {n_2}{\log _2}{n_2}\)

**∴ ****n _{2} = 512**
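The calculation can be checked numerically (a sketch; the doubling search over n is illustrative):

```python
import math

c = 30 / (64 * math.log2(64))        # from the 30-second data point: c = 5/64
budget = 6 * 60                      # 6 minutes in seconds

n = 64
while c * (2 * n) * math.log2(2 * n) <= budget:
    n *= 2                           # largest power of two still within budget
```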

The number of possible min-heaps containing each value from {1, 2, 3, 4, 5, 6, 7} exactly once is _____.

case 1: 2 and 3 are on 2nd level.

so, 4, 5, 6 and 7 can occupy any of the 4 positions in the 3rd level as all are greater than 2 and 3. So, 4! arrangements. And 2 and 3 can be arranged in 2 ways in the 2nd level.

2*24=48

case 2**: **2 and 4 are on 2nd level

Now, 3 can only be below 2 and not 4 as it is MIN heap. So, 2 cases are possible 3XXX or X3XX, which gives 3!+3!=12 arrangements. And 2 and 4 can be arranged in 2 ways in 2nd level.

2*(6+6)=24

case 3**: **2 and 5 are on 2nd level

Now, 3 and 4 can only be below 2 and not 5 as it is MIN heap. So, 2 arrangements are possible for 3 and 4 and similarly 2 for 6 and 7. And 2 and 5 can be arranged in 2 ways in 2nd level**.**

2*(2*2)=8

Total number of min-heaps = 48 + 24 + 8 = 80
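The case analysis can be verified by brute force over all 7! arrangements (a sketch):

```python
from itertools import permutations

def is_min_heap(a):
    """Check the heap property: each node <= its children (0-based array)."""
    return all(a[(i - 1) // 2] <= a[i] for i in range(1, len(a)))

count = sum(is_min_heap(p) for p in permutations(range(1, 8)))  # 48 + 24 + 8
```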

Option 3 : Detects whether the input is already sorted

The correct answer is **option 3**

__Key Points__

- The **bubble sort** algorithm works by repeatedly swapping adjacent elements that are not in order until the whole list of items is in sequence.
- If sorting is completed in a single iteration, it can be detected that the input is already sorted.

__Additional Information__

- When the elements of the input are already sorted, bubble sort gives the **best time complexity O(n)**.
- Bubble sort is typically slower due to its iterative behaviour.
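A sketch of bubble sort with the early-exit flag that makes this detection possible:

```python
def bubble_sort_early_exit(a):
    """Bubble sort that stops as soon as a full pass makes no swap;
    on already-sorted input this takes a single pass, i.e. O(n)."""
    a = list(a)
    passes = 0
    while True:
        passes += 1
        swapped = False
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:      # no swap in this pass: input is sorted
            break
    return a, passes

result, passes = bubble_sort_early_exit([1, 2, 3, 4, 5])  # already sorted
```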

Option 4 : Bubble

| Sort | Description |
| --- | --- |
| Heap Sort | It arranges the values in a heap and sorts them. |
| Selection Sort | It starts by finding the minimum value and placing it in its correct position; in this way it sorts all values. |
| Insertion Sort | It divides the list into two parts, sorted and unsorted, then places each element of the unsorted part at its correct place in the sorted part. |
| Bubble Sort | It compares two adjoining values and exchanges them if they are not in proper order. |

Option 3 : Merge sort

**Heapsort**:

The heap data structure is an array object that can be viewed as a nearly complete binary tree. A heap can be a max-heap or a min-heap. In a max-heap, the maximum element is the root of the tree, and in a min-heap, the minimum element is the root. Heapsort repeatedly moves the correct element to the root. It does not use recursion. It is based on the concept of a priority queue.

**Bubble sort: **

It works by repeatedly moving the largest element to the highest index position of the array. It needs swapping of the elements. It compares the adjacent elements and swaps their positions if they are not in order. Order can be ascending or descending.

**Insertion sort:**

In this, array elements are compared sequentially and arranged in order. It places each out-of-place element into its right position by repeatedly inserting the element into the sorted subarray to its left.

**Merge sort:**

It is a divide and conquers based algorithm. It uses recursion to sort the elements of the array. In this, the array is divided into two parts until we get the sorted subarray, then combine and sort them again and the final result will be the sorted array of elements.

Therefore, the merge sort algorithm uses recursion.
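A minimal recursive merge sort sketch:

```python
def merge_sort(a):
    """Recursive merge sort: divide, recursively sort halves, merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])     # recursion on the left half
    right = merge_sort(a[mid:])    # recursion on the right half
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

merge_sort([5, 2, 4, 7, 1, 3, 2, 6])
```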

Option 4 : O(log n)

**Binary search algorithm:**

Binary search algorithm is used to find an element in an already sorted array.

STEP 1:

It finds the middle element of the array and compares it with the element to be searched; if it matches, return true.

STEP 2:

If not, the array is divided into two halves: if the element to be searched is less than the middle element, the search continues in the left half, otherwise in the right half.

STEP 3:

Repeat this process until you find the element.

Explanation:

Worst case: search for 50 in the array given below.

11 | 12 | 15 | 24 | 35 | 50 | 51 | 63

\({\rm{middle\;index}} = \frac{{0 + 7}}{2} = 3\;\therefore {\rm{a}}\left[ 3 \right] = 24\)

50 > 24, so search the right half

\({\rm{middle\;index}} = \frac{{4 + 7}}{2} = 5\;\therefore {\rm{a}}\left[ 5 \right] = 50\)

matched

T(n) = O(log n)

Also, for average case:

T(n) = O(log n)

In a min-heap, the maximum value of the array is found among the leaf nodes (the last levels).

In a Min-heap having n elements, there are ceil (n/2) leaf nodes.

So, ceil (1023/2) = ceil (511.5) = 512 elements as leaf nodes.

For n elements in an array, n-1 comparisons are required to find the minimum or maximum element.

Therefore, the minimum number of comparisons required to find the maximum in the heap is 512-1 = 511.
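A sketch confirming the count for n = 1023 (a sorted array is already a valid min-heap, and the leaves occupy indices n//2 .. n-1 of the 0-based array):

```python
def max_in_min_heap(a):
    """In a min-heap the maximum must be a leaf; scan only the leaves."""
    leaves = a[len(a) // 2:]          # ceil(n/2) leaves for n elements
    best, comparisons = leaves[0], 0
    for x in leaves[1:]:
        comparisons += 1              # one comparison per remaining leaf
        if x > best:
            best = x
    return best, comparisons

a = list(range(1, 1024))              # 1..1023 in ascending order is a min-heap
m, comps = max_in_min_heap(a)         # 512 leaves, 511 comparisons
```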

Option 4 : m + n - 1

**Concept: **

Merge sort algorithm uses Divide and Conquer. Hence, we divide both our lists into m and n single-element lists respectively.

Now, we know that first m lists are sorted and after these m lists, the n lists are also sorted.

Let’s call these single element lists as m1, m2 and so on, and n1, n2 and so on.

To merge these lists into one single-sorted list, we compare

- m1 with n1.
- Either m2 with n1 or m1 with n2. Here, for sorting 3 elements, 2 comparisons are needed.
- Observe that, in the worst case, the elements of the two lists interleave, so every comparison places exactly one element into the output and only the final element is placed without a comparison. We don’t need to compare m1 with the other m lists because they are already sorted, and the same goes for the n lists.
- We repeat the process until we have got a sorted list with m + n elements. This happens when we have made m + n – 1 comparisons.
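A sketch of the two-way merge with a comparison counter; fully interleaved inputs of sizes m = 3 and n = 3 hit the worst case of m + n − 1 = 5 comparisons:

```python
def merge_count(xs, ys):
    """Merge two sorted lists, counting element comparisons."""
    out, i, j, comparisons = [], 0, 0, 0
    while i < len(xs) and j < len(ys):
        comparisons += 1
        if xs[i] <= ys[j]:
            out.append(xs[i]); i += 1
        else:
            out.append(ys[j]); j += 1
    out.extend(xs[i:])   # one list is exhausted: append the rest for free
    out.extend(ys[j:])
    return out, comparisons

merged, comps = merge_count([1, 3, 5], [2, 4, 6])  # fully interleaved inputs
```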

Assume that the algorithms considered here sort the input sequences in ascending order. If the input is already in ascending order, which of the following are TRUE?

I. Quicksort runs in Θ (n^{2}) time

II. Bubblesort runs in Θ (n^{2}) time

III. Mergesort runs in Θ (n) time

IV. Insertion sort runs in Θ (n) time

Option 4 : I and IV only

Input is already in ascending order, means it is the worst-case situation.

__Option 1:__

Quicksort runs in Θ (n^{2}) time

In quick sort's worst case, the first or last element is selected as the pivot element.

For a quicksort, in worst case recurrence relation will become T(n) = T(n-1) + T (1) + n

Which gives T(n) = Θ (n^{2})

So, it is correct.

__Option 2:__

Bubble sort runs in Θ (n^{2}) time

If the array is already sorted in ascending order, there will be no swap during the inner for loop of bubble sort, which can be detected after one pass. In this way, bubble sort will take Θ (n) time complexity.

__Option 3:__

Merge sort runs in Θ (n) time

Merge sort uses the divide and conquer policy to sort the elements. As elements are already sorted in ascending order. Recurrence relation for merge sort; T(n) = 2 T (n/2) + n

This will give Θ (nlogn) time complexity.

__Option 4:__

Insertion sort runs in Θ (n) time

Each new element considered is greater than all the elements already placed, so there is no shift but only a single comparison. In total, there are 0 shifts and n-1 comparisons.

Total time complexity in this case will be Θ (n).

Worst case of the given problem is when a single 0 is followed by all 1’s i.e.

0111111111111……

The worst case can also be considered as when all 0’s are followed by a single 1.

000000000………………1

Since, in both the cases there is a sorted sequence of 0 and 1 and for sorted sequence binary search is preferred.

At each stage, the element at index \(\frac{{low + high}}{2}\) is probed; if it is 1, search towards the left, and if it is 0, search towards the right.

Total worst-case number of probes performed by an optimal algorithm is \(\left\lceil {\log _2}31 \right\rceil = 5\)
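This probing strategy can be sketched and checked over every 0/1 split of a length-31 array; the worst case is indeed 5 probes:

```python
def first_one(a):
    """Binary search for the index of the first 1 in a 0...01...1 array,
    counting probes (array reads)."""
    lo, hi, probes, ans = 0, len(a) - 1, 0, -1
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if a[mid] == 1:
            ans = mid
            hi = mid - 1      # a 1: the first 1 is here or to the left
        else:
            lo = mid + 1      # a 0: the first 1 must be to the right
    return ans, probes

# worst case over every 0/1 split of a length-31 array
worst = max(first_one([0] * k + [1] * (31 - k))[1] for k in range(32))
```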