
Midterm – DSA 456 – July 12th, 2023

- Use this word document to provide ALL your answers (code and text)
- Exam starts at 7pm and ends at 10pm.

Section 1 – Python (20 marks)

Q1.
Write a Python function that gets a list as an input and returns the midrange of its values. The midrange is the
midway point between the minimum and the maximum of the data set.
Example: the midrange of [5, 4, 8, 2, 9, 1, 3] is (1+9)/2 = 5, and the midrange of [4, 5, 2, 1, 7, 7] is (1+7)/2 = 4. You
can use Python’s own sorting algorithm.
def find_midrange(lst):
    sorted_lst = sorted(lst)
    minimum_no = sorted_lst[0]
    maximum_no = sorted_lst[-1]
    midrange = (minimum_no + maximum_no) / 2
    return midrange
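Sorting the whole list just to read its endpoints costs O(n log n); the same result is available in O(n) with the built-ins. A sketch (the function name is mine, not part of the marked solution):

```python
def find_midrange_linear(lst):
    # min() and max() each scan the list once: O(n) total, no sort needed
    return (min(lst) + max(lst)) / 2

print(find_midrange_linear([5, 4, 8, 2, 9, 1, 3]))  # 5.0
print(find_midrange_linear([4, 5, 2, 1, 7, 7]))     # 4.0
```

Either version is acceptable here; the question explicitly allows using Python's own sorting algorithm.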

Q2.
Write a python function that gets n and returns the nth value in the fun function sequence. Fun function
sequence is defined by
P(n) = 3*P(n-1) + 2*P(n-2) while P(0) = 0 and P(1) = 1. Use your code to generate and provide the first 10
values of fun function sequence.

def fun(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return 3 * fun(n-1) + 2 * fun(n-2)

first_10_values = [fun(n) for n in range(10)]
print(first_10_values)
# [0, 1, 3, 11, 39, 139, 495, 1763, 6279, 22363]
Section 2 – Function Analysis (30 marks)

Q3.
Run an analysis of the function developed in Q2 and find its T(n) and big O performance. Is finding the nth
element of the Fun function sequence faster than bubble sort? Is it faster than finding the nth element of the
Fibonacci series?

Solution:

Analysis of the function fun(n):

1. n = 0 and n = 1 return immediately: constant time.

2. For n = 2, fun(1) and fun(0) both return immediately (two recursive calls).

3. For n = 3, fun(2) requires two more recursive calls and fun(1) returns immediately.

4. For n = 4, fun(3) requires four more recursive calls and fun(2) requires two more.

Therefore the number of recursive calls roughly doubles each time n increases by one. In general, for n levels
of recursion, the total number of calls can be approximated by 2^n.

So T(n) ≈ 2^n, and the big O performance of the function is O(2^n).

Fun function sequence vs. bubble sort:

Time complexity:

Fun function sequence (recursive code above): O(2^n)

Bubble sort: O(n^2)

The Fun function's runtime grows exponentially with n, while bubble sort's grows only quadratically, and an
exponential eventually outgrows any polynomial.

So for larger values of n, calculating the nth element of the Fun function sequence with the recursive code
above is generally slower than running bubble sort.

Fun function sequence vs. Fibonacci series:

Time complexity:

Fun function sequence (naive recursion): O(2^n)

Fibonacci series (naive recursion): also exponential, since each call likewise spawns calls for n-1 and n-2

Fibonacci series (with dynamic programming): O(n)


Computed naively, both sequences cost essentially the same, because the recursion trees have the same
shape. However, the Fibonacci series can be computed far more efficiently using dynamic programming,
resulting in linear time complexity. Therefore, for larger values of n, finding the nth element of the Fibonacci
series with dynamic programming is generally faster than the naive calculation of the nth Fun value.
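The same dynamic-programming idea applies directly to the Fun sequence itself. A minimal bottom-up sketch (my own, not part of the marked solution):

```python
def fun_linear(n):
    # Build P(0)..P(n) bottom-up: O(n) time, O(1) extra space
    if n == 0:
        return 0
    prev, curr = 0, 1  # P(0), P(1)
    for _ in range(2, n + 1):
        prev, curr = curr, 3 * curr + 2 * prev
    return curr

print([fun_linear(n) for n in range(10)])
# [0, 1, 3, 11, 39, 139, 495, 1763, 6279, 22363]
```

This reproduces the recursive fun(n) values while avoiding the exponential blow-up of recomputing subproblems.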

Q4.
What is the T(n) and big O performance of the following function? This function gets a value as an input and
prints all prime numbers between 1 and the input number. math.sqrt takes the square root of a number.
import math

def print_all_primes(n):
    for num in range(1, n + 1):  # note: as written, this also prints 1
        isPrime = True
        for i in range(2, int(1 + math.sqrt(num))):
            if num % i == 0:
                isPrime = False
                break
        if isPrime:
            print(num)

Solution:

Analyzing the time complexity of the print_all_primes function:

- The outer loop iterates from 1 to n, resulting in n iterations.
- For each number, the inner loop runs at most about sqrt(num) times, i.e. O(sqrt(n)) iterations.

Therefore T(n) = O(n * sqrt(n)).

In terms of big O notation, the expression simplifies to O(n^(3/2)), since n * sqrt(n) = n^1 * n^(1/2) = n^(3/2).
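This bound can be checked empirically. The counter below is my own sketch; it mirrors the loop structure of print_all_primes but counts trial divisions instead of printing:

```python
import math

def count_inner_iterations(n):
    # Count how many times the trial-division test runs for print_all_primes(n)
    count = 0
    for num in range(1, n + 1):
        for i in range(2, int(1 + math.sqrt(num))):
            count += 1
            if num % i == 0:
                break
    return count

# The count stays below n * sqrt(n) for each n, consistent with O(n^(3/2))
for n in (100, 1000, 10000):
    assert count_inner_iterations(n) <= n * math.sqrt(n)
```

The early break on the first divisor found means the true count sits well under the n * sqrt(n) ceiling, but the ceiling is what the big O analysis uses.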

Which one is faster: sorting a list of size n, or printing all prime numbers up to n? Answer based on
both bubble sort and merge sort.
1. Bubble Sort: Bubble sort has a worst-case time complexity of O(n^2).

2. Merge Sort: Merge sort has a worst-case time complexity of O(n log n).

3. Printing all prime numbers up to n: The T(n) for printing all prime numbers up to n is O(n *sqrt(n)).

Comparing the time complexities:

- Bubble Sort: O(n^2)

- Merge Sort: O(n log n)

- Printing Primes: O(n * sqrt(n))

Comparing these growth rates: O(n log n) < O(n * sqrt(n)) < O(n^2). Printing all primes up to n, at O(n^(3/2)), is
therefore asymptotically faster than bubble sort but slower than merge sort. So for larger values of n, merge
sort finishes first, printing the primes comes next, and bubble sort is the slowest of the three.
Q5.
Assume that after analyzing a recursive function to find its T(n) you find that
T(n) = 5 + 2 * T(n/4)
What is the big O performance of this function? Is it faster or slower than binary search? Is it faster or slower
than merge sort?
Hint: a ^ (log x in base a^k) = x^(1/k)
After analyzing the relation, we notice that the function divides the problem size by 4 and makes 2 recursive
calls, plus constant work. Expanding the recurrence for k = log_4(n) levels:
T(n) = 5 * (1 + 2 + 4 + ... + 2^(k-1)) + 2^k * T(1) = O(2^(log_4 n))
Using the hint with a = 2 and a^k = 4 (so k = 2): 2^(log_4 n) = n^(1/2). Therefore the big O performance of this
function is O(sqrt(n)).
Binary Search:
O(sqrt(n)) grows faster than O(log n), so this function is slower than binary search.
Merge Sort:
O(sqrt(n)) grows far more slowly than O(n log n), so this function is faster than merge sort in terms of the
number of operations performed (though the two solve different problems, so the comparison is only about
operation counts).
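The O(sqrt(n)) answer can be checked numerically. Unrolling the recurrence with an assumed base case T(1) = 1 (and integer division standing in for n/4) gives the closed form T(4^k) = 6 * 2^k - 5, i.e. roughly 6 * sqrt(n):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # assumed base case T(1) = 1; n // 4 models the n/4 subproblem
    if n <= 1:
        return 1
    return 5 + 2 * T(n // 4)

for k in range(1, 9):
    n = 4 ** k
    assert T(n) == 6 * 2 ** k - 5  # = 6*sqrt(n) - 5, consistent with O(sqrt(n))
```

The ratio T(n) / sqrt(n) approaches the constant 6 as n grows, which is exactly what O(sqrt(n)) predicts.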

Section 3 - Linked Lists (30 marks)

Q6.
What are the advantages and disadvantages of using singly linked lists vs. doubly linked lists? Please elaborate.

Solution:

Singly Linked List:


1. Traversal: In a singly linked list, traversal can only be done using the next node link. Each node
contains a reference only to the next node in the list. To traverse the list, you start from the head and
follow the next links until you reach the desired node or the end of the list.
2. Memory Usage: Singly linked lists occupy less memory compared to doubly linked lists. Each node in a
singly linked list has two fields: a data field to store the actual value and a next link field to reference
the next node in the list.
3. Accessing Elements: Accessing elements in a singly linked list is less efficient compared to a doubly
linked list. Since only forward traversal is possible, to access a specific element, you need to traverse
from the head until you reach the desired position.
4. Time Complexity: Given only a pointer to a node, deleting that node from a singly linked list is O(n),
because its predecessor must be found by traversing from the head. (Inserting after a given node, by
contrast, is O(1).)
5. Preferred Use Cases: Singly linked lists are preferred when there are memory limitations, meaning you
have limited memory available for storing the list. They are also preferred when searching is not a
common operation that needs to be performed frequently.

Doubly Linked List:


1. Traversal: In a doubly linked list, traversal can be done in both forward and backward directions. Each
node contains references to both the next node and the previous node. This bidirectional linking allows
efficient traversal in either direction.
2. Memory Usage: Doubly linked lists occupy more memory compared to singly linked lists. Each node in
a doubly linked list has three fields: a data field to store the value, a next link field to reference the next
node, and a previous link field to reference the previous node.
3. Accessing Elements: Accessing elements in a doubly linked list is somewhat more efficient than in a
singly linked list. Since both forward and backward traversal are possible, a search can start from
whichever end is closer to the target, roughly halving the worst-case walk; access by position is still
O(n), however.
4. Time Complexity: The time complexity of inserting or deleting a node at a given position (if the pointer
to that position is given) in a doubly linked list is O(1). This is because you can directly update the links
of the adjacent nodes without traversing the entire list.
5. Preferred Use Cases: Doubly linked lists are preferred when there are no memory limitations and when
searching is a common operation that needs to be performed frequently. The ability to traverse in both
directions makes it easier to search and manipulate the list efficiently.
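The trade-off above can be made concrete with minimal node types. This is a sketch of my own, not required by the question:

```python
class SinglyNode:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next          # one link: forward traversal only

class DoublyNode:
    def __init__(self, data, prev=None, next=None):
        self.data = data
        self.prev = prev          # the extra link is the memory cost...
        self.next = next

def unlink(node):
    # ...and the payoff: O(1) deletion given just the doubly linked node
    if node.prev is not None:
        node.prev.next = node.next
    if node.next is not None:
        node.next.prev = node.prev

# Build a <-> b <-> c, then remove b without ever touching the head
a, b, c = DoublyNode(1), DoublyNode(2), DoublyNode(3)
a.next, b.prev, b.next, c.prev = b, a, c, b
unlink(b)
print(a.next.data)  # 3
```

With a SinglyNode, the same deletion would require a walk from the head to find the predecessor of b.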

Q7.
Assume that a linked list is declared as follows:

class LList:
    class Node:
        def __init__(self, data, next=None):
            self.data = data
            self.next = next

    def __init__(self):
        self.front = None
        self.back = None

Develop two cloning functions:

def clone_range_index_reverse(self, myLList, i, j):

def clone_range(self, myLList, min, max):

that get a linked list myLList and build a new linked list from the given myLList by:

clone_range_index_reverse: copying the ith, (i+1)th, (i+2)th, ..., jth elements of myLList into the new list in
reverse order (starting at index i). If i = len(myLList) and j = 1, the new linked list will be identical to myLList.

clone_range: copying over all elements of myLList that are greater than or equal to min and less than or
equal to max.

Hint: You can first implement a simple insert function for your linked list that gets a node (or data) and inserts it
into the list. Then, in each clone function, iterate over the given myLList and insert the qualifying data one by
one.
Solution:

def clone_range_index_reverse(self, myLList, i, j):
    tmp_lst = LList()              # build the clone as a new LList
    curr_value = myLList.front
    data_lst = []

    # Traverse to the ith element
    for _ in range(i):
        if curr_value is None:
            break
        curr_value = curr_value.next

    # Traverse from the ith to the jth element, storing data in reverse order
    for _ in range(i, j + 1):
        if curr_value is None:
            break
        data_lst.insert(0, curr_value.data)
        curr_value = curr_value.next

    # Insert the reversed data into the new list
    for data in data_lst:
        tmp_lst.insert(data)

    return tmp_lst

def clone_range(self, myLList, min, max):
    tmp_lst = LList()
    curr_value = myLList.front

    # Traverse the given list and insert qualifying data into the new list
    while curr_value is not None:
        if min <= curr_value.data <= max:
            tmp_lst.insert(curr_value.data)
        curr_value = curr_value.next

    return tmp_lst
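Both clone functions rely on the insert helper that the hint suggests but the solution never shows. A minimal append-at-the-back sketch (the method body is my assumption, following the question's LList/Node declaration):

```python
class LList:
    class Node:
        def __init__(self, data, next=None):
            self.data = data
            self.next = next

    def __init__(self):
        self.front = None
        self.back = None

    def insert(self, data):
        # Append a new node at the back, keeping front/back consistent
        node = LList.Node(data)
        if self.front is None:
            self.front = self.back = node
        else:
            self.back.next = node
            self.back = node

# Quick check: inserts preserve order from front to back
lst = LList()
for x in (1, 2, 3):
    lst.insert(x)
out, node = [], lst.front
while node is not None:
    out.append(node.data)
    node = node.next
print(out)  # [1, 2, 3]
```

Appending at the back (rather than pushing at the front) means the clone functions preserve the order in which they insert data.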

Section 4 - Tables (20 marks)

Q8. Implementing a double hashed table

Consider the following double hash functions:

hash1(key) = key % 13
hash2(key) = 9 - (key % 9)
Assume that the table of length 11 is already partially populated as:

Index   Key   Value
0       11    Apple
1
2       13    Banana
3       25    Cherry
4
5
6       22    Date
7
8
9       39    Elderberry
10      21    Fig

Using the double hash functions above, do the following two insertions:
Insert(Key = 0, Value = Plum)
Insert(Key = 9, Value = Strawberry)
Just provide a simple explanation and populate the values in the table above. No coding required.
Solution:
Probe sequence used: the i-th probe is (hash1(key) + i * hash2(key)) mod 11, the standard double-hashing rule.
1. Insertion with Key = 0, Value = Plum:
- hash1(0) = 0 % 13 = 0. Index 0 is already occupied by key 11, so we have a collision.
- hash2(0) = 9 - (0 % 9) = 9, so each further probe steps 9 positions (mod 11).
- Probe 1: (0 + 9) % 11 = 9, occupied by key 39.
- Probe 2: (0 + 18) % 11 = 7, which is empty.
- Therefore, we insert Key = 0 and Value = Plum at index 7 in the table.
Updated table:

Index   Key   Value
0       11    Apple
1
2       13    Banana
3       25    Cherry
4
5
6       22    Date
7       0     Plum
8
9       39    Elderberry
10      21    Fig

2. Insertion with Key = 9, Value = Strawberry:
- hash1(9) = 9 % 13 = 9. Index 9 is already occupied by key 39, so we have a collision.
- hash2(9) = 9 - (9 % 9) = 9, so again the step size is 9.
- Probe 1: (9 + 9) % 11 = 7, now occupied by Plum.
- Probe 2: (9 + 18) % 11 = 5, which is empty.
- Therefore, we insert Key = 9 and Value = Strawberry at index 5 in the table.
Updated table:

Index   Key   Value
0       11    Apple
1
2       13    Banana
3       25    Cherry
4
5       9     Strawberry
6       22    Date
7       0     Plum
8
9       39    Elderberry
10      21    Fig
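The probe arithmetic can be sketched in a few lines, assuming the standard double-hashing rule where the i-th probe is (hash1 + i * hash2) mod table size:

```python
TABLE_SIZE = 11

def hash1(key):
    return key % 13

def hash2(key):
    return 9 - (key % 9)

def probe_sequence(key, attempts=4):
    # i-th probe index: (hash1(key) + i * hash2(key)) mod table size
    return [(hash1(key) + i * hash2(key)) % TABLE_SIZE for i in range(attempts)]

print(probe_sequence(0))  # [0, 9, 7, 5]
print(probe_sequence(9))  # [9, 7, 5, 3]
```

The first empty slot reached along each sequence is where the key lands, so the probe list makes the collision handling easy to trace by hand.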
