Selection Sort
* Selection sort is a combination of searching and sorting: on each pass it searches for the smallest remaining element and puts it in place. It is among the simplest sorting techniques and works very well for small files. Its disadvantage is that it is inefficient on large lists.
Here is a sample code for this sorting technique:
void selectionSort(int numbers[], int array_size)
{
    int i, j;
    int min, temp;
    for (i = 0; i < array_size - 1; i++)
    {
        min = i;                              /* assume the i-th element is the smallest */
        for (j = i + 1; j < array_size; j++)  /* scan the rest of the array */
        {
            if (numbers[j] < numbers[min])
                min = j;                      /* remember the position of the smallest element */
        }
        temp = numbers[i];                    /* swap it into position i */
        numbers[i] = numbers[min];
        numbers[min] = temp;
    }
}
* This is how it works:
- First, find the smallest element in the array.
- Once the smallest element is found, exchange it with the element in the first position of the array.
- Find the second smallest element in the array.
- Exchange the second smallest element with the element in the second position of the array.
- Continue this way until the entire array is sorted.
* For me, selection sort is not difficult to analyze because it is one of the simplest sorting techniques we use.
* Here is an example of a Selection Sort: array A = (5,2,1,4,7,6,3)
unsorted = 5 2 1 4 7 6 3
- Scan for the smallest element in the array, which is 1, and exchange it with the first element in the array.
1 2 5 4 7 6 3
- Scan for the next smallest element, which is 2; it is already in its proper position. Scan again for the next smallest element, which is 3, and exchange it with the element in the third position.
1 2 3 4 7 6 5
- Since elements 1, 2, 3, and 4 are already in their proper positions, scan for the next smallest element, which is 5, and exchange it with the element in the 5th position of the array.
1 2 3 4 5 6 7
- And now we have the sorted array using selection sort. A small driver that reproduces this result is sketched below.
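* To check the example, here is a minimal driver of my own (it is not part of the original sample code) that calls selectionSort on array A and prints the result:
#include <stdio.h>

void selectionSort(int numbers[], int array_size);   /* the sample code above */

int main(void)
{
    int a[] = {5, 2, 1, 4, 7, 6, 3};
    int n = sizeof(a) / sizeof(a[0]);
    int i;

    selectionSort(a, n);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);          /* prints: 1 2 3 4 5 6 7 */
    printf("\n");
    return 0;
}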
Insertion Sort
* This sorting technique is quite similar to selection sort. The difference between the two is that selection sort scans all the remaining elements to find the smallest one, while insertion sort only scans as many elements as needed to determine the correct location of the next unsorted element of the array.
* Here is a sample code for Insertion Sort:
void insertionSort(int numbers[], int array_size)
{
    int i, j, index;
    for (i = 1; i < array_size; i++)
    {
        index = numbers[i];                /* the element to be inserted */
        j = i;
        while ((j > 0) && (numbers[j - 1] > index))
        {
            numbers[j] = numbers[j - 1];   /* shift larger elements one place to the right */
            j = j - 1;
        }
        numbers[j] = index;                /* drop the element into its proper place */
    }
}
* This is how it works:
- Start with the second element; the first element on its own already forms a sorted part of the array.
- Take the next unsorted element and insert it into its proper place among the already sorted elements, shifting the larger elements one position to the right.
- Continue this way until all the elements in the array are in their proper positions.
* Here is an example of an Insertion sort:
array A = (5,2,4,6,1,3)
5 2 4 6 1 3
- Among the first two elements, 2 is smaller than 5, so 2 is inserted before 5. This is what will happen:
2 5 4 6 1 3
- Since 4 is less than 5, it is inserted between 2 and 5:
2 4 5 6 1 3
- The element 6 is already larger than everything before it, so elements 2 4 5 6 are already sorted. The next unsorted element is 1; position it in its proper place. This is what will happen:
1 2 4 5 6 3
- Since element 3 is the last unsorted element, place it in its proper position. This is:
1 2 3 4 5 6
- And now we have the sorted array using insertion sort. A small program that prints each of these passes is sketched below.
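* Here is a small illustrative program I wrote (it is not part of the sample code above) that performs the same insertion sort but prints the array after every pass, so you can see the intermediate states from the example:
#include <stdio.h>

int main(void)
{
    int a[] = {5, 2, 4, 6, 1, 3};
    int n = sizeof(a) / sizeof(a[0]);
    int i, j, k, index;

    for (i = 1; i < n; i++)
    {
        index = a[i];                     /* the element being inserted */
        j = i;
        while ((j > 0) && (a[j - 1] > index))
        {
            a[j] = a[j - 1];              /* shift larger elements to the right */
            j = j - 1;
        }
        a[j] = index;

        for (k = 0; k < n; k++)           /* show the array after this pass */
            printf("%d ", a[k]);
        printf("\n");
    }
    return 0;
}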
Bubble Sort
* It is a sorting technique that works by comparing each adjacent pair of elements in an array, swapping them when they are out of order, and repeating passes over the array until no more swaps are needed, i.e., until all the elements are sorted.
* Here is a sample code for Bubble Sorting:
// This is a sample code of bubble sorting for ascending order
void BubbleSort(int numbers[], int array_size)
{
    int i, j, flag = 1;        // flag is set to 1 so that the first pass runs
    int temp;                  // holding variable for swaps
    for (i = 1; (i <= array_size) && flag; i++)
    {
        flag = 0;
        for (j = 0; j < (array_size - 1); j++)
        {
            if (numbers[j] > numbers[j + 1])   // for descending order simply change > to <
            {
                temp = numbers[j];             // swap the adjacent elements
                numbers[j] = numbers[j + 1];
                numbers[j + 1] = temp;
                flag = 1;                      // indicates that a swap occurred
            }
        }
    }
}
* This is how it works:
- First, it compares the first two elements in the array.
- If the first element is greater than the second element, swap them. If the first element is less than the second, the elements remain in their positions.
- Next, compare the second element and the third element.
- If the second element is greater than the third element, swap them. If the second element is less than the third, the elements remain in their positions.
- Continue this process, pass after pass, until the elements in the array are sorted.
* Here is a sample of Bubble Sorting: we have array A = (2 1 5 3 4 6)
[2 1 5 3 4 6] → [1 2 5 3 4 6] : we swap 2 and 1 because 2 > 1.
- Now compare the second and third elements. Since 2 < 5, they remain in their positions.
- Next, compare the third and fourth elements. Since 5 > 3, swap them.
[1 2 5 3 4 6] → [1 2 3 5 4 6]
- We have now sorted the first three elements of the array, but there are still unsorted elements remaining.
- Now compare the 4th and 5th elements. Since 5 > 4, swap them.
[1 2 3 5 4 6] → [1 2 3 4 5 6]
- Now compare the last two elements in the array. Since they are already in their proper positions, no swap is needed. The array is now sorted, and after one more pass with no swaps the algorithm terminates. A small driver that runs BubbleSort on this array is sketched below.
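* Here is a minimal driver of my own (not part of the sample code) that runs the ascending-order BubbleSort above on the example array:
#include <stdio.h>

void BubbleSort(int numbers[], int array_size);   /* the sample code above */

int main(void)
{
    int a[] = {2, 1, 5, 3, 4, 6};
    int n = sizeof(a) / sizeof(a[0]);
    int i;

    BubbleSort(a, n);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);          /* prints: 1 2 3 4 5 6 */
    printf("\n");
    return 0;
}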
Shell Sort
* This Sorting technique was named after its inventor D. L. Shell.
* This sorting technique is similar to insertion sort, but the difference is that, instead of comparing adjacent elements, it repeatedly compares elements that are a distance d away from each other (d represents the distance). The value of d starts out as about half the input size and is roughly halved after each pass through the array. The elements are compared and swapped when needed.
* Equation for the distance: d = (n + 1) / 2, and the same formula is applied to the previous d to get each new distance.
* Here is a sample code for Shell sorting:
void shellSort(int numbers[], int array_size)
{
    int i, j, increment, temp;
    increment = 3;                     /* initial gap (hard-coded here; the text computes d = (n + 1) / 2) */
    while (increment > 0)
    {
        for (i = 0; i < array_size; i++)   /* gap-insertion sort with the current increment */
        {
            j = i;
            temp = numbers[i];
            while ((j >= increment) && (numbers[j - increment] > temp))
            {
                numbers[j] = numbers[j - increment];   /* shift elements that are 'increment' apart */
                j = j - increment;
            }
            numbers[j] = temp;
        }
        if (increment / 2 != 0)        /* shrink the gap ... */
            increment = increment / 2;
        else if (increment == 1)       /* ... and stop after the final pass with gap 1 */
            increment = 0;
        else
            increment = 1;
    }
}
* This is how it works. We have array a = (91 94 86 76 69 84):
- First we have to compute the distance. Since we have 6 elements in the array,
d = (6 + 1)/2 = 3 : Compare the 1st and 4th, the 2nd and 5th, and the 3rd and 6th elements (since they are 3 positions away from each other), and swap each pair that is out of order.
[ 76 69 84 91 94 86 ]
- The last distance used was 3, so we compute the next distance.
d = (3 + 1)/2 = 2 : The new distance is 2, so we compare each element with the element 2 positions away from it. Here 91 (in the 4th position) is greater than 86 (in the 6th position), so the two are swapped.
[ 76 69 84 86 94 91 ]
- We compute the next distance.
d = (2 + 1)/2 = 1 : This is our new distance. With a distance of 1 this is an ordinary insertion sort, so compare and move each element until all the elements in the array are sorted.
[76 69 84 86 94 91] → [69 76 84 86 94 91] → [69 76 84 86 91 94]
- And now we have the sorted array using shell sort. A small driver that runs the sample code on this array is sketched below.
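* Here is a minimal driver of my own (not part of the sample code) that runs shellSort on the example array; note that the hard-coded initial increment of 3 happens to equal (6 + 1)/2 for this six-element array:
#include <stdio.h>

void shellSort(int numbers[], int array_size);    /* the sample code above */

int main(void)
{
    int a[] = {91, 94, 86, 76, 69, 84};
    int n = sizeof(a) / sizeof(a[0]);
    int i;

    shellSort(a, n);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);          /* prints: 69 76 84 86 91 94 */
    printf("\n");
    return 0;
}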
Thursday, February 3, 2011
Assignment 2
Empirical analysis
* It is a kind of analysis which denotes information gained by means of observation, experience, or experiment.
This kind of analysis is easy to understand: as the definition itself suggests, it is about experiments, observations, and experience. It is easy to understand, but there is a lot of work involved, such as conducting experiments, observing the results, and learning from experience.
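* As a rough sketch of what empirical analysis can look like in practice (this is my own example), we can time a sorting function such as selectionSort from the earlier post on random arrays of increasing size and observe how the running time grows:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void selectionSort(int numbers[], int array_size);   /* from the earlier post */

int main(void)
{
    int sizes[] = {1000, 2000, 4000, 8000};
    int s, i;

    for (s = 0; s < 4; s++)
    {
        int n = sizes[s];
        int *a = malloc(n * sizeof(int));   /* an array of the current size */
        clock_t start, stop;

        for (i = 0; i < n; i++)
            a[i] = rand();                  /* random input data */

        start = clock();
        selectionSort(a, n);
        stop = clock();

        printf("n = %d: %.3f seconds\n", n,
               (double)(stop - start) / CLOCKS_PER_SEC);
        free(a);
    }
    return 0;
}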
Analysis of Algorithms
* Algorithms are designed to work with inputs. The efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps or storage locations.
Analysis of an algorithm determines the amount of resources, such as time and storage, that are necessary to execute a program. It provides a theoretical estimate of the resources needed by any algorithm that solves a given computational problem, and it is common to express this estimate as the algorithm's complexity. Given a program that implements the algorithm, we can also run the program and observe its behavior, but we must be careful in drawing conclusions based on the performance of the program alone. In analyzing an algorithm, we will determine the following:
- Running time of a program as a function of its inputs.
- The total or maximum memory space needed for the data of the program.
- The total size of the program code.
- The complexity of the program, which here means the readability of the program and how easy it is to understand.
- The robustness of the program, i.e., how it will deal with erroneous inputs.
But we are primarily concerned with the running time of a program, and we also have to consider the memory space needed to execute it. There are many factors that affect the running time: the algorithm itself, the input data, and the computer system used to run the program. The performance of the computer is determined by its hardware (the processor type and speed, the memory available, and the disk available), the programming language used, the language compiler or interpreter, and the operating system software.
Big O notation
* It is a commonly used notation that describes the relative complexity of an algorithm, that is, it estimates the rate of growth of a function as its argument tends towards infinity. Given two positive-valued functions f and g, consider the following definition: f(n) is O( g(n) ) if there exist positive numbers c and N such that f(n) ≤ c·g(n) for all n ≥ N.
The problem with this definition is that, first, it states only that certain c and N must exist but gives no hint on how to compute the values of these constants. Second, it puts no limitations on these values and gives little guidance when there are many candidate values. For example, f(n) = 2n² + 3n + 1 = O(n²) with g(n) = n²; the constants c = 3 and N = 4 satisfy the definition, but so do many other pairs, such as c = 6 and N = 1.
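* To make the definition concrete, here is a small numeric check of my own (the constants c = 3 and N = 4 are just one choice that works) showing that f(n) = 2n² + 3n + 1 stays below c·g(n) = 3n² for every n ≥ 4:
#include <stdio.h>

int main(void)
{
    long n;
    for (n = 4; n <= 20; n++)
    {
        long f  = 2 * n * n + 3 * n + 1;   /* f(n) */
        long cg = 3 * n * n;               /* c * g(n) with c = 3 */
        printf("n = %2ld   f(n) = %4ld   3n^2 = %4ld   %s\n",
               n, f, cg, (f <= cg) ? "f(n) <= c*g(n)" : "violated");
    }
    return 0;
}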
sources:
http://en.wikipedia.org/wiki/Empirical
http://www.catonmat.net/blog/mit-introduction-to-algorithms-part-one
Thursday, January 13, 2011
Union-Find Algorithm
A Union-Find algorithm is a kind of algorithm that performs two important operations: Union and Find.
* The Union operation combines or merges two sets into one.
* The Find operation determines which set a particular element is in, and is useful for checking whether two elements are in the same set.
-- Here is a link that explains a brief implementation of the Union-Find algorithm:
http://www.users.csbsju.edu/~lziegler/CS162/UnionFind2.html
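-- The linked page has its own implementation; as a rough sketch of the idea (my own code, with my own names makeSets, find, and unionSets), the two operations can be written with an array where parent[x] stores the parent of element x and a root points to itself:
#define MAX_ELEMENTS 100

int parent[MAX_ELEMENTS];

void makeSets(int n)                   /* initially every element is its own set */
{
    int i;
    for (i = 0; i < n; i++)
        parent[i] = i;
}

int find(int x)                        /* follow parent links up to the root of x's set */
{
    while (parent[x] != x)
        x = parent[x];
    return x;
}

void unionSets(int x, int y)           /* merge the sets containing x and y */
{
    int rootX = find(x);
    int rootY = find(y);
    if (rootX != rootY)
        parent[rootX] = rootY;         /* point one root at the other */
}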