Maximizing XOR

Given two integers, L and R, find the maximal value of A xor B, where A and B satisfy the following condition: L <= A <= B <= R.


Input Format

The input contains two lines; L is present in the first line and R in the second line.


Output Format

The maximal value as mentioned in the problem statement.

Sample Input

10
15

Sample Output

7


The input tells us that L=10 and R=15. All pairs (A, B) with 10 <= A <= B <= 15 comply with the above condition. Here two pairs, (10, 13) and (11, 12), have the maximum xor value 7, and this is the answer.

using System;

class Maximizing_XOR
{
    static void Main(string[] args)
    {
        int _l = Convert.ToInt32(Console.ReadLine());
        int _r = Convert.ToInt32(Console.ReadLine());

        int res = maxXor(_l, _r);
        Console.WriteLine(res);
    }

    private static int maxXor(int _l, int _r)
    {
        int _max = 0, temp;
        for (int i = _l; i <= _r; ++i)
        {
            for (int j = i; j <= _r; ++j)
            {
                temp = i ^ j;
                if (_max < temp)
                    _max = temp;
            }
        }
        return _max;
    }
}

Problem source:

NULL Pointer in C

A NULL pointer is defined as the special pointer value that points to nowhere in memory. If it is too early in the code to assign a valid address to a pointer, then it is better to assign NULL (i.e., 0) to the pointer.

For example:


int *p=NULL;

Here, the pointer variable p is a NULL pointer; this indicates that the pointer variable p does not point to any part of memory. The value of NULL is defined in the header file "stdio.h". Instead of using NULL, we can also use 0. The programmer can access the data using the pointer variable p if and only if it does not contain NULL. The error condition can be checked using the following statement:

if (p == NULL)
    printf("p does not point to any memory\n");
else
    printf("Access the value of p\n");

Note : A pointer variable must be initialized. If it is too early to initialize a pointer variable, then it is better to initialize all pointer variables to NULL at the beginning of the code. This avoids unintentional use of an uninitialized pointer variable.

Example : Consider the following statements:

int *x;
int y;
x = y;   /* Error */

Note : The value of a data variable cannot be assigned to a pointer variable. So, the statement x = y; results in an error. The correct statement is x = &y;

Binary Search in C

To overcome the disadvantage of linear search, we generally use binary search when the elements are sorted. Now, let us see ” What is binary search? What is the concept used in binary search?”

Definition : Binary search is a simple searching technique which can be applied if the items to be compared are either in ascending order or descending order.

The general idea used in binary search is similar to the way we search for the meaning of a word in a dictionary. Obviously, we do not use linear search. Instead, we open the dictionary in the middle and the word is compared with the element at the middle of the dictionary. If the word is found, the corresponding meaning is retrieved and the search is finished. Otherwise, we search either the left part or the right part of the dictionary. If the word to be searched is less than the middle element, we search towards the left; otherwise, we search towards the right. The procedure is repeated until the key (word) is found or it is established that the key (word) is not present.

Once we know the concept of binary search, the next question is “How to search for key in a list of elements?”

Now, let us see how to search for an item. The procedure is shown below:

Procedure: The program can be designed as follows. Here, low is considered as the position of the first element and high as the position of the last element. The position of the middle element can be obtained using the statement:

mid = (low + high) / 2;

The key can be compared with the middle element using the statement:

if (key == a[mid])
    printf("Successful search");

If this condition is true, the message "Successful search" is displayed. If it is false, the key may be in the left part of the array or in the right part of the array. If the key is less than the middle element, the left part of the array has to be searched from low to mid-1. Otherwise, the right part of the array has to be searched from mid+1 to high. This can be achieved using the following statement:

if (key < a[mid])
    high = mid - 1;
else
    low = mid + 1;

Note that, as the above statements are executed repeatedly, the gap between low and high is reduced. Finally, the value of low may exceed the value of high; in that case, we say that the key is not found.
Now, the search loop of the C program takes the following form:

while (low <= high)
{
    mid = (low + high) / 2;
    if (key == a[mid])
    {
        printf("Successful search\n");
        return 0;
    }
    else if (key < a[mid])
        high = mid - 1;
    else
        low = mid + 1;
}
printf("Unsuccessful search\n");

You are already familiar with the algorithm and flowchart of binary search. Now the complete C program is shown below:

#include <stdio.h>

int main(void)
{
    int i, n, key, a[10], low, mid, high;

    printf("Enter the value of n:\n");
    scanf("%d", &n);
    printf("Enter n values\n");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    printf("Enter key to search\n");
    scanf("%d", &key);

    /* Initialization */
    low = 0;
    high = n - 1;

    while (low <= high)
    {
        /* Find the mid point */
        mid = (low + high) / 2;
        if (key == a[mid])
        {
            /* Key found */
            printf("Successful search\n");
            return 0;
        }
        else if (key < a[mid])
            high = mid - 1;
        else
            low = mid + 1;
    }
    printf("Unsuccessful search\n");
    return 0;
}

Note: The necessary condition for binary search is that the list of elements should be sorted. Otherwise, binary search should not be used. If the elements are not sorted and are few, we normally go for linear search. But, if the elements are many and not sorted, it is better to sort them and then use binary search.

Advantages of binary search

  • Simple technique.
  • Very efficient searching technique.

Limitations of binary search

  • The list of elements where searching takes place should be sorted.
  • It is necessary to obtain the middle element, which is possible only if the
    elements are stored in an array. If the elements are stored in a linked
    list, this method cannot be used.



Structure of a C program

The structure of a C program is nothing but the way of framing the group of statements while writing a C program. We put the general structure of a C program first as shown below:

[preprocessor directives]
[global declarations]

returning_type main( )
{
    [Declaration Section (local)]

    [Executable Section]
}

[User defined functions]

Note : The bold-faced characters such as main( ) along with the left parenthesis '(' and the right parenthesis ')' should be typed as they are. The Declaration section and Executable section enclosed within '{' and '}' are called the body of the main function. The data or value returned by the function main( ) is indicated by 'returning_type'. Normally main( ) doesn't return any value, so the returning type void is used. The function main( ) can be preceded by global declarations and preprocessor directives.

Preprocessor directives : The preprocessor statements start with the '#' symbol. These statements instruct the compiler to include the specified file at the beginning of the program. One important point about preprocessor directives is that these statements are never terminated by ';'. For example,

#include<stdio.h>  /* Only one file is permitted for one #include */
#include<math.h>

are the files that the compiler includes into the user program. These two files "stdio.h" and "math.h" are called header files and hence the extension '.h'. Using the preprocessor directive the user can also define constants. For example,
#define SIZE 100 /*Only one constant can be defined using one #define */
#define N 50
Here, SIZE and N are called symbolic constants. Their values are not changed during program execution.

Global declarations: The variables that are declared before all the functions are called global variables. All the functions can access these variables. By default, global variables are initialized to 0.

main( ) : Each and every C program should have a function main( ), and there should be one and only one function named 'main( )'. This function is always executed first. The function main( ) may call other functions. The statements enclosed within the left and right curly braces are called the body of the function main( ).

Declaration section : The local variables that are to be used in the function main( ) should be declared in the declaration section. The variables declared are identified as identifiers by the C compiler. The variables can also be initialized. For example, consider the declarations shown below:

int sum=0 ;  /* The declared variable is initialized to 0 */
int a;       /* The declared variable contains a garbage (unwanted) value */
float b,c,d; /* More than one variable can be declared in a single statement */

Executable section : This section contains the building blocks of the program. The blocks containing executable statements represent the directions given to the processor to perform the task. A statement may represent an expression to be evaluated, an input/output operation, an assignment operation, etc. The statements may even be control statements such as the if statement, for statement, while statement, do-while statement, etc. Each executable statement of a block should end with a ';'. The symbol ';' is also called the statement terminator or separator.

User defined functions: This is the last, optional section of the program structure. The functions written by the user to perform particular or special tasks are called user defined functions. These are also called sub-programs. The basic structure of a user defined function resembles that of the function main( ).



Different types of sorting algorithms.


  1. Comparable elements
  2. Bubblesort
    1. compare 1st and 2nd elements
    2. if 1st larger than 2nd, swap
    3. compare 2nd and 3rd, and swap if necessary
    4. continue until compare the last two elements
    5. the largest element is now the last element in the array.
    6. repeat starting from the beginning until no swaps are performed (i.e.,
      the array is sorted)
    7. each time you go through the elements bubbling up the largest element
    8. no need to try the last i elements for the ith run since the end
      elements are already sorted
  3. Selection Sort
    1. array to be sorted: A
    2. array to be returned: B
    3. find smallest element in A and put in B
    4. mark space in A with null so it won’t be chosen again
    5. repeat last two steps until B is sorted array
  4. Insertion Sort
    1. algorithm
      • passes through each element
      • everything before element is sorted
      • puts element in appropriate place in sorted half of array by checking
        each element starting from the back of the sorted part of the array
    2. Code Methods: insertionsort
    3. Worst Case O(N^2) - when array is reverse sorted
    4. Best Case O(N) – when array is already sorted
    5. Swap Number
      • = number of inversions (i.e., i < j but a[i] > a[j])
      • Running Time = O(I + N)
      • Notice that for a sorted array I = 0
      • For a reverse array I = O(N^2)
      • Average number of inversions in an array of N distinct elements is
        N * (N - 1) / 4 = O(N^2)
    6. Average Case O(N^2)
    7. Any algorithm that sorts by exchanging adjacent elements requires
      O(N^2) time on average.

      • Including Bubblesort and Selection Sort
  5. Shellsort
    1. invented by Donald Shell
    2. Algorithm
      1. start separation = number of elements / 2
      2. compare a[i] and a[i+separation]
      3. swap if necessary
      4. do for each element of array i = 0 to a.length – separation
      5. set separation = separation /2
      6. recompare all the elements
    3. separation calculation
      1. shell suggested /2
      2. other sequences give great improvement
      3. note: figure 7.3 does not divide by 2 properly
    4. Code Methods: shellsort
    5. Average Case Shellsort: Open Problem
    6. Worst Case O(N^2)
      • 1 9 2 10 3 11 4 12 5 13 6 14 7 15 8 16
      • subsequent increments don't do anything because they are not
        relatively prime
      • last row is an insertion sort of N(N-1)/2 comparisons
      • second to last row is (N)(N/2 - 1)/2 comparisons, and so on
      • toss out the minus ones (we are rounding up - so this will only
        increase the running time)
      • N*N/2 + N*N/(2*2) + N*N/(4*2) + ...
      • factor out N*N/2 => N*N/2 * (1 + 1/2 + 1/4 + ...)
      • the series is less than 2, so we get N*N/2 * 2 = N*N => O(N^2)
      • OR ...
      • each pass with increment h_k has h_k insertion
        sorts of N/h_k elements
      • insertion sort is quadratic in the number of elements
      • each pass is thus O(h_k * (N/h_k)^2)
      • or O(N^2/h_k)
      • h_k changes for each pass
      • h_k in its worst case is 1, 2, 4, 8, ...
      • all other cases will divide down smaller by throwing out the remainder
      • so the sum of 1/h_k = 1 + 1/2 + 1/4 + 1/8 + ... < 2
      • so we have 2*N^2/2 = N^2 => O(N^2)
    7. Hibbard’s increments
      1. Use relatively prime gaps
      2. 1, 3, 7, ..., 2^k - 1
      3. Worst Case: O(N^(3/2))
      4. I will spare you the proof.
      5. Average Case: O(N^(5/4)) - not proved yet
    8. Sedgewick increments
      1. 1, 5, 19, 41, 109
      2. terms are either 9*4^i - 9*2^i + 1 or
        4^i - 3*2^i + 1
      3. Worst Case: O(N^(5/4))
      4. Average Case: O(N^(7/6)) - not proved
  6. Heapsort
    1. Worst Case: O(NlogN)
    2. Slower in practice than Sedgewick’s
    3. Code Methods: leftChild, percDown, heapsort
    4. algorithm:
      1. put elements in array
      2. build heap O(N)
      3. deleteMin N times and place back in array – O(N*logN)
    5. uses two arrays
      1. too much space for large arrays
      2. could use one, by placing deleteMins in back of array
      3. Will then be in reverse order
      4. could instead delete max
      5. and place max in back of array when delete it
      6. then use one array
      7. similar to selection sort
    6. array for heapsort starts in position 0 as opposed to a heap where it
      starts in position 1
    7. Analysis
      1. Building Heap: 2N comparisons
      2. deleteMin logN each (N of them)
      3. O(NlogN)
      4. Performance is Extremely consistent
      5. On average uses only slightly fewer than worst bound
      6. To determine average case is difficult because successive deleteMax
        destroy heap’s randomness
      7. Average number of comparisons: 2NlogN – O(NloglogN)
      8. Notice that this is still O(NlogN)
      9. Best Case: NlogN – O(N)
      10. Notice that this is still O(NlogN)
  7. Mergesort
    1. Code Methods: mergesort (2), merge
    2. Worst Case: O(NlogN)
    3. Recursively merges two sorted lists
    4. Time to merge two sorted lists is linear (N-1 comparisons)
    5. 1 13 24 26 merge 2 15 27 38 gives 1 2 13 15 24 26 27 38
    6. Classic Divide and Conquer Strategy
    7. If more than one element, divide and merge sort the first and second
    8. Analysis
      1. Recurrence Relation
      2. T(1) = 1
      3. T(N) = 2T(N/2) + N
      4. T(N)/N = T(N/2)/(N/2) + 1
      5. T(N/2)/(N/2) = T(N/4)/(N/4) + 1
      6. T(2)/2 = T(1)/1 + 1
      7. Sum up all of the equations
      8. T(N)/N = T(1)/1 + logN <- this logN is the sum of the 1's
      9. T(N) = NlogN + N = O(NlogN)
    9. Uses extra memory to merge and copy back into array
    10. Merging is cornerstone of external sorting routines
    11. Not often used for main memory sorts
    12. Can also do it not recursively
    13. Or can use less memory – much more complex algorithm (impractical)
  8. Quicksort
    1. Code Methods: quicksort (2), median3
    2. Fastest known sorting algorithm in practice
    3. Worst Case: O(N^2)
    4. Average Case: O(NlogN)
    5. Worst Case can be made very unlikely with little effort
    6. Divide and Conquer Algorithm
    7. Algorithm
      1. If the number of elements in S is 0 or 1, return
      2. pick any element v in S. This is called the pivot.
      3. Partition the elements in S into two groups, those below the pivot
        (numerically) and those above the pivot.
      4. return quicksort of s1 followed by v followed by quicksort of s2
    8. Hope that half the keys are greater than the pivot and the others are
      less
    9. Subproblems are not guaranteed to be of equal size, which is
      potentially bad
    10. Faster than mergesort because partitioning step can be performed in
      place and efficiently
    11. Picking the Pivot
      1. Pick first element
        1. Awful performance if list is sorted or reverse sorted
        2. All elements will go in S1 or S2
        3. Will take quadratic time and do nothing at all
        4. Presorted input or mostly sorted input is quite frequent
        5. Horrible pivot point
      2. Pick larger of the first two elements
        1. bad for the same reasons as above
      3. Pick pivot randomly
        1. Good solution
        2. Unless random number generator has a flaw (which is common)
        3. Random number generation expensive
      4. Median of Three
        1. Best choice would be the median of the array
        2. Would take too long
        3. Take median of first last and middle
        4. eliminates the sorted case
        5. reduces running time of quicksort by about 5%
    12. Partitioning Strategy
      1. Swap pivot into last element
      2. Assuming all distinct elements
      3. while i less than j
      4. Move i to right while elements less than pivot
      5. Move j to left while elements greater than pivot
      6. When i and j have stopped – swap elements and continue
      7. Once i and j have crossed, stop
      8. i is at lowest element greater than pivot
      9. swap i and pivot
      10. Duplicates to pivot
        1. both i and j go past the pivot
        2. Otherwise, if only i does, will put all duplicate in S1
        3. if both go past and all the same element - could create very
          uneven subarrays
        4. giving O(N^2) results
        5. best to make both stop and swap, since the extra swaps is better
          than uneven subarrays
        6. Why would we sort identical elements?
        7. Think of sorting 100,000 elements where 5000 are identical
        8. Eventually will call quicksort on 5000 elements (since quicksort is
          recursive)
        9. Will want these to be sorted efficiently
    13. Small Arrays
      1. N <= 20
      2. Insertion sort is faster
      3. Use insertion sort inside of quicksort for arrays smaller than 20
      4. Faster by 15% in running time
      5. Good cutoff is N = 10, although any number between 5 and 20 is fine
      6. Saves nasty cases - such as taking the median of three elements when
        only one or two are present
    14. Routines
      1. Driver calls quicksort method that specifies the start and stop of
        array to be sorted
      2. Median of Three Pivot selection
        1. returns pivot
        2. places smallest of left, median, and right to left
        3. places largest to the right
        4. places median in the middle then
        5. switches the pivot to last element - 1
      3. Heart of Quicksort
        1. Calls insertion sort on small arrays
        2. Skips outer elements since this is done in median3 with the ++ and --
        3. restores the pivot into the center from right - 1
        4. Need to increase i and j each loop (i.e., not inside while) because
          otherwise i and j will never change and will always equal pivot
    15. Analysis
      1. Worst Case: O(N^2)
      2. Best Case: O(NlogN) using same proof as merge sort
      3. Average Case: O(NlogN)
  9. Selection Problem
    1. Quickselect : Code
    2. |s| is the number of elements in s
    3. Algorithm
      1. if |s| is 1 then k = 1, return element
      2. if |S| <= cutoff (i.e., 10), then sort S and return kth largest
      3. pick pivot
      4. partition into S1 and S2
      5. if k <= |s1|, then the kth smallest is in S1; return quickselect(s1, k)
      6. if k = 1 + |s1|, then return the pivot
      7. otherwise return quickselect(s2, k - |s1| - 1)
    4. makes only one recursive call instead of two
    5. Worst Case: O(N^2) - when S1 or S2 is empty (not saving the
      recursive call)
    6. Average Case: O(N)
    7. Array starts at zero, so kth smallest element in k-1 position
    8. If use median of three – then worst case condition almost impossible
    9. Does destroy initial ordering
    10. If not desirable – a copy need be made
  10. Decision Trees and Lower Bounds
    1. Any sorting algorithm that uses comparisons requires at least on the
      order of NlogN comparisons in the worst case
    2. merge sort and heap sort optimal within constant factor
    3. Decision Trees can represent all possible options, and can prove the
      number of comparisons must be NlogN
    4. Can only draw small decision trees (3 elements) will get too large
    5. Let T be a binary tree of depth d; then T has at most 2^d leaves

      • Induction
      • Basis: If d = 0, then there is at most one leaf
      • Otherwise, we have a root, which cannot be a leaf, and a right and
        left subtree, each of depth at most d-1. By the induction hypothesis,
        each can have at most 2^(d-1) leaves, giving a total of at most
        2^d leaves.
    6. log(N!) = Θ(NlogN)
  11. Bucket Sort
    1. uses an array of the size of the maximum element + 1
    2. counts the number of occurrences of each element
    3. useful where input is only small integers
  12. External Sorting
    1. Too large for main memory – skipping section
    2. Multiway Merge
    3. Polyphase Merge
    4. Replacement Selection

Program to Insert a substring in a main string.

/* We have a main string and a substring that has to be inserted somewhere in the main string according to the user. The user enters both the main string and the substring, as well as the index where the substring has to be inserted. If we want to keep the main string in a proper format after insertion, a space has to be added along with the substring as input. */


#include <stdio.h>
#include <string.h>

void insert(char sup[], char sub[], int pos, int supl, int subl)
{
    int i;
    char temp[1000];

    printf("\nsupl= %d\n", supl);
    printf("\nsubl= %d\n", subl);

    for (i = 0; i < pos; i++)      /* copy the main string up to pos */
        temp[i] = sup[i];
    for (i = 0; i < subl; i++)     /* copy the substring */
        temp[pos + i] = sub[i];
    for (i = pos; i < supl; i++)   /* copy the rest of the main string */
        temp[subl + i] = sup[i];
    temp[supl + subl] = '\0';
    strcpy(sup, temp);
}

int main(void)
{
    char sup[500], sub[500];
    int pos;

    printf("Enter the main string: ");
    scanf("%499s", sup);
    printf("Enter the substring: ");
    scanf("%499s", sub);
    printf("Enter the index: ");
    scanf("%d", &pos);

    if (pos > (int)strlen(sup) || pos < 0)
        puts("\nArray out of bound! You entered invalid index.");
    else
    {
        insert(sup, sub, pos, strlen(sup), strlen(sub));
        printf("%s\n", sup);
    }
    return 0;
}
