class Node{
	int data;
	Node left,right;
	Node(int d){
		data=d;
		left=right=null;
	}
}
class GfG{
	public static int max=0;
	public static void main(String[] args){
		Node root=new Node(5);
		root.left=new Node(7);
		root.left.left=new Node(3);
		root.left.right=new Node(8);
		root.right=new Node(2);
         /*
              5
             / \
            7   2
           / \
          3   8
          The binary tree formed.
         */
		maxSum(root,root.data);
		System.out.println(max);
	}
	// Depth-first traversal: sum is the total of the values on the path
	// from the root down to and including the current node.
	public static void maxSum(Node root,int sum){
		if(root==null)
			return;
		if(sum>max){
			max=sum;
		}
		// Recurse with explicit null checks rather than catching the
		// NullPointerException, which is costly and bad practice.
		if(root.left!=null)
			maxSum(root.left,sum+root.left.data);
		if(root.right!=null)
			maxSum(root.right,sum+root.right.data);
	}
}

The above code prints the maximum root-to-leaf path sum of the binary tree. It uses a simple depth-first traversal, carrying the running path sum downward and recording the largest sum seen.

A private constructor is allowed in a class.
It disallows instantiation from other classes, but the constructor can still be called from inside the same class.
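
For example, this is the usual basis of the singleton pattern. A minimal sketch, with the class name Singleton chosen purely for illustration:

class Singleton{
	// Same-class access to the private constructor is allowed.
	private static final Singleton INSTANCE = new Singleton();

	// Not callable from any other class, so `new Singleton()` fails elsewhere.
	private Singleton(){ }

	// The only way for outside code to obtain the instance.
	public static Singleton getInstance(){
		return INSTANCE;
	}
}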

For elements in an array all smaller than about 10^7, we can easily create a simple hashing data structure: an array of counters indexed by value. It is easy to implement, and the 10^7 bound matters because the table needs one slot per possible value.
Let's see an example of this:
A[] = { 1 , 5 , 4 , 4 , 3 , 8};
X=8;

So create a hash table:
H[] = { 0 , 1 , 0 , 1 , 2 , 1 , 0 , 0 , 1 };

The hash table is built so that H[v] counts how often v occurs in the array: 1 occurs once, so there is a 1 at index 1; 4 occurs twice, so there is a 2 at index 4; and so on.
Now expand the hash table back into a new array, which will look like:
B[] = { 1 , 3 , 4 , 4 , 5 , 8 };
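
A minimal sketch of those two steps in Java (the method name countingSort is illustrative, and it assumes all values lie in 0..maxValue):

class CountingSortDemo{
	static int[] countingSort(int[] a, int maxValue){
		int[] h = new int[maxValue + 1];
		for (int v : a)
			h[v]++;                      // h[v] = number of times v occurs
		int[] b = new int[a.length];
		int k = 0;
		for (int v = 0; v <= maxValue; v++)
			for (int c = 0; c < h[v]; c++)
				b[k++] = v;              // write v out h[v] times, in order
		return b;
	}
}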

Now the array is sorted (this is essentially counting sort), and the pair-finding logic, with one pointer at each end, looks like:

bool findSum(int B[], int X, int n){

    int i = 0;
    int j = n - 1; // index of the last element (n is the length of B)
    while (i < j)
    {
        if (B[i] + B[j] == X)
        {
            return true;
        }
        else if (B[i] + B[j] > X)
        {
            j--; // sum too large: move the right pointer left
        }
        else
        {
            i++; // sum too small: move the left pointer right
        }
    }
    return false;
}

 

There are three kinds of contiguous memory allocation algorithms (a sketch of all three follows the list).

1) Best Fit - Find the smallest available memory slot that is still large enough to hold the incoming process.

2) First Fit - Find the first available memory slot large enough to hold the incoming process.

3) Worst Fit - Find the largest available memory slot to hold the incoming process.
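
A minimal sketch of all three strategies in Java (class, enum, and method names are illustrative): given the sizes of the free blocks, each strategy simply picks a different block index for the same request.

class FitDemo{
	enum Strategy { FIRST_FIT, BEST_FIT, WORST_FIT }

	// Returns the index of the chosen free block, or -1 if none is big enough.
	static int pick(int[] free, int need, Strategy s){
		int choice = -1;
		for (int i = 0; i < free.length; i++){
			if (free[i] < need) continue;           // block too small, skip it
			if (s == Strategy.FIRST_FIT) return i;  // first block that fits wins
			if (choice == -1
				|| (s == Strategy.BEST_FIT && free[i] < free[choice])    // smallest fit
				|| (s == Strategy.WORST_FIT && free[i] > free[choice]))  // largest fit
				choice = i;
		}
		return choice;
	}

	public static void main(String[] args){
		int[] free = { 100, 500, 200, 300, 600 };
		for (Strategy s : Strategy.values())
			System.out.println(s + " puts a 212-unit request in block "
					+ pick(free, 212, s));
	}
}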

The best way to implement this is hashing: a HashSet in Java, or a std::set / std::unordered_set in C++.
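
The original problem isn't quoted above, but as a generic illustration of the hashing pattern, here is a one-pass duplicate check with a HashSet (the method name hasDuplicate is purely illustrative):

import java.util.HashSet;

class HashDemo{
	static boolean hasDuplicate(int[] a){
		HashSet<Integer> seen = new HashSet<>();
		for (int v : a)
			if (!seen.add(v))   // add() returns false if v was already present
				return true;
		return false;
	}
}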

The time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input. The time complexity of an algorithm is commonly expressed using big O notation, which excludes coefficients and lower order terms.

Constant time:

An algorithm is said to be constant time (also written as O(1) time) if the value of T(n) is bounded by a value that does not depend on the size of the input.

Logarithmic time:

An algorithm is said to take logarithmic time if T(n) = O(log n). Due to the use of the binary numeral system by computers, the logarithm is frequently base 2 (that is, log_2 n, sometimes written lg n). However, by the change of base for logarithms, log_a n and log_b n differ only by a constant multiplier, which in big-O notation is discarded; thus O(log n) is the standard notation for logarithmic time algorithms regardless of the base of the logarithm.
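
For instance, log_2 n = log_10 n / log_10 2 ≈ 3.32 · log_10 n, so changing the base only rescales the function by a constant factor, which the O(·) notation absorbs.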

Polylogarithmic time:

An algorithm is said to run in polylogarithmic time if T(n) = O((log n)^k), for some constant k. For example, matrix chain ordering can be solved in polylogarithmic time on a Parallel Random Access Machine.

Sub-linear time:

An algorithm is said to run in sub-linear time (often spelled sublinear time) if T(n) = o(n). In particular this includes algorithms with the time complexities defined above, as well as others such as the O(√n) Grover's search algorithm.

Linear time:

An algorithm is said to take linear time, or O(n) time, if its time complexity is O(n). Informally, this means that for large enough input sizes the running time increases linearly with the size of the input. For example, a procedure that adds up all elements of a list requires time proportional to the length of the list. This description is slightly inaccurate, since the running time can significantly deviate from a precise proportionality, especially for small values of n.
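
For instance, a minimal Java version of that list-summing example, doing one unit of work per element:

class SumDemo{
	// Visits each of the n elements exactly once, so T(n) grows linearly in n.
	static long sumAll(int[] a){
		long total = 0;
		for (int v : a)
			total += v;
		return total;
	}
}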

Quasilinear time:

An algorithm is said to run in quasilinear time if T(n) = O(n log^k n) for some positive constant k; linearithmic time is the case k = 1. Using soft-O notation these algorithms are Õ(n). Quasilinear time algorithms are also O(n^(1+ε)) for every ε > 0, and thus run faster than any polynomial in n with exponent strictly greater than 1.

Polynomial time:

An algorithm is said to be of polynomial time if its running time is upper bounded by a polynomial expression in the size of the input for the algorithm, i.e., T(n) = O(n^k) for some constant k.

Source: Wikipedia

The best way to do the problem above is binary search, which runs in O(log N) time.

Reference:   http://www.geeksforgeeks.org/count-number-of-occurrences-in-a-sorted-array/
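
A sketch of the referenced approach in Java: count the occurrences of x in a sorted array with two lower-bound binary searches (the helper names are illustrative):

class CountDemo{
	// Number of occurrences of x = first index past x minus first index of x.
	static int countOccurrences(int[] a, int x){
		return lowerBound(a, x + 1) - lowerBound(a, x);
	}

	// Smallest index i with a[i] >= x (returns a.length if no such element).
	static int lowerBound(int[] a, int x){
		int lo = 0, hi = a.length;
		while (lo < hi){
			int mid = (lo + hi) >>> 1;   // unsigned shift avoids overflow
			if (a[mid] < x)
				lo = mid + 1;
			else
				hi = mid;
		}
		return lo;
	}
}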

Destructors are member functions with the same name as the class, preceded by a tilde (~). A destructor frees the resources the object acquired, typically in its constructor. Deleting a derived-class object through a base-class pointer whose destructor is non-virtual results in undefined behaviour; to avoid this, we declare the base class's destructor virtual.

For example:

// A program without a virtual destructor, causing undefined behaviour

#include <iostream>
using namespace std;

class base {
  public:
    base()
    { cout << "Constructing base \n"; }
    ~base()                               // non-virtual destructor
    { cout << "Destructing base \n"; }
};

class derived : public base {
  public:
    derived()
    { cout << "Constructing derived \n"; }
    ~derived()
    { cout << "Destructing derived \n"; }
};

int main(void)
{
  derived *d = new derived();
  base *b = d;   // base-class pointer to a derived object
  delete b;      // undefined behaviour: base's destructor is not virtual
  return 0;
}


Typical output for this code (the behaviour is formally undefined, and derived's destructor never runs):
Constructing base
Constructing derived
Destructing base

If we use a virtual destructor:

// A program with a virtual destructor
#include <iostream>
using namespace std;

class base {
  public:
    base()
    { cout << "Constructing base \n"; }
    virtual ~base()                       // virtual: enables correct cleanup
    { cout << "Destructing base \n"; }
};

class derived : public base {
  public:
    derived()
    { cout << "Constructing derived \n"; }
    ~derived()                            // implicitly virtual as well
    { cout << "Destructing derived \n"; }
};

int main(void)
{
  derived *d = new derived();
  base *b = d;
  delete b;      // well-defined: derived's destructor runs, then base's
  return 0;
}

Output:
Constructing base
Constructing derived
Destructing derived
Destructing base

ACID is a set of properties of database transactions. In the context of databases, a sequence of database operations that satisfies the ACID properties, and can thus be perceived as a single logical operation on the data, is called a transaction.
For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction.
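
A minimal JDBC sketch of that transfer (the accounts table, its columns, and the connection URL are placeholders, not a real schema): commit makes both updates take effect together, and any failure rolls both back.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class TransferDemo{
	static void transfer(int from, int to, long amount) throws SQLException {
		try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
			conn.setAutoCommit(false);                 // start an explicit transaction
			try (PreparedStatement debit = conn.prepareStatement(
					"UPDATE accounts SET balance = balance - ? WHERE id = ?");
				 PreparedStatement credit = conn.prepareStatement(
					"UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
				debit.setLong(1, amount);
				debit.setInt(2, from);
				debit.executeUpdate();
				credit.setLong(1, amount);
				credit.setInt(2, to);
				credit.executeUpdate();
				conn.commit();                         // both changes become permanent...
			} catch (SQLException e) {
				conn.rollback();                       // ...or neither does (atomicity)
				throw e;
			}
		}
	}
}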

Atomicity:
Atomicity requires that each transaction be "all or nothing": if one part of the transaction fails, then the entire transaction fails, and the database state is left unchanged. An atomic system must guarantee atomicity in each and every situation, including power failures, errors and crashes. To the outside world, a committed transaction appears (by its effects on the database) to be indivisible ("atomic"), and an aborted transaction does not happen.

Consistency:
The consistency property ensures that any transaction will bring the database from one valid state to another. Any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof. This does not guarantee correctness of the transaction in all ways the application programmer might have wanted (that is the responsibility of application-level code), but merely that any programming errors cannot result in the violation of any defined rules.

Isolation:
The isolation property ensures that the concurrent execution of transactions results in a system state that would be obtained if transactions were executed sequentially, i.e., one after the other. Providing isolation is the main goal of concurrency control. Depending on the concurrency control method (i.e., if it uses strict - as opposed to relaxed - serializability), the effects of an incomplete transaction might not even be visible to another transaction.

Durability:
The durability property ensures that once a transaction has been committed, it will remain so, even in the event of power loss, crashes, or errors. In a relational database, for instance, once a group of SQL statements execute, the results need to be stored permanently (even if the database crashes immediately thereafter). To defend against power loss, transactions (or their effects) must be recorded in a non-volatile memory.

Source: Wikipedia