Posts

Showing posts from July, 2017

SQL Query Practice Question

Que 1: Write a query to find the number of employees in each department, given the following table (tbl):

Emp_id  Emp_name  Department
1       ishant    cse
2       rahul     ece
3       ankit     cse
4       deepak    it
5       anshul    it
6       anurag    ece
7       aman      cce
8       rohit     it
9       rajesh    cse
10      Kamal     ece

Before reaching the exact answer to the question above, we will look at some basic concepts. First: what will happen if we run the following query?

select * from tbl group by department;

(Standard SQL rejects this query, because Emp_id and Emp_name are neither grouped nor aggregated; it only runs on systems such as MySQL with ONLY_FULL_GROUP_BY disabled, which pick an arbitrary row per group.) The result is:

Emp_id  Emp_name  Department
9       rajesh    cse
10      kamal     ece
8       rohit     it

I am sure you must be wondering why it gives just a single row per group even though you applied GROUP BY on department. The answer to your question lies in the following question: suppose I have a table Tab1 with attributes a1, a2, ... etc., none of which is unique. What will be the nature of the same query on it? Will it return a single row…
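The intended SQL answer is select department, count(*) from tbl group by department. As an illustration, the same per-department head-count can be sketched in Java with streams; the class and record names below are made up for this example, with the table rows hard-coded:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class DeptCount {
    // Minimal row type mirroring the columns of tbl (illustrative).
    record Emp(int id, String name, String dept) {}

    public static void main(String[] args) {
        List<Emp> tbl = List.of(
            new Emp(1, "ishant", "cse"), new Emp(2, "rahul", "ece"),
            new Emp(3, "ankit", "cse"), new Emp(4, "deepak", "it"),
            new Emp(5, "anshul", "it"), new Emp(6, "anurag", "ece"),
            new Emp(7, "aman", "cce"), new Emp(8, "rohit", "it"),
            new Emp(9, "rajesh", "cse"), new Emp(10, "kamal", "ece"));

        // Equivalent of: SELECT department, COUNT(*) FROM tbl GROUP BY department
        Map<String, Long> counts = tbl.stream()
            .collect(Collectors.groupingBy(Emp::dept, Collectors.counting()));

        // cse, ece and it each have 3 employees; cce has 1 (map order is unspecified)
        System.out.println(counts);
    }
}
```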

Joins in DBMS

Explain all the joins. An SQL JOIN clause combines records from two or more tables in a database. It creates a set that can be saved as a table or used as it is. A JOIN is a means of combining fields from two tables by using values common to each table.
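As a sketch of what an inner join does conceptually, here is a hypothetical Java version joining two in-memory "tables" on their common key; the maps and values are illustrative, not SQL:

```java
import java.util.Map;

public class InnerJoinSketch {
    public static void main(String[] args) {
        // "employees" table: id -> name; "departments" table: id -> department
        Map<Integer, String> empName = Map.of(1, "ishant", 2, "rahul", 3, "ankit");
        Map<Integer, String> empDept = Map.of(1, "cse", 3, "cse"); // no row for id 2

        // Inner join on the common key: keep only ids present in BOTH tables,
        // so id 2 is dropped from the result.
        empName.forEach((id, name) -> {
            if (empDept.containsKey(id)) {
                System.out.println(id + " " + name + " " + empDept.get(id));
            }
        });
    }
}
```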

Keys and Integrity Rules in RDBMS

What is a key, and what are the different types of keys? A key is an attribute, or a group of attributes, that works as a unique identifier, identifying all the tuples uniquely. Based on uniqueness there are three kinds of keys:
1. Candidate Key
2. Super Key
3. Primary Key
Candidate Key: When one attribute, or a group of attributes, serves as a unique identifier, each such identifier is called a candidate key. A candidate key K has the following two properties:
1. Uniqueness: in the relation there are no two tuples with the same value for K. Suppose the candidate key is the attribute pair (Name, Class); then two tuples cannot have the same values for both Name and Class. For example, if one tuple has (Ishant, CSE) as the value of these attributes, another tuple cannot also have (Ishant, CSE); of course it may have values like (Ishant, ECE) or (Rahul, CSE), just not (Ishant, CSE).
2. Irreducibility: it states that no proper su…
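The uniqueness property can be illustrated with a small Java check: treat the (Name, Class) pair as the key and verify that no two rows share it. The class and method names here are invented for the example:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CandidateKeyCheck {
    record Student(String name, String clazz) {}

    // True if the (name, clazz) pair uniquely identifies every row.
    static boolean isUniqueKey(List<Student> rows) {
        Set<String> seen = new HashSet<>();
        for (Student s : rows) {
            if (!seen.add(s.name() + "|" + s.clazz())) return false; // duplicate key
        }
        return true;
    }

    public static void main(String[] args) {
        List<Student> ok = List.of(new Student("Ishant", "CSE"),
                                   new Student("Ishant", "ECE"),
                                   new Student("Rahul", "CSE"));
        List<Student> bad = List.of(new Student("Ishant", "CSE"),
                                    new Student("Ishant", "CSE"));
        System.out.println(isUniqueKey(ok));  // true
        System.out.println(isUniqueKey(bad)); // false: (Ishant, CSE) repeats
    }
}
```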

Best Practices to follow to develop REST API

REST is an acronym for REpresentational State Transfer. It is an architectural style for distributed hypermedia systems and was first presented by Roy Fielding in 2000 in his famous dissertation. It is mainly used to develop lightweight, fast, scalable, and easy-to-maintain web services that often use HTTP as the means of communication. https://blog.mwaysolutions.com/2014/06/05/10-best-practices-for-better-restful-api/ https://www.snyxius.com/blog/21-best-practices-designing-launching-restful-api/#.WXYAMISGPIU

ConcurrentHashMap Internal Working

ConcurrentHashMap utilizes the same principles as HashMap, but is designed primarily for multi-threaded applications, and hence does not require explicit synchronization. Prior to JDK 5, the only thread-safe map collections were Hashtable and a synchronized Map. Before learning how ConcurrentHashMap works in Java, we need to look at why it was added to the JDK at all. Why do we need ConcurrentHashMap when we already had Hashtable? Hashtable provides concurrent access to the map's entries by locking the entire map to perform any sort of operation (update, delete, read, create). Suppose we have a web application: the overhead created by Hashtable (locking the entire map) can be ignored under normal load. But under heavy load, the overhead of locking the entire map may prove fatal, leading to delayed response times and an overtaxed server. This is where ConcurrentHashMap comes to the rescue. According to the ConcurrentHashMap Oracle docs, ConcurrentHashMa…
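A minimal sketch of the point above: several threads updating one ConcurrentHashMap with no explicit locks. The per-key merge() call is atomic on ConcurrentHashMap, so no updates are lost; the counter scenario itself is invented for illustration:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentCounter {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentMap<String, Integer> hits = new ConcurrentHashMap<>();

        // Ten threads each bump the same counter 1000 times; no synchronized blocks.
        Thread[] workers = new Thread[10];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    hits.merge("page", 1, Integer::sum); // atomic per-key update
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();

        System.out.println(hits.get("page")); // 10000: no lost updates
    }
}
```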

HashSet Internal Working : How HashSet ensure Uniqueness

HashSet is an implementation of the Set interface in Java, which means duplicate elements cannot be added to it. If we try to add a duplicate element, HashSet will discard the newly added duplicate. But the question is: how does HashSet do that? How does it detect the duplicate element and keep it from being added? Let's look at the internal working of HashSet to find the answer. When you open the HashSet implementation of the add() method in the Java APIs (rt.jar), you will find the following code. So we achieve uniqueness in Set internally in Java through HashMap. Whenever you create a HashSet object, it creates a HashMap object, as you can see in the italicized lines in the above code. As we know, in a HashMap each key is unique. So what the Set does is pass the argument of add(E e), that is e, as a key into the HashMap. Now we need to associate some value with that key, so what the Java API developers di…
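The delegation described above can be sketched as a simplified HashSet built on a HashMap, mirroring the JDK approach: the element becomes the map key, a shared dummy object the value, and add() reports a duplicate when HashMap.put returns a previous (non-null) value. The class name MiniHashSet is made up; the real java.util.HashSet is more elaborate:

```java
import java.util.HashMap;

public class MiniHashSet<E> {
    private static final Object PRESENT = new Object(); // shared dummy value
    private final HashMap<E, Object> map = new HashMap<>();

    // HashMap.put returns null for a brand-new key and the old value for an
    // existing key, which is exactly the true/false contract Set.add needs.
    public boolean add(E e) {
        return map.put(e, PRESENT) == null;
    }

    public int size() {
        return map.size();
    }

    public static void main(String[] args) {
        MiniHashSet<String> set = new MiniHashSet<>();
        System.out.println(set.add("a")); // true  (new element)
        System.out.println(set.add("a")); // false (duplicate discarded)
        System.out.println(set.size());   // 1
    }
}
```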

Database/SQL Interview Questions

How will you search for a word in a very large database? 1. First, groups of related files are stored in different buckets using hashing. Then, within each bucket, we sort the files using some sorting method if they are not already in alphabetical order. 2. Now comes the searching part: using the hash function we can reach the right bucket in O(1) time. 3. Since we have already sorted the files in that bucket, we apply binary search to reach the exact file, which takes O(log n) time. Then, within the file, we can apply a search algorithm to reach the word. Explain a database to your 5-year-old child in three sentences. 1. It is like a shelf in which you put your toys, with a number of drawers for different kinds of toys: for example, green toys go in drawer no. 1 and red toys in drawer no. 2. 2. Every drawer has a name slip on it, like green or yellow, so that you can easily find a toy of each color. 3. Your shelf has a lock and y…
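The three steps above can be sketched in Java: hash items into a handful of buckets, sort each bucket once, then pick the bucket in O(1) and binary-search inside it in O(log n). The bucket count and the word list are illustrative:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class BucketedSearch {
    static final int BUCKETS = 4;

    // Step 1: hash each word into one of a few buckets.
    static int bucketOf(String word) {
        return Math.floorMod(word.hashCode(), BUCKETS);
    }

    public static void main(String[] args) {
        List<List<String>> buckets = new ArrayList<>();
        for (int i = 0; i < BUCKETS; i++) buckets.add(new ArrayList<>());
        for (String w : List.of("apple", "banana", "cherry", "date", "fig")) {
            buckets.get(bucketOf(w)).add(w);
        }

        // Step 2: sort each bucket once, up front.
        for (List<String> b : buckets) Collections.sort(b);

        // Step 3: O(1) to pick the bucket, O(log n) to binary-search inside it.
        String target = "cherry";
        int idx = Collections.binarySearch(buckets.get(bucketOf(target)), target);
        System.out.println(idx >= 0); // true: the word was found
    }
}
```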

SOLID Design Principle

SOLID principles are a set of principles first given by Robert C. Martin. As the name suggests, they are a set of principles that allow building a SOLID software system. Unlike OOD and design patterns, SOLID principles are concerned with the design and maintenance of a software system. SOLID principles help in designing software which is easy to maintain, easy to expand, easy to understand, easy to implement, and easy to explain. S.O.L.I.D is an acronym for:
Single Responsibility Principle: this principle is about designing a software module, class, or function that performs only one task. So this principle is about creation.
Open/Closed Principle: applied after the Single Responsibility Principle; again this principle is about designing a module, class, or function, but it is about closing the already-designed thing for modification while opening it for extension, i.e., extending its functionality. So this principle is about extension.
Liskov Substitution Principle: …
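A tiny, hypothetical sketch of the Single Responsibility Principle: instead of one class that both formats and stores a report, each class gets exactly one task and therefore one reason to change. All names here are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class SrpSketch {
    // Responsibility 1: presentation only.
    static class ReportFormatter {
        String format(String body) {
            return "=== REPORT ===\n" + body;
        }
    }

    // Responsibility 2: persistence only (an in-memory stand-in for a store).
    static class ReportRepository {
        private final List<String> store = new ArrayList<>();
        void save(String report) { store.add(report); }
        int count() { return store.size(); }
    }

    public static void main(String[] args) {
        ReportFormatter formatter = new ReportFormatter();
        ReportRepository repo = new ReportRepository();
        repo.save(formatter.format("quarterly numbers"));
        System.out.println(repo.count()); // 1
    }
}
```

A change to the report layout now touches only ReportFormatter, and a change to storage touches only ReportRepository.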

Classloader in Java

ClassLoader in Java is a class which is used to load class files. Java code is compiled into class files by the javac compiler, and the JVM executes a Java program by executing the bytecode in those class files. The ClassLoader is responsible for loading class files from the file system, the network, or any other source. There are three default class loaders in Java: Bootstrap, Extension, and System (or Application). The ClassLoader's main responsibilities are:
1) Loading
2) Linking
3) Initialization
Loading: The class loader reads the .class file, generates the corresponding binary data, and saves it in the method area. For each .class file, the JVM stores the following information in the method area: the fully qualified name of the loaded class and its immediate parent class; whether the .class file represents a class, an interface, or an enum; and modifier, variable, and method information. After loading the .class file, the JVM creates an object of type Class to represent this file in heap memory. Please n…
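The loader hierarchy can be observed directly: classes loaded by the bootstrap loader report null for getClassLoader(), while your own application classes report the system/application loader. A minimal demo (the class name is arbitrary):

```java
public class LoaderDemo {
    public static void main(String[] args) {
        // Core classes like String come from the bootstrap loader,
        // which is represented as null in the Java API.
        System.out.println(String.class.getClassLoader()); // null

        // Classes on the classpath are loaded by the system/application loader.
        System.out.println(LoaderDemo.class.getClassLoader() != null); // true
    }
}
```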

How JVM Works – JVM Architecture?

The JVM (Java Virtual Machine) acts as a run-time engine to run Java applications. The JVM is a part of the JRE (Java Runtime Environment). Java applications are called WORA (Write Once, Run Anywhere): a programmer can develop Java code on one system and expect it to run on any other Java-enabled system without any adjustment. This is all possible because of the JVM. What exactly is the JVM? It is three things at once:
A specification, where the working of the Java Virtual Machine is specified; the implementation provider is free to choose the algorithms, and implementations have been provided by Sun and other companies.
An implementation, which is known as the JRE (Java Runtime Environment).
A runtime instance: whenever you type the java command on the command prompt to run a Java class, an instance of the JVM is created.
The structure of Java code execution is as follows. Initially, the Java source code is compiled by the compiler and converted into a .class file, which contains bytecode. After that, the .class fil…

PermGen Vs MetaSpace

PermGen: Prior to Java 8 there existed a special space called the 'Permanent Generation'. This is where metadata such as classes would go; additionally, some things like interned strings were kept in PermGen. Note that PermGen is not part of the Java heap memory. PermGen is populated by the JVM at runtime based on the classes used by the application, and it also contains Java SE library classes and methods. PermGen objects are garbage collected during a full garbage collection. It used to cause a lot of trouble for Java developers, since it is quite hard to predict how much space all of that would require. The result of these failed predictions took the form of java.lang.OutOfMemoryError: PermGen space. Unless the cause of such an OutOfMemoryError was an actual memory leak, the way to fix the problem was simply to increase the PermGen size, similar to the following example setting the maximum allowed PermGen size to 256 MB: java -XX:MaxPermSize=256m com.mycompany.MyApplication…
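For comparison: Java 8 removed PermGen and moved class metadata into the natively allocated Metaspace, whose analogous cap is -XX:MaxMetaspaceSize. A side-by-side sketch of the two flags (the class name is only a placeholder):

```
# Pre-Java-8: cap the permanent generation
java -XX:MaxPermSize=256m com.mycompany.MyApplication

# Java 8+: PermGen is gone; cap the native Metaspace instead
# (unbounded by default, limited only by available native memory)
java -XX:MaxMetaspaceSize=256m com.mycompany.MyApplication
```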

Garbage Collection Internal Working

In Java, the Garbage Collector was created based on the following two hypotheses (it is more correct to call them suppositions or preconditions than hypotheses): most objects soon become unreachable, and references from old objects to young objects exist only in small numbers. These observations come together in the Weak Generational Hypothesis. Based on this hypothesis, the memory inside the VM is divided into what is called the Young Generation and the Old Generation; the latter is sometimes also called Tenured. Since the GC algorithms are optimized for objects which either 'die young' or 'are likely to live forever', the JVM behaves rather poorly with objects of 'medium' life expectancy. Memory Pools: the following division of memory pools within the heap should be familiar. What is not so commonly understood is how garbage collection performs its duties within the different memory pools. Notice that in different GC algorithms some implementation details might…

Garbage Collection

In Java, the programmer need not care about objects which are no longer in use; the garbage collector destroys them. The main objective of the garbage collector is to free heap memory by destroying unreachable objects. The garbage collector is the best example of a daemon thread, as it is always running in the background. Advantages of garbage collection: it makes Java memory-efficient, because the garbage collector removes unreferenced objects from heap memory, and it is done automatically by the garbage collector (a part of the JVM), so we don't need to make any extra effort. As we have seen, the garbage collector destroys objects which have no reference in memory, so now we will look at the different ways to unreference an object. There are several ways:
1. By nulling the reference
2. By assigning the reference to another object
3. By using an anonymous object
4. By creating an island of isolation
1) By nulling a reference: Integer i = new Integer(4); // the new Integer object is reachable via the reference…
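The first three ways of unreferencing an object can be sketched together in one small program; the class name and the values are illustrative, and System.gc() is only a non-binding hint, not a guaranteed collection:

```java
public class UnreachableDemo {
    public static void main(String[] args) {
        // 1) Nulling a reference: the Integer object becomes eligible for GC.
        Integer i = Integer.valueOf(4);
        i = null;

        // 2) Reassigning a reference: the "first" StringBuilder is abandoned.
        StringBuilder sb = new StringBuilder("first");
        sb = new StringBuilder("second");

        // 3) Anonymous object: never referenced after construction,
        //    so it is eligible for GC immediately.
        new StringBuilder("throwaway");

        // The GC runs on its own schedule; this is merely a suggestion.
        System.gc();
        System.out.println(i == null && sb.toString().equals("second")); // true
    }
}
```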

HTTP : Must known Protocol (Part 1)

Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. Basically, HTTP is a TCP/IP-based communication protocol that is used to deliver data (HTML files, image files, query results, etc.) on the World Wide Web. The default port is TCP 80, but other ports can be used as well. It provides a standardized way for computers to communicate with each other. The HTTP specification defines how clients construct and send request data to the server, and how servers respond to these requests. TCP/IP is responsible for breaking the data up into small packets and carrying them to the correct place, i.e., the IP address where they are supposed to go. Basic features of HTTP: HTTP is connectionless: the HTTP client, i.e., a browser, initiates an HTTP request, and after the request is made, the client disconnects from the server and waits for a response. The server processes the request and re-establishes…
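A sketch of the client side of this exchange, using the standard java.net.http API (Java 11+): the request is built but deliberately not sent, to show the pieces a client assembles, which are the method, the target URI (TCP port 80 implied by http), and headers. The URL and header values are illustrative:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class HttpSketch {
    public static void main(String[] args) {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://example.com/index.html")) // port 80 implied
                .header("Accept", "text/html")
                .GET()
                .build();

        System.out.println(request.method()); // GET
        System.out.println(request.uri());    // http://example.com/index.html
    }
}
```

Sending it with java.net.http.HttpClient would then perform the connect/request/disconnect cycle described above.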

Apache Kafka Vs ActiveMQ

ActiveMQ and Kafka are two different kinds of messaging systems. ActiveMQ has long been in the industry, with full stability and a rich feature set, whereas Kafka is the new beast, bringing a totally new flavour for handling large volumes of data at high throughput. Let's start with the similarities and differences between ActiveMQ and Apache Kafka. 1) JMS: ActiveMQ is an implementation of the JMS (Java Message Service) API, whereas Apache Kafka is a totally different distributed messaging system which uses its own protocol. 2) Performance: message publishing speed in ActiveMQ is around 300 msg/sec over a single thread, whereas message publishing speed in Kafka is around 165k msg/sec over a single thread. 3) Message ordering: Kafka ensures that messages are received in the order in which they were sent at the partition level; JMS does not have any such contract. 4) Persistence of messages: Kafka brokers store messages for a specified period of time, irrespective of whet…

Apache Kafka : Distributed Messaging System

Apache Kafka is a distributed publish-subscribe messaging system which can handle a large volume of data and send it from one point to another. Kafka messages are persisted on disk and replicated within the cluster to prevent data loss. Kafka is, by design, a fast, scalable, distributed, partitioned, and replicated commit-log service. Benefits of using Kafka:
Reliability: Kafka is distributed, partitioned, replicated, and fault-tolerant.
Scalability: the Kafka messaging system scales easily without downtime.
Durability: Kafka uses a distributed commit log, which means messages are persisted to disk as fast as possible; hence it is durable.
Performance: Kafka has high throughput for both publishing and subscribing to messages, and it maintains stable performance even when many TB of messages are stored.
In comparison to other messaging systems, Kafka has better throughput, built-in partitioning, replication, and inherent fault tolerance, which makes it a…
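To make the durability/reliability trade-off concrete, here is a minimal producer configuration sketch using standard Kafka producer property names; the broker address and serializer choices are illustrative:

```
# Minimal Kafka producer configuration (values are illustrative)
bootstrap.servers=localhost:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
# acks=all waits for the full in-sync replica set to acknowledge each write,
# trading a little latency for the durability guarantee described above
acks=all
```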