If You Have Spring Boot Rest Services Would You Use A Singleton and Why?


1. If you have spring boot rest services would you use a singleton and why?

Yes.

REST is meant to be stateless: each request carries all the information the server needs to handle it. Given that, there is no reason for the server (or the @Controller) to keep per-request information in instance fields after the request has been handled, so a singleton is the way to go.

Controllers are just beans; they can be singleton or prototype scoped, depending on what you are trying to do. If you want stateless behaviour, use singleton scope, which is the default.

Spring beans are singleton scoped by default. Since component scanning a @Controller-annotated class simply registers a bean, that bean will be singleton scoped.
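As a minimal sketch (assuming spring-boot-starter-web is on the classpath; the class and endpoint names here are illustrative, not from the original question), a stateless singleton controller keeps all request-specific data in method parameters and local variables rather than instance fields:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Singleton-scoped by default: one instance serves all requests.
// It stays thread-safe because it holds no mutable instance state.
@RestController
public class GreetingController {

    @GetMapping("/greetings/{name}")
    public String greet(@PathVariable String name) {
        // All request-specific data lives in parameters and local variables.
        return "Hello, " + name;
    }
}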
2. REST APIs


Application programming interfaces (APIs) provide the platform and medium for applications to
talk to and understand each other. APIs specify the way information passed across platforms is
structured so that applications can exchange data and information.

REST is an API architecture style. It stands for representational state transfer. REST specifies
how data is presented to a client in a format that is convenient for the client. Data exchanges
happen either in JSON or XML format. However, many APIs today return JSON data, so we will
focus on that in this guide.

The API specifies a set of rules for one application to interact with another. Many APIs have
proper documentation that also describes the nature and structure of the response they send
when you make a request. They also specify the necessary information a requesting application
needs to provide to make a successful request to the API.

In effect, RESTful APIs function much like ordinary HTTP requests over TCP/IP, except that the two parties are simply two applications talking to each other rather than a human-driven browser and a web server.

There are a number of patterns for designing APIs. These patterns have different histories and requirements, and they create different experiences for users. The designs are interconnected in many ways, so we will see many places where they are very similar. Understanding them will help you decide which one to use for your specific needs.

REST

REST is truly a “web services” API, which puts it on the opposite side of SOAP (probably why you will find dozens of comparisons between the two without looking very hard). REST APIs are based on URIs (uniform resource identifiers) and the HTTP protocol. They can exchange data in either JSON or XML format, although many REST APIs send data as JSON.

When building a system with minimal security considerations but strong speed requirements, REST is an excellent choice. RESTful APIs carry fewer security requirements and offer better browser-client compatibility, discoverability, data health, and scalability — qualities that matter most for web services.

RESTful APIs — design considerations


We defined and used a few simple RESTful API endpoints in the previous article on Understanding HTTP response codes in frontend applications - Part 2. Please read that article before continuing with this one.

Designing APIs is an entirely different matter. There are best practices to follow to ensure your APIs are easy for anyone to consume. Let’s list five of them below:

Use plural nouns and not verbs

As an example, imagine you want a service for stock control of fruits. Here is a simple chart of the basic REST APIs you can make for your fruit application:

/fruits
  GET (read): Returns a list of fruits, possibly all the fruits you have
  POST (create): Creates a new fruit
  PUT (update): Bulk update of fruits (you may never have to do this)
  DELETE: Deletes all fruits (you should probably not implement this)

/fruits/1004
  GET (read): Returns a specific fruit
  POST (create): Method not allowed (405)
  PUT (update): Updates a specific fruit
  DELETE: Deletes a specific fruit
It’s not good practice to have API endpoints like /getFruits, /delete-fruits or /createfruits. Instead, take advantage of the HTTP verbs for those meanings. This way anyone can easily build upon your API with fewer resources to manage, and the design is less ambiguous and cleaner.
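A minimal sketch of noun-based endpoints in Spring (the FruitController class and its in-memory responses are illustrative assumptions; assumes spring-web on the classpath):

import java.util.List;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/fruits")
public class FruitController {

    // GET /fruits -> read the collection
    @GetMapping
    public List<String> listFruits() {
        return List.of("apple", "banana");
    }

    // GET /fruits/{id} -> read one resource
    @GetMapping("/{id}")
    public String getFruit(@PathVariable long id) {
        return "fruit-" + id;
    }

    // POST /fruits -> create
    @PostMapping
    public String createFruit(@RequestBody String fruit) {
        return fruit;
    }

    // DELETE /fruits/{id} -> delete one resource
    @DeleteMapping("/{id}")
    public void deleteFruit(@PathVariable long id) {
    }
}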

Use sub-resources for relations

Say you are designing a social network where users have blogs. You should establish relationships like this:

GET /users/1004/blogs

This should fetch all the user’s blogs. If you want to fetch a specific blog, you can do the
following:

GET /users/1004/blogs/1010

Handle errors with HTTP status codes

We did a full guide on HTTP status codes explaining their use cases. Understand that it is hard to work with an API that ignores error handling. Returning the wrong status code, or returning a stack trace without a helpful message highlighting the error, does not help the consumer of the API, no matter how good your documentation is.
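As an illustrative sketch (the exception and handler names are hypothetical; assumes spring-web), domain errors can be mapped to a meaningful status code and a short message instead of a stack trace:

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

// Hypothetical domain exception for a missing fruit.
class FruitNotFoundException extends RuntimeException {
    FruitNotFoundException(long id) {
        super("No fruit with id " + id);
    }
}

@RestControllerAdvice
class ApiErrorHandler {

    // 404 with a helpful message, rather than a 500 and a stack trace.
    @ExceptionHandler(FruitNotFoundException.class)
    ResponseEntity<String> handleNotFound(FruitNotFoundException ex) {
        return ResponseEntity.status(HttpStatus.NOT_FOUND).body(ex.getMessage());
    }
}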

Provide pagination, sorting and filtering


This is a no-brainer. Your application will likely have a huge repository of information. I do not want to call your endpoint to get all fruits and receive 100,000 fruits in one response, and you do not want that either: the load on your server would be enormous and you would burn through your resources rapidly.
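A hedged sketch of paging and sorting parameters (the parameter names page, size and sort are illustrative conventions, not from the original text; assumes spring-web, shown in isolation):

import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class FruitQueryController {

    // GET /fruits?page=0&size=50&sort=name
    @GetMapping("/fruits")
    public List<String> listFruits(@RequestParam(defaultValue = "0") int page,
                                   @RequestParam(defaultValue = "50") int size,
                                   @RequestParam(defaultValue = "name") String sort) {
        // A real implementation would translate page/size/sort into a
        // bounded database query instead of returning everything at once.
        return List.of();
    }
}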

Basics about API security

I mentioned earlier that RESTful APIs have few security considerations, but that does not mean we should ignore security completely. We will approach API security from two perspectives: authentication and authorization.

Authentication

This has to do with verifying the identity of the person or system trying to access your API, and it can be done in multiple ways. In its simplest form it is a username/email and password combination. For APIs, it will most likely involve a security token that identifies a user. More complex scenarios use key/secret pairs, often to integrate one application with another.

Authorization

This answers the question “What are you allowed to do?”, which means it comes after authentication. Authorization comes in handy when you are designing endpoints that expose data specific to a particular person, or sensitive information that only a predefined set of people may access.

3. HashMap: what are the properties of keys in a HashMap?


(How to design good custom key object for HashMap)

a. The contract between hashCode() and equals()

The most basic requirement for a good key is that “we should be able to
retrieve the value object back from the map without failure“; otherwise, no
matter how fancy a data structure you build, it will be of no use. To decide whether we
have created a good key, we MUST know how HashMap works. I will leave the
details of how HashMap works to the linked post, but in summary it works
on the principle of hashing.

A key’s hashCode() is used, in conjunction with its equals() method, both for putting
a key-value pair into the map and for getting it back, so these two methods are our only
focus. If the hash code of a key object changes after we have put a key-value pair into
the map, it becomes almost impossible to fetch the value object back: the entry is still
in the map but can no longer be reached, which is effectively a memory leak.

At runtime, the hash code is obtained by calling the object’s hashCode() method whenever
it is needed. If hashCode() is computed from mutable state, then modifying the object
changes the value returned by subsequent hashCode() calls, which is exactly what breaks
the lookup described above.

b. Make HashMap key object immutable

For the above reason, key objects are suggested to be IMMUTABLE.

Immutability gives you the same hash code every time for a given key object, so it
actually solves most of the problems in one go. The key class must also honor
the hashCode() and equals() contract.

This is the main reason why immutable classes such as String, Integer and the other
wrapper classes are good key candidates, and it is the answer to the question of why
String is such a popular HashMap key in Java.

But remember that immutability is recommended, not mandatory. If you
want to use a mutable object as a key in a HashMap, you have to make sure
that state changes to the key object do not change its hash code. This can
be done by overriding the hashCode() method accordingly, but you must still make sure
you honor the contract with equals() as well.
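A small demonstration of the problem (the MutableKey class is hypothetical): once the key’s state, and therefore its hash code, changes after insertion, the entry can no longer be found.

import java.util.HashMap;
import java.util.Map;

public class MutableKeyDemo {

    // Hypothetical key whose hashCode() depends on mutable state.
    static class MutableKey {
        int id;
        MutableKey(int id) { this.id = id; }

        @Override
        public boolean equals(Object o) {
            return o instanceof MutableKey && ((MutableKey) o).id == id;
        }

        @Override
        public int hashCode() { return id; }
    }

    public static void main(String[] args) {
        Map<MutableKey, String> map = new HashMap<>();
        MutableKey key = new MutableKey(1);
        map.put(key, "value");

        key.id = 2; // hash code changes, but the entry stays in the old bucket

        System.out.println(map.get(key));               // null - new hash, wrong bucket
        System.out.println(map.get(new MutableKey(1))); // null - right bucket, but equals() now fails
        System.out.println(map.size());                 // 1 - the entry is effectively leaked
    }
}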

How to create immutable class in Java


An immutable class is one whose state cannot be changed once it is created. There are
certain guidelines for creating an immutable class in Java.

1. Don’t provide “setter” methods — methods that modify fields or objects referred to by
fields.

This principle says that for all mutable properties in your class, do not provide
setter methods. Setter methods are meant to change the state of an object and
this is what we want to prevent here.

2. Make all fields final and private

This is another way to increase immutability. Fields declared private will not be
accessible outside the class, and making them final ensures that you cannot change
them even accidentally.

3. Don’t allow subclasses to override methods


The simplest way to do this is to declare the class as final. Final classes in Java
cannot be extended.

4. Special attention when having mutable instance variables

Always remember that your instance variables will be
either mutable or immutable. Identify them, and return new objects with copied
content for all mutable fields. Immutable fields can be returned safely
without extra effort.

A more sophisticated approach is to make the constructor private and construct


instances in factory methods.
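Putting those guidelines together, here is a minimal sketch of an immutable class (the ImmutablePeriod class is illustrative; it uses a mutable java.util.Date field to show defensive copying):

import java.util.Date;

// Final class: cannot be subclassed, so methods cannot be overridden.
public final class ImmutablePeriod {

    // Private final fields; no setters are provided.
    private final String name;
    private final Date start; // Date is mutable, so it needs defensive copies

    private ImmutablePeriod(String name, Date start) {
        this.name = name;
        this.start = new Date(start.getTime()); // copy in
    }

    // Factory method instead of a public constructor.
    public static ImmutablePeriod of(String name, Date start) {
        return new ImmutablePeriod(name, start);
    }

    public String getName() { return name; }

    public Date getStart() {
        return new Date(start.getTime()); // copy out
    }
}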

Immutable classes in JDK

Apart from the classes you write yourself, the JDK has lots of immutable classes. Here is a
list of some immutable classes in Java.

1. String
2. Wrapper classes such as Integer, Long, Double etc.
3. Immutable collection classes such as Collections.singletonMap() etc.
4. java.lang.StackTraceElement
5. Java enums (ideally they should be)
6. java.util.Locale
7. java.util.UUID

Benefits of making a class immutable


Let’s first identify the advantages of immutable classes. In Java, immutable classes:

1. are simple to construct, test, and use


2. are automatically thread-safe and have no synchronization issues
3. do not need a copy constructor
4. do not need an implementation of clone
5. allow hashCode() to use lazy initialization, and to cache its return value
6. do not need to be copied defensively when used as a field
7. make good Map keys and Set elements (these objects must not change state while in the
collection)
8. have their class invariant established once upon construction, and it never needs to be
checked again
9. always have “failure atomicity” (a term used by Joshua Bloch) : if an immutable object
throws an exception, it’s never left in an undesirable or indeterminate state

How HashMap works in Java

What is Hashing?

Hashing, in its simplest form, is a way of assigning a unique code to a variable/object by applying a
formula/algorithm to its properties.

A true hash function must follow this rule –

“Hash function should return the same hash code each and every time when the function is applied on
same or equal objects. In other words, two equal objects must produce the same hash code consistently.”

All objects in Java inherit a default implementation of the hashCode() function defined in the Object class. This
function typically produces the hash code by converting the internal address of the object into an integer,
thus producing different hash codes for different objects.

 Entry class in HashMap


A map, by definition, is “an object that maps keys to values”. Very easy, right?

So there must be some mechanism in HashMap to store these key-value pairs, and indeed there is:
HashMap has a nested static class Entry, which looks like this:

Entry class
static class Entry<K, V> implements Map.Entry<K, V>
{
    final K key;
    V value;
    Entry<K, V> next;
    final int hash;
    ... // More code goes here
}

So the Entry class stores the key and value mapping as attributes. The key is marked
final, and there are two more fields: next and hash.

How the HashMap.put() method works internally

Before going into the put() method’s implementation, it is very important to know that instances
of the Entry class are stored in an array. The HashMap class defines this variable as:

entry array

/**

 * The table, resized as necessary. Length MUST Always be a power of two.

 */

transient Entry[] table;

Now look at code implementation of put() method:

put() method

/**

* Associates the specified value with the specified key in this map. If the

* map previously contained a mapping for the key, the old value is

* replaced.

* @param key

*            key with which the specified value is to be associated

* @param value

*            value to be associated with the specified key

* @return the previous value associated with <tt>key</tt>, or <tt>null</tt>

*         if there was no mapping for <tt>key</tt>. (A <tt>null</tt> return

*         can also indicate that the map previously associated

*         <tt>null</tt> with <tt>key</tt>.)

*/

public V put(K key, V value) {

    if (key == null)

    return putForNullKey(value);

    int hash = hash(key.hashCode());

    int i = indexFor(hash, table.length);

    for (Entry<K , V> e = table[i]; e != null; e = e.next) {


        Object k;

        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {

            V oldValue = e.value;

            e.value = value;

            e.recordAccess(this);

            return oldValue;

        }

    }

    modCount++;

    addEntry(hash, key, value, i);

    return null;
}

4.1. What put() method does

Let’s note down the internal working of put method in hashmap.

1. First of all, the key object is checked for null. If the key is null, the value is stored
at the table[0] position, because the hash code for null is always 0.
2. In the next step, a hash value is calculated from the key’s hash code by calling
its hashCode() method. This hash value is used to calculate the index in the array for
storing the Entry object. The JDK designers assumed that there would be some poorly
written hashCode() implementations returning badly distributed hash codes. To
address this, they introduced an additional hash() function and passed the object’s hash
code through it to improve the distribution before the value is mapped to an array index.
3. Now the indexFor(hash, table.length) function is called to calculate the exact index position
for storing the Entry object.

4.2. How collisions are resolved

Here comes the main part. We know that two unequal objects can have the same hash code,
so how are two different objects stored in the same array location, called a bucket?

The answer is a linked list. If you remember, the Entry class has a "next" attribute, which
always points to the next object in the chain. This is exactly the behavior of a linked list.

1. So, in case of a collision, Entry objects are stored in linked-list form. When
an Entry object needs to be stored at a particular index, HashMap checks whether there
is already an entry there. If there is no entry present, the new entry object is stored at
this location.

If there is already an object sitting at the calculated index, its next attribute is checked. If
it is null, the current entry object becomes the next node in the linked list. If next is
not null, the procedure is repeated until a next that is null is found.

2. What if we add another value object with the same key as one entered before? Logically,
it should replace the old value. How is this done? Well, after determining the index
position of the Entry object, while iterating over the linked list at the calculated
index, HashMap calls the equals() method on the key object for each entry.

All the entry objects in this linked list have the same bucket index (and usually the same
hash code), but equals() tests for true equality. If key.equals(k) returns true, both keys are
treated as the same key object, and only the value object inside that entry is replaced.

In this way, HashMap ensures the uniqueness of keys.
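A short demonstration of that replacement behaviour: putting a second value under an equal key overwrites the first, and put() returns the old value.

import java.util.HashMap;
import java.util.Map;

public class PutReplaceDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        System.out.println(map.put("apple", 1)); // null - no previous mapping
        System.out.println(map.put("apple", 2)); // 1 - old value returned, entry replaced
        System.out.println(map.get("apple"));    // 2
        System.out.println(map.size());          // 1 - key uniqueness preserved
    }
}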

How the HashMap.get() method works internally

Now we know how key-value pairs are stored in a HashMap. The next big question is: what
happens when a key is passed to the get() method of HashMap? How is the value object
determined?

We should already know the answer: the same logic that establishes key uniqueness in the put() method
is applied in the get() method as well. The moment the HashMap identifies the exact match
for the key object passed as an argument, it returns the value object stored in that
Entry object.

If no match is found, the get() method returns null.

get() method

/**

* Returns the value to which the specified key is mapped, or {@code null}

* if this map contains no mapping for the key.

* <p>

* More formally, if this map contains a mapping from a key {@code k} to a

* value {@code v} such that {@code (key==null ? k==null :

* key.equals(k))}, then this method returns {@code v}; otherwise it returns

* {@code null}. (There can be at most one such mapping.)

*
* </p><p>

* A return value of {@code null} does not <i>necessarily</i> indicate that

* the map contains no mapping for the key; it's also possible that the map

* explicitly maps the key to {@code null}. The {@link #containsKey

* containsKey} operation may be used to distinguish these two cases.

* @see #put(Object, Object)

*/

public V get(Object key) {

    if (key == null)

    return getForNullKey();

    int hash = hash(key.hashCode());

    for (Entry<K , V> e = table[indexFor(hash, table.length)]; e != null; e = e.next) {

        Object k;

        if (e.hash == hash && ((k = e.key) == key || key.equals(k)))

            return e.value;

    }

    return null;
}

The above code is the same as the put() method up to if (e.hash == hash && ((k = e.key) == key ||
key.equals(k))); after that, the value object is simply returned.

6. Key notes on internal working of HashMap

1. The data structure used to store entry objects is an array named table of type Entry.
2. A particular index location in the array is referred to as a bucket, because it can hold the first
element of a linked list of entry objects.
3. The key object’s hashCode() is required to calculate the index location of the Entry object.
4. The key object’s equals() method is used to maintain the uniqueness of keys in the map.
5. The value object’s hashCode() and equals() methods are not used in
HashMap’s get() and put() methods.
6. The hash code for null keys is always zero, and such an entry object is always stored at index
zero of Entry[].
7. [Update] HashMap improvements in Java 8

As part of the work for JEP 180, there is a performance improvement for HashMap objects
where there are lots of collisions in the keys by using balanced trees rather than linked lists
to store map entries. The principal idea is that once the number of items in a hash
bucket grows beyond a certain threshold, that bucket will switch from using a
linked list of entries to a balanced tree. In the case of high hash collisions, this will
improve worst-case performance from O(n) to O(log n).

Basically, when a bucket becomes too big (currently TREEIFY_THRESHOLD = 8),
HashMap dynamically replaces its linked list with an ad-hoc red-black tree implementation. This way,
rather than pessimistic O(n), we get much better O(log n).
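A hedged way to observe this from the outside (the BadKey class with a constant hash code is hypothetical): all keys collide into one bucket, yet lookups remain correct, and on Java 8+ the overgrown bucket is treeified once it passes TREEIFY_THRESHOLD.

import java.util.HashMap;
import java.util.Map;

public class TreeifyDemo {

    // Hypothetical key: every instance collides into the same bucket.
    static final class BadKey implements Comparable<BadKey> {
        final int id;
        BadKey(int id) { this.id = id; }

        @Override public int hashCode() { return 42; }
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
        @Override public int compareTo(BadKey other) {
            return Integer.compare(id, other.id); // used for ordering inside a tree bin
        }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            map.put(new BadKey(i), i);
        }
        // Correctness is unaffected; on Java 8+ the single overpopulated
        // bucket is stored as a balanced tree, so this lookup is O(log n)
        // instead of O(n).
        System.out.println(map.get(new BadKey(777))); // 777
    }
}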

Bins (elements or nodes) of TreeNodes may be traversed and used like any others, but
additionally support faster lookup when overpopulated. However, since the vast majority of
bins in normal use are not overpopulated, checking for the existence of tree bins may be
delayed in the course of table methods.

Tree bins (i.e., bins whose elements are all TreeNodes) are ordered primarily by hashCode,
but in the case of ties, if two elements are of the same “class C implements
Comparable<C>” type, then their compareTo() method is used for ordering.

Because TreeNodes are about twice the size of regular nodes, we use them only when bins
contain enough nodes. And when they become too small (due to removal or resizing) they
are converted back to plain bins (currently: UNTREEIFY_THRESHOLD = 6). In usages
with well-distributed user hashCodes, tree bins are rarely used.

So now we know how HashMap works internally in Java 8.

Java Objects

The Object class is present in the java.lang package. Every class in Java is directly or indirectly
derived from the Object class: if a class does not extend any other class, it is a direct
child of Object, and if it extends another class, it is indirectly derived from it. Therefore
the Object class methods are available to all Java classes, and the Object class acts as the
root of the inheritance hierarchy in any Java program.
Using Object class methods

These are the important methods of the Object class:

1. toString() : toString() provides a String representation of an object and is used to convert an
object to a String. The default toString() method of class Object returns a string consisting
of the name of the class of which the object is an instance, the at-sign character `@’, and
the unsigned hexadecimal representation of the hash code of the object. In other words, it
is defined as:
// Default behavior of toString() is to print the class name, then
// @, then the unsigned hexadecimal representation of the hash code
// of the object
public String toString()
{
    return getClass().getName() + "@" + Integer.toHexString(hashCode());
}

It is always recommended to override the toString() method to get our own String
representation of an object. For more on overriding toString(), refer to
Overriding toString() in Java.
Note: Whenever we try to print an object reference, toString() is called internally.
Student s = new Student();

// Below two statements are equivalent


System.out.println(s);
System.out.println(s.toString());
2. hashCode() : For every object, the JVM generates a number called the hash code. It
typically returns distinct integers for distinct objects, although this is not guaranteed. A
common misconception is that hashCode() returns the address of the object; this is not
correct. The default implementation typically converts the internal address of the object
into an integer using an algorithm. The hashCode() method is native because in Java it is
impossible to find the address of an object, so native code (C/C++) is used under the hood.
Use of the hashCode() method: it returns a hash value that is used to locate the object in a
collection. The JVM (Java Virtual Machine) uses the hash code when saving objects
into hashing-based data structures like HashSet, HashMap and Hashtable. The
main advantage of storing objects based on hash code is that searching becomes
easy.
Note: hashCode() should be overridden so that it produces values that are as distinct as
possible for distinct objects. For example, for a Student class we can return the roll
number of the student from hashCode(), as it is unique.

// Java program to demonstrate working of

// hashCode() and toString()

public class Student
{
    static int last_roll = 100;


    int roll_no;

  

    // Constructor

    Student()

    {

        roll_no = last_roll;

        last_roll++;

    }

  

    // Overriding hashCode()

    @Override

    public int hashCode()

    {

        return roll_no;

    }

  

    // Driver code

    public static void main(String args[])

    {

        Student s = new Student();

  

        // Below two statements are equivalent

        System.out.println(s);

        System.out.println(s.toString());

    }
}

Output :
Student@64
Student@64

Note that 4*16^0 + 6*16^1 = 100, so the hexadecimal hash code 64 corresponds to roll number 100.


3. equals(Object obj) : Compares the given object to “this” object (the object on which the
method is called). It gives a generic way to compare objects for equality. It is
recommended to override equals(Object obj) to define your own equality condition for your
objects. For more on overriding equals(Object obj), refer to Overriding equals
method in Java.
Note: It is generally necessary to override the hashCode() method whenever equals() is
overridden, so as to maintain the general contract for the hashCode method, which states
that equal objects must have equal hash codes.
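A minimal sketch of overriding the pair together (the Point class is illustrative), so that equal objects always report equal hash codes:

import java.util.Objects;

public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        // Derived from the same fields as equals(), so equal points hash alike.
        return Objects.hash(x, y);
    }
}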
4. getClass() : Returns the Class object of “this” object and is used to get the actual runtime class
of the object. It can also be used to get metadata of the class. The returned Class object
is the object that is locked by static synchronized methods of the represented class. As
getClass() is final, it cannot be overridden.

// Java program to demonstrate working of getClass()

public class Test
{
    public static void main(String[] args)

    {

        Object obj = new String("GeeksForGeeks");

        Class c = obj.getClass();

        System.out.println("Class of Object obj is : "

                           + c.getName());

    }
}

Output:
Class of Object obj is : java.lang.String

Note: After loading a .class file, the JVM will create an object of
type java.lang.Class in the heap area. We can use this Class object to get class-level
information. It is widely used in reflection.
5. finalize() : This method is called just before an object is garbage collected. It is
called by the garbage collector on an object when the garbage collector determines that there
are no more references to the object. We can override the finalize() method to release
system resources, perform clean-up activities and minimize memory leaks. For example,
before destroying servlet objects, the web container always calls finalize() to perform
clean-up activities for the session.
Note: finalize() is called only once on an object, even though that object may become eligible for
garbage collection multiple times.

// Java program to demonstrate working of finalize()


public class Test
{
    public static void main(String[] args)

    {

        Test t = new Test();

        System.out.println(t.hashCode());

  

        t = null;

  

        // calling garbage collector 

        System.gc();

  

        System.out.println("end");

    }

  

    @Override

    protected void finalize()

    {

        System.out.println("finalize method called");

    }
}

Output:
366712642
finalize method called
end
6. clone() : It returns a new object that is exactly the same as this object. For the clone() method,
refer to clone().
7. The remaining three methods, wait(), notify() and notifyAll(), are related to concurrency.
Refer to Inter-thread Communication in Java for details.
Find the second highest salary of an employee (SQL)

SELECT Id, Salary FROM Employee e WHERE 2 = (SELECT COUNT(DISTINCT Salary)
FROM Employee p WHERE e.Salary <= p.Salary);

SELECT Salary FROM (SELECT Salary FROM Employee ORDER BY Salary DESC LIMIT 2)
AS Emp ORDER BY Salary LIMIT 1;

In the second solution, we first sorted all salaries from the Employee table in decreasing order, so that
the two highest salaries come to the top of the result set, and took just those two records using
LIMIT 2. Then we did the same thing again, but this time sorted the result set in ascending order, so
that the second highest salary comes to the top, and printed it using LIMIT 1. Simple and
easy, right?

Second Highest Salary using the SQL Server TOP Keyword

Just like MySQL has the LIMIT keyword, which is immensely helpful for sorting and paging, Microsoft
SQL Server has a special keyword called TOP which, as the name suggests, returns the top records
from the result set. You can print the top 10 records by saying TOP 10. I frequently use this keyword to
look at the data in a large table, just to understand its columns and contents. Here is the SQL
query to find the second maximum salary in SQL Server:

SELECT TOP 1 Salary FROM (SELECT TOP 2 Salary FROM Employee ORDER BY Salary
DESC) AS MyTable ORDER BY Salary ASC;
Difference between String literal and New String object in Java

String is a special class in the Java API and has many special behaviors that are not obvious to
many programmers. To master Java, the first step is to master the String class, and one way to
explore it is to look at the kinds of String-related questions asked in Java interviews. Apart from
the usual questions like why String is final, or equals() vs the == operator, one of the most frequently
asked is: what is the difference between a String literal and a String object in Java? For example, what is
the difference between the String objects created by the following two expressions:

String strObject = new String("Java");


and

String strLiteral = "Java";


Both expressions give you a String object, but there is a subtle difference between them. When you
create a String object using the new() operator, it always creates a new object in heap memory. On the
other hand, if you create the object using String literal syntax, e.g. "Java", it may return an existing
object from the String pool (a cache of String objects, formerly in PermGen space and moved to the heap
in recent Java releases) if one already exists. Otherwise it creates a new String object and
puts it in the string pool for future reuse. The rest of this article explains why this is one of the most
important things you should remember about String in Java.
What is a String literal and the String pool
Since String is one of the most used types in any application, the Java designers took a step further to
optimize its use. They knew that Strings would not be cheap, and that's why they came
up with the idea of caching all String instances created inside double quotes, e.g. "Java". These
double-quoted literals are known as String literals, and the cache that stores these String instances is
known as the String pool. In earlier versions of Java (up to Java 1.6, I think) the String pool was located in
the PermGen area of the heap, but in the Java 1.7 updates it was moved to the main heap area. Earlier, since it was in
PermGen space, it was always a risk to create too many String objects, because it is a very limited
space (64 MB by default) also used to store class metadata, e.g. .class files. Creating too many
String literals could cause java.lang.OutOfMemoryError: PermGen space. Now that the String pool has
moved to a much larger memory area, it is much safer. By the way, don't waste memory
here: always try to minimize temporary String objects such as "a", "b" and then "ab", and use
StringBuilder to deal with temporary String concatenation.
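A small illustration of that advice: concatenating in a loop with + creates a fresh intermediate String on every iteration, while a StringBuilder reuses one internal buffer.

public class ConcatDemo {
    public static void main(String[] args) {
        // Creates many short-lived String objects: "a", "ab", "abc", ...
        String slow = "";
        for (char c = 'a'; c <= 'z'; c++) {
            slow += c;
        }

        // One mutable buffer, one final String.
        StringBuilder sb = new StringBuilder();
        for (char c = 'a'; c <= 'z'; c++) {
            sb.append(c);
        }
        String fast = sb.toString();

        System.out.println(slow.equals(fast)); // true
    }
}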

Difference between String literal and String object

At a high level both are String objects, but the main difference comes from the fact that the new() operator
always creates a new String object, whereas Strings created from literals are interned. This becomes
much clearer when you compare String objects created using a String literal and using the new
operator, as shown below. Here both literals refer to the same pooled object:

String a = "Java";
String b = "Java";
System.out.println(a == b); // True

Here two different objects are created and they have different references:
String c = new String("Java");
String d = new String("Java");
System.out.println(c == d); // False

Similarly, when you compare a String literal with a String object created using the new() operator
using the == operator, it will return false, as shown below:

String e = "JDK";
String f = new String("JDK");
System.out.println(e == f); // False

In general you should use the string literal notation when possible. It is easier to read and it gives the
compiler a chance to optimize your code. By the way, no answer to this question is complete until
you explain what String interning is, so let's see that in the next section.

String interning using the intern() method


Java by default doesn't put every String object into the String pool; instead it gives you the flexibility to
store any String in the pool explicitly. You can put any String object into the pool by
calling the intern() method of the java.lang.String class. However, when you create a String using
literal notation, Java automatically interns it, putting the object into the String pool,
provided it is not present in the pool already. This is another difference between a string literal and a
new string: with new, interning does not happen automatically until you
call the intern() method on that object. Also, don't forget to use StringBuffer or StringBuilder for
string concatenation; they reduce the number of temporary String objects in heap space.
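A brief demonstration of intern() (the outputs noted in comments assume a standard HotSpot JVM):

public class InternDemo {
    public static void main(String[] args) {
        String literal = "Java";             // comes from the string pool
        String created = new String("Java"); // new object on the heap
        String interned = created.intern();  // pooled instance for the same contents

        System.out.println(literal == created);  // false - different objects
        System.out.println(literal == interned); // true  - intern() returned the pooled object
    }
}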

That's all about this question, the difference between a String literal and a String object in Java.
Always remember that literal Strings are returned from the string pool, and Java puts them in the pool if
they are not stored there already. The difference is most obvious when you compare two String objects
using the equality operator (==). That's why it is suggested to always compare two String objects
using the equals() method and never with the == operator, because you never know
which one is coming from the pool and which one was created using the new() operator. If you know
the difference between a string object and a string literal, you can also solve questions from Java written
tests that check this concept. It's something every Java programmer should know.
Understanding the Java ConcurrentHashMap

What is ConcurrentHashMap in Java

The ConcurrentHashMap class provides a concurrent version of
the standard HashMap. Its functionality is similar to a
HashMap, except that it internally maintains concurrency.
Depending upon the level of concurrency required, the
ConcurrentHashMap is internally divided into segments. If the
required concurrency level is not specified, it takes 16 as the
default value, so internally the ConcurrentHashMap is
divided into 16 segments. Each segment behaves independently.
We will take a look at the segments in the examples below.

Class Hierarchy
java.lang.Object
java.util.AbstractMap<K,V>
java.util.concurrent.ConcurrentHashMap<K,V>

Type Parameters:
K - the type of keys maintained by this map
V - the type of mapped values
All Implemented Interfaces: Serializable, ConcurrentMap<K,V>,
Map<K,V>

public ConcurrentHashMap()-
Creates a new, empty map with a default initial capacity (16),
load factor (0.75) and concurrencyLevel (16).
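A minimal sketch of setting the concurrency level explicitly through the three-argument constructor (the initial capacity and load factor values here are illustrative; note that since Java 8 the argument is only a sizing hint rather than a fixed segment count):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrencyLevelDemo {
    public static void main(String[] args) {
        // initialCapacity = 64, loadFactor = 0.75, concurrencyLevel = 32
        Map<String, Integer> map = new ConcurrentHashMap<>(64, 0.75f, 32);
        map.put("one", 1);
        System.out.println(map.get("one")); // 1
    }
}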
Why use ConcurrentHashMap
We use ConcurrentHashMap when a high level of concurrency is
required. But SynchronizedMap already exists, so what advantage
does ConcurrentHashMap have over a synchronized map? Both are
thread-safe. The major difference is that in the case of a
SynchronizedMap every write operation acquires a lock on the entire
map, while in the case of a ConcurrentHashMap the lock
is only on one of the segments.
Let's see how a SynchronizedMap behaves in a multithreaded
environment. Create a class that implements Runnable, takes an instance
of Map, and inserts a value into it.

package com.javainuse.helper;

import java.util.Map;

public class MapHelper1 implements Runnable {

    Map<String, Integer> map;

    public MapHelper1(Map<String, Integer> map) {
        this.map = map;
        new Thread(this, "MapHelper1").start();
    }

    public void run() {
        map.put("One", 1);
        try {
            System.out.println("MapHelper1 sleeping");
            Thread.sleep(100);
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}

Create several more such Runnable classes:
MapHelper2, MapHelper3 and MapHelper4.
Now we will create an instance of SynchronizedMap and test it
as follows:

package com.javainuse.main;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import com.javainuse.helper.MapHelper1;
import com.javainuse.helper.MapHelper2;
import com.javainuse.helper.MapHelper3;
import com.javainuse.helper.MapHelper4;

public class MainClass {

    public static void main(String[] args) {

        Map<String, Integer> hashMap = new HashMap<>();
        Map<String, Integer> syncMap = Collections.synchronizedMap(hashMap);

        MapHelper1 mapHelper1 = new MapHelper1(syncMap);
        MapHelper2 mapHelper2 = new MapHelper2(syncMap);
        MapHelper3 mapHelper3 = new MapHelper3(syncMap);
        MapHelper4 mapHelper4 = new MapHelper4(syncMap);

        for (Map.Entry<String, Integer> e : syncMap.entrySet()) {
            System.out.println(e.getKey() + "=" + e.getValue());
        }
    }
}

On running the above example we get a ConcurrentModificationException.
We get this exception because while one thread is iterating over the contents of
the SynchronizedMap, it is modified by another thread. Now let's see
the behaviour of ConcurrentHashMap.

package com.javainuse.main;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import com.javainuse.helper.MapHelper1;
import com.javainuse.helper.MapHelper2;
import com.javainuse.helper.MapHelper3;
import com.javainuse.helper.MapHelper4;

public class MainClass {

    public static void main(String[] args) {

        Map<String, Integer> conMap = new ConcurrentHashMap<>();

        MapHelper1 mapHelper1 = new MapHelper1(conMap);
        MapHelper2 mapHelper2 = new MapHelper2(conMap);
        MapHelper3 mapHelper3 = new MapHelper3(conMap);
        MapHelper4 mapHelper4 = new MapHelper4(conMap);

        for (Map.Entry<String, Integer> e : conMap.entrySet()) {
            System.out.println(e.getKey() + "=" + e.getValue());
        }
    }
}

This time the program prints the inserted entries without error: in the case of
ConcurrentHashMap, even if the map is changed while iterating over it,
no exception is thrown.

Internal working of ConcurrentHashMap

As no concurrency level has been set explicitly, the
ConcurrentHashMap is divided into 16 segments, and each
segment acts as an independent HashMap. During a write operation
the lock is obtained on that particular segment and not on the
entire map. Debug the code when it reaches the for loop
above and inspect the conMap details: the table contains the 16
segments, of which only three are populated and the remaining are null.
As soon as a new value needs to be inserted, a new segment is populated.
(In previous versions of Java, all sixteen segments were initialized
whether populated or not, but this has been improved.)
Difference between ConcurrentHashMap and SynchronizedMap

The major differences are:

SynchronizedMap
- Synchronization is at the object level.
- A lock is obtained for read/write operations at the object level.
- The concurrency level cannot be set for better optimization.
- A ConcurrentModificationException is thrown if a thread tries to modify a SynchronizedMap which is being iterated.
- Since at a given time only a single thread can modify the map, blocking other threads, the performance is comparatively bad.

ConcurrentHashMap
- Synchronization is at the segment level.
- A lock is obtained for write operations at the segment level.
- The concurrency level can be set for better optimization.
- No ConcurrentModificationException is thrown if a thread tries to modify a ConcurrentHashMap which is being iterated.
- Multiple threads can modify the ConcurrentHashMap, hence performance is much better.

Interviewer: Can two threads update the ConcurrentHashMap simultaneously?

Yes, it is possible for two threads to write to the ConcurrentHashMap
simultaneously. The default implementation allows 16 threads to read and
write in parallel. In the worst case, when the two keys lie in the same segment or
partition, a parallel write would not be possible.

Interviewer: Why does ConcurrentHashMap not allow null keys and null values?

According to the author of ConcurrentHashMap (Doug Lea himself):

The main reason that nulls aren't allowed in ConcurrentMaps
(ConcurrentHashMaps, ConcurrentSkipListMaps) is that ambiguities that may
be just barely tolerable in non-concurrent maps can't be accommodated. The
main one is that if map.get(key) returns null, you can't detect whether the key
explicitly maps to null vs the key isn't mapped. In a non-concurrent map, you
can check this via map.contains(key), but in a concurrent one, the map might
have changed between calls.

In simple words, suppose the code is like this:

if (map.containsKey(k)) {
    return map.get(k);
} else {
    throw new KeyNotPresentException();
}

It is possible that key k gets deleted between the containsKey(k) and get(k)
calls. As a result, the code would return null instead of throwing
KeyNotPresentException (the expected result if the key is not present).
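One hedged way to avoid that check-then-act race is to use a single atomic call instead of containsKey() followed by get(), for example getOrDefault() or computeIfAbsent():

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class AbsentKeyDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();

        // Single call: no window for another thread to remove the key in between.
        int value = map.getOrDefault("missing", -1);
        System.out.println(value); // -1

        // Atomically insert a value if the key is absent.
        map.computeIfAbsent("counter", k -> 0);
        System.out.println(map.get("counter")); // 0
    }
}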

Interviewer: What is the difference between HashMap and ConcurrentHashMap?

HashMap is not thread-safe and therefore cannot be used safely in multi-threaded
applications. ConcurrentHashMap was introduced to overcome this shortcoming,
and also as an alternative to Hashtable and synchronized maps for greater
performance; it uses the standard hashing algorithm to generate hash codes for
storing the key-value pairs. For more differences between HashMap and
ConcurrentHashMap, check the popular interview question HashMap vs
ConcurrentHashMap in java.

Interviewer: Can multiple threads read from a Hashtable concurrently?

No, multiple threads cannot read simultaneously from a Hashtable, because the
get() method of Hashtable is synchronized; as a result, only one thread at a time
can access it. ConcurrentHashMap, by contrast, achieves full concurrency for
reads (all threads can read at the same time), partly by using the volatile keyword.

Interviewer: Does the ConcurrentHashMap iterator behave like a fail-fast iterator or a fail-safe iterator?

The ConcurrentHashMap iterator behaves like a fail-safe iterator; it will not throw
ConcurrentModificationException. We have already discussed Fail Fast
Iterator vs Fail Safe Iterator.

Interviewer: Why does Java provide a default partition count of 16 instead of a very high value?

According to the Java docs:

Ideally, you should choose a value to accommodate as many threads as will
ever concurrently modify the table. Using a significantly higher value than you
need can waste space and time, and a significantly lower value can lead to
thread contention.

Interviewer: Can you write a simple example which proves that the ConcurrentHashMap iterator behaves like a fail-safe iterator?

ConcurrentHashMap example:

import java.util.concurrent.ConcurrentHashMap;
import java.util.Iterator;

public class ConcurrentHashMapExample
{
    public static void main(String[] args)
    {
        ConcurrentHashMap<String, String> premiumPhone = new ConcurrentHashMap<String, String>();
        premiumPhone.put("Apple", "iPhone6");
        premiumPhone.put("HTC", "HTC one");
        premiumPhone.put("Samsung", "S6");

        Iterator<String> iterator = premiumPhone.keySet().iterator();

        while (iterator.hasNext())
        {
            System.out.println(premiumPhone.get(iterator.next()));
            premiumPhone.put("Sony", "Xperia Z");
        }
    }
}

Output :

S6
HTC one
iPhone6
Difference between synchronized block and synchronized method in Java
Synchronized blocks and synchronized methods are the two ways to use the synchronized keyword in
Java and implement mutual exclusion on critical sections of code. Since Java is mainly used to
write multi-threaded programs, these programs face various thread-related issues like thread-safety,
deadlock and race conditions, which creep into code mainly because of a poor understanding of the
synchronization mechanism provided by the Java programming language. Java provides the built-in
synchronized and volatile keywords to achieve synchronization.

The main difference between a synchronized method and a synchronized block is the choice of the
lock on which the critical section is locked.

A synchronized method, depending on whether it is static or non-static, locks on
either the class-level lock or the object lock. The class-level lock is one per class and is
represented by the class literal, e.g. String.class. The object-level lock is the current object,
i.e. the this instance. You should never mix static and non-static synchronized methods in Java.

On the other hand, a synchronized block locks on the monitor evaluated from the expression provided as
a parameter to the block. In the next section we will see an example of both a synchronized
method and a synchronized block to understand this difference better.

Difference between synchronized method vs block in Java

Here are some more differences between synchronized methods and blocks in Java, based on
experience and the syntactic rules of the synchronized keyword. Though both a block and a
method can be used to provide the same degree of synchronization, favoring synchronized
blocks over methods is considered better Java coding practice.

1) One significant difference between a synchronized method and a block is that a synchronized
block generally reduces the scope of the lock. As the scope of a lock is inversely proportional to
performance, it is always better to lock only the critical section of code. One of the best examples of
using a synchronized block is double-checked locking in the Singleton pattern, where instead of
locking the whole getInstance() method we only lock the critical section of code which is used to
create the Singleton instance (see the sketch after this list). This improves performance drastically
because locking is only required once or twice.
2) A synchronized block provides granular control over the lock, as you can use any arbitrary lock to
provide mutual exclusion for the critical section. A synchronized method, on the other hand,
always locks either on the current object, represented by the this keyword, or on the class-level lock
if it is a static synchronized method.

3) A synchronized block can throw java.lang.NullPointerException if the expression provided to the
block as a parameter evaluates to null, which is not the case with synchronized methods.

4) In the case of a synchronized method, the lock is acquired by a thread when it enters the method and
released when it leaves the method, either normally or by throwing an exception. In the case of a
synchronized block, the thread acquires the lock when it enters the block and releases it when it
leaves the block.
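As referenced in point 1 above, here is a hedged sketch of double-checked locking in a Singleton, where only the instance-creation path is synchronized (the volatile field is required for this idiom to be safe on Java 5+):

public class Singleton {

    // volatile is needed so a half-constructed instance is never observed.
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no locking
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, inside the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}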

Synchronized method vs synchronized block example in Java

Here is a sample class which shows which object a synchronized method and a synchronized
block are locked on, and how to use them:

/**
 * Java class to demonstrate use of synchronized method and block in Java
 */
public class SynchronizationExample {

    public synchronized void lockedByThis() {
        System.out.println("This synchronized method is locked by the current instance of the object, i.e. this");
    }

    public static synchronized void lockedByClassLock() {
        System.out.println("This static synchronized method is locked by the class-level lock of this class, i.e. SynchronizationExample.class");
    }

    public void lockedBySynchronizedBlock() {
        System.err.println("This line is executed without locking");

        Object obj = String.class; // class-level lock of the String class

        synchronized (obj) {
            System.out.println("synchronized block, locked by the lock represented by the obj variable");
        }
    }
}

That's all on the difference between a synchronized method and a synchronized block in Java. Favoring
synchronized blocks over methods is one of the Java best practices to follow, as it reduces the scope
of the lock and improves performance. Synchronized methods, on the other hand, are easier to use
but can create bugs when you mix non-static and static synchronized methods, because the two are
locked on different monitors; if you use them to synchronize access to a shared resource, it will most
likely break.

Deadlock Prevention
 Lock Ordering
 Lock Timeout
 Deadlock Detection

In some situations it is possible to prevent deadlocks. I'll describe three


techniques in this text:

1. Lock Ordering
2. Lock Timeout
3. Deadlock Detection

Lock Ordering
Deadlock occurs when multiple threads need the same locks but obtain them
in different order.

If you make sure that all locks are always taken in the same order by any
thread, deadlocks cannot occur. Look at this example:
Thread 1:

lock A
lock B

Thread 2:

wait for A
lock C (when A locked)

Thread 3:

wait for A
wait for B
wait for C

If a thread, like Thread 3, needs several locks, it must take them in the
decided order. It cannot take a lock later in the sequence until it has obtained
the earlier locks.

For instance, neither Thread 2 nor Thread 3 can lock C until they have locked A
first. Since Thread 1 holds lock A, Thread 2 and Thread 3 must first wait until lock A is
unlocked. Then they must succeed in locking A before they can attempt to
lock B or C.

Lock ordering is a simple yet effective deadlock prevention mechanism.


However, it can only be used if you know about all locks needed ahead of
taking any of the locks. This is not always the case.
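A minimal sketch of lock ordering with two shared monitors (the lockA and lockB fields and the method names are illustrative): every code path that needs both locks takes lockA before lockB, so a cycle cannot form.

public class LockOrderingDemo {

    private final Object lockA = new Object();
    private final Object lockB = new Object();

    // Both methods acquire the locks in the same order: A, then B.
    public void transfer() {
        synchronized (lockA) {
            synchronized (lockB) {
                // critical section touching both resources
            }
        }
    }

    public void audit() {
        synchronized (lockA) {
            synchronized (lockB) {
                // also A before B - never B before A
            }
        }
    }
}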

Lock Timeout
Another deadlock prevention mechanism is to put a timeout on lock attempts,
meaning a thread trying to obtain a lock will only try for so long before giving
up. If a thread does not succeed in taking all necessary locks within the given
timeout, it will back up, free all the locks it has taken, wait for a random amount of time
and then retry. The random amount of time waited serves to give other
threads trying to take the same locks a chance to take all of them, and thus let
the application continue running without locking up.

Here is an example of two threads trying to take the same two locks in
different order, where the threads back up and retry:
Thread 1 locks A
Thread 2 locks B

Thread 1 attempts to lock B but is blocked


Thread 2 attempts to lock A but is blocked

Thread 1's lock attempt on B times out


Thread 1 backs up and releases A as well
Thread 1 waits randomly (e.g. 257 millis) before retrying.

Thread 2's lock attempt on A times out


Thread 2 backs up and releases B as well
Thread 2 waits randomly (e.g. 43 millis) before retrying.

In the above example Thread 2 will retry taking the locks about 200 millis
before Thread 1 and will therefore likely succeed at taking both locks. Thread
1 will then wait, blocked while trying to take lock A. When Thread 2 finishes, Thread
1 will be able to take both locks too (unless Thread 2 or another thread takes
the locks in between).

An issue to keep in mind is, that just because a lock times out it does not
necessarily mean that the threads had deadlocked. It could also just mean
that the thread holding the lock (causing the other thread to time out) takes a
long time to complete its task.

Additionally, if enough threads compete for the same resources they still risk
trying to take the locks at the same time again and again, even with timeouts
and backing off. This may not occur with 2 threads each waiting between 0
and 500 millis before retrying, but with 10 or 20 threads the situation is
different. Then the likelihood of two threads waiting the same time before
retrying (or close enough to cause problems) is a lot higher.

A problem with the lock timeout mechanism is that it is not possible to set a
timeout for entering a synchronized block in Java. You will have to create a
custom lock class or use one of the Java 5 concurrency constructs in the
java.util.concurrent package. Writing custom locks isn't difficult, but it is
outside the scope of this text. Later texts in the Java concurrency trail will
cover custom locks.
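Since synchronized has no timeout, a hedged sketch using java.util.concurrent.locks.ReentrantLock.tryLock() shows the back-off-and-retry idea (the timeout and sleep values are illustrative):

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockTimeoutDemo {

    private final ReentrantLock lockA = new ReentrantLock();
    private final ReentrantLock lockB = new ReentrantLock();

    public void doWork() throws InterruptedException {
        while (true) {
            if (lockA.tryLock(1, TimeUnit.SECONDS)) {
                try {
                    if (lockB.tryLock(1, TimeUnit.SECONDS)) {
                        try {
                            // critical section holding both locks
                            return;
                        } finally {
                            lockB.unlock();
                        }
                    }
                } finally {
                    lockA.unlock();
                }
            }
            // Could not get both locks: back off for a random time, then retry.
            Thread.sleep(ThreadLocalRandom.current().nextInt(500));
        }
    }
}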

Deadlock Detection
Deadlock detection is a heavier deadlock prevention mechanism aimed at
cases in which lock ordering isn't possible, and lock timeout isn't feasible.
Every time a thread takes a lock it is noted in a data structure (map, graph
etc.) of threads and locks. Additionally, whenever a thread requests a lock
this is also noted in this data structure.

When a thread requests a lock but the request is denied, the thread can
traverse the lock graph to check for deadlocks. For instance, if a Thread A
requests lock 7, but lock 7 is held by Thread B, then Thread A can check if
Thread B has requested any of the locks Thread A holds (if any). If Thread B
has requested so, a deadlock has occurred (Thread A having taken lock 1,
requesting lock 7, Thread B having taken lock 7, requesting lock 1).

Of course a deadlock scenario may be a lot more complicated than two
threads holding each other's locks. Thread A may wait for Thread B, Thread B
waits for Thread C, Thread C waits for Thread D, and Thread D waits for
Thread A. In order for Thread A to detect a deadlock it must transitively
examine all locks requested by Thread B. From Thread B's requested locks
Thread A will get to Thread C, and then to Thread D, from which it finds one of
the locks Thread A itself is holding. Then it knows a deadlock has occurred.

volatile

The Java volatile keyword is used to mark a Java variable as "being
stored in main memory". More precisely, that means that every read of a
volatile variable will be read from the computer's main memory, and not from
the CPU cache, and that every write to a volatile variable will be written to
main memory, and not just to the CPU cache.

Actually, since Java 5 the volatile keyword guarantees more than just


that volatile variables are written to and read from main memory. I will explain
that in the following sections.

Variable Visibility Problems


The Java volatile keyword guarantees visibility of changes to variables
across threads. This may sound a bit abstract, so let me elaborate.
In a multithreaded application where the threads operate on non-volatile
variables, each thread may copy variables from main memory into a CPU
cache while working on them, for performance reasons. If your computer
contains more than one CPU, each thread may run on a different CPU. That
means that each thread may copy the variables into the CPU cache of a
different CPU.

With non-volatile variables there are no guarantees about when the Java
Virtual Machine (JVM) reads data from main memory into CPU caches, or
writes data from CPU caches to main memory. This can cause several
problems which I will explain in the following sections.

Imagine a situation in which two or more threads have access to a shared


object which contains a counter variable declared like this:

public class SharedObject {

public int counter = 0;

}
Imagine too, that only Thread 1 increments the counter variable, but both
Thread 1 and Thread 2 may read the counter variable from time to time.

If the counter variable is not declared volatile there is no guarantee
about when the value of the counter variable is written from the CPU
cache back to main memory. This means that the counter variable value
in the CPU cache may not be the same as in main memory.

The problem with threads not seeing the latest value of a variable because it
has not yet been written back to main memory by another thread, is called a
"visibility" problem. The updates of one thread are not visible to other threads.

The Java volatile Visibility Guarantee


The Java volatile keyword is intended to address variable visibility
problems. By declaring the counter variable volatile all writes to
the counter variable will be written back to main memory immediately.
Also, all reads of the counter variable will be read directly from main
memory.

Here is how the volatile declaration of the counter variable looks:

public class SharedObject {

public volatile int counter = 0;

Declaring a variable volatile thus guarantees the visibility for other


threads of writes to that variable.

In the scenario given above, where one thread (T1) modifies the counter, and
another thread (T2) reads the counter (but never modifies it), declaring
the counter variable volatile is enough to guarantee visibility for T2 of
writes to the counter variable.

If, however, both T1 and T2 were incrementing the counter variable, then


declaring the counter variable volatile would not have been enough.
More on that later.

Full volatile Visibility Guarantee

Actually, the visibility guarantee of Java volatile goes beyond


the volatile variable itself. The visibility guarantee is as follows:

 If Thread A writes to a volatile variable and Thread B


subsequently reads the same volatile variable, then all variables visible
to Thread A before writing the volatile variable, will also be visible to
Thread B after it has read the volatile variable.
 If Thread A reads a volatile variable, then all variables visible to
Thread A when reading the volatile variable will also be re-read
from main memory.

Let me illustrate that with a code example:

public class MyClass {


private int years;
private int months;
private volatile int days;

public void update(int years, int months, int days){


this.years = years;
this.months = months;
this.days = days;
}
}

The update() method writes three variables, of which only days is
volatile.

The full volatile visibility guarantee means, that when a value is written


to days, then all variables visible to the thread are also written to main
memory. That means, that when a value is written to days, the values
of years and months are also written to main memory.

When reading the values of years, months and days you could do it like this:

public class MyClass {

private int years;
private int months;
private volatile int days;

public int totalDays() {
int total = this.days;
total += months * 30;
total += years * 365;
return total;
}

public void update(int years, int months, int days){
this.years = years;
this.months = months;
this.days = days;
}
}

Notice the totalDays() method starts by reading the value of days into the total variable. When reading the value of days, the values of months and years are also read from main memory. Therefore you are guaranteed to see the latest values of days, months and years with the above read sequence.

Locks in Java
 A Simple Lock
 Lock Reentrance
 Lock Fairness
 Calling unlock() From a finally-clause

A lock is a thread synchronization mechanism like synchronized blocks, except locks can be more sophisticated than Java's synchronized blocks. Locks (and other more advanced synchronization mechanisms) are created using synchronized blocks, so it is not like we can get rid of the synchronized keyword entirely.

From Java 5 the package java.util.concurrent.locks contains several lock implementations, so you may not have to implement your own locks. But you will still need to know how to use them, and it can still be useful to know the theory behind their implementation. For more details, see my tutorial on the java.util.concurrent.locks.Lock interface.
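
For reference, here is a minimal usage sketch of the built-in java.util.concurrent.locks.ReentrantLock (the class name LockCounter is just for illustration):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockCounter {

    private final Lock lock = new ReentrantLock();
    private int count = 0;

    public int inc() {
        lock.lock();
        try {
            return ++count;
        } finally {
            lock.unlock(); // always release, even if the critical section throws
        }
    }
}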

A Simple Lock
Let's start out by looking at a synchronized block of Java code:

public class Counter{

private int count = 0;

public int inc(){
synchronized(this){
return ++count;
}
}
}
Notice the synchronized(this) block in the inc() method. This block makes sure that only one thread can execute the return ++count at a time. The code in the synchronized block could have been more advanced, but the simple ++count suffices to get the point across.

The Counter class could have been written like this instead, using a Lock instead of a synchronized block:

public class Counter{

private Lock lock = new Lock();

private int count = 0;

public int inc(){
lock.lock();
int newCount = ++count;
lock.unlock();
return newCount;
}
}

The lock() method locks the Lock instance so that all threads calling lock() are blocked until unlock() is executed.

Here is a simple Lock implementation:

public class Lock{

private boolean isLocked = false;

public synchronized void lock() throws InterruptedException{
while(isLocked){
wait();
}
isLocked = true;
}

public synchronized void unlock(){
isLocked = false;
notify();
}
}

Notice the while(isLocked) loop, which is also called a "spin lock". Spin locks and the methods wait() and notify() are covered in more
detail in the text Thread Signaling. While isLocked is true, the thread
calling lock() is parked waiting in the wait() call. In case the thread
should return unexpectedly from the wait() call without having received
a notify() call (AKA a Spurious Wakeup) the thread re-checks
the isLocked condition to see if it is safe to proceed or not, rather than just
assume that being awakened means it is safe to proceed. If isLocked is
false, the thread exits the while(isLocked) loop, and
sets isLocked back to true, to lock the Lock instance for other threads
calling lock().

When the thread is done with the code in the critical section (the code
between lock() and unlock()), the thread calls unlock().
Executing unlock() sets isLocked back to false, and notifies
(awakens) one of the threads waiting in the wait() call in
the lock() method, if any.

Lock Reentrance
Synchronized blocks in Java are reentrant. This means that if a Java thread enters a synchronized block of code, and thereby takes the lock on the monitor object the block is synchronized on, the thread can enter other Java code blocks synchronized on the same monitor object. Here is an example:

public class Reentrant{

public synchronized void outer(){
inner();
}

public synchronized void inner(){
//do something
}
}

Notice how both outer() and inner() are declared synchronized,
which in Java is equivalent to a synchronized(this) block. If a thread
calls outer() there is no problem calling inner() from inside outer(),
since both methods (or blocks) are synchronized on the same monitor object
("this"). If a thread already holds the lock on a monitor object, it has access to
all blocks synchronized on the same monitor object. This is called reentrance.
The thread can reenter any block of code for which it already holds the lock.

The lock implementation shown earlier is not reentrant. If we rewrite the Reentrant class like below, the thread calling outer() will be
blocked inside the lock.lock() in the inner() method.

public class Reentrant2{

Lock lock = new Lock();

public void outer(){
lock.lock();
inner();
lock.unlock();
}

public synchronized void inner(){
lock.lock();
//do something
lock.unlock();
}
}

A thread calling outer() will first lock the Lock instance. Then it will call inner(). Inside the inner() method the thread will again try to lock
the Lock instance. This will fail (meaning the thread will be blocked), since
the Lock instance was locked already in the outer() method.

The reason the thread will be blocked the second time it calls lock() without having called unlock() in between, is apparent
when we look at the lock() implementation:

public class Lock{

boolean isLocked = false;

public synchronized void lock() throws InterruptedException{
while(isLocked){
wait();
}
isLocked = true;
}

...
}

It is the condition inside the while loop (spin lock) that determines if a thread is
allowed to exit the lock() method or not. Currently the condition is
that isLocked must be false for this to be allowed, regardless of what
thread locked it.

To make the Lock class reentrant we need to make a small change:

public class Lock{

boolean isLocked = false;
Thread lockedBy = null;
int lockedCount = 0;

public synchronized void lock() throws InterruptedException{
Thread callingThread = Thread.currentThread();
while(isLocked && lockedBy != callingThread){
wait();
}
isLocked = true;
lockedCount++;
lockedBy = callingThread;
}

public synchronized void unlock(){
if(Thread.currentThread() == this.lockedBy){
lockedCount--;

if(lockedCount == 0){
isLocked = false;
notify();
}
}
}

...
}

Notice how the while loop (spin lock) now also takes the thread that locked
the Lock instance into consideration. If either the lock is unlocked
(isLocked = false) or the calling thread is the thread that locked
the Lock instance, the while loop will not execute, and the thread
calling lock() will be allowed to exit the method.
Additionally, we need to count the number of times the lock has been locked by the same thread. Otherwise, a single call to unlock() will unlock the lock, even if the lock has been locked multiple times. We don't want the lock to be unlocked until the thread that locked it has executed the same number of unlock() calls as lock() calls.

The Lock class is now reentrant.

Lock Fairness
Java's synchronized blocks make no guarantees about the sequence in which threads trying to enter them are granted access. Therefore, if many threads are constantly competing for access to the same synchronized block, there is a risk that one or more of the threads are never granted access - that access is always granted to other threads. This is called starvation. To avoid this a Lock should be fair. Since the Lock implementations shown in this text use synchronized blocks internally, they do not guarantee fairness.
Starvation and fairness are discussed in more detail in the text Starvation
and Fairness.
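
If fairness is required, the built-in java.util.concurrent.locks.ReentrantLock can be created in fair mode. A minimal sketch (FairLockExample is an illustrative name):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class FairLockExample {

    // Passing true requests a fair lock: waiting threads acquire it roughly in FIFO order,
    // which prevents starvation at the cost of some throughput.
    private final Lock lock = new ReentrantLock(true);

    public void doWork() {
        lock.lock();
        try {
            // critical section
        } finally {
            lock.unlock();
        }
    }
}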

Calling unlock() From a finally-clause


When guarding a critical section with a Lock, and the critical section may
throw exceptions, it is important to call the unlock() method from inside
a finally-clause. Doing so makes sure that the Lock is unlocked so
other threads can lock it. Here is an example:

lock.lock();
try{
//do critical section code, which may throw exception
} finally {
lock.unlock();
}

This little construct makes sure that the Lock is unlocked in case an exception is thrown from the code in the critical section. If unlock() was not called from inside a finally-clause, and an exception was thrown from the critical section, the Lock would remain locked forever, causing all threads calling lock() on that Lock instance to halt indefinitely.

Thread Signaling
 Signaling via Shared Objects
 Busy Wait
 wait(), notify() and notifyAll()
 Missed Signals
 Spurious Wakeups
 Multiple Threads Waiting for the Same Signals
 Don't call wait() on constant String's or global objects

The purpose of thread signaling is to enable threads to send signals to each other. Additionally, thread signaling enables threads to wait for signals from
other threads. For instance, a thread B might wait for a signal from thread A
indicating that data is ready to be processed.

Signaling via Shared Objects


A simple way for threads to send signals to each other is by setting the signal
values in some shared object variable. Thread A may set the boolean member
variable hasDataToProcess to true from inside a synchronized block, and
thread B may read the hasDataToProcess member variable, also inside a
synchronized block. Here is a simple example of an object that can hold such
a signal, and provide methods to set and check it:

public class MySignal{

protected boolean hasDataToProcess = false;

public synchronized boolean hasDataToProcess(){
return this.hasDataToProcess;
}

public synchronized void setHasDataToProcess(boolean hasData){
this.hasDataToProcess = hasData;
}

}
Thread A and B must have a reference to a shared MySignal instance for the signaling to work. If thread A and B have references to different MySignal instances, they will not detect each other's signals. The data to be processed can be located in a shared buffer separate from the MySignal instance.

Busy Wait
Thread B which is to process the data is waiting for data to become available
for processing. In other words, it is waiting for a signal from thread A which
causes hasDataToProcess() to return true. Here is the loop that thread B is
running in, while waiting for this signal:

protected MySignal sharedSignal = ...

...

while(!sharedSignal.hasDataToProcess()){
//do nothing... busy waiting
}

Notice how the while loop keeps executing until hasDataToProcess() returns
true. This is called busy waiting. The thread is busy while waiting.

wait(), notify() and notifyAll()


Busy waiting is not a very efficient utilization of the CPU in the computer running the waiting thread, except if the average waiting time is very small. Otherwise, it would be smarter if the waiting thread could somehow sleep or become inactive until it receives the signal it is waiting for.

Java has a built-in wait mechanism that enables threads to become inactive while waiting for signals. The class java.lang.Object defines three methods, wait(), notify(), and notifyAll(), to facilitate this.

A thread that calls wait() on any object becomes inactive until another thread calls notify() on that object. In order to call either wait() or notify(), the calling thread must first obtain the lock on that object. In other words, the calling thread must call wait() or notify() from inside a synchronized block. Here is a modified version of MySignal called MyWaitNotify that uses wait() and notify():

public class MonitorObject{
}

public class MyWaitNotify{

MonitorObject myMonitorObject = new MonitorObject();

public void doWait(){
synchronized(myMonitorObject){
try{
myMonitorObject.wait();
} catch(InterruptedException e){...}
}
}

public void doNotify(){
synchronized(myMonitorObject){
myMonitorObject.notify();
}
}
}

The waiting thread would call doWait(), and the notifying thread would call doNotify(). When a thread calls notify() on an object, one of the threads waiting on that object is awakened and allowed to execute. There is also a notifyAll() method that will wake all threads waiting on a given object.

As you can see, both the waiting and the notifying thread call wait() and notify() from within a synchronized block. This is mandatory! A thread cannot call wait(), notify() or notifyAll() without holding the lock on the object the method is called on. If it does, an IllegalMonitorStateException is thrown.

But, how is this possible? Wouldn't the waiting thread keep the lock on the
monitor object (myMonitorObject) as long as it is executing inside a
synchronized block? Will the waiting thread not block the notifying thread from
ever entering the synchronized block in doNotify()? The answer is no. Once a
thread calls wait() it releases the lock it holds on the monitor object. This
allows other threads to call wait() or notify() too, since these methods must be
called from inside a synchronized block.

Once a thread is awakened it cannot exit the wait() call until the thread calling
notify() has left its synchronized block. In other words: The awakened thread
must reobtain the lock on the monitor object before it can exit the wait() call,
because the wait call is nested inside a synchronized block. If multiple threads
are awakened using notifyAll() only one awakened thread at a time can exit
the wait() method, since each thread must obtain the lock on the monitor
object in turn before exiting wait().
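
To make the interaction concrete, here is a small usage sketch (SignalDemo is an illustrative name) that starts one thread waiting in doWait() and another calling doNotify() on the MyWaitNotify class shown above:

public class SignalDemo {

    public static void main(String[] args) {
        MyWaitNotify monitor = new MyWaitNotify();

        Thread waiter = new Thread(() -> {
            monitor.doWait(); // blocks until some thread calls doNotify()
            System.out.println("Got the signal");
        });

        Thread notifier = new Thread(() -> {
            monitor.doNotify(); // wakes one thread waiting on the monitor object
        });

        waiter.start();
        notifier.start();
        // Note: if the notifier happens to run before the waiter has called wait(),
        // the signal is lost and the waiter blocks forever - see "Missed Signals" below.
    }
}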

Missed Signals
The methods notify() and notifyAll() do not save the method calls to them in
case no threads are waiting when they are called. The notify signal is then just
lost. Therefore, if a thread calls notify() before the thread to signal has called
wait(), the signal will be missed by the waiting thread. This may or may not be
a problem, but in some cases this may result in the waiting thread waiting
forever, never waking up, because the signal to wake up was missed.

To avoid losing signals they should be stored inside the signal class. In the
MyWaitNotify example the notify signal should be stored in a member variable
inside the MyWaitNotify instance. Here is a modified version of MyWaitNotify
that does this:

public class MyWaitNotify2{

MonitorObject myMonitorObject = new MonitorObject();

boolean wasSignalled = false;

public void doWait(){
synchronized(myMonitorObject){
if(!wasSignalled){
try{
myMonitorObject.wait();
} catch(InterruptedException e){...}
}
//clear signal and continue running.
wasSignalled = false;
}
}

public void doNotify(){
synchronized(myMonitorObject){
wasSignalled = true;
myMonitorObject.notify();
}
}
}

Notice how the doNotify() method now sets the wasSignalled variable to true before calling notify(). Also, notice how the doWait() method now checks the wasSignalled variable before calling wait(). In fact, it only calls wait() if no signal was received in between the previous doWait() call and this one.
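
As noted earlier for the Lock class, a thread can occasionally return from wait() without having been notified (a spurious wakeup). To guard against that as well, the wasSignalled check can be turned into a while loop (a spin lock), so the thread re-checks the signal every time it wakes up. A sketch of such a doWait() method:

public void doWait(){
    synchronized(myMonitorObject){
        while(!wasSignalled){
            try{
                myMonitorObject.wait();
            } catch(InterruptedException e){
                // handle or restore the interrupt as appropriate
            }
        }
        // clear signal and continue running.
        wasSignalled = false;
    }
}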
Removing duplicates from a List using the Java 8 Stream API and distinct():

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class RemoveDuplicates {

public static void main(String args[]) {

List<Integer> withDupes = Arrays.asList(10, 10, 20, 20, 30, 30, 40, 50);

System.out.println("List with duplicates: " + withDupes);

// distinct() keeps only the first occurrence of each element
List<Integer> withoutDupes = withDupes.stream().distinct().collect(Collectors.toList());

System.out.println("List without duplicates: " + withoutDupes);
}
}

What is the difference between a Spring singleton and a Java singleton (design pattern)?

The Java singleton is scoped by the Java class loader; the Spring singleton is scoped by the container context.

Which basically means that, in Java, you can be sure a singleton is truly a singleton only within the context of the class loader which loaded it. Other class loaders should be capable of creating another instance of it (provided the class loaders are not in the same class loader hierarchy), despite all your efforts in code to try to prevent it.

In Spring, if you load your singleton class in two different contexts, then again you can break the singleton concept.

So, in summary, Java considers something a singleton if it cannot create more than one
instance of that class within a given class loader, whereas Spring would consider something
a singleton if it cannot create more than one instance of a class within a given
container/context.
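
As an illustration (class names here are made up; MyService stands for any bean class), compare a classic design-pattern singleton with a Spring bean left in its default singleton scope:

// JavaSingleton.java - classic design-pattern singleton, one instance per class loader
public class JavaSingleton {

    private static final JavaSingleton INSTANCE = new JavaSingleton();

    private JavaSingleton() { } // no public constructor

    public static JavaSingleton getInstance() {
        return INSTANCE;
    }
}

// AppConfig.java - Spring singleton, one instance per container/ApplicationContext
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {

    @Bean // "singleton" is the default bean scope
    public MyService myService() {
        return new MyService(); // MyService is a placeholder for any bean class
    }
}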

Spring bean life cycle:

1. Bean life cycle


When the container starts, a Spring bean needs to be instantiated, based on a Java or XML bean definition. It may also be required to perform some post-initialization steps to get it into a usable state. The same bean life cycle applies to Spring Boot applications as well. After that, when the bean is no longer required, it will be removed from the IoC container.

The Spring bean factory is responsible for managing the life cycle of beans created through the Spring container.

1.1. Life cycle callbacks

Spring bean factory controls the creation and destruction of beans. To execute some custom code, it provides callback methods, which can be broadly categorized into two groups:

 Post-initialization callback methods
 Pre-destruction callback methods

1.2. Life cycle in diagram


2. Life cycle callback methods
Spring framework provides the following four ways for controlling life cycle events of a bean:

1. InitializingBean and DisposableBean callback interfaces
2. *Aware interfaces for specific behavior
3. Custom init() and destroy() methods in bean configuration file
4. @PostConstruct and @PreDestroy annotations

2.1. InitializingBean and DisposableBean

The org.springframework.beans.factory.InitializingBean interface allows a bean to perform initialization work after all necessary properties on the bean have been set by the container.

The InitializingBean interface specifies a single method:

InitializingBean.java
void afterPropertiesSet() throws Exception;

This is not a preferable way to initialize the bean because it tightly couples your bean class with the Spring container. A better approach is to use the "init-method" attribute in the bean definition in the applicationContext.xml file.

Similarly, implementing the org.springframework.beans.factory.DisposableBean interface allows a bean to get a callback when the container containing it is destroyed.

The DisposableBean interface specifies a single method:

DisposableBean.java
void destroy() throws Exception;

 
A sample bean implementing the above interfaces would look like this:

 
package com.howtodoinjava.task;
 
import org.springframework.beans.factory.DisposableBean;

import org.springframework.beans.factory.InitializingBean;

 
public class DemoBean implements InitializingBean, DisposableBean {

    //Other bean attributes and methods

     

    @Override

    public void afterPropertiesSet() throws Exception

    {

        //Bean initialization code

    }

     

    @Override

    public void destroy() throws Exception

    {

        //Bean destruction code

    }

}

2.2. *Aware interfaces for specific behavior

Spring offers a range of *Aware interfaces that allow beans to indicate to the container that they require a certain infrastructure dependency. Each interface requires you to implement a method to inject the dependency into the bean.

These interfaces can be summarized as follows (interface, method to override, purpose):

 ApplicationContextAware: void setApplicationContext(ApplicationContext applicationContext) throws BeansException; sets the ApplicationContext that this object runs in.
 ApplicationEventPublisherAware: void setApplicationEventPublisher(ApplicationEventPublisher applicationEventPublisher); sets the ApplicationEventPublisher that this object runs in.
 BeanClassLoaderAware: void setBeanClassLoader(ClassLoader classLoader); callback that supplies the bean class loader to a bean instance.
 BeanFactoryAware: void setBeanFactory(BeanFactory beanFactory) throws BeansException; callback that supplies the owning bean factory to a bean instance.
 BeanNameAware: void setBeanName(String name); sets the name of the bean in the bean factory that created this bean.
 BootstrapContextAware: void setBootstrapContext(BootstrapContext bootstrapContext); sets the BootstrapContext that this object runs in.
 LoadTimeWeaverAware: void setLoadTimeWeaver(LoadTimeWeaver loadTimeWeaver); sets the LoadTimeWeaver of this object's containing ApplicationContext.
 MessageSourceAware: void setMessageSource(MessageSource messageSource); sets the MessageSource that this object runs in.
 NotificationPublisherAware: void setNotificationPublisher(NotificationPublisher notificationPublisher); sets the NotificationPublisher instance for the current managed resource instance.
 PortletConfigAware: void setPortletConfig(PortletConfig portletConfig); sets the PortletConfig that this object runs in.
 PortletContextAware: void setPortletContext(PortletContext portletContext); sets the PortletContext that this object runs in.
 ResourceLoaderAware: void setResourceLoader(ResourceLoader resourceLoader); sets the ResourceLoader that this object runs in.
 ServletConfigAware: void setServletConfig(ServletConfig servletConfig); sets the ServletConfig that this object runs in.
 ServletContextAware: void setServletContext(ServletContext servletContext); sets the ServletContext that this object runs in.

Java program to show usage of Aware interfaces to control the Spring bean life cycle.

DemoBean.java
package com.howtodoinjava.task;

 
import org.springframework.beans.BeansException;

import org.springframework.beans.factory.BeanClassLoaderAware;

import org.springframework.beans.factory.BeanFactory;

import org.springframework.beans.factory.BeanFactoryAware;

import org.springframework.beans.factory.BeanNameAware;

import org.springframework.context.ApplicationContext;

import org.springframework.context.ApplicationContextAware;

import org.springframework.context.ApplicationEventPublisher;

import org.springframework.context.ApplicationEventPublisherAware;

import org.springframework.context.MessageSource;

import org.springframework.context.MessageSourceAware;

import org.springframework.context.ResourceLoaderAware;

import org.springframework.context.weaving.LoadTimeWeaverAware;

import org.springframework.core.io.ResourceLoader;

import org.springframework.instrument.classloading.LoadTimeWeaver;

import org.springframework.jmx.export.notification.NotificationPublisher;

import org.springframework.jmx.export.notification.NotificationPublisherAware;

 
public class DemoBean implements ApplicationContextAware,
        ApplicationEventPublisherAware, BeanClassLoaderAware, BeanFactoryAware,

        BeanNameAware, LoadTimeWeaverAware, MessageSourceAware,

        NotificationPublisherAware, ResourceLoaderAware {

    @Override

    public void setResourceLoader(ResourceLoader arg0) {

        // TODO Auto-generated method stub

    }

 
    @Override

    public void setNotificationPublisher(NotificationPublisher arg0) {

        // TODO Auto-generated method stub

 
    }

 
    @Override

    public void setMessageSource(MessageSource arg0) {

        // TODO Auto-generated method stub

    }

 
    @Override

    public void setLoadTimeWeaver(LoadTimeWeaver arg0) {

        // TODO Auto-generated method stub

    }

 
    @Override

    public void setBeanName(String arg0) {

        // TODO Auto-generated method stub

    }

 
    @Override
    public void setBeanFactory(BeanFactory arg0) throws BeansException {

        // TODO Auto-generated method stub

    }

 
    @Override

    public void setBeanClassLoader(ClassLoader arg0) {

        // TODO Auto-generated method stub

    }

 
    @Override

    public void setApplicationEventPublisher(ApplicationEventPublisher arg0) {

        // TODO Auto-generated method stub

    }

 
    @Override

    public void setApplicationContext(ApplicationContext arg0)

            throws BeansException {

        // TODO Auto-generated method stub

    }

}
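
As a more concrete (illustrative) example of a single Aware interface, a bean can implement BeanNameAware to learn the name under which the container registered it; the class name NamedBean is made up for this sketch:

package com.howtodoinjava.task;

import org.springframework.beans.factory.BeanNameAware;

public class NamedBean implements BeanNameAware {

    private String beanName;

    @Override
    public void setBeanName(String name) {
        // invoked by the container after the bean's properties are set,
        // before any init callbacks run
        this.beanName = name;
    }

    public void printName() {
        System.out.println("Registered in the container as: " + beanName);
    }
}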

2.3. Custom init() and destroy() methods

The custom init and destroy methods in the bean configuration file can be defined in two ways:

 Bean local definition applicable to a single bean
 Global definition applicable to all beans defined in the beans context

2.3.1. Bean local definition

The local definition is given below.

beans.xml
<beans>

 
    <bean id="demoBean" class="com.howtodoinjava.task.DemoBean"

                    init-method="customInit"

                    destroy-method="customDestroy"></bean>

 
</beans>

2.3.2. Global definition

Whereas the global definition is given below. These methods will be invoked for all bean definitions given under the <beans> tag. They are useful when you have a pattern of defining common method names such as init() and destroy() for all your beans consistently. This feature saves you from specifying the init and destroy method names for each bean independently.
<beans default-init-method="customInit" default-destroy-method="customDestroy">  

 
        <bean id="demoBean" class="com.howtodoinjava.task.DemoBean"></bean>

 
</beans>

Java program to show the init and destroy methods configured in the bean XML configuration file.

DemoBean.java
package com.howtodoinjava.task;

 
public class DemoBean {

    public void customInit()

    {

        System.out.println("Method customInit() invoked...");

    }

 
    public void customDestroy()

    {

        System.out.println("Method customDestroy() invoked...");

    }

}

2.4. @PostConstruct and @PreDestroy

Spring 2.5 onwards, you can use annotations also for specifying life cycle methods
using @PostConstruct and @PreDestroy annotations.

 @PostConstruct annotated method will be invoked after the bean has been constructed using the default constructor and just before its instance is returned to the requesting object.
 @PreDestroy annotated method is called just before the bean is about to be destroyed inside the bean container.

Java program to show usage of annotations to control the bean life cycle.

package com.howtodoinjava.task;

 
import javax.annotation.PostConstruct;

import javax.annotation.PreDestroy;

 
public class DemoBean {

    @PostConstruct

    public void customInit()

    {

        System.out.println("Method customInit() invoked...");

    }

     

    @PreDestroy

    public void customDestroy()

    {

        System.out.println("Method customDestroy() invoked...");

    }

}

So this is all about the Spring bean life cycle inside the Spring container. Remember the given types of life cycle events; this is a commonly asked Spring interview question.

The life cycle of a Spring bean is easy to understand. When a bean is instantiated, it
may be required to perform some initialization to get it into a usable state. Similarly,
when the bean is no longer required and is removed from the container, some cleanup
may be required.
Though there is a list of activities that take place behind the scenes between the time of bean instantiation and its destruction, this chapter will discuss only two important bean life cycle callback methods, which are required at the time of bean initialization and its destruction.
To define setup and teardown for a bean, we simply declare the <bean> with init-method and/or destroy-method parameters. The init-method attribute specifies a method that is to be called on the bean immediately upon instantiation. Similarly, destroy-method specifies a method that is called just before a bean is removed from the container.

Initialization callbacks
The org.springframework.beans.factory.InitializingBean interface specifies a single
method −
void afterPropertiesSet() throws Exception;
Thus, you can simply implement the above interface and initialization work can be
done inside afterPropertiesSet() method as follows −
public class ExampleBean implements InitializingBean {
public void afterPropertiesSet() {
// do some initialization work
}
}

In the case of XML-based configuration metadata, you can use the init-method attribute to specify the name of the method that has a void no-argument signature. For example −
<bean id = "exampleBean" class = "examples.ExampleBean" init-method = "init"/>
Following is the class definition −
public class ExampleBean {
public void init() {
// do some initialization work
}
}

Destruction callbacks
The org.springframework.beans.factory.DisposableBean interface specifies a single
method −
void destroy() throws Exception;
Thus, you can simply implement the above interface and finalization work can be done
inside destroy() method as follows −
public class ExampleBean implements DisposableBean {
public void destroy() {
// do some destruction work
}
}

In the case of XML-based configuration metadata, you can use the destroy-method attribute to specify the name of the method that has a void no-argument signature. For example −
<bean id = "exampleBean" class = "examples.ExampleBean" destroy-method = "destroy"/>
Following is the class definition −
public class ExampleBean {
public void destroy() {
// do some destruction work
}
}

If you are using Spring's IoC container in a non-web application environment (for example, in a rich client desktop environment), you should register a shutdown hook with the JVM. Doing so ensures a graceful shutdown and calls the relevant destroy methods on your singleton beans so that all resources are released.
It is recommended that you do not rely only on the InitializingBean or DisposableBean callbacks, because XML configuration gives more flexibility in terms of naming your methods.

Example
Let us have a working Eclipse IDE in place and take the following steps to create a
Spring application −

Step Description

1 Create a project with a name SpringExample and create a package com.tutorialspoint under the src folder in the created project.

2 Add required Spring libraries using Add External JARs option as explained in the Spring Hello World
Example chapter.

3 Create Java classes HelloWorld and MainApp under the com.tutorialspoint package.

4 Create Beans configuration file Beans.xml under the src folder.

5 The final step is to create the content of all the Java files and Bean Configuration file and run the
application as explained below.

Here is the content of HelloWorld.java file −


package com.tutorialspoint;

public class HelloWorld {

private String message;

public void setMessage(String message){
this.message = message;
}
public void getMessage(){
System.out.println("Your Message : " + message);
}
public void init(){
System.out.println("Bean is going through init.");
}
public void destroy() {
System.out.println("Bean will destroy now.");
}
}

Following is the content of the MainApp.java file. Here you need to register a shutdown hook using the registerShutdownHook() method that is declared on the AbstractApplicationContext class. This will ensure a graceful shutdown and call the relevant destroy methods.
package com.tutorialspoint;

import org.springframework.context.support.AbstractApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MainApp {

public static void main(String[] args) {
AbstractApplicationContext context = new ClassPathXmlApplicationContext("Beans.xml");

HelloWorld obj = (HelloWorld) context.getBean("helloWorld");

obj.getMessage();
context.registerShutdownHook();
}
}

Following is the configuration file Beans.xml required for init and destroy methods −


<?xml version = "1.0" encoding = "UTF-8"?>

<beans xmlns = "http://www.springframework.org/schema/beans"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

<bean id = "helloWorld" class = "com.tutorialspoint.HelloWorld"


init-method = "init"
destroy-method = "destroy">
<property name = "message" value = "Hello World!"/>
</bean>

</beans>

Once you are done creating the source and bean configuration files, let us run the application. If everything is fine with your application, it will print the following message −

Bean is going through init.
Your Message : Hello World!
Bean will destroy now.

Default initialization and destroy methods


If you have too many beans having initialization and/or destroy methods with the same name, you don't need to declare init-method and destroy-method on each individual bean. Instead, the framework provides the flexibility to configure such a situation using the default-init-method and default-destroy-method attributes on the <beans> element as follows −
<beans xmlns = "http://www.springframework.org/schema/beans"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation =
"http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"
default-init-method = "init"
default-destroy-method = "destroy">

<bean id = "..." class = "...">


<!-- collaborators and configuration for this bean go here
-->
</bean>

</beans>
