If You Have Spring Boot Rest Services Would You Use A Singleton and Why?
Yes.
REST is meant to be stateless: each request carries all the information the server needs to handle it. Knowing that, it is pointless for the server (or a @Controller) to keep any information in instance fields and the like after it has finished handling a request. Therefore, a singleton is the way to go.
Controllers are just beans; they can be singleton or prototype scoped, depending on what you are trying to do. If you want stateless controllers, use singletons; by default they are singletons.
Spring beans are, by default, singleton scoped. Since component scanning
a @Controller annotated class simply generates a bean, that bean will be singleton scoped.
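Spring itself is not available in this guide's snippets, so here is a plain-Java sketch (class names hypothetical) of why a shared singleton handler must be stateless: any instance field is shared by every request.

```java
// Hypothetical sketch: a singleton "controller" shared by all requests.
class StatefulHandler {
    private int lastRequestId = 0; // shared mutable state: leaks across requests!

    String handle(int requestId) {
        lastRequestId = requestId;          // request A writes here...
        return "handled " + lastRequestId;  // ...request B may overwrite it first
    }
}

// A stateless handler keeps everything in locals and parameters, so one
// shared singleton instance is safe for any number of concurrent requests.
class StatelessHandler {
    String handle(int requestId) {
        return "handled " + requestId;      // locals/parameters only: thread-safe
    }
}

class SingletonDemo {
    public static void main(String[] args) {
        StatelessHandler singleton = new StatelessHandler(); // one instance for all requests
        System.out.println(singleton.handle(1));
        System.out.println(singleton.handle(2));
    }
}
```

The same rule applies to a real @RestController bean: keep it free of mutable instance state, or change its scope.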
2. REST APIs
Application programming interfaces (APIs) provide the platform and medium for applications to
talk to and understand each other. APIs specify the way information passed across platforms is
structured so that applications can exchange data and information.
REST is an API architecture style. It stands for representational state transfer. REST specifies
how data is presented to a client in a format that is convenient for the client. Data exchanges
happen either in JSON or XML format. However, many APIs today return JSON data, so we will
focus on that in this guide.
The API specifies a set of rules for one application to interact with another. Many APIs have
proper documentation that also describes the nature and structure of the response they send
when you make a request. They also specify the necessary information a requesting application
needs to provide to make a successful request to the API.
In effect, RESTful APIs function much like any other HTTP exchange over TCP/IP: one application
makes a request and another responds, the client and server simply being two applications
talking to each other.
There are a number of patterns for designing APIs. These patterns have different histories and
requirements and create different experiences for users. The designs are interconnected, so you
will see many places where they are very similar. Understanding them will help you decide which
one addresses your specific needs.
REST
REST is truly a “web services” API, putting it on the opposite side of SOAP (probably why you
will see a dozen comparisons of the two without looking very hard). REST APIs are based on
URIs (uniform resource identifiers) and the HTTP protocol. REST APIs can exchange data in
either JSON or XML format, although many REST APIs send data as JSON.
When building a system with minimal security considerations but strong speed requirements,
REST is an excellent choice. RESTful APIs impose fewer security requirements and boost browser
client compatibility, discoverability, and scalability, qualities that really apply to web
services.
Designing APIs is an entirely different matter. There are best practices to follow to ensure
your APIs are easy for anyone to consume. Let's go through some of them below.
As an example, imagine you want a service for stock control of fruits. Here is a simple chart of
basic REST APIs you can make for your fruit application:

Resource: /fruits
POST (create): Create a new fruit
GET (read): Returns a list of fruits, possibly all the fruits you have
PUT (update): Bulk update of fruits (you may never have to do this)
DELETE: Delete all fruits (you should probably not implement this)

Resource: /fruits/1004
POST (create): Method not allowed (405)
GET (read): Returns a specific fruit
PUT (update): Updates a specific fruit
DELETE: Deletes a specific fruit
It’s not good practice to have API endpoints like /getFruits, /delete-fruits,
or /createfruits. Instead, take advantage of the HTTP verbs to express those actions. This way
consumers can easily build upon your APIs, you have fewer resources to manage, and the approach
is less ambiguous and cleaner.
Say you are designing a social network where users have blogs, you should establish
relationships like this:
GET /users/1004/blogs
This should fetch all the user’s blogs. If you want to fetch a specific blog, you can do the
following:
GET /users/1004/blogs/1010
We did a full guide on HTTP status codes explaining their use cases. Understand that an API
which ignores error handling is hard to work with. Returning the wrong status codes, or
returning a stack trace without a helpful message highlighting the error, does not help the user
of the API, whether or not you have documentation.
I mentioned earlier that RESTful APIs have few security considerations, but that
does not mean we should ignore security completely. We will approach API security from two
perspectives: authentication and authorization.
Authentication
This has to do with verifying the identity of the person trying to access your API. It can be
done in multiple ways. In its simplest form, it is providing a username/email and
password combination. For APIs, it will most likely involve a security token that
identifies a user. More complex scenarios use key/secret pairs, often to integrate one
application with another.
Authorization
This answers the question of “What can you do”, indicating it comes after authentication.
Authorization comes in handy when you are designing endpoints for accessing data that is
either very specific to a particular person or sensitive information only a predefined set of people
can access.
The very basic requirement for designing a good key is that “we should be able to
retrieve the value object back from the map without failure”; otherwise, no
matter how fancy a data structure you build, it will be of no use. To decide whether we
have created a good key, we must know how HashMap works. I will leave the details of
how HashMap works for you to read in the linked post, but in summary it works
on the principle of hashing.
At runtime, an object's hash code is computed by its hashCode() method, and HashMap uses that
value to choose the bucket in which the entry is stored. If hashCode() is derived from mutable
state, then modifying the object after insertion changes the value the next hashCode() call
returns, and the map will look in the wrong bucket and fail to find the entry.
This is the main reason why immutable classes like String, Integer and the other wrapper
classes are good key candidates, and it is the answer to the question of why String is such a
popular HashMap key in Java.
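To make the hazard concrete, here is a small sketch (the MutableKey class is hypothetical) showing a lookup failing after a key is mutated:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical key class whose hashCode() depends on mutable state.
class MutableKey {
    int id;
    MutableKey(int id) { this.id = id; }

    @Override public int hashCode() { return id; }
    @Override public boolean equals(Object o) {
        return o instanceof MutableKey && ((MutableKey) o).id == id;
    }
}

class MutableKeyDemo {
    public static void main(String[] args) {
        Map<MutableKey, String> map = new HashMap<>();
        MutableKey key = new MutableKey(1);
        map.put(key, "value");

        System.out.println(map.get(key)); // value

        key.id = 2;                       // mutate the key after insertion
        System.out.println(map.get(key)); // null - the entry still sits in the
                                          // bucket computed from the old hash code
    }
}
```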
1. Don’t provide “setter” methods — methods that modify fields or objects referred to by
fields.
This principle says that for all mutable properties in your class, do not provide
setter methods. Setter methods are meant to change the state of an object and
this is what we want to prevent here.
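A minimal sketch of a class following this principle (the class and field names are illustrative): final class, final fields, getters only, and "modification" by returning a new instance.

```java
// A minimal immutable class sketch.
final class ImmutablePoint {               // final: cannot be subclassed
    private final int x;                   // final fields, assigned exactly once
    private final int y;

    ImmutablePoint(int x, int y) {         // state is fixed at construction time
        this.x = x;
        this.y = y;
    }

    int getX() { return x; }               // getters only, no setters
    int getY() { return y; }

    // "modification" returns a new instance instead of mutating this one
    ImmutablePoint translate(int dx, int dy) {
        return new ImmutablePoint(x + dx, y + dy);
    }
}
```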
Apart from your written classes, JDK itself has lots of immutable classes. Given is such a
list of immutable classes in Java.
1. String
2. Wrapper classes such as Integer, Long, Double etc.
3. Immutable collection classes such as Collections.singletonMap() etc.
4. java.lang.StackTraceElement
5. Java enums (ideally they should be)
6. java.util.Locale
7. java.util.UUID
What is Hashing?
Hashing, in its simplest form, is a way of assigning a unique code to any variable/object by
applying a formula/algorithm to its properties.
“Hash function should return the same hash code each and every time when the function is applied on
same or equal objects. In other words, two equal objects must produce the same hash code consistently.”
All objects in Java inherit a default implementation of hashCode() function defined in Object class. This
function produces hash code by typically converting the internal address of the object into an integer,
thus producing different hash codes for all different objects.
So, there must be some mechanism in HashMap to store these key-value pairs. The answer is
yes: HashMap has a nested static class Entry, which looks like this:
Entry class
static class Entry<K, V> implements Map.Entry<K, V>
{
    final K key;
    V value;
    Entry<K, V> next;
    final int hash;
    ... // more code goes here
}
entry array
HashMap keeps these entries in an array (reconstructed here from the JDK 7 sources, since the
original listing was truncated):
/**
 * The table, resized as necessary. Length MUST always be a power of two.
 */
transient Entry[] table;
put() method
/**
 * Associates the specified value with the specified key in this map. If the
 * map previously contained a mapping for the key, the old value is
 * replaced.
 * @param key key with which the specified value is to be associated
 * @param value value to be associated with the specified key
 */
public V put(K key, V value) {
    if (key == null)
        return putForNullKey(value);
    int hash = hash(key.hashCode());
    int i = indexFor(hash, table.length);
    for (Entry<K, V> e = table[i]; e != null; e = e.next) {
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;
        }
    }
    modCount++;
    addEntry(hash, key, value, i);
    return null;
}
Here comes the main part. We know that two unequal objects can have the same hash code
value, so how are two different objects stored in the same array location, called a bucket?
The answer is a linked list: each Entry holds a reference to the next entry in the same bucket.
What if we add another value object with the same key as one entered before? Logically,
it should replace the old value. How is that done? After determining the index position of
the Entry object, while iterating over the linked list at the calculated index, HashMap calls
the equals() method on the key object for each entry.
All the entry objects in this linked list land in the same bucket, but equals() tests for
true equality. If key.equals(k) returns true, both keys are treated as the same key object,
and only the value object inside the existing entry is replaced.
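The collision and replacement behavior can be demonstrated with a key whose hashCode() is deliberately constant (the CollidingKey class is hypothetical), forcing all entries into one bucket so equals() does the real work:

```java
// Hypothetical key: constant hashCode() makes every instance collide.
class CollidingKey {
    final String name;
    CollidingKey(String name) { this.name = name; }

    @Override public int hashCode() { return 42; }   // every key collides
    @Override public boolean equals(Object o) {
        return o instanceof CollidingKey && ((CollidingKey) o).name.equals(name);
    }
}

class CollisionDemo {
    public static void main(String[] args) {
        java.util.Map<CollidingKey, String> map = new java.util.HashMap<>();
        map.put(new CollidingKey("a"), "first");
        map.put(new CollidingKey("b"), "second");   // same bucket, different key
        map.put(new CollidingKey("a"), "replaced"); // equal key: value replaced

        System.out.println(map.size());                     // 2
        System.out.println(map.get(new CollidingKey("a"))); // replaced
        System.out.println(map.get(new CollidingKey("b"))); // second
    }
}
```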
Now we know how key-value pairs are stored in HashMap. The next big question is: what
happens when an object is passed to the get() method of HashMap? How is the value object
determined?
The answer: the same logic used to determine key uniqueness in put() is applied in get().
The moment HashMap identifies the exact match for the key object passed as an argument, it
simply returns the value object stored in the current Entry object.
get() method
(reconstructed from the JDK 7 sources; the original listing was truncated)
/**
 * Returns the value to which the specified key is mapped, or {@code null}
 * if this map contains no mapping for the key.
 * <p>
 * A return value of {@code null} does not necessarily indicate that
 * the map contains no mapping for the key; it's also possible that the map
 * explicitly maps the key to {@code null}.
 */
public V get(Object key) {
    if (key == null)
        return getForNullKey();
    int hash = hash(key.hashCode());
    for (Entry<K, V> e = table[indexFor(hash, table.length)]; e != null; e = e.next) {
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k)))
            return e.value;
    }
    return null;
}
The above code is the same as the put() method up to if (e.hash == hash && ((k = e.key) == key ||
key.equals(k))); after this, the value object is simply returned.
As part of the work for JEP 180, there is a performance improvement for HashMap objects
where there are lots of collisions in the keys by using balanced trees rather than linked lists
to store map entries. The principal idea is that once the number of items in a hash
bucket grows beyond a certain threshold, that bucket will switch from using a
linked list of entries to a balanced tree. In the case of high hash collisions, this will
improve worst-case performance from O(n) to O(log n).
Bins (elements or nodes) of TreeNodes may be traversed and used like any others, but
additionally support faster lookup when overpopulated. However, since the vast majority of
bins in normal use are not overpopulated, checking for the existence of tree bins may be
delayed in the course of table methods.
Tree bins (i.e., bins whose elements are all TreeNodes) are ordered primarily by hashCode,
but in the case of ties, if two elements are of the same "class C implements
Comparable<C>" type, then their compareTo() method is used for ordering.
Because TreeNodes are about twice the size of regular nodes, we use them only when bins
contain enough nodes. And when they become too small (due to removal or resizing) they
are converted back to plain bins (currently: UNTREEIFY_THRESHOLD = 6). In usages
with well-distributed user hashCodes, tree bins are rarely used.
Java Objects
The following Student example (the surrounding lines are reconstructed; the original listing
was truncated) shows that println(s) and s.toString() print the same thing, because println
calls toString() internally:
class Student
{
    static int last_roll = 100;
    int roll_no;

    // Constructor
    Student()
    {
        roll_no = last_roll;
        last_roll++;
    }

    @Override
    public int hashCode()
    {
        return roll_no;
    }

    public static void main(String[] args)
    {
        Student s = new Student();
        System.out.println(s);
        System.out.println(s.toString());
    }
}
Output :
Student@64
Student@64
(64 is the hexadecimal form of the hash code 100.)
The getClass() method returns the runtime class of an object (the surrounding code is
reconstructed; the original listing was truncated, and the String contents are illustrative):
public static void main(String[] args)
{
    Object obj = new String("some text"); // any String instance
    Class c = obj.getClass();
    System.out.println("Class of Object obj is : " + c.getName());
}
Output:
Class of Object obj is : java.lang.String
Note: After loading a .class file, the JVM will create an object of
type java.lang.Class in the heap area. We can use this class object to get class-level
information. It is widely used in reflection.
5. finalize() method : This method is called just before an object is garbage collected. It is
called by the garbage collector on an object when the garbage collector determines that there
are no more references to the object. We can override the finalize() method to dispose of
system resources, perform clean-up activities and minimize memory leaks. For example,
before destroying servlet objects, the web container always calls finalize() to perform
clean-up activities for the session.
Note : the finalize method is called at most once on an object, even if that object becomes
eligible for garbage collection multiple times.
A demo (the class skeleton is reconstructed; the original listing was truncated):
public class Test
{
    public static void main(String[] args)
    {
        Test t = new Test();
        System.out.println(t.hashCode());
        t = null;
        // requesting the JVM to run the garbage collector
        System.gc();
        System.out.println("end");
    }

    @Override
    protected void finalize()
    {
        System.out.println("finalize method called");
    }
}
Output:
366712642
finalize method called
end
6. clone() : It returns a new object that is exactly the same as this object. For details,
refer to the clone() article.
7. The remaining three methods, wait(), notify() and notifyAll(), are related to concurrency.
Refer to Inter-thread Communication in Java for details.
SELECT Id, Salary FROM Employee e WHERE 2=(SELECT COUNT(DISTINCT
Salary) FROM Employee p WHERE e.Salary<=p.Salary)
SELECT Salary FROM (SELECT Salary FROM Employee ORDER BY salary DESC LIMIT 2)
AS Emp ORDER BY salary LIMIT 1;
In this solution, we first sort all salaries from the Employee table in decreasing order, so that
the 2 highest salaries come at the top of the result set. Then we take just two records by using
LIMIT 2. We do the same thing again, but this time sorting the intermediate result in ascending
order, so that the second highest salary comes at the top. Now we print that salary by using
LIMIT 1. Simple and easy, right?
SELECT TOP 1 Salary FROM ( SELECT TOP 2 Salary FROM Employee ORDER BY Salary
DESC) AS MyTable ORDER BY Salary ASC;
Here is the output of above query running on Microsoft SQL Server 2014 :
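The same sort/limit trick can be mirrored in plain Java for intuition (the sample salaries are hypothetical): take the distinct salaries, sort descending, skip the highest, and take the next.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

class SecondHighestSalary {
    // Same idea as the SQL: distinct salaries, descending order,
    // skip the highest one, take the next.
    static Optional<Integer> secondHighest(List<Integer> salaries) {
        return salaries.stream()
                .distinct()
                .sorted(Comparator.reverseOrder())
                .skip(1)
                .findFirst();
    }

    public static void main(String[] args) {
        List<Integer> salaries = Arrays.asList(3000, 5000, 4000, 5000);
        System.out.println(secondHighest(salaries).orElse(-1)); // 4000
    }
}
```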
Difference between String literal and New String object in Java
String is a special class in the Java API and has many special behaviors that are not obvious to
many programmers. To master Java, the first step is to master the String class, and one way to
explore it is to check what kinds of String-related questions are asked in Java interviews. Apart
from the usual questions like why String is final, or equals() vs the == operator, one of the most
frequently asked is: what is the difference between a String literal and a String object in Java?
At a high level both are String objects, but the main difference comes from the fact that the
new() operator always creates a new String object, while Strings created as literals are interned.
This becomes much clearer when you compare String objects created using a literal and using the
new operator, as shown in the examples below:
String a = "Java";
String b = "Java";
System.out.println(a == b); // True
Here two different objects are created and they have different references:
String c = new String("Java");
String d = new String("Java");
System.out.println(c == d); // False
Similarly when you compare a String literal with an String object created using new() operator
using == operator, it will return false, as shown below :
String e = "JDK";
String f = new String("JDK");
System.out.println(e == f); // False
In general you should use the string literal notation when possible. It is easier to read and it
gives the compiler a chance to optimize your code. By the way, any answer to this question is
incomplete until you explain String interning, so let's see that next.
That's all about this question of the difference between a String literal and a String object in
Java. Always remember that literal Strings are returned from the string pool, and Java puts them
in the pool if they are not already stored there. The difference is most obvious when you compare
two String objects using the equality operator (==). That's why it's suggested to always compare
two String objects using the equals() method and never with the == operator, because you never
know which one is coming from the pool and which one was created using the new() operator. If
you know the difference between a String object and a String literal, you can also solve
questions from Java written tests that examine this concept. It's something every Java programmer
should know.
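The interning behavior can be checked directly with String.intern(), which returns the pooled instance:

```java
class InternDemo {
    public static void main(String[] args) {
        String literal = "Java";                 // comes from the string pool
        String object  = new String("Java");     // new object on the heap

        System.out.println(literal == object);          // false - two objects
        System.out.println(literal == object.intern()); // true - intern() returns
        // the pooled instance, the same one the literal refers to
    }
}
```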
Understanding the Java
ConcurrentHashMap
Class Hierarchy
java.lang.Object
java.util.AbstractMap<K,V>
java.util.concurrent.ConcurrentHashMap<K,V>
Type Parameters:
K - the type of keys maintained by this map
V - the type of mapped values
All Implemented Interfaces: Serializable, ConcurrentMap<K,V>,
Map<K,V>
public ConcurrentHashMap()-
Creates a new, empty map with a default initial capacity (16),
load factor (0.75) and concurrencyLevel (16).
Why use ConcurrentHashMap-
We use ConcurrentHashMap when a high level of concurrency is
required. But SynchronizedMap already exists, so what advantage
does ConcurrentHashMap have over a synchronized map? Both are
thread-safe. The major advantage is that with a synchronizedMap
every write operation acquires a lock on the entire map, while with
ConcurrentHashMap the lock is only on one of the segments.
SynchronizedMap-
ConcurrentHashMap-
Let's see how synchronizedMap behaves in a multithreaded
environment. Create a class which implements Runnable, takes an
instance of Map, and inserts values into it.
package com.javainuse.helper;

import java.util.Map;

// Reconstructed sketch (the original listing was truncated): each MapHelper
// implements Runnable, holds the shared Map, and inserts values in run().
// MapHelper2, MapHelper3 and MapHelper4 are analogous.
public class MapHelper1 implements Runnable
{
    private final Map<String, String> map;

    public MapHelper1(Map<String, String> map)
    {
        this.map = map;
    }

    public void run()
    {
        map.put("1", "1");
    }
}

The two main classes (package com.javainuse.main) differ only in the map they hand to the
helpers: the first wraps a HashMap with Collections.synchronizedMap() (importing
java.util.Collections, java.util.HashMap, java.util.Map and the four MapHelper classes), while
the second creates a ConcurrentHashMap (importing java.util.concurrent.ConcurrentHashMap
instead). Each main constructs the map and starts one thread per helper.
As you can see, the table contains 16 segments. Of these, only
three are populated and the rest are null. As soon as a new value
needs to be inserted, a new segment is populated. (In earlier
versions of Java, all sixteen segments were initialized whether
populated or not, but this has been improved; note also that from
Java 8 onward segments are gone entirely and ConcurrentHashMap
locks individual bins instead.)
Difference between ConcurrentHashMap and
SynchronizedMap -
The major differences are-
SynchronizedMap: at a given time only a single thread can modify the map, blocking all
other threads, so performance is comparatively bad.
ConcurrentHashMap: multiple threads can modify the map concurrently, hence performance
is much better.
Yes, it is possible for two threads to write to the ConcurrentHashMap simultaneously.
The default implementation allows 16 threads to read and write in parallel.
But in the worst-case scenario, when two objects lie in the same segment or
same partition, a parallel write is not possible.
Interviewer : Why does ConcurrentHashMap not allow null keys and null
values ?
In simple words: in a concurrent map, a null return from get(k) is ambiguous. You cannot
tell whether the key is absent or mapped to null, and the key k might be removed between a
containsKey(k) check and the get(k) call, so the ambiguity cannot be resolved safely.
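A small sketch makes the ambiguity concrete: with a HashMap, get() returning null cannot distinguish "absent" from "mapped to null", while ConcurrentHashMap simply rejects nulls:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> plain = new HashMap<>();
        plain.put("k", null);
        // Both calls print null - "mapped to null" and "absent" look identical:
        System.out.println(plain.get("k"));      // null
        System.out.println(plain.get("absent")); // null

        Map<String, String> concurrent = new ConcurrentHashMap<>();
        try {
            concurrent.put("k", null);           // rejected outright
        } catch (NullPointerException e) {
            System.out.println("ConcurrentHashMap rejects null values");
        }
    }
}
```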
HashMap is not thread-safe and therefore cannot be utilized in multi-threaded
applications. ConcurrentHashMap was introduced to overcome this shortcoming,
and as an alternative to Hashtable and synchronized maps for greater
performance; it uses the standard hashing algorithms to generate hash codes
for storing key-value pairs. For more differences between HashMap and
ConcurrentHashMap, check the popular interview question HashMap vs
ConcurrentHashMap in Java.
No, multiple threads cannot read simultaneously from a Hashtable. The reason: the
get() method of Hashtable is synchronized, so at a time only one thread can
access it.
It is possible to achieve full concurrency for reads (all threads reading at the
same time) in ConcurrentHashMap by use of the volatile keyword.
ConcurrentHashMap's iterator behaves like a fail-safe iterator: it will not throw
ConcurrentModificationException. We have already discussed Fail Fast
Iterator vs Fail Safe Iterator.
A demo (the class skeleton is reconstructed; the original listing was truncated):
import java.util.concurrent.ConcurrentHashMap;
import java.util.Iterator;

public class FailSafeIteratorDemo
{
    public static void main(String[] args)
    {
        ConcurrentHashMap<String, String> premiumPhone = new ConcurrentHashMap<String, String>();
        premiumPhone.put("Apple", "iPhone6");
        premiumPhone.put("HTC", "HTC one");
        premiumPhone.put("Samsung", "S6");

        Iterator<String> iterator = premiumPhone.keySet().iterator();
        while (iterator.hasNext())
        {
            // modifying the map during iteration: no exception is thrown
            System.out.println(premiumPhone.get(iterator.next()));
            premiumPhone.put("Sony", "Xperia Z");
        }
    }
}
Output (iteration order may vary):
S6
HTC one
iPhone6
Difference between synchronized block and method in Java
Synchronized blocks and synchronized methods are the two ways to use the synchronized keyword in
Java and implement mutual exclusion on critical sections of code. Java is widely used to
write multi-threaded programs, which face thread-related issues like thread-safety, deadlock
and race conditions; these plague code mainly because of a poor understanding of the
synchronization mechanism provided by the language. Java provides the built-in synchronized
and volatile keywords to achieve synchronization.
The main difference between a synchronized method and a synchronized block is the choice of
lock on which the critical section is locked.
A synchronized method, depending on whether it is static or non-static, locks on
either the class-level lock or the object lock. The class-level lock is one per class and
is represented by the class literal, e.g. String.class. The object-level lock is the current
object, i.e. this. You should never mix static and non-static synchronized methods in Java.
A synchronized block, on the other hand, locks on the monitor evaluated from the expression
provided as its parameter. In the next section we will see an example of both a synchronized
method and a synchronized block to understand this difference better.
Here are some more differences between synchronized methods and blocks in Java, based on
experience and the syntactic rules of the synchronized keyword. Though both can provide the
same degree of synchronization, favoring the synchronized block over the method is considered
better Java coding practice.
1) One significant difference between a synchronized method and a block is that a synchronized
block generally reduces the scope of the lock. As the scope of a lock is inversely proportional
to performance, it's always better to lock only the critical section of code. One of the best
examples of using a synchronized block is double-checked locking in the Singleton pattern, where
instead of locking the whole getInstance() method we lock only the critical section of code
which creates the Singleton instance. This improves performance drastically because locking is
only required once or twice.
2) A synchronized block provides granular control over the lock, as you can use any arbitrary
lock to provide mutual exclusion to the critical section. A synchronized method, by contrast,
always locks either on the current object, represented by the this keyword, or on the
class-level lock if it is static.
3) With a synchronized method, the lock is acquired when the thread enters the method and
released when it leaves, either normally or by throwing an exception. With a synchronized
block, the thread acquires the lock when it enters the block and releases it when it leaves
the block.
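The double-checked locking idiom mentioned in point 1 can be sketched as follows (the Singleton class here is illustrative):

```java
// Double-checked locking: only the first-time initialization path takes the
// lock; later calls read the volatile field without any locking.
class Singleton {
    private static volatile Singleton instance; // volatile: safe publication

    private Singleton() { }

    static Singleton getInstance() {
        if (instance == null) {                 // first check, no lock
            synchronized (Singleton.class) {    // lock only when maybe needed
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```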
/**
 * Java class to demonstrate use of synchronized method and block in Java
 */
public class SynchronizationExample {

    public synchronized void lockedByThis() {
        System.out.println("This synchronized method is locked by the current instance of the object, i.e. this");
    }

    public static synchronized void lockedByClassLock() {
        System.out.println("This static synchronized method is locked by the class level lock of this class, i.e. SynchronizationExample.class");
    }

    public void lockedBySynchronizedBlock() {
        System.err.println("This line is executed without locking");

        Object obj = String.class; // class level lock of the String class
        synchronized (obj) {
            System.out.println("synchronized block, locked by the lock represented by the obj variable");
        }
    }
}
That's all on the difference between synchronized methods and blocks in Java. Favoring the
synchronized block is one of the Java best practices to follow, as it reduces the scope of the
lock and improves performance. Synchronized methods are easier to use, but mixing non-static
and static synchronized methods creates bugs, since the two lock on different monitors; if you
use them to synchronize access to a shared resource, it will most likely break.
Deadlock Prevention
1. Lock Ordering
2. Lock Timeout
3. Deadlock Detection
Lock Ordering
Deadlock occurs when multiple threads need the same locks but obtain them
in different order.
If you make sure that all locks are always taken in the same order by any
thread, deadlocks cannot occur. Look at this example:
Thread 1:
lock A
lock B
Thread 2:
wait for A
lock C (when A locked)
Thread 3:
wait for A
wait for B
wait for C
If a thread, like Thread 3, needs several locks, it must take them in the
decided order. It cannot take a lock later in the sequence until it has obtained
the earlier locks.
For instance, neither Thread 2 nor Thread 3 can lock C until they have locked A
first. Since Thread 1 holds lock A, Thread 2 and Thread 3 must first wait until lock A is
unlocked. Then they must succeed in locking A before they can attempt to
lock B or C.
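The ordering rule can be sketched in Java (the Account class and ids are illustrative): before locking two monitors, sort them by an agreed key so every thread acquires them in the same order, even when two transfers run in opposite directions.

```java
class Account {
    final int id;
    int balance;
    Account(int id, int balance) { this.id = id; this.balance = balance; }
}

class TransferDemo {
    static void transfer(Account from, Account to, int amount) {
        // order the two monitors by account id before locking: every thread
        // takes them in the same sequence, so no deadlock is possible
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Account a = new Account(1, 1000);
        Account b = new Account(2, 1000);
        // opposite-direction transfers: would deadlock without the ordering
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(a, b, 1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(b, a, 1); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(a.balance + b.balance); // 2000 - no deadlock, no lost update
    }
}
```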
Lock Timeout
Another deadlock prevention mechanism is to put a timeout on lock attempts
meaning a thread trying to obtain a lock will only try for so long before giving
up. If a thread does not succeed in taking all necessary locks within the given
timeout, it will backup, free all locks taken, wait for a random amount of time
and then retry. The random amount of time waited serves to give other
threads trying to take the same locks a chance to take all locks, and thus let
the application continue running without locking.
Here is an example of two threads trying to take the same two locks in
different order, where the threads back up and retry:
Thread 1 locks A
Thread 2 locks B

Thread 1 attempts to lock B but is blocked
Thread 2 attempts to lock A but is blocked

Thread 1's lock attempt on B times out
Thread 1 backs up and releases A
Thread 1 waits randomly (e.g. 257 millis) before retrying

Thread 2's lock attempt on A times out
Thread 2 backs up and releases B
Thread 2 waits randomly (e.g. 43 millis) before retrying
In the above example Thread 2 will retry taking the locks about 200 millis
before Thread 1 and will therefore likely succeed at taking both locks. Thread
1 will then wait, already trying to take lock A. When Thread 2 finishes, Thread
1 will be able to take both locks too (unless Thread 2 or another thread takes
the locks in between).
An issue to keep in mind is, that just because a lock times out it does not
necessarily mean that the threads had deadlocked. It could also just mean
that the thread holding the lock (causing the other thread to time out) takes a
long time to complete its task.
Additionally, if enough threads compete for the same resources, they still risk
trying to take the locks at the same time again and again, even with timeouts
and backing off. This may not occur with 2 threads each waiting between 0
and 500 millis before retrying, but with 10 or 20 threads the situation is
different: the likelihood of two threads waiting the same time before
retrying (or close enough to cause problems) is a lot higher.
A problem with the lock timeout mechanism is that it is not possible to set a
timeout for entering a synchronized block in Java. You will have to create a
custom lock class or use one of the Java 5 concurrency constructs in the
java.util.concurrent package. Writing custom locks isn't difficult, but it is
outside the scope of this text. Later texts in the Java concurrency trail will
cover custom locks.
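While synchronized offers no timeout, java.util.concurrent's ReentrantLock does; a rough sketch of the timeout idea:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class LockTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();

        lock.lock(); // main thread holds the lock for the whole demo
        Thread contender = new Thread(() -> {
            try {
                // try for at most 50 ms instead of blocking forever
                if (lock.tryLock(50, TimeUnit.MILLISECONDS)) {
                    try { System.out.println("got the lock"); }
                    finally { lock.unlock(); }
                } else {
                    System.out.println("timed out, backing off"); // this path runs here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        contender.start();
        contender.join();
        lock.unlock();
    }
}
```

In a full back-off scheme the contender would release any locks it already holds, wait a random amount of time, and retry.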
Deadlock Detection
Deadlock detection is a heavier deadlock prevention mechanism aimed at
cases in which lock ordering isn't possible, and lock timeout isn't feasible.
Every time a thread takes a lock it is noted in a data structure (map, graph
etc.) of threads and locks. Additionally, whenever a thread requests a lock
this is also noted in this data structure.
When a thread requests a lock but the request is denied, the thread can
traverse the lock graph to check for deadlocks. For instance, if a Thread A
requests lock 7, but lock 7 is held by Thread B, then Thread A can check if
Thread B has requested any of the locks Thread A holds (if any). If Thread B
has requested so, a deadlock has occurred (Thread A having taken lock 1,
requesting lock 7, Thread B having taken lock 7, requesting lock 1).
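The graph walk described above can be sketched as a tiny wait-for-graph checker (all names hypothetical): an edge runs from a waiting thread to the thread holding the lock it wants, and a deadlock is a cycle that leads back to the requester.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class WaitForGraph {
    // waitsFor.get(t) = the thread holding the lock that thread t is requesting
    private final Map<String, String> waitsFor = new HashMap<>();

    void addWait(String waiter, String holder) { waitsFor.put(waiter, holder); }

    // Follow the chain from start; if it leads back to start, it's a deadlock.
    boolean wouldDeadlock(String start) {
        Set<String> seen = new HashSet<>();
        String current = waitsFor.get(start);
        while (current != null && seen.add(current)) {
            if (current.equals(start)) return true;
            current = waitsFor.get(current);
        }
        return current != null && current.equals(start);
    }
}

class DeadlockDetectionDemo {
    public static void main(String[] args) {
        WaitForGraph g = new WaitForGraph();
        g.addWait("A", "B"); // Thread A waits for a lock held by Thread B
        g.addWait("B", "A"); // Thread B waits for a lock held by Thread A
        System.out.println(g.wouldDeadlock("A")); // true - cycle A -> B -> A
    }
}
```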
volatile
With non-volatile variables there are no guarantees about when the Java
Virtual Machine (JVM) reads data from main memory into CPU caches, or
writes data from CPU caches to main memory. This can cause several
problems which I will explain in the following sections.
Imagine two threads sharing an instance of a class like this (reconstructed; the original
listing was truncated):

public class SharedObject {
    public int counter = 0;
}
Imagine too, that only Thread 1 increments the counter variable, but both
Thread 1 and Thread 2 may read the counter variable from time to time.
The problem with threads not seeing the latest value of a variable because it
has not yet been written back to main memory by another thread, is called a
"visibility" problem. The updates of one thread are not visible to other threads.
In the scenario given above, where one thread (T1) modifies the counter, and
another thread (T2) reads the counter (but never modifies it), declaring
the counter variable volatile is enough to guarantee visibility for T2 of
writes to the counter variable.
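A minimal sketch of the guarantee (class name hypothetical): declaring the shared flag volatile ensures the reader eventually sees the writer's update, so the spin loop below terminates.

```java
// Visibility sketch: the volatile write by the main thread is guaranteed
// to become visible to the reader thread.
class VolatileDemo {
    static volatile boolean dataReady = false; // volatile: reads/writes go to main memory

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!dataReady) {
                // spin until the writer's update becomes visible
            }
            System.out.println("reader saw the update");
        });
        reader.start();

        Thread.sleep(10);    // give the reader time to start spinning
        dataReady = true;    // volatile write: visible to the reader's next read
        reader.join();       // terminates because the write becomes visible
    }
}
```

Without volatile, the reader could in principle spin forever on a stale cached value.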
Locks in Java
A Simple Lock
Lock Reentrance
Lock Fairness
Calling unlock() From a finally-clause
A Simple Lock
Let's start out by looking at a synchronized block of Java code:
Here is a simple Lock implementation:
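The two code listings this passage refers to appear to have been lost in extraction. In the style of the surrounding text, a synchronized counter and a simple wait()/notify()-based Lock would look roughly like this (field and method names assumed; the Lock class here is a local sketch, not java.util.concurrent.locks.Lock):

```java
// A synchronized block guarding a critical section:
class Counter {
    private int count = 0;

    public int inc() {
        synchronized (this) { // only one thread at a time past this point
            return ++count;
        }
    }
}

// A simple (non-reentrant) Lock built on wait()/notify():
class Lock {
    private boolean isLocked = false;

    public synchronized void lock() throws InterruptedException {
        while (isLocked) {
            wait();           // park until the holder calls unlock()
        }
        isLocked = true;
    }

    public synchronized void unlock() {
        isLocked = false;
        notify();             // wake one waiting thread, if any
    }
}
```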
When the thread is done with the code in the critical section (the code
between lock() and unlock()), the thread calls unlock().
Executing unlock() sets isLocked back to false, and notifies
(awakens) one of the threads waiting in the wait() call in
the lock() method, if any.
Lock Reentrance
Synchronized blocks in Java are reentrant. This means, that if a Java thread
enters a synchronized block of code, and thereby take the lock on the monitor
object the block is synchronized on, the thread can enter other Java code
blocks synchronized on the same monitor object. Here is an example:
public class Reentrant2 {
    Lock lock = new Lock();

    public void outer() {
        lock.lock();
        inner();
        lock.unlock();
    }

    public void inner() {
        lock.lock();
        // do something
        lock.unlock();
    }
}
It is the condition inside the while loop (spin lock) that determines if a thread is
allowed to exit the lock() method or not. Currently the condition is
that isLocked must be false for this to be allowed, regardless of what
thread locked it.
Here is the revised unlock() method (the surrounding lines were lost in extraction; the check
on the locking thread is reconstructed from the text's description):

public synchronized void unlock(){
    if(Thread.currentThread() == this.lockedBy){
        lockedCount--;
        if(lockedCount == 0){
            isLocked = false;
            notify();
        }
    }
}
Notice how the while loop (spin lock) now also takes the thread that locked
the Lock instance into consideration. If either the lock is unlocked
(isLocked = false) or the calling thread is the thread that locked
the Lock instance, the while loop will not execute, and the thread
calling lock() will be allowed to exit the method.
Additionally, we need to count the number of times the lock has been locked
by the same thread. Otherwise, a single call to unlock() will unlock the
lock, even if the lock has been locked multiple times. We don't want the lock to
be unlocked until the thread that locked it, has executed the same amount
of unlock() calls as lock() calls.
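Putting the pieces together, a reentrant version of the Lock might look like this (a sketch following the text's description; field names assumed):

```java
class Lock {
    boolean isLocked = false;
    Thread lockedBy = null;
    int lockedCount = 0;

    public synchronized void lock() throws InterruptedException {
        Thread callingThread = Thread.currentThread();
        // spin lock: wait only if some *other* thread holds the lock
        while (isLocked && lockedBy != callingThread) {
            wait();
        }
        isLocked = true;
        lockedCount++;          // count nested lock() calls
        lockedBy = callingThread;
    }

    public synchronized void unlock() {
        if (Thread.currentThread() == this.lockedBy) {
            lockedCount--;
            if (lockedCount == 0) { // fully released only after matching unlocks
                isLocked = false;
                notify();
            }
        }
    }
}
```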
Lock Fairness
Java's synchronized blocks make no guarantees about the sequence in
which threads trying to enter them are granted access. Therefore, if many
threads are constantly competing for access to the same synchronized block,
there is a risk that one or more of the threads are never granted access and
that access is always granted to other threads. This is called starvation. To avoid
this, a Lock should be fair. Since the Lock implementations shown in this
text use synchronized blocks internally, they do not guarantee fairness.
Starvation and fairness are discussed in more detail in the text Starvation
and Fairness.
    lock.lock();
    try{
        //do critical section code, which may throw exception
    } finally {
        lock.unlock();
    }
}
Thread A and B must have a reference to a shared MySignal instance for the
signaling to work. If threads A and B have references to different MySignal
instances, they will not detect each other's signals. The data to be processed
can be located in a shared buffer separate from the MySignal instance.
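A minimal MySignal sketch, consistent with the hasDataToProcess() call used in the busy-wait loop below:

```java
public class MySignal{
    protected boolean hasDataToProcess = false;

    // synchronized so that reads and writes of the flag are safely published
    public synchronized boolean hasDataToProcess(){
        return this.hasDataToProcess;
    }

    public synchronized void setHasDataToProcess(boolean hasData){
        this.hasDataToProcess = hasData;
    }
}
```

Thread A would call setHasDataToProcess(true) when data is ready, and thread B would poll hasDataToProcess() as shown next.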
Busy Wait
Thread B which is to process the data is waiting for data to become available
for processing. In other words, it is waiting for a signal from thread A which
causes hasDataToProcess() to return true. Here is the loop that thread B is
running in, while waiting for this signal:
...
while(!sharedSignal.hasDataToProcess()){
    //do nothing... busy waiting
}
Notice how the while loop keeps executing until hasDataToProcess() returns
true. This is called busy waiting. The thread is busy while waiting.
Java has a built-in wait mechanism that enables threads to become inactive
while waiting for signals. The class java.lang.Object defines three methods,
wait(), notify(), and notifyAll(), to facilitate this.
A thread that calls wait() on any object becomes inactive until another thread
calls notify() on that object. In order to call either wait() or notify(), the calling
thread must first obtain the lock on that object. In other words, the calling
thread must call wait() or notify() from inside a synchronized block. Here is a
modified version of MySignal called MyWaitNotify that uses wait() and notify().
public class MonitorObject{
}
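The MyWaitNotify class the text refers to would, based on the description of doWait() and doNotify() that follows, look roughly like this (a minimal MonitorObject is repeated here so the snippet is self-contained):

```java
class MonitorObject{ }

public class MyWaitNotify{
    MonitorObject myMonitorObject = new MonitorObject();

    public void doWait(){
        synchronized(myMonitorObject){
            try{
                myMonitorObject.wait();   // releases the lock while waiting
            } catch(InterruptedException e){
                // interruption handling omitted in this sketch
            }
        }
    }

    public void doNotify(){
        synchronized(myMonitorObject){
            myMonitorObject.notify();     // wakes one thread waiting on the monitor
        }
    }
}
```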
The waiting thread would call doWait(), and the notifying thread would call
doNotify(). When a thread calls notify() on an object, one of the threads
waiting on that object is awakened and allowed to execute. There is also a
notifyAll() method that will wake all threads waiting on a given object.
As you can see, both the waiting and the notifying thread call wait() and notify()
from within a synchronized block. This is mandatory! A thread cannot call
wait(), notify() or notifyAll() without holding the lock on the object the method is
called on. If it does, an IllegalMonitorStateException is thrown.
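This rule is easy to demonstrate: calling notify() outside a synchronized block on the monitor fails at runtime, while the same call inside one succeeds. (MonitorDemo and its helper method are illustrative names, not from the original text.)

```java
public class MonitorDemo {
    // returns true if notify() without holding the monitor's lock throws
    static boolean notifyWithoutLockThrows() {
        Object monitor = new Object();
        try {
            monitor.notify();   // calling thread does not hold monitor's lock
            return false;
        } catch (IllegalMonitorStateException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(notifyWithoutLockThrows());   // prints "true"

        Object monitor = new Object();
        synchronized (monitor) {
            monitor.notify();   // legal: the lock is held inside the synchronized block
        }
    }
}
```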
But, how is this possible? Wouldn't the waiting thread keep the lock on the
monitor object (myMonitorObject) as long as it is executing inside a
synchronized block? Will the waiting thread not block the notifying thread from
ever entering the synchronized block in doNotify()? The answer is no. Once a
thread calls wait() it releases the lock it holds on the monitor object. This
allows other threads to call wait() or notify() too, since these methods must be
called from inside a synchronized block.
Once a thread is awakened it cannot exit the wait() call until the thread calling
notify() has left its synchronized block. In other words: The awakened thread
must reobtain the lock on the monitor object before it can exit the wait() call,
because the wait call is nested inside a synchronized block. If multiple threads
are awakened using notifyAll() only one awakened thread at a time can exit
the wait() method, since each thread must obtain the lock on the monitor
object in turn before exiting wait().
Missed Signals
The methods notify() and notifyAll() do not save the method calls to them in
case no threads are waiting when they are called. The notify signal is then just
lost. Therefore, if a thread calls notify() before the thread to signal has called
wait(), the signal will be missed by the waiting thread. This may or may not be
a problem, but in some cases this may result in the waiting thread waiting
forever, never waking up, because the signal to wake up was missed.
To avoid losing signals they should be stored inside the signal class. In the
MyWaitNotify example the notify signal should be stored in a member variable
inside the MyWaitNotify instance. Here is a modified version of MyWaitNotify
that does this:
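The listing for that modified version is missing from this copy; based on the description that follows (a wasSignalled member set in doNotify() and checked in doWait()), it would look like this:

```java
class MonitorObject{ }

public class MyWaitNotify2{
    MonitorObject myMonitorObject = new MonitorObject();
    boolean wasSignalled = false;

    public void doWait(){
        synchronized(myMonitorObject){
            // only wait if no signal arrived since the previous doWait() call
            if(!wasSignalled){
                try{
                    myMonitorObject.wait();
                } catch(InterruptedException e){
                    // interruption handling omitted in this sketch
                }
            }
            // clear the signal and continue running
            wasSignalled = false;
        }
    }

    public void doNotify(){
        synchronized(myMonitorObject){
            wasSignalled = true;   // store the signal in case no thread is waiting yet
            myMonitorObject.notify();
        }
    }
}
```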
Notice how the doNotify() method now sets the wasSignalled variable to true
before calling notify(). Also, notice how the doWait() method now checks the
wasSignalled variable before calling wait(). In fact it only calls wait() if no
signal was received in between the previous doWait() call and this one.
The Java singleton is scoped by the Java class loader, the Spring singleton is scoped by
the container context.
Which basically means that, in Java, you can be sure a singleton is truly a singleton only
within the context of the class loader which loaded it. Other class loaders would be capable
of creating another instance of it (provided the class loaders are not in the same class
loader hierarchy), despite all your efforts in code to try to prevent it.
Similarly, in Spring, if you load your singleton class in two different contexts, you can
again break the singleton concept.
So, in summary, Java considers something a singleton if it cannot create more than one
instance of that class within a given class loader, whereas Spring would consider something
a singleton if it cannot create more than one instance of a class within a given
container/context.
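To make the class-loader-scoped notion concrete, a classic Java singleton looks like this (AppConfig is just an illustrative name):

```java
public class AppConfig {
    // one instance per class loader, created when the class is initialized
    private static final AppConfig INSTANCE = new AppConfig();

    // private constructor prevents instantiation from outside the class
    private AppConfig() { }

    public static AppConfig getInstance() {
        return INSTANCE;
    }
}
```

Spring's singleton scope, by contrast, is enforced by the container rather than the class: the bean class can be a perfectly ordinary class with a public constructor, and Spring still hands out one instance per container context.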
Spring bean factory is responsible for managing the life cycle of beans created through the
Spring container.
Spring bean factory controls the creation and destruction of beans. To execute some
custom code, it provides the callback methods, which can be categorized broadly into four
groups:
1. InitializingBean and DisposableBean callback interfaces
2. *Aware interfaces for specific behavior
3. Custom init() and destroy() methods in bean configuration file
4. @PostConstruct and @PreDestroy annotations
InitializingBean.java
void afterPropertiesSet() throws Exception;
This is not a preferable way to initialize the bean because it tightly couples your bean
class with the Spring container. A better approach is to use the “init-method” attribute in the
bean definition in the applicationContext.xml file.
DisposableBean.java
void destroy() throws Exception;
A sample bean implementing above interfaces would look like this:
package com.howtodoinjava.task;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;
public class DemoBean implements InitializingBean, DisposableBean
{
    @Override
    public void afterPropertiesSet() throws Exception {
        //bean initialization code
    }

    @Override
    public void destroy() throws Exception {
        //bean destruction code
    }
}
Spring offers a range of *Aware interfaces that allow beans to indicate to the container
that they require a certain infrastructure dependency. Each interface requires you to
implement a method to inject the dependency into the bean.
ApplicationContextAware
    void setApplicationContext(ApplicationContext applicationContext) throws BeansException;
    Callback to be notified of the ApplicationContext that this object runs in.

ApplicationEventPublisherAware
    void setApplicationEventPublisher(ApplicationEventPublisher applicationEventPublisher);
    Set the ApplicationEventPublisher that this object runs in.

BeanClassLoaderAware
    void setBeanClassLoader(ClassLoader classLoader);
    Callback that supplies the class loader to a bean.
Java program to show usage of aware interfaces to control Spring bean life cycle.
DemoBean.java
package com.howtodoinjava.task;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanClassLoaderAware;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.BeanNameAware;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.ApplicationEventPublisherAware;
import org.springframework.context.MessageSource;
import org.springframework.context.MessageSourceAware;
import org.springframework.context.ResourceLoaderAware;
import org.springframework.context.weaving.LoadTimeWeaverAware;
import org.springframework.core.io.ResourceLoader;
import org.springframework.instrument.classloading.LoadTimeWeaver;
import org.springframework.jmx.export.notification.NotificationPublisher;
import org.springframework.jmx.export.notification.NotificationPublisherAware;
public class DemoBean implements ApplicationContextAware,
    ApplicationEventPublisherAware, BeanClassLoaderAware, BeanFactoryAware,
    BeanNameAware, LoadTimeWeaverAware, MessageSourceAware,
    NotificationPublisherAware, ResourceLoaderAware
{
    @Override
    public void setResourceLoader(ResourceLoader resourceLoader) {
    }

    @Override
    public void setNotificationPublisher(NotificationPublisher notificationPublisher) {
    }

    @Override
    public void setMessageSource(MessageSource messageSource) {
    }

    @Override
    public void setLoadTimeWeaver(LoadTimeWeaver loadTimeWeaver) {
    }

    @Override
    public void setBeanName(String beanName) {
    }

    @Override
    public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
    }

    @Override
    public void setBeanClassLoader(ClassLoader classLoader) {
    }

    @Override
    public void setApplicationEventPublisher(ApplicationEventPublisher applicationEventPublisher) {
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext)
        throws BeansException {
    }
}
beans.xml
<beans>
<bean id="demoBean" class="com.howtodoinjava.task.DemoBean"
init-method="customInit"
destroy-method="customDestroy"></bean>
</beans>
Whereas the global definition is given as below. These methods will be invoked for all bean
definitions given under the <beans> tag. They are useful when you have a pattern of
defining common method names such as init() and destroy() for all your beans
consistently. This feature saves you from mentioning the init and destroy method
names for all beans individually.
<beans default-init-method="customInit" default-destroy-method="customDestroy">
<bean id="demoBean" class="com.howtodoinjava.task.DemoBean"></bean>
</beans>
DemoBean.java
package com.howtodoinjava.task;
public class DemoBean
{
    public void customInit() {
        //initialization code
    }

    public void customDestroy() {
        //destruction code
    }
}
From Spring 2.5 onwards, you can also use annotations to specify life cycle methods,
using the @PostConstruct and @PreDestroy annotations.
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
public class DemoBean
{
    @PostConstruct
    public void customInit() {
        //initialization code
    }

    @PreDestroy
    public void customDestroy() {
        //destruction code
    }
}
So this is all about the Spring bean life cycle inside the Spring container. Remember the
given types of life cycle events; this is a commonly asked Spring interview question.
The life cycle of a Spring bean is easy to understand. When a bean is instantiated, it
may be required to perform some initialization to get it into a usable state. Similarly,
when the bean is no longer required and is removed from the container, some cleanup
may be required.
Though there is a list of activities that take place behind the scenes between the
time of bean instantiation and its destruction, this chapter will discuss only the two
important bean life cycle callback methods, which are required at the time of bean
initialization and its destruction.
To define setup and teardown for a bean, we simply declare the <bean>
with init-method and/or destroy-method parameters. The init-method attribute
specifies a method that is to be called on the bean immediately upon instantiation.
Similarly, destroy-method specifies a method that is called just before a bean is
removed from the container.
Initialization callbacks
The org.springframework.beans.factory.InitializingBean interface specifies a single
method −
void afterPropertiesSet() throws Exception;
Thus, you can simply implement the above interface and initialization work can be
done inside afterPropertiesSet() method as follows −
public class ExampleBean implements InitializingBean {
    public void afterPropertiesSet() {
        // do some initialization work
    }
}
Destruction callbacks
The org.springframework.beans.factory.DisposableBean interface specifies a single
method −
void destroy() throws Exception;
Thus, you can simply implement the above interface and finalization work can be done
inside destroy() method as follows −
public class ExampleBean implements DisposableBean {
    public void destroy() {
        // do some destruction work
    }
}
If you are using Spring's IoC container in a non-web application environment, for
example in a rich client desktop environment, you should register a shutdown hook with
the JVM. Doing so ensures a graceful shutdown and calls the relevant destroy methods on
your singleton beans so that all resources are released.
It is recommended that you do not use the InitializingBean or DisposableBean
callbacks, because XML configuration gives much flexibility in terms of naming your
method.
Example
Let us have a working Eclipse IDE in place and take the following steps to create a
Spring application −
Step    Description
2       Add required Spring libraries using Add External JARs option as explained in the Spring Hello World Example chapter.
5       The final step is to create the content of all the Java files and Bean Configuration file and run the application as explained below.
import org.springframework.context.support.AbstractApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
Once you are done creating the source and bean configuration files, let us run the
application. If everything is fine with your application, it will print the following message
−
Bean is going through init.
Your Message : Hello World!
Bean will destroy now.