Interview Questions Archana
I work in the Survey domain and have good experience with the agile process. I worked for "Annik Technologies Services Pvt Ltd", which was later acquired by Capgemini.
As we follow the agile methodology, we have a daily standup meeting, which is conducted by the scrum master. In that meeting, we give updates on the assigned work: what exactly we are doing, how much work is done, how much work is pending, and whether we need any clarification from anyone in the team. After the call, the scrum master will set up a meeting and we will get the clarifications.
1. Apart from this, I do test case preparation and test case execution; while executing, if we find any defects, we log them in JIRA and retest until they are fixed.
2. Attending the defect triage meetings.
3. Attending all sprint-related meetings like the sprint grooming session, sprint planning meeting and sprint retrospective meeting.
4. After sprint execution, we have the sprint retrospective meeting.
5. Sending the daily status to the manager and the client at the end of the day.
These are my roles and responsibilities.
<<Test case: A test case is an idea of the test engineer to test something, based on the requirements.
<<Testing Process:
As we follow the agile methodology, for every sprint we initially have the 'Sprint Planning Meeting', where we provide estimations, and then we start writing test cases for the finalized user stories. We are a 12-member team: 8 developers and 4 testers. I was one of the senior resources in the project; we identify the testing scope and the areas to be tested, and then write the test cases. We then have a peer review of the written test cases with other team members, and another review with the BA, so that if we miss any scenarios, the BA asks us to include them. Once we get the build, we execute the test cases. If we find any defects, we raise them in JIRA and attend the defect triage calls to prioritize them; if there are no defects, we skip the call. Once sprint testing is done, we do a full regression; the regression scope is usually finalized at the time of scenario identification. We run the regression pack. Once it is done, we promote the code to the UAT environment, do a sanity check, and inform the client to continue their testing. Then we promote the code to pre-prod and then prod, and start the next sprint activities in parallel.
What is your project?
All my projects are related to the 'Survey' domain, and all have similar functionalities, like:
Creating a survey, adding questions, and choosing recipients, all in one interface.
Creating conditional questions, which appear only when the user answers other questions.
Restricting a survey so only specific survey users can take it, and sending invitations to those users simultaneously. Alternatively, making the survey public so that any user can take it, even anonymous users (users who have not logged in to the system).
Setting up a schedule to automatically assign a survey to users and to limit how often the same user can take a survey.
Customizing the look and feel of survey questionnaires.
Saving anonymous survey responses.
Converting survey responses to numerical scores and viewing them on scorecards.
Deactivating a survey for maintenance, or retiring it without deleting it.
2. It (the waterfall model) is used when the requirements are stable; in the middle of development, intermittent changes are not allowed.
3. Mostly, small and simple applications follow this development cycle.
To overcome this, agile was introduced, where everything is dynamic and both DEV and QA work in parallel to deliver the project, and changes are allowed during development.
Here developers and testers work together as a team; this team is called an agile team (an agile team is nothing but a collection of people like developers, testers, and a manager).
Both development and testing activities are concurrent, unlike the waterfall model. Here we have the SCRUM meeting, which is nothing but the daily standup meeting, and we also have various meetings like the sprint grooming session, sprint planning meeting and sprint retrospective meeting. By following all these practices, we have clear communication with all the stakeholders to deliver quality applications, and changes are allowed according to the business changes.
Sprint grooming session (also referred to as backlog refinement) is a recurring event for agile product development teams. The primary purpose of a backlog grooming session is to ensure that the user stories in the product backlog for the next sprint are prepared for sprint planning. Regular backlog grooming sessions also help ensure the right stories are prioritized.
Sprint planning meeting: the Dev and QA teams give estimations for each and every user story. We give estimations using the ABACUS model, a kind of 'T-shirt' size model like Small, Medium and Large. We have A, B and C categories: for critical user stories we give A, which means 7 PDs (person days) - 3 days for test design and 4 days for test execution; B means Medium, 4 PDs - 1 day for test design and 3 days for execution; C means Easy, 1 PD - 0.5 days for test design and 0.5 days for test execution. Roles: conducted by the Scrum master, with the Product Manager and the Scrum team (dev and QA).
Sprint retrospective meeting is nothing but a feedback meeting, in which we discuss what went well and what did not go well, so that improvements are carried forward to the next sprint and mistakes are not repeated in the next sprints. Roles: conducted by the Scrum master, after the sprint execution.
<<Use Case / User Story: A use case / user story describes the functionality in terms of actor, action, and response - which role will perform the action and what the response is. All the requirements are mentioned in terms of acceptance criteria.
<As a [role], I want [objective], so I [benefit]>. E.g., As an Admin user, I want to update the details,
<<Epic: Epics are large pieces of work that can be broken down into a number of smaller tasks (called stories).
<<Sanity Testing: After getting the build from the Dev team, the QA team conducts an overall check on the released build to see whether it is proper or not.
Usually, we verify:
1. Whether at least one happy path scenario is working fine.
2. Whether one can navigate to all the pages of the application.
3. Whether all the important features are available.
<<Smoke Testing: just before releasing the build to the testers, the developers check whether the build is proper or not before sending it to the QA team; that is known as smoke testing.
Nowadays, in most companies even testers call sanity "smoke", but smoke is done by developers and sanity is done by testers. What we call it depends on the company.
Smoke testing is done by both developers and testers, whereas sanity testing is done by testers.
Smoke testing verifies the critical functionalities of the system, whereas sanity testing verifies new functionality like bug fixes.
Smoke testing is a subset of acceptance testing, whereas sanity testing is a subset of regression testing.
Smoke testing is documented or scripted, whereas sanity testing isn't.
Smoke testing verifies the entire system end to end, whereas sanity testing verifies only a particular component.
Which test comes first, smoke or sanity?
Smoke tests are performed first, followed by sanity tests. Smoke testing is performed during the early phases of the software development life cycle (SDLC), while sanity testing is performed during the later phases.
The QA team may have to run sanity, smoke, and regression tests on their software build depending on the testing requirements and time constraints. In such cases, smoke tests are performed first, followed by sanity testing, and then regression testing is scheduled based on the time available. Smoke testing takes place as soon as the build is installed, whereas sanity testing takes place once the problem fixes are completed.
Smoke vs Sanity
Basis of testing: The major goal of smoke testing is to ensure that the newly generated build is stable enough to withstand further rigorous testing. The major goal of sanity testing is to determine the system's rationality and correctness, to ensure that the proposed functionality performs as intended.
Performed on: Smoke testing is first performed on the initial build. Sanity testing is performed on a stable build or for the new features in the software.
Coverage: Smoke testing covers end-to-end basic functionalities of the system. Sanity testing covers specific modules in which code changes have been made.
1. Smoke testing means to verify (at a basic level) that the implementations done in a build are working fine; sanity testing means to verify that the newly added functionalities, bug fixes etc. are working fine.
2. Smoke testing is the first testing on the initial build; sanity testing is done when the build is relatively stable.
A test case is nothing but an idea of the test engineer to test something based on the customer requirements.
Test case fields: Test case ID, Test case description, Precondition, Test data, User action, Expected result, Actual result, Status (Pass/Fail).
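As a rough illustration, one test case with these fields might look like the following hypothetical login example in Python (the IDs, data and results are assumptions, not from an actual project):
test_case = {
    "test_case_id": "TC_LOGIN_001",
    "description": "Verify login with valid credentials",
    "precondition": "User account exists and is active",
    "test_data": {"username": "qa_user", "password": "Valid@123"},
    "user_action": "Enter username and password, click Login",
    "expected_result": "User lands on the home page",
    "actual_result": "User lands on the home page",
    "status": "Pass",
}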
Once QA finds a defect, QA creates the defect with the "Open" status and assigns it to the respective developer. The developer is not going to fix the defect right away, because they will first check whether it is really a defect or not.
If it is not a defect, the developer writes appropriate comments in the "Comments" field and assigns it back to QA with the "Rejected" or "Not a defect" status; if the same defect has already been raised by another QA member, they set the status to "Duplicate".
If it is a valid defect, the developer starts work on it and changes the status to "DEV In Progress"; once the fix is done, the developer assigns the defect to QA with the "Ready to Retest" status.
Once QA starts working on it, QA changes the status to "Retest In Progress".
If the fix is working fine, QA changes the status to "Retest Passed".
If it fails, QA changes the status to "Retest Failed", then sets the status to "Reopen" and assigns it back to the developer for a further fix, with appropriate comments and screenshots.
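A minimal sketch of this lifecycle as a data structure, using the status names above (the transition map is illustrative, not an export of a real JIRA workflow):
allowed_transitions = {
    "Open": ["DEV In Progress", "Rejected", "Not a defect", "Duplicate"],
    "DEV In Progress": ["Ready to Retest"],
    "Ready to Retest": ["Retest In Progress"],
    "Retest In Progress": ["Retest Passed", "Retest Failed"],
    "Retest Failed": ["Reopen"],
    "Reopen": ["DEV In Progress"],
}

def can_move(current_status, new_status):
    # True only if the move follows the lifecycle described above
    return new_status in allowed_transitions.get(current_status, [])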
By using the Requirements Traceability Matrix (RTM), we can ensure that all the requirements are covered as part of testing and test case writing, because it holds the linking information: whether each acceptance criterion is covered by test cases, and whether any defects were created for that requirement. It gives clear information on how many test cases and defects exist for each requirement.
In some companies, random testing is finally done as well; even that falls under the category of regression testing.
<<Regression Scope:
As I mentioned earlier, in the first week we do sprint testing and in the second week we do regression testing. We identify the scope for each user story and its related functionalities, and on that basis we prepare the regression scope, deciding whether we need to run the full regression or a partial regression depending on the user story. Before sprint planning we already know which user stories are going into the sprint, so we prepare the regression pack accordingly.
What is <<retesting?
It is a type of testing in which one performs testing on the same functionality again and again, with multiple sets of data, to check whether it is working fine or not.
What is <<ad-hoc testing?
It is a type of testing in which we test in our own style, after understanding the requirements very clearly. Generally, this type of testing is encouraged after formal testing.
We can expect more defects in this testing because we test the application with random scenarios.
What is <<exploratory testing? It is a type of testing in which one performs testing on the application without having knowledge of the requirements, just by exploring the functionality. The main intention of conducting this testing is to understand and learn about the application.
Usually, we try to understand the application behavior by reaching each and every corner of the application.
<<Methods of testing?
1. <<BBT - Black Box Testing: It is a method of testing in which one performs testing only on the functional part (UI - user interface) of an application, without having knowledge of the structural part.
Usually, the test engineers perform it.
2. <<WBT - White Box Testing: It is a method of testing in which one performs testing on the structural (coding) part of an application.
Usually, the developers perform white box testing, which is nothing but unit testing and interface testing.
3. <<GBT - Gray Box Testing: It is a method of testing in which one performs testing on both the functional and structural parts of an application.
Usually, black box test engineers who have knowledge of the structural part perform it.
3. Integration level testing: In this stage the developers develop interfaces to integrate the modules. The white box test engineers test whether those interfaces are working properly or not.
Usually, developers follow one of the following approaches for integrating the modules.
Top-down approach: In this approach the parent module is developed first, and then the related child modules are developed and integrated.
Bottom-up approach: In this approach the child modules are developed first, and then the corresponding parent modules are developed and integrated.
Hybrid or sandwich approach: This approach is a mixture of both the top-down and bottom-up approaches.
Big bang approach: In this approach one waits until all the modules are developed and finally integrates them all at a time.
<<Stub: While integrating the modules in the top-down approach, if any mandatory module is missing, then that module is replaced with a temporary program known as a stub.
<<Driver: While integrating the modules in the bottom-up approach, if any mandatory module is missing, then that module is replaced with a temporary program known as a driver.
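A minimal sketch of the stub idea in code, assuming a hypothetical parent module that depends on a not-yet-developed payment module (all names below are made up for illustration):
# Parent-module function under test (top-down integration)
def checkout(order_total, pay):
    # 'pay' is the dependency that would normally be the real payment module
    return "ORDER CONFIRMED" if pay(order_total) else "PAYMENT FAILED"

# Stub: temporary stand-in for the missing payment module
def payment_stub(amount):
    # Always succeeds so the parent module can be integration-tested
    return True

print(checkout(250.0, payment_stub))  # ORDER CONFIRMED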
4. System level testing: In this stage the black box test engineers conduct many types of testing; one important one among those is system integration testing.
<<SIT - System integration testing: It is a type of testing in which one performs actions in one module and checks for the reflections in all the related areas of the application. In this testing, we observe how the data flows from one module to another module.
5. <<UAT - User acceptance testing: Usually, once we provide signoff for the QA environment, the code is deployed to the UAT environment and the business users conduct testing on the application according to their business, to check whether the application works fine or not. Sometimes we give support for UAT testing.
What is <<JIRA?
JIRA is nothing but a project tracking and reporting tool, where all project-related activities and information are available.
Tasks are allocated and various statuses are available to progress the tasks. JIRA can have an agile board showing sprint-wise items and the product backlog. As I mentioned, it is a kind of reporting tool and can be used by the project team.
<<JIRA - What are the common fields we can see while creating defect?
1. Summary
2. Description
3. Severity
4. Priority
5. Assignee
6. RCA
7. Attachments
8. Link to
9. Comments
<<Entry Criteria: talks about when to start testing - the environment should be ready for testing and all necessary prerequisites should work so that testing can continue. For example, 'sanity testing' must pass; only then can we continue testing.
<<Exit Criteria: talks about when to stop testing - as part of testing, only when we have no open high-severity defects can we stop our testing and provide signoff for the current sprint.
We have various test design techniques, namely BVA, ECP and Decision Table.
Test design techniques are useful to write fewer test cases but with more test coverage.
Coming to boundary value analysis:
Boundary Value Analysis: used whenever a range of values is to be tested as part of the requirement.
We go with n-1, n and n+1. Example: there is an election portal where only people aged 18 or above can vote, so the system should allow only ages 18 and above. We write test cases for 17 (18-1), 18 and 19 (18+1). So, instead of writing test cases for every value in the range, we write only 3 test cases, which give more test coverage.
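A small sketch of BVA in code, assuming a hypothetical is_allowed_to_vote(age) function for the 18+ rule above (pytest is used purely for illustration):
import pytest

def is_allowed_to_vote(age):
    # Hypothetical implementation of the 18+ rule
    return age >= 18

# Boundary values around 18: n-1, n and n+1
@pytest.mark.parametrize("age, expected", [(17, False), (18, True), (19, True)])
def test_voting_age_boundaries(age, expected):
    assert is_allowed_to_vote(age) == expected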
Equivalence Class Partitioning: used whenever one requirement contains multiple conditions. A generic example is a password with different conditions: the password should have capital letters, small letters, numeric values and special characters. We divide each condition into a separate class, take the boundaries for each one, and start writing test cases.
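A rough sketch of partitioning those password conditions into classes, with one representative value per class (the validator and the sample values are assumptions made for illustration):
import re

def is_valid_password(pwd):
    # Hypothetical rule: at least one capital, one small letter, one digit, one special character
    return all(re.search(p, pwd) for p in [r"[A-Z]", r"[a-z]", r"[0-9]", r"[^A-Za-z0-9]"])

# One representative per equivalence class instead of testing every possible password
classes = {
    "valid - meets all conditions": ("Pass@123", True),
    "invalid - no capital letter": ("pass@123", False),
    "invalid - no small letter": ("PASS@123", False),
    "invalid - no numeric value": ("Pass@word", False),
    "invalid - no special character": ("Pass1234", False),
}

for name, (value, expected) in classes.items():
    assert is_valid_password(value) == expected, name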
Decision Table: used whenever one functionality has different flows. We list the conditions in a tabular format, and for each combination of conditions we note the expected system behavior (pass or not), then test it accordingly. Here we cover the end-to-end flows to test.
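As an illustration only (the login conditions and messages below are assumptions, not taken from the project), a small decision table could look like this:
Username valid? | Password valid? | Expected behavior
Yes             | Yes             | User is logged in
Yes             | No              | Error: invalid password
No              | Yes             | Error: invalid username
No              | No              | Error: invalid credentials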
♦ Indemnity plan - A type of medical plan that reimburses the patient and/or provider as expenses are
incurred.
♦ Conventional indemnity plan - An indemnity plan that allows the participant the choice of any provider without effect on reimbursement. These plans reimburse the patient and/or provider as expenses are incurred.
♦ Preferred provider organization (PPO) plan - An indemnity plan where coverage is provided to participants through a network of selected health care providers (such as hospitals and physicians). The enrollees may go outside the network, but would incur larger costs in the form of higher deductibles, higher coinsurance rates, or non-discounted charges from the providers.
♦ Exclusive provider organization (EPO) plan - A more restrictive type of preferred provider organization
plan under which employees must use providers from the specified network of physicians and hospitals
to receive coverage; there is no coverage for care received from a non-network provider except in an
emergency.
♦ Health maintenance organization (HMO) - A health care system that assumes both the financial risks
associated with providing comprehensive medical services (insurance and service risk) and the
responsibility for health care delivery in a particular geographic area to HMO members, usually in return
for a fixed, prepaid fee. Financial risk may be shared with the providers participating in the HMO.
♦ Group Model HMO - An HMO that contracts with a single multi-specialty medical group to provide
care to the HMO’s membership. The group practice may work exclusively with the HMO, or it may
provide services to non-HMO patients as well. The HMO pays the medical group a negotiated, per capita
rate, which the group distributes among its physicians, usually on a salaried basis.
♦ Staff Model HMO - A type of closed-panel HMO (where patients can receive services only through a
limited number of providers) in which physicians are employees of the HMO. The physicians see patients
in the HMO’s own facilities.
♦ Network Model HMO - An HMO model that contracts with multiple physician groups to provide
services to HMO members; may involve large single and multispecialty groups. The physician groups
may provide services to both HMO and non-HMO plan participants.
♦ Individual Practice Association (IPA) HMO- A type of health care provider organization composed of a
group of independent practicing physicians who maintain their own offices and band together for the
purpose of contracting their services to HMOs. An IPA may contract with and provide services to both
HMO and non-HMO plan participants.
♦ Point-of-service (POS) plan - A POS plan is an "HMO/PPO" hybrid; sometimes referred to as an "open-
ended" HMO when offered by an HMO. POS plans resemble HMOs for in-network services. Services
received outside of the network are usually reimbursed in a manner similar to conventional indemnity
plans (e.g., provider reimbursement based on a fee schedule or usual, customary and reasonable
charges).
♦ Physician-hospital organization (PHO) - Alliances between physicians and hospitals to help providers attain market share, improve bargaining power and reduce administrative costs. These entities sell their services to managed care organizations or directly to employers.
♦ Managed care plans - Managed care plans generally provide comprehensive health services to their members and offer financial incentives for patients to use the providers who belong to the plan. Examples of managed care plans include:
3. If they get any doubts, they list out all those doubts in a requirements clarification note (RCN).
4. They send that RCN document to the BA and get the clarifications.
5. If some more doubts remain, they join the review meeting and get the clarifications.
6. Once all the requirements are clearly understood, they take the test case template and write the test cases.
7. Once the first build is released, they execute the test cases.
8. If any defects are found, they log all of them in a reporting tool like JIRA and assign them to the developer.
9. Once the next build is released, they re-execute the failed test cases; if any defects are found, they update them in the reporting tool like JIRA, assign them to the developer and wait for the next build.
10. Once the next build is released, the same process is repeated until the product is defect free.
Definition:
Defect triage meetings are project meetings in which open bugs are divided into categories. These meetings are held to analyze defects and to derive actions to be taken on them. Basically, priority and severity are defined for the bugs. Other activities involve assigning or rejecting new defects created since the last triage meeting. Apart from that, existing defects are reassigned if the need arises.
Procedure:
Generally, the below procedure is followed for a defect triage meeting.
o The QA lead sends out a bug report with the new defects introduced since the last meeting.
o The QA lead calls a meeting.
o During the meeting, each defect is analyzed to see whether the correct priority and severity are assigned to it. Priority and severity are corrected if need be.
o Defects are discussed by the team. This involves discussing the complexity of the defect, risks, etc.
o Assignment, rejection and reassignment of defects is done. Updates are captured in the bug tracking system.
Defect Triage Meeting goals:
The goal of Triage remains the same: evaluate, prioritize and assign the resolution of defects. As a
minimum, you want to validate defect severities, make changes as needed, prioritize resolution of the
defects, and assign resources.
Involved parties:
Below project members are involved in Defect Triage Meetings.
o Project Manager
o Test Lead
o Tech Lead
o Development Lead/Developer
Priority is nothing but how soon the developer has to fix the defect; defects are fixed on the basis of priority. Sometimes a high severity defect can wait until the next sprint, while a low severity defect may need to be fixed immediately based on its priority.
1) Severity:
It is the extent to which the defect can affect the software. In other words, it defines the impact that a
given defect has on the system. For example: If an application or web page crashes when a remote link
is clicked, in this case clicking the remote link by a user is rare but the impact of application crashing is
severe. So, the severity is high, but priority is low.
Critical: The defect that results in the termination of the complete system or one or more
component of the system and causes extensive corruption of the data. The failed function is
unusable and there is no acceptable alternative method to achieve the required results then the
severity will be stated as critical.
Major: The defect that results in the termination of the complete system or one or more
component of the system and causes extensive corruption of the data. The failed function is
unusable but there exists an acceptable alternative method to achieve the required results then
the severity will be stated as major.
Moderate: The defect that does not result in the termination, but causes the system to produce
incorrect, incomplete or inconsistent results then the severity will be stated as moderate.
Minor: The defect that does not result in the termination and does not damage the usability of
the system and the desired results can be easily obtained by working around the defects then
the severity is stated as minor.
Cosmetic: The defect that is related to the enhancement of the system, where the changes are related to the look and feel of the application, then the severity is stated as cosmetic.
2) Priority:
Priority defines the order in which we should resolve a defect. Should we fix it now, or can it wait? This
priority status is set by the tester to the developer mentioning the time frame to fix the defect. If high
priority is mentioned, then the developer must fix it at the earliest. The priority status is set based on
the customer requirements. For example: If the company name is misspelled in the home page of the
website, then the priority is high, and severity is low to fix it.
Low: The defect is an irritant which should be repaired, but the repair can be deferred until after more serious defects have been fixed.
Medium: The defect should be resolved in the normal course of development activities. It can
wait until a new build or version is created.
High: The defect must be resolved as soon as possible because the defect is affecting the
application or the product severely. The system cannot be used until the repair has been done.
High Priority & High Severity: An error which occurs in the basic functionality of the application and does not allow the user to use the system. (E.g., when we submit the policy details and the record cannot be saved, this is a high priority and high severity bug.)
High Priority & Low Severity: Spelling mistakes that happen on the cover page, heading or title of an application.
High Severity & Low Priority: An error in the functionality of the application (for which there is no workaround) that does not allow the user to use the system, but occurs on a link that is rarely clicked by the end user. E.g., if an application or web page crashes when a remote link is clicked: clicking the remote link is rare, but the impact of the application crashing is severe. So the severity is high, but the priority is low.
Low Priority & Low Severity: Any cosmetic or spelling issues within a paragraph or in the report (not on the cover page, heading, or title).
<<Test Strategy vs <<Test Plan
Test Strategy: a high-level document which captures the approach on how we go about testing the product and achieving the goals.
Test Plan: a document which contains the plan for all the testing activities to be done to deliver a quality product.
Components of a test strategy include: scope and overview, test approach, testing tools, industry standards to follow, test deliverables, testing metrics, requirement traceability matrix, risk and mitigation, reporting tool, test summary.
Components of a test plan include: test plan identifier, features to be tested, features not to be tested, approach, pass/fail criteria, suspension criteria, test deliverables, responsibilities, staffing and training needs, risks and contingencies, etc.
The test strategy is derived from the Business Requirement Specification (BRS), whereas the test plan is derived from the Product Description, SRS, or Use Case documents.
<<PBT - Priority Based Testing - Generally, whenever we have less time and more testing to do, we go with priority based testing. In priority based testing, we verify the high-level scenarios which are important to the client for a particular user story, and we give a conditional signoff for the QA environment; we then do the detailed testing in the regression phase. After sprint testing, we usually run the full regression pack to ensure that the new functionality implemented in the current sprint has not impacted the existing functionality.
<<V Model:
<<Verification Phase: It is a process of checking conducted on each role in the organization, in order to confirm whether they are doing their work according to the company process guidelines or not.
<<Validation Phase: It is a process of checking conducted on the developed product or its related parts, to confirm whether they are working according to the expectations or not.
1. In the daily status mail, which we send to the stakeholders, we mention how many test cases have been executed and how many test cases are planned to be executed in the current sprint, and at the same time we mention the status: RED, AMBER or GREEN.
2. If everything goes fine, we send the daily status mail in GREEN. If something goes wrong and we are completely blocked, we mark it RED; and if there is a delay, for example with the build, or we have dependencies on anything else, we mark the daily status mail as AMBER.
3. As we communicate with the client on a daily basis through this email, the client also has an idea of how much testing is completed and how much is not.
4. If we have any dependencies, we mention them in the status, so everything is communicated to the client daily and the client gets no last-minute surprises.
5. If we send a mail with RED status, then in the next day's daily stand-up meeting the same concern is raised - why we are RED - so that we get a solution and it is addressed as soon as possible.
6. So instead of waiting until the last minute to communicate to the client, it is always good to send the email with the entire execution status daily.
What is <<API?
API means Application Programming Interface. We send a request and verify the response.
A generic example: when we use PhonePe, we can recharge by selecting the relevant network. When we select a network and type the mobile number, PhonePe sends the information to the network operator, and it shows the plans relevant to the entered mobile number. So it means we are sending a request and getting a response.
<<Payload - In simple words, the payload means the body in the HTTP request and response message. It is optional and depends on the HTTP method (for example, GET requests usually have no body, while POST and PUT requests do).
The 'Content-Type' header in the HTTP request message is used to represent the payload format, for example JSON.
The JSON will have parameters and values: customerId, customerName and email.
{
"customerId": 1,
"customerName": "Ramesh",
"email": "[email protected]"
}
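A minimal sketch of sending this payload with the Content-Type header, assuming a hypothetical https://example.com/api/customers endpoint (the URL and endpoint are illustrative only):
import requests

payload = {
    "customerId": 1,
    "customerName": "Ramesh",
    "email": "[email protected]",
}

# json= serializes the payload and sets the Content-Type: application/json header automatically
response = requests.post("https://example.com/api/customers", json=payload)
print(response.status_code)  # 201 is expected when a record is created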
<<HTTP methods: GET, POST, PUT and DELETE.
<<POST: used to create a record in the external system by sending a request with data; once the record gets created, the 201 status is returned.
<<GET: used to read details from the external system by sending a request; once the record gets fetched, the 200 status is returned. Example: if you want to read a customer's policy details by policy ID, you send a GET request and get the matching details in the response.
<<PUT: used to update details in the external system by sending a request with data; once the record gets updated, a 200 (or 204) status is returned. Example: if you want to update the customer's name or phone number, you send an HTTP request with PUT, and the details get updated.
<<DELETE: used to delete details from the external system; once the record gets deleted, a 200 (or 204) status is returned. Example: if you want to delete the customer's policy details, you send the policy ID in an HTTP request with DELETE, and the details get deleted.
If we then try to fetch the details with the same policy ID again, we get a 404 NOT FOUND response, because the record has already been deleted from the system.
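A short sketch of this create/read/update/delete round trip with the requests library, assuming a hypothetical policy API at https://example.com/api/policies (the URL, paths and fields are made up for illustration):
import requests

BASE = "https://example.com/api/policies"  # hypothetical endpoint

created = requests.post(BASE, json={"customerName": "Ramesh", "plan": "HMO"})
print(created.status_code)                 # 201 expected on create
policy_id = created.json()["policyId"]     # assumes the API returns the new ID

print(requests.get(f"{BASE}/{policy_id}").status_code)                        # 200 on read
print(requests.put(f"{BASE}/{policy_id}", json={"plan": "PPO"}).status_code)  # 200/204 on update
print(requests.delete(f"{BASE}/{policy_id}").status_code)                     # 200/204 on delete
print(requests.get(f"{BASE}/{policy_id}").status_code)                        # 404 after delete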
<<Token-based authentication - Token-based authentication is a process where the user sends their credentials to the server; the server validates the user details and generates a token, which is sent as a response to the user. The user stores the token on the client side, and the client makes further HTTP calls using this token, which is added to the header; the server validates the token and sends a response.
E.g., while logging in to your email account, you have to provide a username and a password. If you have the username and the password, the system validates whether the logged-in user is valid or not.
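A rough sketch of that token flow with the requests library, assuming hypothetical /login and /profile endpoints and a Bearer token scheme (none of these names come from the document):
import requests

BASE = "https://example.com/api"  # hypothetical

# Step 1: send credentials once; the server responds with a token
login = requests.post(f"{BASE}/login", json={"username": "qa_user", "password": "Valid@123"})
token = login.json()["token"]     # assumes the response body contains a 'token' field

# Step 2: every further call carries the token in the Authorization header
headers = {"Authorization": f"Bearer {token}"}
profile = requests.get(f"{BASE}/profile", headers=headers)
print(profile.status_code)        # 200 if the server accepts the token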
In the context of a REST API, authentication happens using the HTTP request.
Note: not just REST APIs - authentication on any application working via the HTTP protocol happens using the HTTP request.
Taking the example of email login, we know that in order to authenticate ourselves we must provide a username and a password. In a very basic authentication flow using username and password, we do the same thing in a REST API call as well. But how do we send the username and password in the REST request?
A REST request can have a special header called the Authorization header; this header can contain the credentials (username and password) in some form. Once a request with an Authorization header is received, the server can validate the credentials and let you access the private resources.
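A minimal sketch of sending the credentials in the Authorization header using HTTP Basic authentication, with the same hypothetical endpoint (requests builds the header when given auth=):
import requests

# Basic auth: requests encodes "username:password" into the Authorization header
resp = requests.get("https://example.com/api/private", auth=("qa_user", "Valid@123"))
print(resp.status_code)                        # 200 if the credentials are accepted, 401 otherwise
print(resp.request.headers["Authorization"])   # "Basic " followed by base64("qa_user:Valid@123")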
The API documentation typically describes the expected data for each request and the most common responses.
<<Difference between SOAP and REST: SOAP uses only XML for exchanging information in its message format, whereas REST is not restricted to XML; it is the implementer's choice which media type to use, like XML, JSON or plain text. Moreover, REST can use the SOAP protocol, but SOAP cannot use REST.
<<Create: INSERT statement
<<Read: SELECT statement
<<Update: UPDATE statement
UPDATE emp SET empID = value1, Dept = value2 WHERE sal >= 1000;
<<Delete: DELETE statement
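A small runnable sketch of all four CRUD operations on a hypothetical emp table, using Python's built-in sqlite3 so the SQL can be executed end to end (the table layout and values are assumptions):
import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
cur = conn.cursor()
cur.execute("CREATE TABLE emp (empID INTEGER, empName TEXT, Dept TEXT, sal REAL)")

# Create
cur.execute("INSERT INTO emp (empID, empName, Dept, sal) VALUES (1, 'Ramesh', 'QA', 1200)")
# Read
print(cur.execute("SELECT * FROM emp WHERE sal >= 1000").fetchall())
# Update
cur.execute("UPDATE emp SET Dept = 'DEV' WHERE sal >= 1000")
# Delete
cur.execute("DELETE FROM emp WHERE empID = 1")
conn.commit()
conn.close()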