Integrated Language Testing
Group 13
Arora, Kashish (1049490)
Potu, Raja Sathvik (1056684)
Yenkammagari, Pranay Reddy (1023479)
Abstract
This paper presents the design of a generic, declarative test specification
language for language definition testing: a fully language-parametric
approach to language embedding that integrates the syntactic, semantic,
and editor-service aspects of a language under test. It also describes an
implementation of such a testing language, the Spoofax testing language,
and its implementation architecture.
Outline
Section 1: Introduction and background on language definitions.
Section 2: the design of a language-parametric testing language from
three angles.
Section 3: the purely linguistic perspective.
Section 4: the tool support perspective.
Section 5: use cases illustrated with examples.
Section 6: the implementation architecture.
Section 7: conclusion, with related work on language testing approaches
and directions for future work.
1. Introduction
Programming languages provide linguistic abstractions for a domain of
computing. Tool support provided by compilers, interpreters, and
integrated development environments (IDEs) allows developers to
reason at a certain level of abstraction, reducing the accidental
complexity involved in software development (e.g., machine-specific
calling conventions and explicit memory management).
Domain-specific languages (DSLs) further increase expressivity by
restricting the scope to a particular application domain. They increase
developer productivity by providing domain-specific notation,
analysis, verification, and optimization.
Errors in compilers, interpreters, and IDEs for a language can lead to
incorrect execution of correct programs, error messages about correct
programs, or an absence of error messages for incorrect programs.
Testing is one of the most important tools for software quality control
and inspires confidence in software.
Scripts for automated testing and general-purpose testing tools such
as the xUnit family of frameworks have been successfully applied to
implementations of general-purpose languages and DSLs.
General-purpose testing techniques, as supported by xUnit and
testing scripts, require significant investment in infrastructure to cover
test cases related to the syntax, static semantics, and
editor services specific to the language under test.
In this paper, we present a novel approach to language definition
testing by introducing the notion of a language-parametric testing language
(LPTL). This language provides a reusable, generic basis for declaratively
specifying language definition tests.
The central goal set out for the design of an LPTL is to provide a
low-threshold test specification language that forms the basis for a
reusable infrastructure for testing different languages.
Figure 2 shows an example test case where we test the mobl language, a
domain-specific language for mobile applications. In this example we
declare a local variable s of type String and assign an integer literal value
to it. This is a negative test case: a value of type String would be
expected here. The condition clause of this test case indicates that
exactly one error is expected, so the test case passes.
The table shows the possible condition clauses for syntactic, static
semantic, and dynamic semantic tests, and the patterns that can be used
with some condition clauses.
4.2 Running Language Definition Tests
Live evaluation of test cases: the testing editor evaluates tests in the
background and shows which tests fail through error and warning
markers in the editor. With this feedback, developers can quickly
determine the status of tests in a testing module.
Batch execution: to support long-running test cases and larger test
suites, a batch test runner is introduced. Such a test runner is particularly
important as a language project evolves, the number of tests grows
substantially, and tests are divided across multiple test modules. The
test runner gives a quick overview of passing and failing tests in
different modules and allows developers to navigate to tests in a
language project.
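The batch execution workflow described above can be sketched as a small driver that evaluates every test and tallies results per module. All names here (run_batch, evaluate, the tuple shape of a test case) are illustrative stand-ins, not the actual Spoofax test runner API:

```python
from collections import defaultdict

def run_batch(test_cases, evaluate):
    """Run all test cases and summarize pass/fail counts per module.

    `test_cases` is a list of (module, test_name, fragment) tuples and
    `evaluate` is a callback that evaluates one fragment -- both are
    simplified stand-ins for the real test runner's inputs."""
    summary = defaultdict(lambda: {"passed": 0, "failed": 0})
    for module, _name, fragment in test_cases:
        outcome = "passed" if evaluate(fragment) else "failed"
        summary[module][outcome] += 1
    return dict(summary)

# Toy evaluator: a fragment "passes" if it does not contain "bad".
tests = [
    ("tasks::datamodel-tests", "entity parses", "entity Task {}"),
    ("tasks::datamodel-tests", "broken entity", "bad entity"),
    ("tasks::ui-tests", "screen parses", "screen root() {}"),
]
print(run_batch(tests, lambda fragment: "bad" not in fragment))
```

The per-module summary is what lets the runner give a quick overview of passing and failing tests across test modules.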
Entities are persistent data types that are stored in a database and can
be retrieved using mobl's querying API. An example of a mobl module,
tasks::datamodel, that defines a single entity type is as follows:
module tasks::datamodel

entity Task {
  name : String
  date : DateTime
}
Syntax tests can be used to test newly added language constructs. They
can include various non-trivial tests, such as tests for operator precedence,
reserved keywords, language embeddings, or complex lexical syntax such as
the quotation construct. There are two forms of tests: black-box tests,
which check whether a code fragment can be parsed or not, and tree patterns
that are matched against the abstract syntax produced by the parser for a
given fragment.
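The tree-pattern form of syntax test can be mimicked with a small structural matcher. The tuple encoding of the AST and the "_" wildcard below are assumptions made for illustration, not the actual pattern syntax of the Spoofax testing language:

```python
def matches(pattern, tree):
    """Structurally match a tree pattern against an AST.

    "_" is a wildcard; tuples encode constructor applications of the
    form (constructor_name, child, ...)."""
    if pattern == "_":
        return True
    if isinstance(pattern, tuple) and isinstance(tree, tuple):
        return (len(pattern) == len(tree)
                and all(matches(p, t) for p, t in zip(pattern, tree)))
    return pattern == tree

# A precedence test: `1 + 2 * 3` should parse as Plus(1, Mul(2, 3)),
# not as Mul(Plus(1, 2), 3).
ast = ("Plus", 1, ("Mul", 2, 3))
print(matches(("Plus", "_", ("Mul", "_", "_")), ast))   # True
print(matches(("Mul", ("Plus", "_", "_"), "_"), ast))   # False
```

A black-box test only needs the parse-succeeds/parse-fails verdict; a tree-pattern test additionally pins down the shape of the produced abstract syntax, which is what makes precedence tests possible.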
With tests we can have better confidence in the static checks defined for
a language. For example, we use a setup block to import the
tasks::datamodel module, and to initialize a single Task for testing.
The first test case is a positive test, checking that the built-in all() accessor
returns a collection of tasks. The other tests are negative tests. For such test
cases, it is generally wise to test for a specific error message. We use regular
expressions such as /type/ to catch the specific error messages that are expected.
Example

language mobl

setup [[
  module tasks
  import tasks::datamodel
  var todo = Task(name = "Create task list");
]]

test Entity types have an all() built-in [[
  var all : Collection<Task> = Task.all();
]] succeeds

test Assigning a property to a Num [[
  var name : Num = todo.name;
]] 1 error /type/
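The `1 error /type/` condition can be read as: the analysis must report exactly one error whose message matches the regular expression. A minimal checker for such condition clauses (the message representation is an assumption of this sketch) could look like:

```python
import re

def check_condition(messages, expected_count, severity="error", pattern=None):
    """Check a condition clause such as `1 error /type/`: exactly
    `expected_count` messages of the given severity must be reported,
    each optionally matching the regular expression `pattern`."""
    matching = [m for m in messages
                if m["severity"] == severity
                and (pattern is None or re.search(pattern, m["text"]))]
    return len(matching) == expected_count

# The negative test fragment yields one type error from the analyzer:
reported = [{"severity": "error",
             "text": "Type mismatch: expected String, got Num"}]
print(check_condition(reported, 1, "error", "[Tt]ype"))  # True
print(check_condition(reported, 0, "error"))             # False
```

Counting messages rather than merely detecting them is what distinguishes "exactly one error" from "at least one error", which matters for negative tests that should not mask unrelated problems.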
5.3 Navigation
Modern IDEs provide editor services for navigation and code understanding, such as reference
resolving and content completion.
The first test case tests variable shadowing, while the second one tests reference resolving for function calls.
For content completion we test completion for normal local variables, and for built-ins such as the
all() accessor.
test Resolve a shadowing variable [[
  function getExample() : String {
    var [[example]] = "Columbus";
    return [[example]];
  }
]] resolve #2 to #1

test Resolve a function call [[
  function [[loop]](count : Num) {
    [[loop]](count + 1);
  }
]] resolve #2 to #1
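The [[...]] markers in these tests identify spans in the fragment; a resolve condition then checks that the reference at one marker resolves to the definition at another. Extracting the markers can be sketched as follows (the bracket notation is taken from the examples above; the function name and return shape are ours):

```python
def extract_markers(fragment):
    """Strip [[...]] markers from a test fragment, returning the plain
    program text and the (start, end) offset of each marked span."""
    spans, out, pos = [], "", 0
    while True:
        start = fragment.find("[[", pos)
        if start < 0:
            out += fragment[pos:]
            return out, spans
        out += fragment[pos:start]
        end = fragment.find("]]", start)
        inner = fragment[start + 2:end]
        spans.append((len(out), len(out) + len(inner)))
        out += inner
        pos = end + 2

plain, spans = extract_markers('var [[example]] = "Columbus"; return [[example]];')
print(plain)   # var example = "Columbus"; return example;
print(spans)   # two spans; `resolve #2 to #1` would assert that the
               # reference at spans[1] resolves to the definition at spans[0]
```

The test harness can then hand the plain text to the parser and compare the resolver's answer for the second span against the location of the first.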
test Content completion for globals [[
  var example2 = [[e]];
]] complete to "example"

test Content completion for queries [[
  var example2 = Task.[[a]];
]] complete to "all()"
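A `complete to "..."` condition holds if the expected proposal appears among the proposals offered at the marked cursor position. With a toy proposal collector (simple prefix filtering over names in scope; the scope contents below are invented for illustration):

```python
def propose(prefix, names_in_scope):
    """Offer completion proposals: names in scope matching the typed prefix."""
    return sorted(n for n in names_in_scope if n.startswith(prefix))

def check_completion(proposals, expected):
    """A `complete to "..."` condition: the expected proposal is offered."""
    return expected in proposals

scope = ["example", "example2", "all()", "add()"]
print(check_completion(propose("e", scope), "example"))   # True
print(check_completion(propose("a", scope), "all()"))     # True
```

A real content completion service would of course consult the type of the receiver (e.g. offering all() only on entity types), but the membership check on the proposal list is the essence of the test condition.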
6. Implementation
In this section we describe our implementation of an LPTL and the
infrastructure that makes its implementation possible.
We implemented the Spoofax testing language as a language
definition plugin for the Spoofax language workbench.
Most Spoofax language definitions consist of a combination of a
declarative SDF syntax definition and Stratego transformation rules
for the semantic aspects of languages, but for this language we also
wrote parts of the testing infrastructure in Java.
6.1 Infrastructure
The Spoofax language workbench provides an environment for
developing and using language definitions. It provides a number of key
features that are essential for the implementation of an LPTL.
A central language registry: Spoofax is implemented as an extension of
the IDE Metatooling Platform (IMP) [7], which provides the notions of
languages and a language registry. It also allows runtime reflection over
the services languages provide and any meta-data available for each
language, and can be used to instantiate editor services for them.
Dynamic loading of editor services: Spoofax supports dynamic, headless
loading of the separate language and editor services of the language under
test. This is required to instantiate these services in the same program
instance (Eclipse environment) without opening an actual editor for them.
Functional interfaces for editor services: instantiated editor services
have a functional interface. This decouples them from APIs that control
an editor and allows the LPTL to inspect editor service results and filter
the list of syntactic and semantic error markers shown for negative test
cases.
Support for a customized parsing stage: most Spoofax plugins use a
parser generated from an SDF definition, but it is also possible to
customize the parser used. This allows the LPTL to dynamically embed a
language under test.
6.2 Test Evaluation
Tests are evaluated by instantiating the appropriate editor services for the
language under test and applying them to the abstract syntax tree that
corresponds to the test input fragment.
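Evaluation of a single test can then be sketched as a pipeline over the instantiated services: parse the fragment, analyze the resulting tree, and check the condition against the reported messages. The service interface below is a simplification assumed for this sketch, not the actual Spoofax API:

```python
def evaluate_test(services, fragment, condition):
    """Evaluate one test case against the language under test.

    `services` bundles the instantiated editor services as callables;
    `condition` inspects the AST and analysis messages (both shapes
    are assumptions of this sketch)."""
    ast = services["parse"](fragment)
    messages = services["analyze"](ast)
    return condition(ast, messages)

# Toy services for illustration:
toy_services = {
    "parse": lambda text: ("Module", text),
    "analyze": lambda ast: [] if "bad" not in ast[1] else
               [{"severity": "error", "text": "analysis problem"}],
}
succeeds = lambda ast, messages: not messages
print(evaluate_test(toy_services, "entity Task {}", succeeds))  # True
print(evaluate_test(toy_services, "bad fragment", succeeds))    # False
```

Because the services are functional, the harness can apply them to any fragment without opening an editor, which is exactly what the headless loading described above enables.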
Execution tests often depend on some external executable that runs
outside the IDE and that may even be deployed on another machine.
Our implementation is not specific for a particular runtime system or
compiler backend.
Instead, language engineers can define a custom runner function that
controls how to execute a program in the language.
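The runner-function hook can be sketched as a callback: the test harness stays backend-agnostic and delegates execution to whatever the language engineer supplies. The demo runner below is purely illustrative; a real one might invoke a compiler backend or a remote virtual machine:

```python
def evaluate_run_test(runner, program, expected_output):
    """A run-style test (sketch): execute the program through the
    engineer-supplied runner and compare its output."""
    ok, output = runner(program)
    return ok and output.strip() == expected_output

def demo_runner(program):
    """Hypothetical runner: 'interprets' a one-line print statement.
    Only the (success, output) contract matters to the harness."""
    if program.startswith("print "):
        return True, program[len("print "):]
    return False, ""

print(evaluate_run_test(demo_runner, "print hello", "hello"))  # True
print(evaluate_run_test(demo_runner, "crash", "hello"))        # False
```

Keeping the runner behind a narrow contract is what makes the implementation independent of any particular runtime system or compiler backend.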
Test evaluation performance: editor services in Spoofax are cached on
instantiation and run in a background thread, ensuring low overhead and
near-instant responsiveness for live test evaluation.
Most editor services are very fast, but long-running tests such as
builds or runs are better executed in a non-interactive fashion.
We only run those through the batch test runner and display
information markers in the editor if the test was changed after it last
ran.
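The caching described above amounts to instantiating the editor services once per language and reusing them for subsequent evaluations; a memoized loader captures the idea (the service tuple is a stand-in, and loading cost is simulated with a counter):

```python
import functools

load_count = {"n": 0}

@functools.lru_cache(maxsize=None)
def editor_services(language):
    """Instantiate the (hypothetical) editor services for a language;
    memoized so live test evaluation pays the loading cost only once."""
    load_count["n"] += 1
    return (language, "parser", "analyzer")  # stand-in for real services

editor_services("mobl")
editor_services("mobl")   # served from cache, no second load
print(load_count["n"])    # 1
```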
A second issue is that, to ensure tests are considered in isolation, each
test case is generally put in a separate file.
Using separate files for test cases introduces boilerplate code such as
import headers.
Another limitation of these general testing tools is that they do not
provide specialized IDE support for writing tests.
Standard IDEs are only effective for writing valid programs and report
spurious errors for negative test cases where errors are expected.
Testing with homogeneous embeddings: language embedding is a language
composition technique where separate languages are integrated.
The embedding technique applied in this paper is a heterogeneous
embedding.
Homogeneously embedded languages must always target the same host
language.
Unit tests for domain-specific languages: as a side-effect of providing a
language for testing language implementations, our test specification
language can also be used to test programs written in that language.
This makes it particularly useful in the domain of testing DSL programs,
where testing tools and frameworks are scarce.
In related work, a scripting language combined with the DSL under test
can be used to specify tests. That framework ensures that the mapping
between the DSL test line numbers and the generated JUnit tests is
maintained for reporting failing tests, and it also provides a graphical
batch test runner.
Summary
A novel approach to language definition testing,
introducing the notion of a language parametric testing
language.
The LPTL provides a low-threshold, domain-specific
testing infrastructure based on a declarative test
specification language and extensive tool support for
writing and executing tests.
Implementation in the form of the Spoofax testing
language shows the practical feasibility of the
approach.
Tests inspire confidence in language implementations,
and can be used to guide an agile, test-driven language
development process.
References
[1] M. Fowler. Language workbenches: The killer-app for domain specific
languages? http://martinfowler.com/articles/languageWorkbench.html, 2005.
[2] L. C. L. Kats and E. Visser. The Spoofax language workbench: rules
for declarative specification of languages and IDEs. In W. R. Cook,
S. Clarke, and M. C. Rinard, editors, Object-Oriented Programming,
Systems, Languages, and Applications, OOPSLA 2010, pages 444-463.
ACM, 2010.
[3] B. Beizer. Software testing techniques. Dreamtech Press, 2002.