Multi-Level Testing Approach for Multi Agent Systems
Yacine Kissoum1, Sara Kerraoui2, Moussa Saker3
1,2,3 Faculty of Sciences, Computer Science Department, 20 Août 1955 University, Skikda. Algeria
{kissoumyacine, sak_moussa}@[email protected]
Abstract. At the end of the software life cycle there is a need for product testing. Although testing has been a popular activity for traditional applications, there is a crucial lack of testing phases for multi agent systems. The main reason is that the process management of the testing activity has usually been neglected in favor of the other life cycle steps. This calls for an investigation of appropriate testing techniques in order to provide adequate software development processes and supporting tools. Among the existing testing solutions, the model based testing technique has gained attention with the popularization of models in both software design and development. This technique uses a so-called abstract test model to generate abstract test cases. After their concretization, concrete test cases are submitted to the system under test. The system's outputs are finally compared to the predictions of the abstract test model.

In this context we propose a model based testing approach for multi agent systems based on the Reference net paradigm. A case study, supported by a multi-agent testing tool, illustrates the approach, which aims at simplifying and providing a uniform and automated way of testing multi-agent systems.

Keywords: Multi agent systems, Nets within nets, Reference nets, Renew, Model based testing.

1 Introduction
Agents and multi agent systems are a promising technology for building complex applications that operate in dynamic domains, often distributed over multiple sites. Testing multi agent systems is a challenging task that calls for new methods dealing with the specific nature of such systems. These methods need to be effective and adequate to evaluate agents' behaviors and to build solid user confidence.
With the current state of the technology, we have to recognize that the final implementation of a multi agent system will most likely use object-oriented technology. That is, developers express interaction, coordination and deliberation concerns through an agent-level API, resorting to plain object orientation when specifying ordinary computation tasks. However, representing the dynamics of a multi agent system is quite different from describing the flow of control of an object-oriented system. The first determinant difference is the greater encapsulation of the agents: despite having a very complex inner structure, they interact with the remaining part of the system as a whole. Another important issue is that agents usually cannot interact with each other directly; they need a message transport service, which in many architectures is provided by the middleware. These interactions are totally different in nature from the direct method invocations that take place within the agent, among the classes that constitute it.

In light of the above considerations, a multi agent system can be considered, from the tester's point of view, as a number of different levels of abstraction. These levels are named the algorithmic, class, agent, society, and system levels (Fig. 1). They are defined as follows [2, 5].

Fig. 1. Multi agent testing levels
The algorithmic level considers code at the routine level. It concerns the manipulations made within a routine with respect to some data. It is comparable to normal code testing with conventional imperative languages.

The class level consists of the interactions of the routines and data that are encapsulated within a class. This level was perceived years ago by the object oriented community and led to the emergence of the JUnit [6] testing framework.
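For illustration, a class-level test in the JUnit 4 style might look as follows. This is a generic sketch, not taken from the paper's case study; the class under test is simply java.util.ArrayDeque, standing in for any ordinary Java class.

import org.junit.Test;
import static org.junit.Assert.*;

// A class-level test in the JUnit 4 style. The class under test here is
// java.util.ArrayDeque, used as a stand-in for any ordinary Java class.
public class StackTest {

    @Test
    public void pushThenPopReturnsLastElementFirst() {
        java.util.Deque<Integer> stack = new java.util.ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        assertEquals(Integer.valueOf(2), stack.pop()); // last in, first out
        assertEquals(Integer.valueOf(1), stack.pop());
    }

    @Test
    public void newStackIsEmpty() {
        assertTrue(new java.util.ArrayDeque<Integer>().isEmpty());
    }
}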

The agent level considers the interactions of groups of cooperating classes. At this level we have to test the integration of the different modules inside an agent, as well as the agent's capabilities to achieve its goals and to sense and affect its context.
The society level consists of the interactions of the overall results of the different agents. Society level testing is a kind of integration testing, and the integration strategy depends on the agent system architecture, where agent dependencies are usually expressed in terms of communications and, sometimes, environment-mediated interactions. Integration testing involves making sure that an agent works properly with the agents that have been integrated before it and with the agents that are still in the agent testing phase.
The system level contains all code from all classes and the main program necessary to run the entire system. Explicitly, agents may operate correctly when they run alone but incorrectly when they are put together.
One of the new approaches to meet the challenges imposed on software testing in general, and multi agent testing in particular, is the model based testing (MBT) technique [2]. It has recently gained attention with the popularization of models in both software design and development. The idea of such a technique is to have a model of the system and use this model to generate sequences of inputs and expected outputs. The inputs, after concretization, are applied to the system under test and the system's outputs are compared to the model's outputs, as given by the generated sequence.
For the modeling step, there are a number of models in use today; some of them make good models for testing. The choice depends on aspects of the system under test. The proposed approach for multi agent system testing uses the MULti Agent Nets (MULAN) architecture [19] to model the system under test. MULAN is based on the nets within nets paradigm and is used to describe the natural hierarchies in an agent system. MULAN is implemented in Renew (Reference Net Workshop) [16] and fits naturally the composition mechanism of multi agent systems: agent protocols are composed into agent behavior, agent behavior is composed into agents, agents are composed into groups, groups are composed within agent platforms, and platforms are composed into agent systems.

Concerning the test case generation step, we can say that a system is exhaustively tested if every possible execution of the system is verified. Obviously, this is not possible for any non-trivial system: not only is there an exponential number of execution paths, but multi-agent systems are also multi-threaded and non-deterministic. That is, executing the system several times with the same test case may yield different execution paths and, sometimes, different results depending on the execution path taken. Pragmatically, only a subset of the executions can be tested. The obvious strategy, then, is to get as close as possible to the tester's confidence about the correctness of the system under test. The testing approach for multi-agent systems proposed here uses an error-guessing [24] test-case design technique. In an error-guessing technique the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.

Once generated, these abstract test cases have to be concretized. The concretization step acts as a translator which bridges the abstraction gap between the test model and the system under test by adding missing information and translating entities of the abstract test case into concrete constructs of the test platform's input language. This activity is a very difficult task because it generally involves complex algorithms together with human tester intervention. To overcome such a handicap, we propose to reverse the roles: rather than instrumenting the system under test, the proposed approach automatically inserts instruments into the abstract test model. Since reference nets are an executable model, the instrumented abstract scenarios are executed together with the system under test in order to detect failures (cases where the outputs of the system under test differ from those predicted by the test model).
The rest of this paper is organized as follows: section 2 surveys the state of the art of multi agent system testing approaches. Section 3 gives a short introduction to the multi agent net architecture. Section 4 describes the proposed approach. Section 5 presents the case study. Finally, section 6 concludes with a discussion of open issues and future work.

2 Related work
In this section we discuss some of the testing research related to the multi agent levels of abstraction introduced above. Traditional test techniques (functional, structural, non-regression) have been fully addressed for the first level of abstraction. Furthermore, the need for a framework to support the development of automated tests was perceived years ago by the object oriented community and led to the emergence of the JUnit [6] testing framework. Research on the other levels is discussed below.

Several works deal with agent level testing. Tiryaki et al. [10], for example, proposed a test-driven multi agent system development approach that supports iterative and incremental multi agent system construction. Their testing framework, called SUnit, has been built on top of JUnit and Seagent [7]. Coelho et al. [8] proposed a similar framework, JAT, which uses mock agents. Finally, Houhamdi [9] introduced a test suite derivation approach for agent testing that takes goal-oriented requirements analysis artifacts as the core elements for test case derivation.

For society level testing, Padgham et al. [11] used design artifacts from the Prometheus design process (e.g. agent interaction protocols and plan specifications) to provide automatic identification of the source of errors detected at run-time. Rodrigues et al. [12] proposed to exploit social conventions (e.g. norms, rules) that prescribe permissions, obligations, and/or prohibitions of agents in open multi agent systems for integration testing.

At the system level, De Wolf et al. [13] propose an empirical analysis approach combining agent-based simulations and numerical algorithms for analyzing the global behavior of a self-organizing system. Houhamdi and Athamena [14] introduced a test suite derivation approach for system testing that takes goal-oriented requirements analysis artifacts as the core elements for test case derivation.

Whereas most existing research on testing of multi-agent systems focuses primarily on the agent and/or society levels, our approach covers all the test levels described above, from the algorithmic level to the system level, by taking advantage of the composition mechanism of multi agent systems handled by reference nets and the MULAN architecture.

3 The multi agent nets
The paradigm of nets within nets, due to Valk [4], formalizes the idea that the tokens of a Petri net can themselves be data types and even nets. Taking this into consideration, it is possible to model hierarchical structures in an elegant way. An implementation of certain aspects of nets within nets is called Reference nets [3]. Reference nets are a graphical notation that is especially well suited for the description and execution of complex, concurrent processes. As for other net formalisms, there exists a tool for the simulation of reference nets, called Renew (Reference Net Workshop) [16]. Reference nets extend black and colored Petri nets by means of net instances, nets as token objects, communication via synchronous channels, and different arc types. Definitions of these extensions are given in [17, 18].

The multi-agent system architecture MULAN [19] is based on the nets within nets paradigm and is used to describe the natural hierarchies in an agent system. MULAN is implemented in Renew [16] and has the general structure depicted in Fig. 2. Each box describes one level of abstraction in terms of a system net.

Fig. 2. Agent systems as nets within nets (source [18])
The net in the upper left of Fig. 2 describes an agent system, in which places contain agent platforms as tokens. The transitions describe communication or mobility channels, which build up the infrastructure. By zooming into the platform token on place p3, the structure of a platform becomes visible. The place Agents hosts all agents that are currently on this platform. Each platform offers services to the agents. Agents can be created (transition new) or destroyed (transition destroy). Agents can communicate by message exchange. Two agents of the same platform can communicate through the transition internal communication. External communication only binds one agent, since the other agent is bound on another platform somewhere else in the agent system. Mobility facilities are also provided on a platform: agents can leave the platform via the transition send agent or enter the platform via the transition receive agent.

Each agent is also modeled in terms of nets. Agents are encapsulated, since their only way of interacting is by message passing. They are intelligent, since they have access to a knowledge base. The behavior of an agent is described in terms of protocols, which are again nets. Protocols are located as templates on the place protocols. Protocol templates can be instantiated, which happens for example when a message arrives. An instantiated protocol is part of a conversation and lies on the place conversations.

4 The proposed approach
The model-based testing technique usually involves four stages: (1) building an abstract model of the system under test, (2) validating the model (typically via animation), (3) generating abstract tests from the model and (4) refining those abstract tests into concrete executable tests. After this, the concrete tests can be executed on the system under test in order to detect failures (cases where the outputs of the system under test differ from those predicted by the tests), Fig. 3.

Fig. 3. Model based testing principle
As mentioned in the introduction, a multi agent system can be considered, from the tester's point of view, as a number of different levels of abstraction: the algorithmic, class, agent, society and system levels. Of the levels listed previously, the first is comparable to normal code testing with conventional imperative languages. For the second, JUnit is a simple yet practical framework for Java classes. The simplicity and usefulness of JUnit led to a panoply of extensions that support other programmatic styles: DBUnit (for databases), NModel (for applications written in C#) and the ModelJUnit framework [15] proposed by Mark Utting to apply the model based testing technique to Java classes. ModelJUnit generates test sequences from FSM/EFSM models written in Java and measures different model coverage metrics; it automates both test generation and test execution.
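For illustration, a ModelJUnit test model for a two-state stop-and-wait sender could be sketched as follows. This is our own sketch, assuming the ModelJUnit 2.x package layout; the state and action names are ours, and the adapter calls to a real system under test are only indicated by comments.

import nz.ac.waikato.modeljunit.Action;
import nz.ac.waikato.modeljunit.FsmModel;
import nz.ac.waikato.modeljunit.GreedyTester;
import nz.ac.waikato.modeljunit.Tester;
import nz.ac.waikato.modeljunit.VerboseListener;
import nz.ac.waikato.modeljunit.coverage.TransitionCoverage;

// Two-state FSM test model of a stop-and-wait sender (IDLE -> WAIT_ACK -> IDLE).
public class SenderModel implements FsmModel {

    private enum State { IDLE, WAIT_ACK }
    private State state = State.IDLE;

    public Object getState() { return state; }

    public void reset(boolean testing) { state = State.IDLE; }

    public boolean sendGuard() { return state == State.IDLE; }
    @Action public void send() {
        // here: drive the SUT to transmit one packet
        state = State.WAIT_ACK;
    }

    public boolean ackGuard() { return state == State.WAIT_ACK; }
    @Action public void ack() {
        // here: check that the SUT received an acknowledgment
        state = State.IDLE;
    }

    public static void main(String[] args) {
        Tester tester = new GreedyTester(new SenderModel());
        tester.addListener(new VerboseListener());
        tester.addCoverageMetric(new TransitionCoverage());
        tester.generate(20); // generate and execute 20 test steps
        tester.printCoverage();
    }
}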
The approach discussed in this paper has been inspired by ModelJUnit. The main difference resides in the fact that it is dedicated to multi agent applications. Moreover, where ModelJUnit uses finite state machines as testing models, our approach is based on reference nets. Finally, in ModelJUnit the tester writes the test cases to be executed following the JUnit principles; in our approach the test cases are interactively generated from the abstract model, dynamically executed on the system under test, and the obtained results are compared to the expected ones.
Fig. 4 summarizes the general architecture of our approach. It contains a series of four stages, which are discussed in the subsequent subsections.

Fig. 4. The proposed approach
4.1 Building the abstract test model
As shown by the box numbered (1), the modeling phase is the first step in the model based testing technique. The purpose of an abstract test model is to support both the verification process and the validation process of the system under development. On the one hand, the model serves as a specification and is used to test a system's implementation in order to verify the implementation's behavior. On the other hand, the model is used to formalize and to validate the system's requirements. Due to these requirements the model abstracts as far as possible from implementation details and concentrates on the main aspects of the system.

To fulfill the first purpose, the tester (the modeler) has to structure the multi agent system requirements according to the reference net-based multi-agent system architecture MULAN. We think that such a robust and easy-to-use architecture considerably reduces the large initial effort, in terms of man-hours, required mainly for constructing and validating the testing model. In fact, MULAN structures a multi-agent system in four layers, namely infrastructure, platform, agent and protocol. In this way, the modeling process and the model validation are easier because each of these layers covers a smaller area.

The second purpose is fulfilled via simulation. This step is schematized by the box numbered (2) in Fig. 4 and is discussed in the following subsection.
4.2 Validating the model
Our formal abstract test model is usually constructed manually from the system's requirements. Hence, its validation against the system requirements specification should be done first in order to find gross errors in the test model; otherwise the derived test cases may be significantly flawed. This validation process is of course incomplete, but this is less crucial in this context than in the usual refinement-to-code context. Indeed, because the abstract test model concentrates on the parts or aspects of the system under test that are to be verified, it is simpler and easier to understand than the whole complexity of the system under test. Hence, it can be managed intellectually, validated by testers/reviewers and maintained more easily. Formal validation approaches are also applicable to the reference net paradigm. Moreover, with the MBT technique, if some errors remain in the model they are very likely to be detected when the generated tests are run against the system under test. To conclude, the tester can more easily gain confidence that the model meets the requirements of the system, and thus the model serves as an abstract reference implementation of the system under test.

4.3 Test cases generation and concretization
Box (3) in Fig. 4 represents the test case generation and concretization step. This is the most delicate phase and constitutes our main contribution. It cannot be conducted without the source code of the system under test and is carried out under the responsibility of the tester (automation architect). It takes as inputs (i) an information file generated from the system under test, (ii) another deduced from the test model and (iii) an execution scenario that the tester wants to check against the system under test. The output is a new (instrumented) version of the model. The information files generated from the abstract test model and from the system under test are used as the basis for the concretization step. The scenario is built according to the test specification.
More precisely, test cases are generated from the abstract test model, which is given as input to the test generators. In the literature [20], test case generation approaches have suffered from the combinatorial explosion of the number of test cases. For this reason, a test specification is necessary. It has two aims: first, to define formally what will be tested and, second, to restrict the amount of resulting test cases. The test specification defines which of the potentially available test model traces will form the test cases. The type of a test specification can be categorized into functional, structural or stochastic specifications [21]. In our case the functional specification is used in conjunction with structural test specifications. The idea of functional specifications is to design test cases concerning certain functionalities of the system to be tested and to structure them as an execution scenario.

For the structural specifications, typical code coverage criteria (statement coverage, branch coverage and path coverage) can be used. These coverage criteria cannot be used directly in multi agent systems because the notions of statement, branch and path are not exactly the same as in other programming paradigms [14]. That is, statement coverage is insufficient in multi agent systems because a statement can succeed or fail. For branch coverage, we observe that branches in other programming paradigms are deterministic, while branches in multi agent systems are generally non-deterministic. Also, in traditional programming paradigms, a path is defined from the start to the end of the program; in a multi agent system, a path can terminate prematurely because statements can fail. Despite the inability to use the traditional methods directly for multi agent systems, many of the ideas are transferable in principle, and have led us to lift the structural coverage criteria to the level of the test model. Henceforth we prefer to speak of place coverage, transition coverage and path coverage criteria.
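As a rough illustration of what transition coverage lifted to the test model can mean, the following sketch (our own helper, not part of Renew or of the tool described here) computes the fraction of model transitions that appear in a recorded simulation trace:

import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Transition coverage lifted to the test model: the fraction of model
// transitions that occur at least once in a recorded simulation trace.
public class ModelCoverage {

    // modelTransitions: all transition names declared in the test model (e.g. read from its PNML export)
    // firedTrace: the sequence of transitions fired during one simulation run
    public static double transitionCoverage(Set<String> modelTransitions, List<String> firedTrace) {
        Set<String> covered = new LinkedHashSet<>(firedTrace);
        covered.retainAll(modelTransitions); // ignore anything not declared in the model
        return (double) covered.size() / modelTransitions.size();
    }

    public static void main(String[] args) {
        Set<String> transitions = Set.of("transmit", "send", "receive", "ack");
        List<String> trace = List.of("transmit", "send", "receive");
        System.out.printf("Transition coverage: %.0f%%%n",
                100 * transitionCoverage(transitions, trace)); // prints 75%
    }
}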
The proposed approach does not define a methodology for test case generation but rather an error guessing test case design technique. Such a technique is based on the ability of the tester to draw on his past experience, knowledge and intuition to predict where bugs will be found in the system under test. This is generally achieved by focusing on a special part or view of the system, or on some use cases, in order to constrain the behavior of the test model to certain functionalities. Functional specifications cover I/O relations that refer either to scenarios of specification documents or to other more specific views that result from experience, like sections known to be fault intensive. Here the tester combines simple test criteria, like place coverage or path coverage, with a given scenario length. By doing so, the number of possible test cases is reduced by defining additional functional constraints concerning the system itself or its environment. Depending on the testing level addressed, useful sources of information are activity diagrams and/or state charts (for the agent testing level), sequence diagrams and/or specifications of the protocols that regulate the interactions in multi-agent systems (for the society level), or end-user and/or component interfaces (for the system level).
Concerning concretization, the generated test cases are abstract, like the test model. Thus, significant information is missing from the generated test cases for them to be executable against the concrete system under test, so they have to be concretized. In other words, this step acts as a translator which bridges the abstraction gap between the test model and the system under test by adding missing information and translating entities of the abstract test case into concrete constructs of the test platform's input language. This is a very hard task because it is generally done by applying different algorithms (creator, adapter, translator) and needs human tester intervention to tune up the concrete test cases [22]. In addition, instrumenting the system under test typically incurs some runtime overhead (hardware monitors may be an exception). To remedy these problems, rather than inserting instruments into the system under test, as is done in the majority of model based approaches, our approach instruments the formal abstract test model. The system under test is run as it is, and hence no increase in execution time will be felt.

4.4 The test execution
The last step is the execution phase shown in box (4) of Fig. 4. Having in hand the instrumented version of the abstract test model and the system under test, the tester only has to run the instrumented model. Renew relies entirely on simulation to explore the properties of a net, where the test engineer (the modeler) can dynamically and interactively explore the state of the simulation. Thanks to the inserted instruments, this execution is directly reflected on the system under test. We say that the test passes if the execution trace followed by the system under test is identical to the scenario that the tester wants to check. Otherwise, any difference is interpreted as a fault and is reported to the test engineer. In both cases, the tester should decide between restarting the test case generation and concretization step using another scenario or fixing the detected errors in the system under test.
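The pass/fail decision itself can be pictured by the following sketch (the names are hypothetical and the real tool works on net markings rather than strings): the trace observed on the system under test is compared step by step with the scenario the tester wants to check.

import java.util.List;

// Compares the observed trace of the system under test with the expected scenario.
public class Verdict {

    public static void check(List<String> expectedScenario, List<String> observedTrace) {
        int steps = Math.min(expectedScenario.size(), observedTrace.size());
        for (int i = 0; i < steps; i++) {
            if (!expectedScenario.get(i).equals(observedTrace.get(i))) {
                System.out.printf("FAIL at step %d: expected '%s' but observed '%s'%n",
                        i, expectedScenario.get(i), observedTrace.get(i));
                return;
            }
        }
        if (expectedScenario.size() != observedTrace.size()) {
            System.out.println("FAIL: the observed trace and the scenario differ in length");
        } else {
            System.out.println("PASS: the observed trace matches the scenario");
        }
    }

    public static void main(String[] args) {
        check(List.of("transmit", "send", "receive", "ack"),
              List.of("transmit", "send", "receive", "ack"));
    }
}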

To better understand the proposed approach and to explain the different stages, we introduce as a case study the well-known producer-consumer model.

5 Case study
The producer-consumer problem, although simple, has been used extensively in the computer science literature to introduce Petri nets and to illustrate different models of synchronization, coordination and communication. Fig. 5 describes our case study at a very abstract level. The producer (the sender) starts by disassembling the data to be sent into a set of packets that are directed, one by one, over a network to the consumer (the receiver). Once received, the packets are assembled to reconstitute the original data and an acknowledgment is returned to the sender. The transmission principle is thus a stop-and-wait protocol: the producer transmits one data packet at a time and must wait for an acknowledgment from the receiver.

Fig. 5. Producer-consumer case study
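The stop-and-wait principle of the case study can be summarized, independently of any agent middleware, by the following plain-Java sketch, in which two synchronous queues stand for the network and for the acknowledgment channel:

import java.util.concurrent.SynchronousQueue;

// The producer sends one packet at a time and blocks until the consumer acknowledges it.
public class StopAndWait {

    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> network = new SynchronousQueue<>();
        SynchronousQueue<Integer> acks = new SynchronousQueue<>();
        String[] packets = "This is a test message".split(" ");

        Thread consumer = new Thread(() -> {
            try {
                StringBuilder data = new StringBuilder();
                for (int i = 0; i < packets.length; i++) {
                    data.append(network.take()).append(' '); // receive packet i
                    acks.put(i);                             // acknowledge it
                }
                System.out.println("Reassembled: " + data.toString().trim());
            } catch (InterruptedException ignored) { }
        });
        consumer.start();

        for (int i = 0; i < packets.length; i++) {
            network.put(packets[i]); // transmit one packet
            acks.take();             // block until the acknowledgment arrives
        }
        consumer.join();
    }
}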
First of all, let us discuss how we modeled the producer-consumer case study by means of the Renew tool, which is based on the reference net paradigm.

When modeling a multi agent system it is often undesirable to see the overall complexity at every stage of modeling and/or execution. Therefore the notion of a system view is introduced. Several views on an agent system are possible, for example the overall multi-agent system, the set of platforms that host the agents, the agent itself or simply its behavior. Using reference nets as a modeling paradigm allows for the direct use of system models at execution time. This can be exploited as follows:
Snapshot (a) of Fig. 6 is the main net (the system level). It simply creates the producer and the consumer subnets, respectively by means of the c:new cons(p) and p:new (msg, c) inscriptions. The variable msg ("This is a test message") models the original data to be sent to the consumer. Snapshot (b) shows the structure of our autonomous agents, namely the producer agent and the consumer agent. They share the same structure but their behaviors are different. Once created, the producer proactively instantiates, using the transition p:go(), its main protocol, which is shown on the left side of snapshot (c). On this protocol the transition :transmit() produces a performative containing the first disassembled packet ("this", 1), which is directed over the transition p:send() on the main producer agent net; subsequently the protocol is blocked waiting for an acknowledgment message. The blocking behavior is necessary to simulate a synchronous communication between the producer and the consumer. The packet is received on the main consumer agent net by means of the transition :receive(msg, i), which reactively enables the transition c:getMessage(msg, i) and subsequently the instantiation of the consumer main protocol, shown on the right side of snapshot (c). The consumer can now acknowledge the reception, consume the item and wait for another item. The acknowledgment arriving at the p:ack(j) transition enables the transition :ack(j) on the producer agent protocol. After the occurrence of this transition the protocol is no longer blocked and the producer agent is able to produce and send another packet.
Fig. 6. The modeling phase: the abstract test model (snapshots (a), (b) and (c))
The difference between this proposal and a visualization tool that shows some activities of a program running in the background is twofold.
Whereas a normal modeling process requires at least three stages to reach an executable program ((a) model the system, (b) implement the model and (c) write the visualization for the program), using the nets within nets paradigm the modeling process concludes with a running system model.

The visualization of the system model at execution time is indeed the implementation of the system. This eliminates several potential sources of errors when shifting from model to implementation to visualization in an ordinary software design process.
Nevertheless, our model is derived manually from the system specification. This is why model validation (by simulation) should be done first. Renew relies entirely on simulation to explore the properties of a net, where the tester (the modeler) can dynamically and interactively explore the state of the simulation. At this stage the tester can imagine different execution scenarios. In other words, while simulating the system net, the tester has the opportunity to correct and to progressively refine the abstract test model. Such a robust and easy-to-use tool considerably reduces the large initial effort, in terms of man-hours, required mainly for constructing and validating the testing model.

Fig. 7. Results of the static analysis
The next step is the generation of test cases and their concretization, but before this some preparatory work must be done. Concerning the application under test, information is extracted from the agents' internal structures by parsing them (static analysis). This information includes such details as the identification of the agents, their plans, the names of their classes together with the names of their member methods, their parameters, and their types. That information is then stored in a so-called application information file.
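The kind of information collected in the application information file can be illustrated by the following sketch; the real tool parses the agent sources, whereas this fragment simply uses Java reflection on an arbitrary class:

import java.lang.reflect.Method;

// Lists, for a given agent class, the method names, parameter types and return
// types of the kind written to the application information file.
public class AppInfoExtractor {

    public static void describe(Class<?> agentClass) {
        System.out.println("agent class: " + agentClass.getName());
        for (Method m : agentClass.getDeclaredMethods()) {
            StringBuilder line = new StringBuilder("  method: " + m.getName() + "(");
            Class<?>[] params = m.getParameterTypes();
            for (int i = 0; i < params.length; i++) {
                line.append(params[i].getSimpleName());
                if (i < params.length - 1) line.append(", ");
            }
            line.append(") : ").append(m.getReturnType().getSimpleName());
            System.out.println(line);
        }
    }

    public static void main(String[] args) {
        describe(java.util.ArrayList.class); // any class; a JADE agent class in the real setting
    }
}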

Concerning our abstract test model, all nets are exported, thanks to Renew, to an XML format, or more accurately to the PNML RefNet format (Petri Net Markup Language [23]). PNML is designed to be a Petri net interchange format that is independent of specific tools and platforms and supports different dialects of Petri nets. With Renew, the drawing is saved as a reference net; graphical figures without semantic meaning (e.g. figures produced by the drawing tool bar) are omitted. Fig. 7 shows the results of the static analysis related to our case study. It suffices for the automation architect to select the respective directories of the test model and of the system under test, and our tool displays the PNML RefNet file for every net (on the left hand side of the figure) and the Java code of the corresponding agents (on the right hand side of the figure). Note that our example has been developed using the Java Agent DEvelopment Framework (JADE). JADE is a software framework fully implemented in the Java language. It simplifies the implementation of multi-agent systems through a middleware that complies with the FIPA specifications.
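For readers unfamiliar with JADE, a minimal agent in the spirit of the case study's producer could look as follows; the agent class and the receiver's local name are ours, while the API calls (Agent, OneShotBehaviour, ACLMessage) are standard JADE:

import jade.core.AID;
import jade.core.Agent;
import jade.core.behaviours.OneShotBehaviour;
import jade.lang.acl.ACLMessage;

// A producer-like agent that sends one INFORM message to an agent registered
// under the local name "consumer".
public class ProducerAgent extends Agent {

    @Override
    protected void setup() {
        addBehaviour(new OneShotBehaviour(this) {
            @Override
            public void action() {
                ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
                msg.addReceiver(new AID("consumer", AID.ISLOCALNAME));
                msg.setContent("This is a test message"); // the data to be sent
                myAgent.send(msg);                        // hand the message to the platform
            }
        });
    }
}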
At this point, the experience of the tester is used to find the components of the software where defects might be present and, accordingly, to suggest a testing scenario to check. Such a technique, called the error guessing design strategy, is as important as other testing techniques because it is intended to compensate for their inherent incompleteness. A range of techniques can be used to design such a scenario:
Knowledge about the system under test, such as the design method or implementation technology
Knowledge of the results of any earlier testing phases (particularly important in Regression Testing)
Experience of testing similar or related systems (and knowing where defects have arisen previously in those systems)
Knowledge of typical implementation errors
General testing rules
Once all the inputs (application information file, model information file and the test scenario) are available, the test tool proceeds to the instrumentation of the test scenario suggested by the tester. This is done automatically by adding a set of specific routines to the model instead of making changes to the system under test.

Fig. 8. Instrumentation of the model
Roughly speaking, the instrumentation is done after the human tester selects, from the system information file, the methods belonging to the scenario he wants to check. The test tool then inserts each method into the most suitable transition described in the model information file, as shown in Fig. 8. In practice, those instruments are inserted in the corresponding PNML file. The output is a new version of the testing model, while the system under test is left unchanged.
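The following sketch illustrates the instrumentation idea on the exported PNML file. The element and attribute names are simplified and Renew's PNML RefNet dialect differs in detail, so this is only an indication of how an extra Java-call inscription could be attached to a chosen transition:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Attaches an extra inscription to a chosen transition of an exported net
// and writes the modified PNML file back.
public class PnmlInstrumenter {

    public static void instrument(File pnmlIn, File pnmlOut,
                                  String transitionId, String javaCall) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(pnmlIn);

        NodeList transitions = doc.getElementsByTagName("transition");
        for (int i = 0; i < transitions.getLength(); i++) {
            Element t = (Element) transitions.item(i);
            if (transitionId.equals(t.getAttribute("id"))) {
                Element inscription = doc.createElement("inscription"); // simplified element name
                Element text = doc.createElement("text");
                text.setTextContent(javaCall); // e.g. a call that notifies the test monitor
                inscription.appendChild(text);
                t.appendChild(inscription);
            }
        }
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(pnmlOut));
    }
}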

Saving and then reopening the modified PNML files results in an instrumented version of the net drawings. Fig. 9 shows such an instrumented version of the model, where all the added instruments are surrounded in red. The reader can easily compare these figures with the original figures of the model shown in Fig. 6. Because this operation (instrumentation) is conducted automatically, it significantly reduces the testing effort and leaves the execution time of the system under test unchanged. Note that instrumenting the model does not corrupt the initial testing model, because reference nets are themselves Java objects: making calls from Java code to a net is just as easy as making calls from nets to Java code.

Fig. 9. Instrumented version of the model
Now the instrumented version of the model is ready for execution with the desired test case. The results obtained from the execution of a test sequence are compared to the expected results and a verdict is established. Fig. 10 shows the execution of the producer-consumer example.

Fig. 10. Execution and evaluation of results (snapshots (a) and (b))

The execution of the instrumented model makes calls to the application functions concerned by the test case. If the test passes, the execution of the model continues to a new place or transition, as shown in snapshot (a). Otherwise, an error message is displayed on the command line, as shown in snapshot (b). Experience shows that failures occurring when the tests are run are likely to be due to errors in the model or errors in the implementation. So the process of model-based testing provides useful feedback and error detection for the requirements and the model as well as for the system under test.

6 Conclusion and future work
In this paper we presented a model based testing approach for multi agent system using the paradigm of reference nets. We have used the producer consumer example as a case study.

An important argument for using the paradigm of nets within nets is that the modeling process not only concludes with a running system model, but also supports all the steps of the model based testing technique. This greatly reduces the major negative aspect of model based testing techniques and supports testers in creating and executing tests in a uniform and automatic way.
The paper represents an evolution compared to previous works. First, the scope of our approach covers all the test levels (from the system level to the algorithmic level). Besides, by adopting an error guessing technique, the number of test cases is reduced. Moreover, we have automated the concretization stage by instrumenting the model instead of the application under test. Finally, to validate our approach, we have developed a test tool that supports all the phases of the model based testing technique: modeling, validation, generation, concretization and execution. The test results generated by the test monitor facilitate the construction of a verdict about the application under test. Therefore our approach can be a valid support for these kinds of tests.

Nevertheless, the case study has focused on a simple form of agent cooperation. It is our intention to build models for a number of more sophisticated multi agent coordination mechanisms.
References
1. J. Ferber and O. Gutknecht, Aalaadin: a meta-model for the analysis and design of organizations in multi-agent systems, in ICMAS'98, 1998
2. Y. Kissoum and Z. Sahnoun, Formal specification and testing of multi agent systems, in 8ème Colloque Africain sur la Recherche en Informatique (CARI'06), 2006
3. O. Kummer, Simulating synchronous channels and net instances, in Workshop Algorithmen und Werkzeuge für Petrinetze, 1998, pp. 73–78
4. R. Valk, Petri nets as token objects: An introduction to elementary object nets, Application and Theory of Petri Nets, volume 1420 of LNCS, pages 1–25, 1998
5. Y. Kissoum and Z. Sahnoun, A formal approach for functional and structural test cases generation in multi agent systems, in 5th IEEE/ACS International Conference on Computer Systems and Applications (AICCSA’07), 2007
6. E. Gamma and K. Beck, "JUnit: A Regression Testing Framework", http://www.junit.org, 2000
7. O. Dikenelli, R. Erdur, and O. Gumus, "Seagent: a platform for developing semantic web-based multi agent systems", in AAMAS'05: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multi-Agent Systems, ACM Press, New York, pp. 1271–1272, 2005
8. R. Coelho, U. Kulesza, A. Staa, and C. Lucena, “Unit testing in multi-agent systems using mock agents and aspects”, Proceedings of the international workshop on Software engineering for large-scale multi-agent systems, ACM Press, New York, pp. 83–90, 2006
9. Z. Houhamdi, “Test Suite Generation Process for Agent Testing”, Indian Journal of Computer Science and Engineering, Vol. 2, 2, 2011
10. A. Tiryaki, S. Oztuna, O. Dikenelli, and R. Erdur, “SUnit: A unit testing framework for test driven development of multi-agent systems”, AOSE’06 Proceedings of the 7th International Workshop on Agent-Oriented Software Engineering VII, Springer, Berlin, pp. 156-173, 2007
11. L. Padgham, M. Winikoff, and D. Poutakidis, “Adding debugging support to the Prometheus methodology”, Engineering Applications of Artificial Intelligence, Vol. 18, 2, pp. 173-190, 2005
12. L. Rodrigues, G. Carvalho, P. Barros, and C. Lucena, “Towards an integration test architecture for open MAS”, 1st Workshop on Software Engineering for Agent-Oriented Systems/SBES. pp. 60-66. 2005
13. T. De Wolf, G. Samaey, and T. Holvoet, Engineering self-organising emergent systems with simulation-based scientific analysis, 2005
14. Z. Houhamdi and B. Athamena, "Structured System Test Suite Generation Process for Multi-Agent System", International Journal on Computer Science and Engineering, Vol. 3, 4, pp. 1681–1688, 2011
15. http://www.cs.waikato.ac.nz/~marku/mbt/modeljunit/, last accessed 15 March 2009
16. O. Kummer and F. Wienberg, Reference Net Workshop (Renew), available at www.renew.de
17. M. Köhler, D. Moldt, and H. Rölke, Modeling the structure and behavior of Petri net agents, in ICATPN 2001, LNCS 2075, 2001, pp. 224–241
18. M. Köhler, D. Moldt and H. Rölke, Modelling mobility and mobile agents using nets within nets, ICATPN 2003, LNCS 2679, 2003, 121–139
19. M. Duvigneau, D. Moldt and H. Rölke, Concurrent architecture for a multi-agent platform, in Proceedings of the Workshop on Agent Oriented Software Engineering (AOSE’02), LNCS 2585, Springer Verlag, 2003
20. G.J. Myers, The Art of Software Testing, Wiley, 2nd Ed., 2004
21. W. Prenninger, M. ElRamly and M. Horstmann, Model based testing of reactive systems, in Springer Berlin/Heidelberg Eds, LNCS 3472, 2005, 439–461
22. F. Ali Guechi, R. Maamri, and Y. Kissoum, "A Model Based Testing Tool for Mobile Agents", IC2IN'13, Fès, Morocco, 13–14 November 2013