Welcome to the Model Interchange Wiki, the public portal for the Model Interchange Working Group (MIWG). The MIWG is chartered to:
Since its formation in December 2008, the MIWG has defined a test suite of 48 test cases to demonstrate interchange of UML, SysML, SoaML and UPDM models between different modeling tools. Participating tool vendors have agreed to publicly post XMI exports from their tools for the MIWG test cases. The objective is to enable the public at large to assess the model interchange capabilities of the modeling tools by comparing the vendor XMI exports to the expected reference XMI file for each test case. These assessments may be used for a variety of purposes, including:
The test suite, along with guidance on how to assess interchange capability using it, is summarized below. Please send any questions or requests for further information to miwg-info@omg.org.
Vendor | Point of Contact | Tool
---|---|---
PTC (Atego) | Simon Moore | Artisan® Studio
IBM | | RSx
IBM/Sodius | Eldad Palachi / Mickael Albert | IBM Rhapsody
No Magic | Nerijus Jankevicius | MagicDraw
SOFTEAM | Etienne Brosse | Modelio
Sparx Systems | J. D. Baker | Enterprise Architect
The MIWG Test Suite currently consists of 48 test cases. Thirty of these are for UML 2.4.1 (twenty-five of which also have UML 2.3 versions), and some of these are also applicable to SysML; thirteen are for SysML 1.3 (ten have SysML 1.2 versions); one is for SoaML 1.0.1; and four are for UPDM 2.0.1. These test cases cover approximately 94% of UML metaclasses (for UML 2.4.1) and 100% of SysML stereotypes (for SysML 1.3, not including deprecated stereotypes). (For a spreadsheet detailing this coverage, click here.)
Each test case consists of one or more diagrams and a corresponding reference “valid XMI” file for the model represented in the diagrams (Test Cases 3 and 19 each have two XMI files). All XMI conforms either to v2.1 of the XMI specification (for UML 2.3, SysML 1.2, SoaML 1.0.1 and UPDM 2.0.1) or to v2.4.1 (for UML 2.4.1 and SysML 1.3).
(* These test cases are also applicable to SysML.)
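Because every element in an XMI export identifies its metaclass with an xmi:type attribute, you can get a rough, unofficial view of the metaclass coverage described above by scanning a test case's XMI for the types it uses. The following Python sketch is not part of the test suite, and its namespace handling is deliberately loose so it works for both XMI versions:

```python
# Unofficial sketch: list the metaclasses used in an XMI file by scanning
# xmi:type attributes. The XMI namespace URI differs between XMI 2.1 and
# XMI 2.4.1, so we match on the attribute's local name rather than the URI.
import sys
import xml.etree.ElementTree as ET

def metaclasses_used(xmi_path):
    found = set()
    for elem in ET.parse(xmi_path).iter():
        for attr, value in elem.attrib.items():
            # ElementTree spells namespaced attributes as '{uri}local'.
            if attr.endswith('}type') and ':' in value:
                found.add(value)  # e.g. 'uml:Class', 'uml:Dependency'
    return sorted(found)

if __name__ == '__main__':
    for metaclass in metaclasses_used(sys.argv[1]):
        print(metaclass)
```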
To exercise a MIWG test case, a tool vendor creates a model in their tool by reproducing the diagrams for the test case. The vendor then uses the capabilities of the tool to export the model to an XMI file (v2.1 or v2.4.1, as appropriate for the test case), which is submitted as the test result for that tool.
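Before submitting a result, a vendor (or anyone inspecting an export) can quickly confirm that the file declares the expected XMI version. This sketch assumes only the standard xmi:version attribute on the root element; the file name is a placeholder:

```python
# Sanity check: report the XMI version declared by an export.
import xml.etree.ElementTree as ET

def xmi_version(path):
    root = ET.parse(path).getroot()
    for attr, value in root.attrib.items():
        # Matches both a namespaced '{uri}version' and a bare 'version'.
        if attr.endswith('}version') or attr == 'version':
            return value
    return None

print(xmi_version('testcase1-export.xmi'))  # placeholder file; expect '2.1'
```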
Each of the participating MIWG vendors has submitted results from the latest version of its tools for all the test cases that its tools support. These test submissions are maintained in a Subversion repository and may be updated from time to time to reflect new releases of the vendor tools. They can be used in two ways:
The public vendor test submission repository is available here. Log in with the user name guest and the password guest.
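For example, assuming a command-line Subversion client is installed, a checkout with the guest account could be scripted as below. The repository URL shown is only a placeholder; use the actual link above:

```python
# Illustrative checkout of the public test submissions as the guest user.
import subprocess

subprocess.run(
    ['svn', 'checkout',
     '--username', 'guest', '--password', 'guest',
     'https://example.org/miwg/test-submissions',  # placeholder URL
     'miwg-test-submissions'],                     # local target directory
    check=True,
)
```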
The repository has two directories, UML2.3-XMI2.1 for UML tests and SysML1.2-XMI2.1 for SysML tests. Within these directories, there are subdirectories for each test case, with the following content:
You can use the NIST Validator tool to assess the XMI exported from a tool against a specific MIWG test case. For a participating MIWG vendor, you can download submitted test results from the public MIWG repository, as discussed above. For other vendors, you can request that they generate appropriate test results for you to assess. (Or you can encourage them to participate in the MIWG!)
To assess model interchange using the Validator, do the following:
The Validator will then report both on the correctness of the submitted XMI file as a representation of a UML or SysML model and on how it compares with the valid XMI for the test case.
Technically, the results from the Validator do not assess interoperability directly. Instead, they assess the conformance of a tool's export to XMI and the other OMG standards used. However, it is exactly this conformance – that is, the correct implementation of these standards – that provides the basis for interoperability.
Unlike most other test cases, Test Case 3 has two XMI files: one contains a profile and the other contains a model with that profile applied. A test submission for Test Case 3 should therefore also have two corresponding exported XMI files, each of which may be separately assessed against the appropriate valid XMI file using the Validator. However, in order for the Validator to properly handle the profile application, the exported XMI for the Test Case 3 profile has to be “loaded” before the exported XMI for the Test Case 3 model is assessed. Do this as follows:
Once this is done, you can then process the Test Case 3 model XMI as described previously.
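The load order matters because the model's XMI refers to elements of the profile's XMI through cross-document href references. As a rough illustration (the file name is a placeholder), you can list the external documents a model export depends on:

```python
# Unofficial sketch: list the other XMI documents referenced by an export.
# Cross-document references in XMI take the form href="document.xmi#elementId".
import xml.etree.ElementTree as ET

def external_documents(path):
    docs = set()
    for elem in ET.parse(path).iter():
        href = elem.get('href')
        if href and '#' in href:
            docs.add(href.split('#', 1)[0])  # keep the document part only
    return sorted(docs)

print(external_documents('testcase3-model-export.xmi'))  # placeholder file
```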
For the XMI comparison, the Validator uses the canonical XMI version of the valid XMI test case. The XMI standard allows a number of different options and variability in how a model may be represented in XMI (for example, whether a property is represented as an XML attribute or element, or how XMI IDs are formed). Canonical XMI is an additional conformance point for the XMI specification that eliminates variability in generating the XMI for a model. So, there is only one way in which any model may be correctly represented in canonical XMI. This makes it much simpler to test whether two canonical XMI files represent the same model and, if not, what their differences are.
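To see why canonical XMI helps, consider the naive structural comparison sketched below: because canonical XMI admits exactly one serialization per model, any difference this walk reports reflects a genuine difference between the models. This is only an illustration of the principle, not a substitute for the Validator, and the file names are placeholders:

```python
# Naive structural diff of two canonical XMI files.
import xml.etree.ElementTree as ET

def diff(a, b, path='/'):
    """Yield human-readable differences between two element trees."""
    if a.tag != b.tag:
        yield f'{path}: element {a.tag} vs {b.tag}'
        return
    if a.attrib != b.attrib:
        yield f'{path}{a.tag}: attributes differ'
    if (a.text or '').strip() != (b.text or '').strip():
        yield f'{path}{a.tag}: text content differs'
    if len(a) != len(b):
        yield f'{path}{a.tag}: child count {len(a)} vs {len(b)}'
    for i, (child_a, child_b) in enumerate(zip(a, b)):
        yield from diff(child_a, child_b, f'{path}{a.tag}[{i}]/')

reference = ET.parse('testcase1-canonical.xmi').getroot()       # placeholder
submission = ET.parse('vendor-export-canonical.xmi').getroot()  # placeholder
for difference in diff(reference, submission):
    print(difference)
```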
At this time, the tools that import canonical XMI include Atego Artisan Studio and IBM RSx. No tools export canonical XMI directly.
Currently, the NIST Validator tools are targeted at the tool vendor participants of the MIWG – they are focused on helping the vendors fix interoperability problems. Because of this focus, it isn't always easy for a typical tool user to determine, through use of the Validator, the interoperability prospects of the tools they are considering. To address the needs of typical tool users, we have begun development of a new method of reporting results from interoperability testing.
The new method will reduce the effort required to run tests, and it will provide a very different perspective on the results of testing. In the new method, a user will select a tool from among those participating in the MIWG and receive a report summarizing the interoperability concerns identified in the testing of that tool. The report, in spreadsheet format, will provide a page of information for each of the (currently) 16 tests of the MIWG test suite. Each page will provide the following information: