Model Interchange Wiki

Welcome to the Model Interchange Wiki, the public portal for the Model Interchange Working Group (MIWG). The MIWG is chartered to:

  • demonstrate model interchange among MOF-based tools that implement modeling languages such as UML, SysML and UPDM, and that use XMI as the interchange standard
  • identify and resolve interchange issues associated with specifications and vendor implementations
  • establish a demonstration infrastructure to support the above, including validation tools, demonstration processes and guidelines

Since December 2008, the MIWG has defined a test suite of 48 test cases to demonstrate interchange of UML, SysML, SoaML and UPDM models between different modeling tools. Participating tool vendors have agreed to publicly post XMI exports from their tools for the MIWG test cases. The objective is to enable the public at large to assess the model interchange capability of modeling tools by comparing the vendor XMI exports to the expected reference XMI file for each test case. These assessments may be used for a variety of purposes, including:

  • Evaluation of the interchange capability of a particular tool as part of a tool selection process
  • Assessment of the model interchange capabilities and limitations of a tool or tools to set expectations and to develop interchange strategies for the use of the tools on a project

The test suite and guidance for how to assess interchange capability using this test suite are summarized below. Please send any questions or requests for further information to miwg-info@omg.org.

Quick Links

Press Releases

Participating Tool Vendors

Vendor        | Point of Contact               | Tool
PTC (Atego)   | Simon Moore                    | Artisan® Studio
IBM           |                                | RSx
IBM/Sodius    | Eldad Palachi / Mickael Albert | IBM Rhapsody
No Magic      | Nerijus Jankevicius            | MagicDraw
SOFTEAM       | Etienne Brosse                 | Modelio
Sparx Systems | J. D. Baker                    | Enterprise Architect

Other Participants

  • Peter Denno, NIST (Validator)
  • Sandy Friedenthal, SAF Consulting
  • Leonard Levine, DISA
  • Pete Rivett, Adaptive
  • Ed Seidewitz, nMeta (Model Interchange SIG Chair)

Test Suite

The MIWG Test Suite currently consists of 48 test cases. Thirty of these are for UML 2.4.1 (twenty-five also have UML 2.3 versions), some of which are also applicable to SysML; thirteen are for SysML 1.3 (ten also have SysML 1.2 versions); one is for SoaML 1.0.1; and four are for UPDM 2.0.1. These test cases cover approximately 94% of UML metaclasses (for UML 2.4.1) and 100% of SysML stereotypes (for SysML 1.3, not including deprecated stereotypes). (For a spreadsheet detailing this coverage, click here.)

Each test case consists of one or more diagrams and a corresponding reference “valid XMI” file for the model represented in the diagrams (for Test Cases 3 and 19 there are two XMI files each). All XMI conforms to either v2.1 of the XMI specification (for UML 2.3, SysML 1.2, SoaML 1.0.1 and UPDM 2.0.1) or v2.4.1 (for UML 2.4.1 and SysML 1.3).
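
To make the file format concrete, the sketch below parses a minimal, hand-written XMI 2.1 fragment and lists the UML metaclasses it references. The model content and IDs are illustrative rather than taken from an actual test case, and the UML namespace URI is a representative placeholder:

```python
import xml.etree.ElementTree as ET

# A minimal, illustrative XMI 2.1 document: one UML model containing one
# class. The XMI namespace URI is the one defined by the XMI 2.1
# specification; the UML namespace URI is a representative placeholder
# (real files use the normative URI for the exact UML version).
XMI_SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<xmi:XMI xmi:version="2.1"
         xmlns:xmi="http://schema.omg.org/spec/XMI/2.1"
         xmlns:uml="http://schema.omg.org/spec/UML/2.3">
  <uml:Model xmi:id="_model" name="Example">
    <packagedElement xmi:type="uml:Class" xmi:id="_class1" name="Customer"/>
  </uml:Model>
</xmi:XMI>"""

XMI_NS = "http://schema.omg.org/spec/XMI/2.1"

# Parse as bytes so the encoding declaration in the header is honored.
root = ET.fromstring(XMI_SAMPLE.encode("utf-8"))

# Collect every xmi:type attribute to see which UML metaclasses appear.
metaclasses = {
    element.get(f"{{{XMI_NS}}}type")
    for element in root.iter()
    if element.get(f"{{{XMI_NS}}}type") is not None
}
print(sorted(metaclasses))  # ['uml:Class']
```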

UML Test Cases

(* These test cases are also applicable to SysML.)

SysML Test Cases

SoaML Test Cases

UPDM Test Cases

  • Test Case 18 (UPDM 2.0.1) - OV-2 Performer
  • Test Case 20 (UPDM 2.0.1) - CV-1 Architectural Description
  • Test Case 21 (UPDM 2.0.1) - CV-2 Capability Taxonomy
  • Test Case 22 (UPDM 2.0.1) - CV-4 Capability Dependencies
  • Test Case 40 - Reserved for future use
  • Test Case 41 - Reserved for future use
  • Test Case 42 - Reserved for future use
  • Test Case 43 - Reserved for future use

How to Use the Vendor-Provided Test Submissions

To exercise a MIWG test case, a tool vendor creates a model in their tool by reproducing the diagrams for the test case. The vendor then uses the capabilities of the tool to export the model to an XMI 2.1 file, which is submitted as the result of the test for that tool.

Each of the participating MIWG vendors has submitted results from the latest version of their tools for all the test cases that their tools support. These test submissions are maintained in a Subversion repository and may be updated from time to time to reflect new releases of the vendor tools. They can be used in two ways:

  1. You can download a vendor submission for a test case from this repository at any time and run it through the NIST Validator (as described below) in order to assess the XMI conformance of the vendor's tool for that test case.
  2. You can take the submitted XMI for a test case exported from one tool and attempt to import it into a different tool, in order to assess the actual ability to interchange models between those tools in the area covered by the test case.

The public vendor test submission repository is available here. Log in using the user name guest with password guest.
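
For scripted access, the repository can also be checked out with a standard Subversion client using those guest credentials. A minimal sketch follows; the repository URL shown is a hypothetical placeholder for the actual URL behind the link above:

```python
import subprocess

# Hypothetical placeholder for the actual repository URL linked above.
REPO_URL = "https://example.org/miwg-test-suite"

# Check out the repository using the guest credentials given above.
subprocess.run(
    ["svn", "checkout", REPO_URL, "miwg-test-suite",
     "--username", "guest", "--password", "guest",
     "--non-interactive"],
    check=True,
)
```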

The repository has two directories, UML2.3-XMI2.1 for UML tests and SysML1.2-XMI2.1 for SysML tests. Within these directories, there are subdirectories for each test case, with the following content (a script for walking this layout is sketched after the listing):

  • Test-Case-N/ – where N is the number (1, 2, 3, …) of the test case
    • diagram.png – the test case diagram
    • valid.xmi – valid XMI for the test case
    • valid-canonical.xmi – canonical XMI version of the valid XMI for the test case
    • Submissions/ – a directory for vendor uploads of results for Test-Case-N
      • toolnameX/ – a directory for results from a specific vendor's tool (note that the toolname includes version information)
        • our-diagram.png – screenshot of the vendor's implementation of diagram.png
        • our-export.xmi – the vendor's XMI corresponding to its implementation of diagram.png
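
The layout above is regular enough to traverse mechanically. Below is a minimal sketch that pairs each vendor export with the reference XMI for its test case; it assumes the repository has been checked out locally to miwg-test-suite, as in the earlier checkout sketch:

```python
from pathlib import Path

# Assumes the repository was checked out to this directory (see the
# checkout sketch above); adjust to your local path.
ROOT = Path("miwg-test-suite") / "UML2.3-XMI2.1"

# Pair each vendor export with the reference ("valid") XMI of its test case.
pairs = []
for test_dir in sorted(ROOT.glob("Test-Case-*")):
    valid_xmi = test_dir / "valid.xmi"
    for tool_dir in sorted((test_dir / "Submissions").glob("*")):
        export = tool_dir / "our-export.xmi"
        if tool_dir.is_dir() and export.exists():
            pairs.append((test_dir.name, tool_dir.name, export, valid_xmi))

for test, tool, export, valid in pairs:
    print(f"{test}: {tool} -> compare {export} against {valid}")
```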

How to Use the NIST Validator to Assess Model Interchange

You can use the Validator tool from NIST in order to assess the XMI exported from a tool against a specific MIWG test case. For a participating MIWG vendor, you can download submitted test results from the public MIWG repository, as discussed above. For other vendors, you can request that they generate appropriate test results for you to assess. (Or you can encourage them to participate in the MIWG!)

To assess model interchange using the Validator, do the following:

  1. Go to the Validator Web page here.
  2. Enter the path for the XMI file to be assessed, as exported from the tool being tested.
  3. Select the MIWG test case to which it is to be compared.
  4. Click on “Upload and Process”.

The Validator will then report both on the correctness of the submitted XMI file as a representation of a UML or SysML model and on how it compares with the valid XMI for the test case.

Technically, the results from the Validator do not assess interoperability directly. Instead, they assess the conformance of the export from the tool to XMI and other OMG standards used. However, it is such conformance – that is, the correct implementation of these standards – that provides the basis for interoperability.
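
If you have many exported files to check, the upload can be scripted. The sketch below uses the Python requests library; the endpoint URL and form field names are hypothetical placeholders, since the actual ones are defined by the Validator page itself:

```python
import requests

# Hypothetical endpoint and field names; the real ones are defined by the
# Validator web page linked above.
VALIDATOR_URL = "https://example.nist.gov/validator/upload"

def validate(xmi_path: str, test_case: str) -> str:
    """Upload an exported XMI file and return the Validator's report."""
    with open(xmi_path, "rb") as f:
        response = requests.post(
            VALIDATOR_URL,
            files={"file": f},             # the XMI file to be assessed
            data={"testcase": test_case},  # the MIWG test case to compare against
            timeout=60,
        )
    response.raise_for_status()
    return response.text

print(validate("our-export.xmi", "Test-Case-1"))
```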

Special Instructions for Test Case 3

Unlike other test cases, Test Case 3 has two XMI files. One contains a profile and the other contains a model with that profile applied. A test submission for Test Case 3 should therefore also have two corresponding exported XMI files, each of which may be separately assessed against the appropriate valid XMI file using the Validator. However, in order for the Validator to properly handle the profile application, the exported XMI for the Test Case 3 profile has to be “loaded” before the exported XMI for the Test Case 3 model is assessed. Do this as follows:

  1. Go to the Validator “Load Profiles” page here.
  2. Enter valid.profile.xmi in the field “Reference using this URI”.
  3. Enter the path for the exported XMI file for the Test Case 3 profile.
  4. Click on “Upload and Process”.

Once this is done, you can then process the Test Case 3 model XMI as described previously.

Notes on XMI Comparison and Canonical XMI

For the XMI comparison, the Validator uses the canonical XMI version of the valid XMI test case. The XMI standard allows a number of different options and variability in how a model may be represented in XMI (for example, whether a property is represented as an XML attribute or element, or how XMI IDs are formed). Canonical XMI is an additional conformance point for the XMI specification that eliminates variability in generating the XMI for a model. So, there is only one way in which any model may be correctly represented in canonical XMI. This makes it much simpler to test whether two canonical XMI files represent the same model and, if not, what their differences are.
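
Since canonical XMI admits only one serialization per model, two canonical files representing the same model should be textually identical, so even a plain text diff reports real model differences. A minimal sketch, assuming both files are already in canonical form (the file names are illustrative):

```python
import difflib

def compare_canonical(path_a: str, path_b: str) -> None:
    """Print a unified diff of two canonical XMI files.

    Only meaningful if both files are in canonical form: the same model
    then serializes to the same text, so any diff line is a real
    difference between the models.
    """
    with open(path_a, encoding="utf-8") as fa, open(path_b, encoding="utf-8") as fb:
        a_lines, b_lines = fa.readlines(), fb.readlines()
    diff = list(difflib.unified_diff(a_lines, b_lines, fromfile=path_a, tofile=path_b))
    if diff:
        print("".join(diff))
    else:
        print("Files are identical: the models match.")

compare_canonical("valid-canonical.xmi", "our-export-canonical.xmi")
```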

At this time, the tools that import canonical XMI include Atego Artisan Studio and IBM RSx. No tools export canonical XMI directly.

Interoperability Reports for Tool Users

Currently, the NIST Validator tools are targeted at the tool vendor participants of the MIWG; the tools are focused on helping the vendors fix problems in interoperability. Due to this focus, it isn't always easy for a typical tool user to determine, through use of the Validator, the prospects for interoperability of the tools they are considering. To address the needs of typical tool users, we have begun development of a new method of reporting results from interoperability testing.

The new method will reduce the effort required to run tests, and it will provide a very different perspective on the results of testing. In the new method, a user will select a tool from among those participating in the MIWG and receive a report summarizing the interoperability concerns identified in the testing of that tool. The report, which will be in a spreadsheet format, will provide a page of information for each of the (currently) 16 tests of the MIWG test suite. Each page will provide the following information:

  • A description of the test domain (e.g. Class diagrams, composite structure, etc.).
  • An enumeration of the objects and their properties covered in the test case.
  • A figure depicting the diagram on which the test is based.
  • A description of the interoperability concerns encountered, including:
    1. A description of the concern, intended to be intelligible to the typical tool user.
    2. Identification of the object types on which the problem is occurring.
    3. An assessment of the severity of the concern (“quite significant”, “moderately significant”, “minor”).