The creation of a process model faces the challenge of constructing a syntactically correct entity that accurately reflects the semantics of reality and is understandable. This paper proposes a framework called Model Judge, aimed at the two main actors in learning process model creation: novice modellers and instructors. For modellers, the platform enables the automatic validation of process models created from a textual description, providing explanations about quality issues in the model. Model Judge can provide diagnostics regarding model structure, writing style, and semantics by aligning annotated textual descriptions to models. For instructors, the platform facilitates the creation of modelling exercises by providing an editor for annotating the main parts of a textual description, which is empowered with Natural Language Processing (NLP) capabilities so that the annotation effort is minimized. So far, around 300 students in process modelling courses at five different universities around the world have used the platform. The feedback gathered from some of these courses shows good potential in helping students to improve their learning experience, which might, in turn, impact process model quality and understandability. Moreover, our results show that instructors can benefit from insights into the evolution of modelling processes, including the quality issues arising for individual students, and can also discover tendencies in groups of students. Although the framework has been applied to process model creation, it could be extrapolated to other contexts where the creation of models from a textual description plays an important role.
Published in IEEE Transactions on Learning Technologies. In press.