Sunday, July 31, 2016

CFP: 2017 Australasian Computer Science Week (ACSW 2017)


2017 Australasian Computer Science Week (ACSW 2017)
Geelong, Victoria, Australia (only 70 km from Melbourne)
January 31 – February 3, 2017

Paper submission : August 8, 2016
Author notification: October 17, 2016
Camera-ready full papers: November 7, 2016


Main Features:

*** Industry track for research collaborations
*** World class keynote speakers
*** A number of Special Issues
*** A flagship annual conference in the Oceania region
*** Student travel grants
*** Close to the Australian Open and the Great Ocean Road
The Australasian Computer Science Week (ACSW) is the premier event for Computer Science researchers in Australasia, organised by the Computing Research and Education (CORE) Association of Australasia.
ACSW consists of several conferences covering a wide range of topics in Computer Science and related areas. It is attended by many national and international delegates, including HDR students and distinguished computer science academics from around the world. The conference week has run in some form continuously since 1978, making it one of the longest-running conferences in Computer Science.
Authors are invited to submit papers presenting original and unpublished research on topics directly relevant to the respective conference.
As with previous years, registration for ACSW will enable delegates to attend sessions in any conference participating or co-located in the Australasian Computer Science Week.
The proceedings of this event are planned to be published in the ACM Digital Library.

List of ACSW 2017 satellite conferences (more details can be found at the ACSW conference website)

1.     The 39th Australasian Computer Science Conference (ACSC 2017)
2.     The Twelfth Asia-Pacific Conference on Conceptual Modelling (APCCM 2017)
3.     Eighteenth Australasian Computing Education Conference (ACE 2017)
4.     Australasian Information Security Conference (AISC 2017)
5.     14th Australasian Symposium on Parallel and Distributed Computing (AusPDC 2017)
6.     17th Australasian User Interface Conference (AUIC)
7.     Australasian Web Conference (AWC) 2017
8.     Interactive Entertainment 2017
9.     Australasian Computing Doctoral Consortium (ACDC17)
10.  Australasian Early Career Researchers Workshop (AECRW)
11.  Australasian Workshop on Health Informatics and Knowledge Management

Special Issues

Authors of selected high-quality ACSW 2017 submissions will be invited to substantially extend their papers for submission to the following Special Issues:

1.     International Journal of Computers and Applications, Special Issue for ACSW17 (confirmed).
2.     IEEE Transactions on Big Data, Foundations for Big Data Security and Privacy (confirmed)
3.     Future Generation Computer Systems, Social networking Big Data (confirmed)
4.     More Special Issues will be announced at the conference website

For enquiries about ACSW 2017, please contact the executive general chair, Dr Shui Yu. For details about a specific satellite conference, please contact the respective TPC chairs.

Saturday, May 14, 2016

CFP: ReVISE'16 - Requirements for Visualizations in Systems Engineering

Call for papers: ReVISE'16 - Requirements for Visualizations in Systems Engineering

Workshop at the Requirements Engineering Conference RE'16 in Beijing, China

Submissions until: June 20, 2016
Notification to authors: July 8, 2016
Camera ready version: July 24, 2016
Workshop: Sept 13, 2016

Visual knowledge representations and data visualizations form a
particular kind of information systems in their own right, which deserve
a high degree of scientific interest. Information systems for
visualization are, e.g., analytical diagrams embedded into user
interfaces, model editors and domain-specific model visualizations,
dashboards, and interactive info-graphics. These kinds of systems are
characterized by specific functionalities that come with their own class
of requirements.

Possible research directions for submissions to the workshop include,
but are not restricted to:
- How can information demands towards visualizations be expressed as
part of a system engineering procedure?
- How can it be methodically ensured that visualizations are understood
unambiguously by different people?
- How can appropriate visualization types for the support of specific
system engineering tasks be systematically identified?
- What do domain-specific software-development procedures look like in
which visualizations are created as part of a model-driven visualization
(MDV) process?
- In which way do different cultural backgrounds of visualization users
potentially influence the specification of requirements towards
visualizations?
Additionally, if your paper addresses one or more of the following
topics, please consider submitting it:
- Analysis of the quality and efficacy of visualizations
- Notations and symbols in conceptual models
- Design concepts for interactive visualizations
- Evaluation and improvement of existing visualization techniques
- Cognitive aspects of communicating knowledge via visualizations
- Use of models and visual notations in practice
- Innovative interface concepts for user interaction with software
- Software-supported creation and use of information graphics
- Tool support for creating interactive visualizations
- Use of visualization in business process modeling
- Use of visualizations in collaborative settings
- Teaching and training of visualization design and use

Workshop Format:
The 1-day workshop will consist of a keynote, paper presentations for
full and short papers, as well as a demo-oriented session where recent
applications and prototypes displaying novel ideas in visualization
research are showcased. Each demo will be introduced in a short
presentation, and then demonstrated live with the running software.

The following types of submissions will be accepted:
- Full papers, up to 10 pages
- Short papers (work in progress, research agendas, industry reports),
up to 6 pages
- Demo papers (demos, prototypes), 2 to 4 pages

Please upload your submission at the workshop submission site, and use
the indicated formatting style to format your work.

Each submission will be peer-reviewed by at least 3 members of the
program committee. Based on the reviews and review scores the organizing
committee will make a selection of papers to be accepted for
publication. The workshop proceedings are planned to be published in the
IEEE digital library.

Submissions for the demo track do not need to fulfill the same degree of
scientific justification as paper submissions, and do not have to
explicitly address individual research questions. In turn, demo
submissions are required to be highly innovative and distinctively
creative compared to the state of the art of existing approaches.

Program Committee:
Craig Anslow, Middlesex University London
Ross Brown, Queensland University of Technology
Sepideh Ghanavati, Carnegie Mellon University; Luxembourg Institute of
Science & Technology
Miguel Goulão, Universidade Nova de Lisboa
Irit Hadar, University of Haifa
Dimitris Karagiannis, University of Vienna
Sybren de Kinderen, University of Duisburg-Essen
Simone Kriglstein, University of Vienna
Meira Levy, Shenkar College of Engineering and Design, Ramat Gan
Alexander Nolte, Ruhr-Universität Bochum
Erik Proper, Radboud University; Luxembourg Institute of Science & Technology
Hajo Reijers, VU University Amsterdam
Pnina Soffer, University of Haifa
Jean-Sébastien Sottet, Luxembourg Institute of Science & Technology
Stefan Strecker, FernUniversität in Hagen
Barbara Weber, University of Innsbruck
William Wong, Middlesex University London

Jens Gulden, University of Duisburg-Essen, Information Systems and
Enterprise Modeling, Universitätsstr. 9, 45141 Essen, Germany, Tel: +49
201 183-2719,

Dirk van der Linden, University of Haifa, Department of Information
Systems, Mount Carmel, Haifa 31905, Israel, Tel: +972 4 8288366,

Banu Aysolmaz, VU University of Amsterdam, Business Informatics Group,
De Boelelaan 1105, 1081 HV Amsterdam, The Netherlands, Tel: +31 20 59

Sunday, May 8, 2016

Wednesday, April 13, 2016

Paper: Augmenting process elicitation with visual priming: An empirical exploration of user behaviour and modelling outcomes

We have just recently published a new journal paper in the Information Systems Journal on our virtual worlds elicitation work in BPM - "Augmenting process elicitation with visual priming: An empirical exploration of user behaviour and modelling outcomes."

QUT Eprints is here, with Journal DOI.  The work was performed by my PhD student Joel Harman along with my collaborators at QUT, Metasonic and University of Vienna.

Well done Joel!


Business process models have become an effective way of examining business practices to identify areas for improvement. While common information gathering approaches are generally efficacious, they can be quite time consuming and have the risk of developing inaccuracies when information is forgotten or incorrectly interpreted by analysts. In this study, the potential of a role-playing approach to process elicitation and specification has been examined. This method allows stakeholders to enter a virtual world and role-play actions similarly to how they would in reality. As actions are completed, a model is automatically developed, removing the need for stakeholders to learn and understand a modelling grammar. An empirical investigation comparing both the modelling outputs and participant behaviour of this virtual world role-play elicitor with an S-BPM process modelling tool found that while the modelling approaches of the two groups varied greatly, the virtual world elicitor may not only improve both the number of individual process task steps remembered and the correctness of task ordering, but also provide a reduction in the time required for stakeholders to model a process view.

Monday, March 21, 2016

1st CFP: TAProViz’16 5th International Workshop on Theory and Application of Visualizations and Human-centric Aspects in Processes

TAProViz’16 5th International Workshop on Theory and Application of Visualizations and Human-centric Aspects in Processes

Rio de Janeiro, Brazil - 19 September 2016 

In conjunction with the 14th International Conference on Business Process Management, BPM2016

Call for Papers

Visualizations can make the structure and dependencies between elements in processes accessible in order to support users who need to analyze process models and their instances.  
However, effectively visualizing processes in a user-friendly way is often a big challenge, especially for complex process models which can consist of hundreds of process components (e.g., process activities, data flows, and resources) and thousands of running process instances in different execution states. 

Many challenges remain to be addressed within the broad area of process visualization, human interaction and user led design such as: scalability, human-computer interaction, cognitive aspects, applicability of different approaches, collaboration, process evolution, run-time requirements of process instances and applications, user-engagement etc.  

Topics of interest include (but are not limited to): 

Visual Metaphors in Processes 
Visual Design and Aesthetics for Processes 
Visualization of Dynamic Data in Processes 
Change Visualization for Processes 
Interface and Interaction Techniques for Process Visualization  
Visualization Techniques for Collaboration and Distributed Processes  
Visualization of Large-scale Processes 
Cognition and Perception in Process Visualization 
Evaluation and User Studies of Process Visualization 
Evaluation Methods for Human Aspects in PAIS 
Visual Modeling Languages 
Analysis Techniques and Visualization for Processes 
Process Visualization on Large Screens 
Mobile Process Visualization 
Visualization Tools and Systems for Processes 
Visualization Techniques for Processes 
Process Visualization and Sonification 
Virtual World Process Visualization 
Immersive Process Modeling Approaches 
Human Computer Interaction Design Applied to Process Systems 
3D Process Visualization Approaches 
Human-centric aspects in business process management 
User-centered design for BPM 
User Interface design for Processes 

Format of the Workshop
The half day workshop will comprise accepted papers and tool evaluations. Papers should be submitted in advance and will be reviewed by at least three members of the program committee.  

This year the programme will also include a new feature. Part of the workshop time (depending on the number of prototype submissions) will be set aside for focus-group assessments of tools. We will ask tool report authors, successful workshop paper authors, and panel members attending BPM to assist in the assessment of demonstrated visualization techniques and software. This evaluation process will be a service to attendees, as these heuristic assessments can later be written up as separate papers, or by the workshop chairs as an aggregated workshop outcome. Such evaluations will be an exciting addition to the workshop, as people experienced in Information Visualization, BPM, HCI and related fields will provide detailed feedback on your prototypes. The evaluation approach is largely in the hands of the tool report writers but, at a minimum, should involve direct interaction with your software and some form of validation via a questionnaire. 

All accepted papers will appear in the workshop proceedings published by Springer in the Lecture Notes in Business Information Processing (LNBIP) series. There will be a single LNBIP volume dedicated to the proceedings of all BPM workshops. As this volume will appear after the conference, there will be informal proceedings during the workshop. At least one author for each accepted paper should register for the workshop and present the paper.  

Important Dates
Deadline for workshop paper submissions: 27 May 2016 
Notification of Acceptance: 27 June 2016 
Camera-ready version: 18 July 2016 
TAProViz Workshop: 19 September 2016 
Paper Submission
Prospective authors are invited to submit papers for presentation in any of the areas listed above. 

Three types of submissions are possible:  

(1) full papers (12 pages long) reporting mature research results  
(2) position papers reporting research that may be in preliminary stage that has not yet been evaluated  
(3) tool reports, to be evaluated at the workshop 

Position papers and tool reports should be no longer than 6 pages. Tool reports should include a brief evaluation plan as an appendix, for the evaluation session at the workshop on the day. 

Only papers in English will be accepted, and they must present original research contributions not concurrently submitted elsewhere. Papers should be submitted in the LNBIP format. The title page must contain a short abstract, a classification of the topics covered, preferably using the list of topics above, and an indication of the submission category (regular paper/position paper/tool report). 

All accepted workshop papers will be published by Springer as a post-workshop proceedings volume in the series Lecture Notes in Business Information Processing (LNBIP). Hard copies of these proceedings will be shipped to all registered participants approximately four months after the workshops, while preliminary proceedings will be distributed during the workshop. 

Submitted papers will be evaluated, in a double blind manner, on the basis of significance, originality, technical quality, and exposition. Papers should clearly establish their research contribution and the relation to the theory and application of process visualization. 

Papers (in PDF format) should be submitted electronically via EasyChair. 

Acceptance of a paper implies that at least one of the authors will register for BPM2016 and present the paper at the TAProViz workshop. 

Further workshop information is available from the workshop website.

Hope to see you at TAProViz'16!

Thanks and best regards,

Ross Brown
Simone Kriglstein
Stefanie Rinderle-Ma

TAProViz Organising Committee

Tuesday, November 24, 2015

Paper: TAProViz 2015 Proceedings!

Just had our TAProViz 2015 workshop from BPM2015 proceedings preface sent off to Springer for printing.

PDF available here.


Sunday, November 8, 2015

Paper Review: Notes from "Using thematic analysis in psychology"

Using thematic analysis in psychology

Virginia Braun & Victoria Clarke

Qualitative Research in Psychology


Thematic analysis is a poorly demarcated, rarely acknowledged, yet widely used qualitative analytic method within psychology. In this paper, we argue that it offers an accessible and theoretically flexible approach to analysing qualitative data. We outline what thematic analysis is, locating it in relation to other qualitative analytic methods that search for themes or patterns, and in relation to different epistemological and ontological positions. We then provide clear guidelines to those wanting to start thematic analysis, or conduct it in a more deliberate and rigorous way, and consider potential pitfalls in conducting thematic analysis. Finally, we outline the disadvantages and advantages of thematic analysis. We conclude by advocating thematic analysis as a useful and flexible method for qualitative research in and beyond psychology. 


A useful paper that plugs the gap left by the absence of an introductory treatment of thematic analysis for non-qualitative researchers. While I have reservations about this technique in contrast to the quantitative methods I have usually used in software engineering, computer graphics and elsewhere, it does enable contextualisation of the thoughts of participants in user studies.

Reading starts here:

There seems to be a lack of consensus on the placement of thematic analysis: whether it is or is not a specific method. The authors argue that it is a specific method.

They aim to define it well in this paper, without restricting its flexibility.

Corpus refers to all data, while data set is the data to be used in analysis.  

Key Point - "Thematic analysis is a method for identifying, analysing and reporting patterns (themes) within data." Obvious, but it has to be stated.

They claim that it is a method of analysis, and that other named approaches are essentially still a form of thematic analysis.

They comment that themes should not "emerge"; this is a passive framing that denies the place of the researcher. They say that themes reside in our heads, from our thinking about the data. ME: I wonder if a theme is the product of an interaction with the data, which probably puts me in a social constructivist position. Does the data turn into an entity in my phenomenological experience? No, that is too silly. But one could imagine that I am giving the data some agency in some way. Something to think about...

"What counts as a theme? A theme captures something important about the data in relation to the research question, and represents some level of patterned response or meaning within the data set."

They suggest there is no hard percentage of data that establishes a theme. ME: can this get any more vague? This suggests there is no standard computational component to thematic analysis, though they do suggest prevalence as a heuristic. I guess a single instance is at least evidence against the absence of that theme in the rest of the cohort.
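The prevalence heuristic can at least be made concrete, even if the method itself has no standard computational component. A minimal sketch, counting the fraction of data items in which a theme is coded at least once (the data and theme names are hypothetical, not from the paper):

```python
from collections import Counter

# Hypothetical coded data set: each item maps to the set of themes
# coded in it at least once (all names are illustrative).
coded_items = {
    "interview_1": {"trust", "workload"},
    "interview_2": {"workload"},
    "interview_3": {"trust", "autonomy", "workload"},
}

# Prevalence here = fraction of data items containing the theme; Braun &
# Clarke note this is only one of several ways prevalence could be counted.
counts = Counter(theme for themes in coded_items.values() for theme in themes)
prevalence = {theme: n / len(coded_items) for theme, n in counts.items()}

print(prevalence["workload"])  # coded in all three items -> 1.0
```

The point of the exercise is that each choice of denominator (items, extracts, words) gives a different prevalence figure, which is exactly the consistency problem raised below.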

"Part of the flexibility of thematic analysis is that it allows you to determine themes (and prevalence) in a number of ways. What is important is that you are consistent in how you do this within any particular analysis."

ME: I am struggling here. How do you take a measurement without a clear, comparable metric that generalises to other cohorts? While I get the primacy of consistency, one can be consistently wrong as well.

They then compare and contrast top-down theoretical approaches with bottom-up inductive processes. Inductive is considered richer; top-down gives a more detailed analysis of a particular theme.

Semantic or latent themes: semantic is more an analysis of the themes present in the data prima facie, while latent analysis looks for underlying ideas behind the themes, and is thus more interpretive. They also note it is more constructivist, though not necessarily completely so. ME: man, my poor quantitative brain is struggling a little here. :-) While I get that phenomenological outcomes are going to be inexact and dynamic, I am finding this a little hard going.

"Those approaches which consider specific aspects, latent themes and are constructionist tend to often cluster together, while those that consider meanings across the whole data set, semantic themes, and are realist, often cluster together."

There are 6 guidelines (not rules, of course! :-)), with some comments or extracts from their descriptions of these phases.

1. Familiarizing yourself with your data:
2. Generating initial codes:
3. Searching for themes:
4. Reviewing themes:
5. Defining and naming themes:
6. Producing the report:

Phase 1

They recommend re-reading the data. Mark down codes now, as part of an interpretive process. Make sure not to over-process transcriptions, as punctuation can matter at this stage. Interestingly, they suggest transcribing personally (my IS school hires people) in order to develop deep reading skills. Good point.

Phase 2

"remember that you can code individual extracts of data in as many different 'themes' as they fit into, so an extract may be uncoded, coded once, or coded many times, as relevant."  ME: This seems to be an m-to-m mapping between extracts and themes. I would need an example, but if many themes use elements that reappear, then that must be problematic for identifying themes as distinct entities. It just seems too incoherent.
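My m-to-m reading of this can be sketched as a data structure: each extract carries zero, one, or many theme codes, and inverting the relation shows which extracts support each theme (extract IDs and theme names are hypothetical):

```python
# Hypothetical extracts coded into themes; an extract may carry zero,
# one, or many codes -- a many-to-many relation, per Braun & Clarke.
extract_to_themes = {
    "e1": ["time pressure", "tool frustration"],
    "e2": [],                      # uncoded, which is also allowed
    "e3": ["tool frustration"],
}

# Invert the relation: theme -> list of supporting extracts.
theme_to_extracts = {}
for extract, themes in extract_to_themes.items():
    for theme in themes:
        theme_to_extracts.setdefault(theme, []).append(extract)

print(theme_to_extracts["tool frustration"])  # ['e1', 'e3']
```

Seen this way, my worry above is that the same extract appearing under several themes weakens each theme's claim to be a distinct entity, since the inverted lists overlap.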

Phase 3

Key point, do not throw out any themes, just keep them in the basket for later.

Phase 4 

Make sure your themes fit all the data, which to me is almost impossible, but it may work as an inexact process.

Stop refining when there is no new contribution to the themes.  ME: Almost an embedded grounded theory exercise.

Phase 5

The key point to take home is the need for the themes to be an essential description of the data, with a punchy and clear title for communicating the ideas.

Phase 6

The report provides sufficient evidence of the presence of the themes within the data set analysed.

Needs to go beyond description to make an argument about the data analysed.  ME: Key point here.

Common Problems

Lack of actual analysis - obvious, but I have seen it a lot in tool evaluations.  Maybe because the response to the tool is not expected to be that complex?

"The third is a weak or unconvincing analysis, where the themes do not appear to work, where there is too much overlap between themes, or where the themes are not internally coherent and consistent."


"'anecdotalism' in qualitative research, where one or a few instances of a phenomenon are reified into a pattern or theme, when it or they are actually idiosyncratic."

Consider alternative thematic explanations in the analysis, to show deep insight into the data and its context.

"One of the criticisms of qualitative research from those outside the field is the perception that ‘anything goes’." ME: The authors have insight! :-) 

As can be seen from my comments, I have some skepticism about the results of such thematic analysis. However, I do grant it validity for its ability to contextualise quantitative results. I aim to apply it with more rigour in my future research, as my work usually involves software tool analysis.