RE12 Keynote Talk by Stephen Fickas
Mobile and Agile: Why can't they get along?
Steve Fickas, University of Oregon (firstname.lastname@example.org)
My group has had success using requirements elicitation
methods as part of an agile process. For us, incremental
requirements engineering fits well with incremental design. However,
we have hit a roadblock when the focus turns to mobile applications;
it is unrealistic to attempt frequent incremental field-tests. So it would
seem that mobile and agile are at odds.
In this talk I will discuss an attempt to break out of the field-test
bind for mobile applications. We have begun to look at simulation
technology that has potential to support frequent and incremental
changes without the headache of going to the field. Our simulation
starting-point is relatively cheap and simple game engines. We have
made some progress that I will report on. I will also place this work in
the larger context of past research on specification and requirements.
The talk is packaged as a screencast and broken into 3 separate screencast clips. A TOC is available (see comment on first video) for quickly moving through the slides:
You may need to scroll down on the page to see the 3 separate clips.
The PDF version of the slides is available as well:
PDF slides. Unlike the screencast clips, the slides do not include any embedded videos.
I wrote down the following questions from the Q&A session that followed the talk:
- How realistic does the simulation have to be? There is some evidence
that making a simulation overly realistic can be counterproductive;
see RE'03 (Mavin & Maiden).
Answer: we are trying to fool a native app with sensor data it views
as real. For instance, some students are working on a where-am-I app that allows
the user to take a picture of a landmark for back-end processing. We would like
to test this app in our hybrid simulation. The most straightforward approach is to take
a picture of the current Unity scene. I think this needs to be realistic
enough to fool the image analysis algorithms. Ditto for ambient noise
and accelerometer data. Perhaps a mismatch between our agile needs and
the requirements elicitation in the Mavin and Maiden paper?
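To make the "fool the app with sensor data" idea concrete, here is a minimal sketch of synthesizing an accelerometer stream for an app under test. Everything here is a hypothetical illustration, not part of the actual tooling: the function names (`synthetic_accelerometer`, `feed_app`) are invented, and the walking model is deliberately crude (a vertical sinusoid at the step frequency plus Gaussian noise).

```python
import math
import random

def synthetic_accelerometer(steps_per_sec=2.0, duration_s=3.0, rate_hz=50):
    """Yield (t, ax, ay, az) samples that roughly mimic walking:
    a dominant vertical oscillation at the step frequency plus noise.
    Units are m/s^2; this model is intentionally simplistic."""
    n = int(duration_s * rate_hz)
    for i in range(n):
        t = i / rate_hz
        # gravity plus a step-frequency oscillation on the vertical axis
        az = 9.81 + 2.0 * math.sin(2 * math.pi * steps_per_sec * t)
        ax = random.gauss(0.0, 0.3)
        ay = random.gauss(0.0, 0.3)
        yield (t, ax, ay, az + random.gauss(0.0, 0.2))

def feed_app(samples, on_sample):
    # Stand-in for the layer that would hand simulated readings
    # to the native app through its normal sensor API.
    for s in samples:
        on_sample(s)

readings = []
feed_app(synthetic_accelerometer(), readings.append)
print(len(readings))  # 150 samples: 3 s at 50 Hz
```

The same pattern would apply to ambient audio or camera frames: generate or capture data in the simulation, then inject it through the channel the app already listens on, so the app cannot tell simulated input from field input.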
- When do you shift from simulation to the field?
Answer: In the ideal case, we could carry out all testing in the
simulation. But that has not been our approach to date. We are currently
using the simulation to focus mostly on distractors and environmental
conditions, both of which are hard to control for in the field. We have
also tested the timing of reminders, for instance for getting off at
the correct bus stop. However, we are still very much reliant on field
tests for physical state, e.g., being tired, thirsty, hungry, anxious.
- You noted that it is hard to
"simulate" actual responses like fear and anxiety in a "game";
events and their consequences are not real. However,
the gaming industry has progressed far by building reward systems
into its software that can elicit the intended behavior even in a
"simulation" or game. For example, you don't step out in front of a car
if there is motivation to preserve your character. I wonder if you
could use the same reward-system structure that games use in your work.
Answer: yes, very nice idea.
- Have you thought about using Machine Learning techniques from
data logged in the simulation?
Answer: Not yet but I think this is a nice idea. It seems related
to work on using ML techniques in the field, e.g., http://dl.acm.org/citation.cfm?id=1639649.
Could similar techniques be employed in our simulation? Worth looking at.
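As a hypothetical illustration of the questioner's suggestion, the sketch below trains a toy classifier on made-up simulation log rows to predict whether a user will miss a reminder. The log schema, feature names, and the nearest-centroid method are all assumptions for illustration; a real effort would use an established ML library and actual logged data.

```python
# Invented log rows: (distraction_level, response_latency_s, missed_reminder)
logs = [
    (0.1, 1.2, 0), (0.2, 0.9, 0), (0.3, 1.5, 0),
    (0.8, 4.0, 1), (0.9, 3.5, 1), (0.7, 5.0, 1),
]

def centroid(rows):
    """Mean of the two feature columns over a set of rows."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(2))

def train(logs):
    # One centroid per class: (noticed, missed).
    neg = [r for r in logs if r[2] == 0]
    pos = [r for r in logs if r[2] == 1]
    return centroid(neg), centroid(pos)

def predict(model, features):
    # Classify by nearest centroid (squared Euclidean distance).
    dists = [sum((f - c) ** 2 for f, c in zip(features, cen))
             for cen in model]
    return dists.index(min(dists))  # 0 = noticed, 1 = missed

model = train(logs)
print(predict(model, (0.85, 4.2)))  # high distraction, slow response -> 1
```

Even a crude model like this hints at what logged simulation runs could support: predicting when a reminder is likely to fail and adapting its timing, before any field test.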
- You are kind of assuming you know ahead of time all of the contexts you will need
to generate for testing. Maybe the game engine itself could be better
at generating contexts (possibly unexpected ones!).
Answer: I like this idea. I could even see the game engine being
driven by real data it captures from field sources.