Cross-platform mobile application for species observation

Customer Driven Project
Group 8
May 8, 2012
Project Name: ADB Mobile
Project Sponsor: Artsdatabanken (international name: The Norwegian Biodiversity Information Center)
Group members:
Anders Søbstad Rye
Andreas Berg Skomedal
Dag-Inge Aas
Muhsin Günaydin
Nikola Dorić
Stian Liknes
Yonathan Redda
Contents

1 Abstract
2 Introduction
  2.1 Project description
    2.1.1 The customer
    2.1.2 People involved in the project
    2.1.3 Project drivers
    2.1.4 Problem domain
    2.1.5 Proposed solution
    2.1.6 Project objective
    2.1.7 Available resources
    2.1.8 Limitations
  2.2 Copyright
3 Project plan and management
  3.1 Methodology
  3.2 Project phases
    3.2.1 Planning and research
    3.2.2 Sprints
    3.2.3 Documentation
  3.3 Group organization
    3.3.1 Roles
    3.3.2 Role allocation
  3.4 Conventions
    3.4.1 LaTeX
    3.4.2 JavaScript
  3.5 Quality assurance
    3.5.1 Templates
    3.5.2 Group dynamics
    3.5.3 Customer relations and meetings
    3.5.4 Advisor relations and meetings
  3.6 Risk Management Framework
  3.7 Risk analysis
4 Preliminary study
  4.1 Similar products
    4.1.1 Conclusions
  4.2 Development process
    4.2.1 SCRUM model
    4.2.2 Waterfall model
    4.2.3 Conclusions
  4.3 Mobile technologies
    4.3.1 Mobile platform
    4.3.2 Cross-compiling frameworks
    4.3.3 Native
    4.3.4 Conclusions
  4.4 Mobile development
    4.4.1 Native languages
    4.4.2 HTML5, CSS3 and JavaScript
    4.4.3 jQuery Mobile
    4.4.4 Conclusions
  4.5 Testing
    4.5.1 Testing cycle
    4.5.2 QUnit
    4.5.3 jQuery Mockjax: AJAX request mocking
    4.5.4 Selenium
    4.5.5 Usability testing
    4.5.6 Conclusions
  4.6 Field study
  4.7 Conclusions
5 Requirement specification
  5.1 Introduction
  5.2 Requirements gathering methodology
    5.2.1 Background study
    5.2.2 Interviewing and questionnaires
    5.2.3 Observation and document sampling
    5.2.4 Summary
  5.3 General overview
    5.3.1 The process
    5.3.2 Project directive
    5.3.3 Target audience
    5.3.4 Project scope
  5.4 Functional requirements
    5.4.1 Priority
    5.4.2 Complexity
  5.5 Nonfunctional requirements
    5.5.1 Quality of service
    5.5.2 Compliance
    5.5.3 Architectural design
    5.5.4 Development paradigm
  5.6 Use cases
6 System architecture
  6.1 Layer 1 - Native platform
  6.2 Layer 2 - Cross-platform framework
  6.3 Layer 3 - Mobile app
    6.3.1 Data access
    6.3.2 Model
    6.3.3 Controller
    6.3.4 View
  6.4 Communication
  6.5 Conclusions
7 Sprint 0
  7.1 Sprint planning
    7.1.1 Expected results
    7.1.2 Duration
  7.2 User Interface
    7.2.1 Overview
    7.2.2 Choice of Template
    7.2.3 Preview of the UI with Screenshots
  7.3 Customer feedback
  7.4 Evaluation
8 Sprint 1
  8.1 Sprint planning
    8.1.1 Expected results
    8.1.2 Duration
  8.2 Requirements
  8.3 Implementation
  8.4 Testing
  8.5 Customer feedback
  8.6 Evaluation
9 Sprint 2
  9.1 Sprint planning
    9.1.1 Expected results
    9.1.2 Duration
  9.2 Requirements
  9.3 Implementation
    9.3.1 Auto-complete
    9.3.2 Storage
  9.4 Testing
  9.5 Customer feedback
  9.6 Evaluation
10 Sprint 3
  10.1 Sprint planning
    10.1.1 Expected results
    10.1.2 Duration
  10.2 Requirements
  10.3 Implementation
    10.3.1 Storage
    10.3.2 Export
  10.4 Testing
  10.5 Customer feedback
  10.6 Evaluation
11 Sprint 4
  11.1 Sprint planning
    11.1.1 Expected results
    11.1.2 Duration
  11.2 Requirements
  11.3 Implementation
    11.3.1 Auto-complete
  11.4 Testing
  11.5 Customer feedback
  11.6 Evaluation
12 Testing
  12.1 Functionality test results
  12.2 Usability test results
    12.2.1 Summary
13 Discussion and evaluation
  13.1 Group dynamics
    13.1.1 Week 8 internal group evaluation
  13.2 People involved and the course
    13.2.1 Customer
    13.2.2 Advisor
    13.2.3 Team
    13.2.4 The course
  13.3 The application
    13.3.1 Deliverables
    13.3.2 Final version and deployment
    13.3.3 Customer response
  13.4 Development process, methodology and work flow
    13.4.1 Development methodology—Scrum
    13.4.2 Implementation
    13.4.3 Time estimation
    13.4.4 Risk evaluation
    13.4.5 Seminars and study process
14 Conclusion and Further Work
  14.1 Developers guide
    14.1.1 Further work
  14.2 Conclusions
A Project directive and templates
  A.1 Contact information
    A.1.1 Customer
    A.1.2 Supervisor
    A.1.3 Team members
  A.2 Meeting agendas
  A.3 Meeting minutes
  A.4 Weekly status report
B User guide
  B.1 Installation guide
  B.2 How-to
    B.2.1 Creating an observation
    B.2.2 Gather location from GPS
    B.2.3 Add additional information
    B.2.4 Add pictures of a species
    B.2.5 Adding additional species to an observation
    B.2.6 Storing an observation
    B.2.7 Editing a stored observation
    B.2.8 Exporting an observation
    B.2.9 Deleting an observation
C Glossary
List of Figures

1 Gantt chart of project phases
2 Gantt chart of planning and research phase
3 Project Noah app
4 US Bird Checklist app
5 Audubon species guide app
6 Scrum methodology[37]
7 The unmodified "waterfall model" methodology[54]
8 Technology acceptance model framework[12]
9 Counting number of different species
10 Observation can include rare and endangered species
11 Create New Observation
12 Add More Information to Species
13 Add another species to observation
14 Export observations
15 Take picture
16 View observations
17 Edit observations
18 Update Database
19 Overall system architecture
20 Similarity of the application (left) and the webpage (right)
21 Main Screen, Right: Button Clicked
22 New Observation, Right: Button Clicked
23 Bird Observation, Right: Auto-Complete field
24 Add More Information, Right: New window after select
25 Add Another Species, Right: Window with the new species line
26 Diagram showing the database tables and their relations
27 Mobile application experience
28 I found the App difficult to install
29 The App is easy to set up (left), Majority neutral for the App has met my expectation (right)
30 The App has a user-friendly interface
31 Majority result showing positive result for productivity and intention to use
32 Least favourite functionality
33 Majority disagree on the application being flawless and running smoothly
34 The application was found to be not confusing
35 Week 8: Question 1 results
36 Week 8: Question 2 results
37 Week 8: Question 3 results
38 Week 8: Question 4 results
39 Week 8: Question 5 results
40 Week 8: Question 6 results
41 Week 8: Question 7 results
42 Week 8: Question 8 results
43 Week 8: Question 9 results
44 Week 8: Question 10 results
45 Weekly time consumption
46 Meeting agenda
47 Meeting minutes
48 Weekly status report
49 The program icon after installation
50 Front page (left). Select Species (right)
51 Observation window (Auto Complete)
52 Gather location from GPS
53 Observation page (left). Details page (right)
54 Add pictures of a species
55 Adding additional species to an observation
56 Front page (left). Stored Observations (right)
List of Tables

1 Task effort estimation for each sprint
2 Risk likelihood measurement parameters
3 Risk impact measurement parameters
4 Risk analysis
5 Use case scenario usability testing[22]
6 TAM questionnaire items
7 Functional requirements
8 Excerpt from unit testing
9 Summary of test 1 (run 1)
10 Execution of test 1 (run 1)
11 Summary of test 2 (run 1)
12 Execution of test 2 (run 1)
13 Summary of test 3 (run 1)
14 Execution of test 3 (run 1)
15 Summary of test 4 (run 1)
16 Execution of test 4 (run 1)
17 Summary of test 5 (run 1)
18 Execution of test 5 (run 1)
19 Summary of test 6 (run 1)
20 Execution of test 6 (run 1)
21 Summary of test 7 (run 1)
22 Execution of test 7 (run 1)
23 Summary of test 8 (run 1)
24 Execution of test 8 (run 1)
25 Summary of test 1 (run 2)
26 Execution of test 1 (run 2)
27 Summary of test 2 (run 2)
28 Execution of test 2 (run 2)
29 Summary of test 3 (run 2)
30 Execution of test 3 (run 2)
31 Summary of test 4 (run 2)
32 Execution of test 4 (run 2)
33 Summary of test 5 (run 2)
34 Execution of test 5 (run 2)
35 Summary of test 6 (run 2)
36 Execution of test 6 (run 2)
37 Summary of test 7 (run 2)
38 Execution of test 7 (run 2)
39 Summary of test 8 (run 2)
40 Execution of test 8 (run 2)
41 Summary of test 1 (run 3)
42 Execution of test 1 (run 3)
43 Summary of test 2 (run 3)
44 Execution of test 2 (run 3)
45 Summary of test 3 (run 3)
46 Execution of test 3 (run 3)
47 Summary of test 4 (run 3)
48 Execution of test 4 (run 3)
49 Summary of test 5 (run 3)
50 Execution of test 5 (run 3)
51 Summary of test 6 (run 3)
52 Execution of test 6 (run 3)
53 Summary of test 7 (run 3)
54 Execution of test 7 (run 3)
55 Summary of test 8 (run 3)
56 Execution of test 8 (run 3)
57 Summary of test 5 (run 1)
58 Execution of test 5 (run 1)
1 Abstract
Motivation, problem statement, approach, results and conclusions
Motivation Artsdatabanken, an organization under the Norwegian Department of Education, has long needed a mobile application for species observation. Observers in the field still use a paper notebook to record observations, and are requesting a simpler and more effective means to log and register their findings with Artsdatabanken.
Problem statement Our task was to develop a cross-platform mobile application that could simplify the registration of species, with automatic gathering of information such as GPS coordinates and pictures, and provide a means to easily register this with Artsdatabanken's systems. The system should replace the old paper notebook, work on mobile devices such as the iPhone, iPad and Android devices, and reduce the time needed to register observations online.
In addition, we were tasked with researching different cross-compiling frameworks and drawing a conclusion about which framework worked best. We would also evaluate this framework throughout our process to see whether using such a framework is a viable way to develop cross-platform applications.
Approach Our approach consisted of a thorough study of frameworks and of the method of species observation. Among other things, we performed a field study with Artsdatabanken about species observation, and we held workshops discussing the features of the application.
The application was developed using the PhoneGap framework, and we focused primarily on the Android platform. We put considerable effort into testing, using Test-Driven Development and performing a usability test of the application.
The group also established industry-proven quality assurance routines, and used an agile methodology based on SCRUM for the project.
Results The final application should work on all mobile devices, but it remains untested on platforms other than Android because of a lack of testing hardware. The application is programmed using PhoneGap, in combination with jQuery Mobile as the interface framework.
Our usability testing revealed that the app has some good points, but needs some improvements before it can be put into production. It seems to have a user-friendly interface, but the installation process is not satisfactory. On average, testers are positive about using the app and think that it can help them be more effective in species observation. The majority disagrees that the app is smooth and flawless. These results are based on a test run with only three users, so the conclusions should be taken with a grain of salt.
Conclusion The project has given us very valuable experience in group dynamics and project management. We also learned a lot about customer relations and documentation that will help us in our careers going forward. Most importantly, we learned to deal with different cultures and backgrounds, and to work together as a group towards a common goal.
We found that cross-platform development can be both a blessing and a curse. While development time is short, the application suffers from faults such as a poor user experience caused by broken platform conventions, sluggishness, and a lack of support for native functions on certain operating systems.
2 Introduction
The purpose of this report is to document everything we do in the project. The report will include descriptions of the group dynamics, project requirements, project management and much more. In this section we will briefly present the project, the directive, our proposed solution and the people involved.
2.1 Project description
The project is part of the course TDT4290 Customer Driven Project. This
course is mandatory for all computer science majors, and its goal is to give the
students experience with customer relations, project management and group
dynamics in a real project.
The project itself is about developing a cross-compiled application for mobile platforms. We are to deliver a study of different frameworks, a description of the project process, and an evaluation of the process and framework used. The application will aid observers in the field, and seeks to replace the old notebook method of gathering species observations. The project is sponsored by Artsdatabanken.
In this section, we will look at the project, the people involved, and the limitations of the project.
2.1.1 The customer
Our customer is Artsdatabanken, the Norwegian Biodiversity Information Center (NBIC), a body that provides the public with information on Norwegian species and ecosystems. Artsdatabanken claims to have approximately 6 million species of plants, insects, birds and large predators; of these, 85% are birds. Artsdatabanken maintains a series of database resources such as the Red List, Alien Species, Species Names and Habitat databases, together with Species Map and Species Observations. Artsdatabanken depends on these projects and databases to fully conduct its operations and responsibilities.[5]
2.1.2 People involved in the project
The student group consists of seven fourth-year students in the Computer Science program at the Norwegian University of Science and Technology. Two of these students are enrolled in the international student exchange program for Computer Science. In alphabetical order, the members are: Anders Søbstad Rye, Andreas Berg Skomedal, Dag-Inge Aas, Muhsin Günaydin, Nikola Djoric, Stian Liknes and Yonathan Redda. The student group is advised by PhD candidate Muhammad Asif at IDI.
The customer's representatives are Askild Olsen and Helge Sandmark. Askild Olsen will act as the product owner, and has the final word when decisions are made about the project. Both are employees of Artsdatabanken, where they are responsible for the development and maintenance of existing systems.
2.1.3 Project drivers
Artsdatabanken has a very skilled user base. Some of the users have long requested a mobile application for gathering observation data in the field. This application would replace the old notebook method of gathering information, automate the collection of some data such as GPS coordinates, and decrease the complexity involved in making observations of species, so that beginners would have an easier time learning the process. In addition, Artsdatabanken wants the application to help increase the number of observations by increasing the number of users and significantly lessening the complexity involved in registering observations.
2.1.4 Problem domain
In the current situation, an observer has to make the observation, write down the number of individuals of each species and optionally take a picture. After this is done, the observer must go home, enter everything into an online form and optionally upload the pictures separately. Because of the complexity and the lack of automation in this process, smaller observations are often overlooked and do not get registered. Artsdatabanken wants as many observations registered as possible.
In addition, the fact that Artsdatabanken is a public institution brings extra demands to the project. Artsdatabanken needs to support as many mobile devices as possible. This means that the application must work on a variety of devices with different operating systems and screen sizes.
2.1.5 Proposed solution
Our proposed solution is to develop a mobile application that aids the observer in the field when he or she is making a species observation. The application will work on a number of different platforms, including iPhone, Android and Symbian. This involves using a cross-compiling mobile application framework, PhoneGap, that can deploy the same code on multiple platforms. This enables us to use the same codebase, but deploy to six different platforms. The application itself will automate much of the data collection involved in making observations, and export the data in a computer-readable format suitable for parsing by Artsdatabanken's systems. This solution will lower the barrier for making and registering observations, enabling more users to partake.
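To make this data flow concrete, the sketch below shows how an observation might be assembled on the device in JavaScript: the position is gathered through the standard geolocation API (which PhoneGap exposes to the web view), and the result is turned into JSON ready for export. The field names (species, count, position, observedAt) and the helper function are illustrative assumptions, not Artsdatabanken's actual export schema.

// Minimal sketch of gathering and packaging a single observation.
// The field names are illustrative; the real export format is defined
// by Artsdatabanken's systems.
function buildObservation(speciesName, count, onReady) {
    navigator.geolocation.getCurrentPosition(function (position) {
        onReady({
            species: speciesName,
            count: count,
            position: {
                latitude: position.coords.latitude,
                longitude: position.coords.longitude,
                accuracy: position.coords.accuracy
            },
            observedAt: new Date().toISOString()
        });
    }, function () {
        // GPS lookup failed; fall back to a record without coordinates
        // so the observer can enter the location manually.
        onReady({
            species: speciesName,
            count: count,
            observedAt: new Date().toISOString()
        });
    });
}

// Example usage:
// buildObservation("Parus major", 2, function (observation) {
//     var exported = JSON.stringify(observation); // ready for export
// });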
2.1.6 Project objective
The student group is expected to deliver a common mobile application codebase that can be deployed on many different platforms, in addition to a working application demo that can export observations ready for parsing by the server. The server-side APIs will be provided by Artsdatabanken. In addition, the student group will deliver all research made into the problem domain to the customer, including research into cross-platform frameworks and deployment.
2.1.7 Available resources
Artsdatabanken will provide user testing of the application. They will also give a short taxonomy training to a few selected students and hold a field study, giving the students the opportunity to learn about observations and species.
The students will supply testing hardware for the Android platform, and the customer is expected to provide an iOS device for iOS application testing. This gives the students the opportunity to test the application on the two most popular platforms with various screen sizes.
2.1.8 Limitations
The project is scheduled to last from the 30th of August to the 24th of November. The budgeted working hours per group member are 350, making a total of 2,450 person-hours available. This is equal to more than a year's worth of person-hours. Time could still be one of our limitations in the project.
As the students have no testing hardware for Symbian, Bada, BlackBerry or Windows Phone, the applications developed for these platforms will not be tested on a real device prior to launch. The students also lack the hardware necessary for efficient testing of the application on Apple's iOS platform, having to rely only on sporadic access to such hardware. Since real hardware is the best testing platform, the lack of it will certainly limit our testing capability in this project.
2.2 Copyright
In the compendium for this project [20, Section 3.9], it is stated: "legally and by default (Berne convention), the copyright (or IPR) belongs to those (the students) that have their names on the front page of the actual work." It is therefore up to the students themselves to deal with the IPR of the application.
While many different approaches are suggested, the group would like to apply its own license. We would like the information and software for this product to be free of charge and open source, so that others can learn from and expand upon our work. For this we have chosen the Creative Commons Attribution-ShareAlike 3.0 Unported License[7]. This license gives everybody the right to modify and redistribute the work, even for commercial use, as long as they share their work under the same or a similar license.
3 Project plan and management

3.1 Methodology
In this section we briefly consider two different approaches to development: the SCRUM and waterfall methodologies.
The group opted for a variation of the agile methodology SCRUM. We chose this over the waterfall model because waterfall does not adapt as well to the volatile nature of requirements. It is expected that the user requirements may change significantly during the development of the application. Both the advisor and the customer recommended that we use an agile methodology when developing our application. You can read more about the choice of methodology in the 'Preliminary study' section.
Meetings We will have daily standups where all group members are present and discuss the work completed since the last standup and what to focus on in the next period. In addition, we will have a weekly Monday meeting where we discuss the coming week and what was done the previous week. Every Tuesday we have a meeting with the advisor from IDI. Here we discuss what has been done, get feedback on our progress and ask questions about the project.
Each sprint ends with a customer meeting where we present the work done in that period, accompanied by a demo of the application. We then discuss what to focus on in the next sprint, and prioritize our product backlog.
Backlog We have two backlogs. The product backlog consists of all the functional requirements of the application. In addition, we have a sprint backlog containing the requirements that are to be completed during each sprint. These are chosen in cooperation with the customer before each sprint.
Sprints Each sprint will last two weeks, and there will be four sprints in total during the course of the project. Before each sprint, we plan what needs to be done and create a sprint backlog of requirements to be fulfilled. This backlog is then prioritized and broken down into smaller work tasks, which are in turn distributed to the group members. To keep track of all these tasks, we use GitHub issue tracking, where each group member has several tickets assigned to them.
3.2 Project phases
Our project will loosely follow the plan in Figure 1.

Figure 1: Gantt chart showing the project phases and when they are planned to be done.

The first week of the project is planned as an introductory week, where the group gets introduced to each other and to the project assignment. After that, the first official phase of the project lasts for two weeks and focuses on planning and project management. During this period all administrative tasks should be finished, allowing a smooth start of the actual development process. We will also finish our preliminary study of the problem domain in this period. This should give a clear picture of available technologies, development processes and project work flow for the upcoming months.
The start of the actual development is scheduled for week 36, overlapping the previous phase. The development process will consist of a "sprint zero", an introductory sprint lasting one week, and four two-week sprints. At the end of sprint four, which is scheduled to end in week 44, the team will finish the application development and provide a release version to the customer. The last three weeks are reserved for finishing the documentation, which will also be written throughout the whole process. During this period, the remaining sections should be finished, and the whole document thoroughly revised and prepared for the final delivery.
3.2.1 Planning and research
This first phase of the project consists of an introduction to the course, planning of the project, an introduction to the problem domain, getting to know the group, requirements gathering and the preliminary study. In this phase the most important decisions will be taken: system architecture, choice of COTS (commercial off-the-shelf software) and framework, project methodology and role distribution. Figure 2 shows how the work will be distributed in this time period.
3.2.2 Sprints
Each sprint will consist of four phases. Effort estimation is detailed in Table 1.
Planning The planning phase of each sprint represents the time required for distributing the work, planning how each task should be handled, and deciding how testing should be done in that sprint.
Figure 2: Gantt chart showing how the work should be distributed during the planning and research phase.
Task             Hours per person   Hours total
Planning         8                  56
Implementation   8                  56
Testing          8                  56
Documentation    16                 112
Administrative   8                  56
Sum              50                 350

Table 1: Task effort estimation for each sprint
Implementation The implementation phase represents time spent on coding.
This will also include code refactoring and other maintenance tasks related to
the code.
Testing The testing phase represents time spent testing the system. This includes integration testing, unit testing, functional testing etc. The testing and
implementation phases will work concurrently due to the test-driven development methodology.
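As a small illustration of this test-first workflow, a unit test for a planned feature could be written in QUnit's classic global assertion style before the feature itself exists; the module name and the createObservation function below are made-up examples, not part of the actual codebase.

// Hypothetical QUnit test written before the corresponding feature is
// implemented, in line with test-driven development.
module("Observation model");

test("createObservation keeps the species name and count", function () {
    var observation = createObservation("Parus major", 2);
    equal(observation.species, "Parus major", "species name is stored");
    equal(observation.count, 2, "individual count is stored");
    ok(observation.observedAt, "a timestamp is added automatically");
});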
Documentation The documentation phase represents time spent documenting work effort (implementation, research, etc.) and administrative tasks like
meetings.
3.2.3 Documentation
The last part of the project will consist of evaluating our work and finishing the documentation for the project. In addition, this part will include a final presentation on November 24th for the external examiner, advisor and customer. This period will span the last three weeks of the project.
3.3 Group organization
The group strives to have a flat organizational structure, where each role has distinct responsibilities, but no role is more important than any other. In this section we describe each role, its responsibilities, and how the roles are distributed between the group members.
3.3.1 Roles
Scrum master The Scrum master, also known as the project manager, is responsible for keeping track of the group's progress, managing the workload, maintaining contact with the customer and advisor, and writing the weekly status reports. He is also responsible for booking rooms for meetings, preparing meeting agendas and managing each meeting.
Technical leader The technical leader is responsible for the version control system, training other group members, writing scripts for batch jobs such as creating timetables, and taking the lead in technical matters of the project.
Test leader The test leader is responsible for creating tests and making sure they are run. The test leader should create standards for testing, such as unit tests, and make sure the unit tests are up to date and cover an adequate amount of functionality. He is also responsible for creating integration tests, usability tests and any other tests deemed necessary for the completion of the project.
Development leader The development leader is responsible for code standards, code conventions, code quality and the development team. The development leader will do code reviews and manage the progress of the application development.
Documentation leader The documentation leader is responsible for the quality of the documentation, and for keeping the documentation up to date with the development. In addition, the documentation leader is required to have an overview of the entire document and how it is structured, and to maintain steady progress.
Product owner The product owner is not part of the group. He is a representative of the customer, responsible for making decisions regarding the product should there be any questions. He has the final word in all decisions.
3.3.2 Role allocation
Role allocation is dynamic throughout the entire project. Any member of the group can change his role if he desires another responsibility during the project. In addition, some roles have dynamic responsibilities, so the technical leader can do testing and the test leader can do coding. This is in conformance with the agile methodology SCRUM.
However, the roles of technical leader and Scrum master will remain fixed throughout the project to ease contact with the product owner and the advisor at IDI. These two roles are also the most dynamic, and can do the work of other roles if deemed necessary.
Each role has been allocated through internal group meetings by majority vote. Dag-Inge Aas is Scrum master, Stian Liknes is technical leader and test leader, Andreas Skomedal is development leader and Yonathan Redda is documentation leader. In addition, Askild Olsen is the product owner.
3.4 Conventions

3.4.1 LaTeX
All documentation regarding the project should be written in LaTeX. We will follow the standard article document class included with LaTeX, printed on standard A4 paper.
Encoding The encoding of the document will be UTF-8, so that we avoid
encoding errors when we compile the document.
Tables and figures Tables and figures will be printed in landscape format before a table is broken across several pages. This keeps the formatting of the tables consistent and readable for as long as possible. A table should never exceed the maximum text width of the page. The same rule applies to figures.
References All references should have a short and descriptive name, and a category prefix, so that all articles from e.g. Wikipedia are referenced as wiki:article, and all references about PhoneGap are referenced as phonegap:page.
3.4.2 JavaScript
JavaScript code should never be embedded in HTML files. Script tags should
be placed as late in the body as possible. [10]
Line length and indentation
Tabs should be used for indentation. Lines should not be longer than 80 characters.
Comments and variable names
Comments should be kept to a minimum. They are disruptive to the code and often become outdated. Focus should be on writing good code instead of writing comments. Comments can be used to clarify code that is not self-explanatory, or to warn about issues like framework bugs or weak code. All developers should keep in mind that the code should already be documented by written tests before the feature is implemented.
Variables should have descriptive names. It is better to find good names than to resort to comments.
To indicate that a variable refers to a jQuery object, developers sometimes prefix it with $. This team will not use that convention, since it makes it harder to spot actual jQuery calls.
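A short, made-up example of these rules: descriptive names that make comments unnecessary, and no $ prefix on variables holding jQuery objects.

// Preferred: the names explain themselves, and the jQuery object is not
// prefixed with $.
var observationList = $("#observation-list");
var visibleObservations = observationList.find("li:visible");

// Avoided: a $-prefixed, abbreviated name that needs a comment.
// var $ol = $("#observation-list"); // list of observations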
Declarations
Variables should always be declared; this prevents conflicts with globals.
Functions should be declared in one of the following ways:

function somename(...) {
    ...
}

var somename = function (...) {
    ...
}
Space should never be placed between the function name and the first parenthesis.
Statements
If statements should have the following form (notice that the else statement is not on the same line as the closing curly brace):

if (condition) {
    statements
}
else {
    statements
}

A switch statement should have the following form:

switch (expression) {
case condition:
    statements
    break;
... more cases ...
default:
    statements
}
3.5 Quality assurance
This section contains routines that should be followed to ensure the quality of the product and its report. To incorporate quality assurance procedures, the group will carry out the development tasks in line with internationally accepted standards such as ISO 9126.
ISO9126 ISO 9126 is a standard from the International Organization for Standardization which recommends that software quality be assessed against six core characteristics. These are described briefly below and will be used as a reference point for quality assurance in this project[13].
• Functionality - the degree to which the software provides functions that satisfy the stated or implied needs.
• Reliability - the capability of the software to maintain its level of performance under specified conditions for a specified period of time.
• Usability - the effort needed to use the system, as assessed by a stated set of users.
• Efficiency - the level of performance relative to the amount of resources used under specified conditions.
• Maintainability - the effort required to make specified modifications.
• Portability - the ability of the software to be deployed or transferred from one environment to another.
The following is how we chose to implement this standard.
3.5.1 Templates
All meeting agendas, meeting minutes and status reports will follow the templates as shown in Appendix A. These are shared by all group members on
Google Docs.
3.5.2 Group dynamics
All communication will be in English. Monday through Thursday every week there will be a scrum stand-up at 09:15 at Drivhuset. Documentation should be proofread by at least one other group member to ensure good grammar and avoid typographical errors. Every meeting starts with a meeting agenda, and notes for the meeting minutes will be written for all meetings and distributed to the entire group by email in PDF form.
All email correspondence within the group or to external sources will be sent with a CC to [email protected]. This sends a copy to all members of the group. In addition, all documents shared on Google Docs will also be shared with this group.
Meeting agendas will be distributed amongst the group 24 hours before the
meeting at the latest. The document will be sent by email in PDF-form.
3.5.3 Customer relations and meetings
There will be a demo for the customer at every customer meeting, to show how far along the project is and to catch misunderstandings or changes of direction early on. The customer will receive the meeting minutes after every meeting for approval; if there is no response within 48 hours, the minutes are regarded as approved.
3.5.4 Advisor relations and meetings
There will be a meeting with the advisor every week. Any information that the advisor should comment on must be submitted to him before 14:00 the day prior to the meeting. On a regular basis this includes the project status, project plan, a summary of last week's meeting minutes and an hour sheet for the group members, preferably in one file. The group number should always be included when contacting the advisor. All correspondence will be in English.
3.6 Risk Management Framework
Risk can come in several forms, such as a project violating its budget or schedule, losing track of its goal, or missing the essence of the task at hand. In our analysis, we consider project management issues, technical difficulties, shortage of manpower, unplanned events and application security[55]. The analysis of our risks is based on a brainstorming session held to identify possible things that might go wrong in our project. We then quantified the risks using a standard risk quantification approach.
There are four key tasks of risk management planning[44]:
1. Identify risks
2. Quantify risks
3. Develop counter-measures
4. Regularly review risk analysis
Risks are quantified by likelihood and impact, which can later be used for prioritization. We use the definitions listed in Tables 2 and 3 in the quantification step.
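As an illustration of how the two ratings could be combined into a single priority, one option (an assumption for this sketch, not part of the framework itself) is to map the five levels to the numbers 1 to 5 and use the product of likelihood and impact:

// Illustrative sketch: map the qualitative levels from Tables 2 and 3
// to numbers and use their product as a rough priority score.
var RISK_LEVELS = {
    "Very Low": 1,
    "Low": 2,
    "Medium": 3,
    "High": 4,
    "Very High": 5
};

function riskPriority(likelihood, impact) {
    return RISK_LEVELS[likelihood] * RISK_LEVELS[impact];
}

// Example: riskPriority("Medium", "High") === 12, so that risk would be
// reviewed before one scoring, say, 4.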
Very Low: Highly unlikely to occur; however, it still needs to be monitored, as certain circumstances could make this risk more likely to occur during the project.
Low: Based on current information, unlikely to occur, and the circumstances likely to trigger the risk are also unlikely to occur.
Medium: Likely to occur. There are some indicators that the risk might materialize.
High: Current circumstances show that the risk is very likely to occur.
Very High: Some events indicate that the risk is highly likely to occur, and measures might not stop it from materializing.

Table 2: Risk likelihood measurement parameters
Very Low: Insignificant impact on the project. Minor discomfort.
Low: Minor impact on the project; less significant milestones not met.
Medium: The project could be delayed; a lot of work to meet deadlines, but manageable with a mitigation plan.
High: A major problem requiring significant revision of the plan or product. Serious delays in deliverables.
Very High: Over budget and over schedule. Total collapse of the project.

Table 3: Risk impact measurement parameters
3.7 Risk analysis

Table 4 lists the risks identified for the project, with their indicators, likelihood, impact, effect and mitigation.

Risk 1: Participation in voluntary associations
Risk indicators: Members spend less time on the project. Meeting agendas might be late.
Likelihood: Low. Impact: Medium.
Effect: Work delay and inconvenience.
Mitigation: The group leader should distribute work accordingly. Members will notify the group 24 hours beforehand.

Risk 2: Customer changing their requirements
Risk indicators: The customer has a lot of requirements but cannot be decisive.
Likelihood: Medium. Impact: Medium.
Effect: Slow progress; uncertainty about work requirements and project state.
Mitigation: Pinpoint requirements early and agree upon a written requirements specification. Get one clear product owner we can ask for directions. Communicate with the customer and ask the advisor for help. The customer approves the requirements we write and the meeting minutes.

Risk 3: Time spent on work unrelated to this project
Risk indicators: Being absent from group work too often.
Likelihood: Medium. Impact: Medium.
Effect: Less time for the project.
Mitigation: Prioritize this project when possible.

Risk 4: One of the group members is absent for a longer period of time (more than 2 days)
Risk indicators: Increasing workload and backlogs.
Likelihood: High. Impact: Low.
Effect: Work delay and inconvenience.
Mitigation: Distribute work accordingly. Extend the period until delivery if possible. Group members can communicate via email. The absent group member can do simple tasks.

Risk 5: One of the group members may be overloaded with work from the project and other classes
Risk indicators: The person is stressed, might be sick for a longer period of time, delivers lower quality work, or has an inconsistent work pattern.
Likelihood: Low. Impact: Low.
Effect: Higher workload for other group members, which might lead to the same risk for them.
Mitigation: Distribute work to other members while this member is overloaded; if this is not possible, minimize the requirements with the customer.

Risk 6: Underestimated workload or task complexity
Risk indicators: Slow progress. Quality issues.
Likelihood: Low. Impact: High.
Effect: Incomplete product, customer dissatisfaction, bad grades and a big workload at the end of the project.
Mitigation: Research the problem domain and coding frameworks. Figure out group member skills and plan accordingly. Create a skill matrix.

Risk 7: Missing a project deliverable
Risk indicators: High workload, slow progress. Repeated failure to deliver the deliverables on time.
Likelihood: Low. Impact: High.
Effect: Bad grade and customer dissatisfaction.
Mitigation: Re-plan the project and make the deliverables realistic to complete. Set realistic deliverable goals. Discuss removing some deliverables with the customer.

Risk 8: Customer does not provide a Mac for testing the iOS application
Risk indicators: The customer formally declines to provide testing devices.
Likelihood: High. Impact: Medium.
Effect: The finished product will not be tested on iOS, and we cannot deploy our application to the iOS App Store.
Mitigation: Tell the customer we cannot provide the application for the iOS platform due to lack of hardware, and reach an agreement with the customer about this. Find a Mac from a different source than the client.

Risk 9: Export feature not working correctly or lacking in the finished product
Risk indicators: Complex code, bad communication with the database, failures during testing.
Likelihood: Medium. Impact: High.
Effect: Customer dissatisfaction, usability problems.
Mitigation: Prioritize the export feature early and start work on it as soon as possible.

Risk 10: Failure of the usability test
Risk indicators: Lack of an appropriate number of participants.
Likelihood: Medium. Impact: Low.
Effect: The time spent planning the usability test is wasted.
Mitigation: Let the customer run the usability test according to our method.

Risk 11: One of the group members experiences hardware failure or loss of data
Risk indicators: Repeated failure to deliver assigned tasks.
Likelihood: Low. Impact: Medium.
Effect: Loss of important data, goals not being met, higher workload for the rest of the group.
Mitigation: Commit to GitHub often, or use Google Docs as a backup. NTNU's computers or backup computers provided by the project group can be used.

Table 4: Risk analysis
4 Preliminary study

4.1 Similar products
Before we started exploring the options and benefits of development, we looked at similar products for inspiration, hoping to find functionality that could easily be integrated into our product: features that are nice to have but may not necessarily have been specified by the customer. The team also thought that exploring existing software solutions could teach us something new and show us what has already been tried. The general opinion is that commercial off-the-shelf software is straightforward, time-saving and cost-saving, even though it can bring its own problems[25]. With that in mind, we summarize the apps we investigated and describe those that were closest to the system we are trying to develop.
Project Noah - Networked Organisms and Habitats Project Noah is a mobile application that helps nature lovers discover local wildlife and helps aspiring citizen scientists contribute to current research projects. Noah stands for Networked Organisms and Habitats. It is a tool people can use to document and learn about their natural surroundings, and a technology platform research groups can use to harness the power of citizen scientists everywhere. It documents species sightings with date, category, habitat, picture and comments[30].
Figure 3: Project Noah app
US Birding Checklist This is a bird watching tool set which records species, sex, age, location and pictures[1]. It uses an online database system called eBird[29], launched and run by the Cornell Lab of Ornithology and the National Audubon Society. It also shows the distribution of birds on a map.
Figure 4: US Bird Checklist app
Audubon Guide Audubon is a portable, dictionary-like reference application for mobile phones. It downloads a wealth of information about mammals, birds, butterflies and more to a mobile phone, and its focus is on viewing and providing information about species[28]. Audubon comes in a variety of versions, such as Audubon Wildflowers, Audubon Butterflies and Audubon Mammals.
Figure 5: Audubon species guide app
4.1.1 Conclusions
We found that none of the applications fit the requirements from Artsdatabanken. Artsdatabanken runs its own data source, and whatever COTS the team considers must enable Artsdatabanken to access that data source. Artsdatabanken's requirements call for a strict implementation of features such as location registration, pictures, number of species, and species activities, which are handled in different ways in the applications described above. The applications are also not free and cannot be customized to our customer's needs, all the more reason to resort to development. At this point we decided that development is the only option to fulfil our customer's requirements.
4.2 Development process

4.2.1 SCRUM model
The most used model for iterative, incremental development of projects today is SCRUM[11]. It is an agile software development strategy which uses iterative development as a basis, but advocates a lighter and more people-centric viewpoint than traditional approaches. Agile processes use feedback, rather than planning, as their primary control mechanism. The feedback is driven by regular tests and releases of the evolving software[52].
The Scrum approach was originally suggested for managing product development projects, but it can also be applied to the management of software development projects.
Scrum is a process skeleton that contains sets of practices and predefined roles.
The main roles in Scrum are:
• the ScrumMaster, who maintains the processes (typically in lieu of a
project manager)
• the Product Owner, who represents the stakeholders and the business
• the Team, a cross-functional group who do the actual analysis, design,
implementation, testing, etc.
Figure 6: Scrum methodology[37]
During each sprint, typically a two to four week period (with the length decided by the team), the team creates a potentially deliverable product increment. Features that go into a sprint come from the product backlog, which represents a prioritized set of high-level requirements for work that should be done. Which backlog items go into the sprint is determined during the sprint planning meeting. The Product Owner is responsible for informing the team of the items in the product backlog that should be completed. The team then determines how many of the requirements they can complete during the next sprint, and records this in the sprint backlog. Once a sprint starts, no one is allowed to change the sprint backlog, which means that the requirements cannot be modified for that sprint. Each sprint must end on time, and development is timeboxed. If any of the requirements are not completed for any reason, they are left out and returned to the product backlog. When each sprint is completed, the team demonstrates what has been done and how to use the software.
Scrum is very convenient because it enables the creation of self-organizing teams
by encouraging verbal communication between all team members situated at one
location.
The most important principle of Scrum is its recognition that during a project the customer can change their mind about what they want and need, and that unpredicted challenges cannot be easily addressed in a traditional, predictive, planned manner. As such, Scrum adopts an empirical approach, accepting that the problem cannot be fully understood or defined at the start, and focusing instead on maximizing the team's ability to deliver quickly and respond to new requirements[52].
4.2.2
Waterfall model
The waterfall model is a sequential design process in which progress is seen
as flowing steadily downwards through the predefined phases, like a waterfall.
The phases are Conception, Initiation, Analysis, Design, Construction, Testing, Production/Implementation and Maintenance, and each of these phases is executed sequentially. This model is one of the most used software development processes today, besides SCRUM.
The original waterfall model is called Royce's model, and it defines the following phases:
• Requirements specification
• Design
• Construction (i.e. implementation or coding)
• Integration
• Testing and debugging (i.e. Validation)
• Installation
• Maintenance
What is specific to the waterfall model is that each phase is started only after the previous phase is finished, and there is no overlap between two phases. This applies to the strictest form of the model; there are also various modified models that include slight or major variations upon this process.
Figure 7: The unmodified ”waterfall model” methodology[54]
4.2.3
Conclusions
For this project, the team decided to use the SCRUM model. We chose this method because of the highly volatile nature of the requirements. The project description and domain are very loosely defined, and we expect the requirements to change significantly during the development of the application. However, since SCRUM requires strict and precise execution of its predefined model, our actual development process will be a slightly modified version of it. The project will have 4 sprints, including a 'zero' sprint that covers the initial prestudy and a GUI sample. Each sprint will span 2 weeks, or 10 working days. Some additional time will be left after completion of the sprints for revision, fixes and more testing, to improve the product's reliability and quality if necessary.
4.3
Mobile technologies
The mobile technology field has seen an explosive growth in terms of end user
adoption, market penetration, and an ever changing introduction of devices,
hardware terminals, platforms, services and portability. Mobile technology has
made the current form of computing, where a worker sits in front of a stationary device, less and less appealing. Currently, mobile technology and service providers are in a fierce war for all sorts of reasons, from patents to platform and hardware dominance. The three most iconic features of mobile technology are communication (being able to connect wirelessly), mobility (being able to change from network to network seamlessly), and portability (being able to take the device wherever the work process requires it)[15][46].
4.3.1
Mobile platform
Android Android is an operating system developed for a variety of mobile
devices. It is based on the Linux kernel, and provides a developer-friendly framework for app development (a subset of Java running on the Dalvik virtual machine). Most of its code is released under the Apache Licence, a free software licence.
Android was listed as the best-selling smartphone platform worldwide in Q4
2010 by Canalys. [49]
Pros
• Apps can be developed on any major operating system (Linux, Windows, ...)
• Android devices can be emulated (AVD). This allows for testing on different screen sizes, etc.
• Developer-friendly
• Open source
Cons
• Apps developed for Android will not run on iOS or Windows phones
iOS iOS is Apple’s mobile operating system, originally developed for the
iPhone. Apple does not license iOS for third-party hardware.
Pros
• Uniform screen size over most iPhone devices
• Design guidelines
Cons
• Closed and proprietary
• Apple hardware and software is required to develop iOS apps
• Requires a yearly subscription to distribute apps developed for iOS
• GPL and other free software licenses can conflict with Apple’s terms
• Design guidelines can create artificial limitations
• Apps developed for iOS will only run on Apple products
4.3.2
Cross-compiling frameworks
Phonegap PhoneGap is an HTML5 app platform that allows you to author
native applications with web technologies and get access to APIs and app stores.
PhoneGap leverages web technologies such as HTML and JavaScript. PhoneGap
is the only app platform available today that can publish to 6 platforms. [35]
• Applications can be developed for Apple’s iOS, Google’s Android, Microsoft’s Windows Mobile, Nokia’s Symbian OS, RIM’s BlackBerry and
Bada.
• Enables developers to take advantage of JavaScript, HTML5 and CSS3,
which they might have already been familiar with.
• Access native features such as compass, camera, network, media, notifications, sound, vibrate and storage etc.
• Can use existing CSS and Javascript libraries directly in your code
• Seems like a native application when in reality it’s an offline web application
• Provides a build tool for automatically building binary application packages for six different platforms
• Provided under the (new) BSD licence or alternatively the MIT licence, the framework is entirely Open Source and free for Open Source projects.
• Provides a well-written API, geared towards web developers
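As an illustration of how PhoneGap exposes device features to ordinary JavaScript, the following minimal sketch waits for the framework's deviceready event and then reads the GPS position and opens the camera. It is a hedged example based on the PhoneGap 1.x documentation, not code from our application; option names such as Camera.DestinationType.FILE_URI may vary slightly between PhoneGap versions.

document.addEventListener("deviceready", function () {
    // Geolocation: the W3C API, backed by the device GPS where available
    navigator.geolocation.getCurrentPosition(
        function (position) {
            console.log("lat " + position.coords.latitude +
                        ", lon " + position.coords.longitude);
        },
        function (error) { console.log("GPS error: " + error.message); }
    );

    // Camera: returns a file URI that could later be attached to an observation
    navigator.camera.getPicture(
        function (imageUri) { console.log("picture stored at " + imageUri); },
        function (message) { console.log("camera failed: " + message); },
        { quality: 50, destinationType: Camera.DestinationType.FILE_URI }
    );
}, false);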
Corona Corona enables developers to build graphically rich multimedia applications using the programming language Lua[9]. Corona supports both the iOS and Android platforms, and focuses on applications with a lot of graphical animation. Corona is neither open source nor free; a developer currently has to pay $99 yearly to maintain membership and be able to build applications for the App Store.
Titanium Titanium is a cross-platform mobile application development framework that uses web development technologies to build applications for the Android and iOS mobile platforms. Titanium is an open source framework that uses JavaScript and JSON as its application languages; it can also use Python, Ruby and PHP scripts[2].
4.3.3
Native
Building applications natively is the best option if we look at each device separately. This ensures that the application always works, and that all the tools provided by the native API can be utilized. With native development you can also build applications that conform more closely to the design guidelines for your platform, using native UI elements.
• Must develop for each individual platform. This means different languages
and APIs for every platform.
• Can utilize design guidelines for individual platforms, making apps similar
to the user, increasing usability.
• Can access more functions in the API, creating cutting edge applications
with the newest APIs
• Native applications run faster and better on the phone
4.3.4
Conclusions
After a discussion with the customer the group decided upon using PhoneGap
as the development platform for our app. The app will primarily be focused
on the iOS and Android platforms, but can be deployed to other platforms as
well.
4.4
Mobile development
It is only natural that developers have a taste for different languages, or learn a new one when a software development project requires it. Most of the time, scripting languages are used to develop web applications. Mobile applications can also be developed using one or a combination of C, C++, Java, or a whole raft of web technologies such as HTML, CSS or JavaScript.
4.4.1
Native languages
Most mobile platforms come with their own preferred programming language.
iOS is developed in Objective-C and Android in Java. This requires developers
to learn a new language when creating applications for new mobile platforms.
4.4.2
HTML5, CSS3 and JavaScript
An alternative approach to app development is making the app live inside the phone's browser. These web apps are developed using existing web standards used by developers all over the world. Recently, phone browsers have become more capable, enabling developers to write more powerful applications than ever before. Web apps are much slower than native applications, but can be deployed to any phone with a sufficiently capable browser[35].
4.4.3
jQuery Mobile
This is a cross-platform and cross-device framework with which one can write a mobile application capable of running on any of the popular mobile platforms. jQuery Mobile is well suited for developing applications with touch input that require little processing power, and makes this possible through a lightweight code base built with progressive enhancement and flexibility in mind[23].
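As a small, hedged illustration (not taken from our code base): jQuery Mobile enhances plain HTML pages automatically, so the JavaScript side mostly reacts to page events and triggers navigation. The page id #observation and the button id #save-button below are hypothetical names, and the exact binding helpers depend on the jQuery and jQuery Mobile versions in use.

$(document).on("pageinit", "#observation", function () {
    // Runs once, after jQuery Mobile has enhanced the page's widgets
    $("#save-button", this).on("tap", function () {
        // ... persist the observation, then navigate back to the main menu
        $.mobile.changePage("#main", { transition: "slide", reverse: true });
    });
});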
In addition, our customer has previous experience using this framework, and
has provided us with a custom built CSS design for Artsdatabanken. This will
enable us to easily conform to design guidelines set by Artsdatabanken.
4.4.4
Conclusions
While programming native applications creates a faster, more user-friendly experience, acquiring the knowledge necessary to build, test and deliver such applications would take a lot of effort in our group; we would spend more time learning new programming languages than fulfilling the requirement specification. Everyone in the group, however, has at least some experience making applications with HTML5, CSS3 and JavaScript. We chose a development framework based on this previous experience.
Additionally, the customer’s requirement was to build an application that is
capable of running on multiple platforms. PhoneGap currently supports up
to 6 platforms in addition to being open source and using a well known web
technology to author the applications and later compile them into native applications. PhoneGap has also been getting recognitions by the likes of established
technology companies such as Adobe systems. Adobe systems has officially integrated PhoneGap in its 5.5 version of Creative suite. While Titanium is a
cross platform framework, it only supports Android and iOS. Corona supports
only Android and iOS, it is not free and its focus is heavy multimedia content
and graphics applications.
4.5
Testing
This document focuses on agile development, including test-driven development (i.e. testing early in the development process), plus a usability test on a prototype to gauge how the prototype will be received by Artsdatabanken's end users. The scope of software testing includes examination of code as well as execution of that code in various environments and conditions.
The following should be verified during testing: [53]
1. Product meets the agreed upon requirements (validation)
2. Product works as expected (verification)
3. Product can be implemented with the agreed upon characteristics
Functional testing verifies that the software is logically correct, i.e. it does what
it is supposed to do.
Non-functional testing refers to other aspects of the software that may not be
related to a specific function or user action, such as scalability, performance,
security, usability, etc. We aim to develop a product that non-technical users
(i.e. bird watchers, etc.) can understand and use, therefore usability tests will
be conducted in line with available logistics support.
To achieve an acceptable level of device-operating system compatibility we plan to test on Android devices of different sizes with different versions of the operating system. Ideally, we would test on all supported platforms (i.e. Android, iOS, Windows Mobile, ...).
Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level. There exist many frameworks for this type of testing; many are members of the xUnit family, where x is replaced by a language-specific prefix (JUnit for Java, QUnit for jQuery and JavaScript, etc.). This type of testing is frequently used in test-driven development, and is a good way to ensure that each part is working as expected. It is important to test edge cases, as well as invalid and unexpected parameters.
Integration testing is software testing that seeks to verify the interfaces between
components against a software design. This is typically conducted after the
unit tests have passed. Some integration tests can also be included in the xUnit
family.
Regression testing is used to ensure that the code is functional, even after big
changes. In test-driven development this is easy to achieve using automated
testing (xUnit).
Acceptance testing is performed by the customer (often in cooperation with the
developer team), to validate that the business requirements are met.
4.5.1
Testing cycle
In the spirit of agile development, we intend to use automated testing, and the
following test cycle:
1. Requirements analysis
2. Test planning (strategy, plan, testbed, etc.)
3. Test-driven development (see next list)
4. Test reporting
5. Test result analysis
Test driven development (red-green):
1. Write new test
2. Test execution (test should fail, we are in the red zone)
3. Modify code to accommodate new test
4. Test execution (test should succeed, we are in the green zone)
5. Regression testing (after changes in code, refactoring, and similar)
4.5.2
QUnit
QUnit is a JavaScript test suite. It is used by the jQuery project to test its code and plugins, but is capable of testing any generic JavaScript code (including server-side code)[24].
QUnit is a member of the xUnit family, and provides a toolkit for automated
testing. It is useful for regression testing, and test-driven development in general.
In addition to the traditional xUnit features, QUnit facilitates testing of asynchronous functionality, which is essential if we need to make HTTP requests or
similar.
Trivial example
// "bird" is assumed to be a simple object under test, e.g.:
var bird = { count: 0 };

module("bird observator");
test("should be able to set bird count", function () {
    bird.count = 5;
    equal(bird.count, 5);
});
4.5.3
jQuery Mockjax: AJAX request mocking
The jQuery Mockjax plugin provides an interface for mocking or simulating ajax
requests and responses[3]. It can be useful if Artsdatabanken provides us with
a specification of their planned API. This will in principle allow us to verify that
the application is communicating correctly, even though the API hasn’t been
published.
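The sketch below shows the general idea, under the assumption of a hypothetical species-lookup URL and response shape; it is not based on Artsdatabanken's actual API. Once the mock is registered, any matching $.ajax or $.getJSON call receives the canned response, which lets the auto-complete logic be unit tested offline.

$.mockjax({
    url: "/api/species*",            // hypothetical endpoint, wildcard match
    responseTime: 50,
    responseText: [
        { id: 1, name: "Parus major" },
        { id: 2, name: "Passer domesticus" }
    ]
});

asyncTest("should list matching species", function () {
    $.getJSON("/api/species?q=pa", function (species) {
        equal(species.length, 2);
        start();                     // resume QUnit after the async callback
    });
});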
4.5.4
Selenium
Selenium is a tool for automating browsers. Primarily, it is used for automating web applications for testing purposes[39]. Selenium 1 relies on using JavaScript in the browser. The following is a subset of the tasks that can be automated using Selenium:
• Open site (i.e. index.html)
• Click button (or link)
• Assert that element has attribute
• Type text into text field
• Assert that text is present on page
• And so on...
To build test cases, Selenium offers three primary methods: recording, adding verifications and asserts with the context menu, and manual editing. Selenium has been proven to work with PhoneGap[16].
4.5.5
Usability testing
Usability testing examines whether a user achieves the intended functional goal using a system, and the level of effort involved in using the system to achieve that goal[13]. The success of a usability test depends on its goal. A usability test should have a specified goal, carefully prepared questionnaires, and appropriate techniques and tools to be of any use. The most frequently used method of conducting a usability test is based on four notable points[48][27]:
• Efficiency - time to complete task
• Effectiveness - task completed ratio
Effectiveness per user = error-free tasks / total tasks
• Learnability - number of errors recorded for novice users
• Memorability - browsing and searching for non-regular users
Table 5: Use case scenario usability testing[22]
Another method of measuring user attitude and perception towards a system is TAM, the technology acceptance model. TAM is a tool for measuring the user's general perceived usefulness, perceived ease of use and intention to use. While usability testing is more focused on the task-based performance of users[12], TAM is a general attitude test based on a twelve-question model in which three things are measured:
• PU - Perceived usefulness, how the user considers the system useful
• PEU - Perceived ease of use, how the user considers the system easy to
use
• ITU - Intention to use, how likely the user is to want to use the system again
In this project, we will use a revised version of the usability test described above, focusing especially on effectiveness and efficiency. Artsdatabanken already has a web-experienced user base, and the number of novice users will probably not be large enough to measure the memorability and learnability of the system. To measure performance and perception with regard to the system, the team will use a post-test questionnaire structured around the three TAM categories. Learnability will also be explored using TAM.

Figure 8: Technology acceptance model framework[12]
TAM The perception test uses a TAM post-questionnaire[12, 40], with each item designed to test the user's attitude towards the new application.
Table 6: TAM questionnaire items
The plan is to conduct a formal usability test using one of the aforementioned methods, but the testing plan will be modified and restructured as our focus, goals and time resources change. Usability testing can be extremely difficult to plan and to verify against its goals. Running a pilot test and organizing participants according to standard methodology may be unrealistic for this project. Because of this, we are going to make decisions that suit the reality of this project, and the usability test results will be documented in section 12.
4.5.6
Conclusions
QUnit provides a basic framework for synchronous and asynchronous testing in JavaScript; jquery-mockjax extends QUnit with the capability of mocking ajax requests and responses. Selenium provides a comprehensive framework
for automating tasks in the browser, as well as methods for inspecting the DOM.
Using a combination of QUnit, jquery-mockjax, and Selenium we can efficiently
use test-driven development in combination with PhoneGap.
4.6
Field study
The field study was conducted on the 5th of October at Artsdatabanken's headquarters in Trondheim. The following are our findings after an example species
observation and a talk about species observations in general.
Species are registered individually or in collaboration with two or more individual observers. Observations, or sightings, are a collection of species and attributes about the species, such as name, the number of observed individuals, the activity they were engaged in during observation, age, sex, observation start and end date and time, observer, co-observer, and location. The quality of registered data is validated by reputation and bibliometric qualities.
Figure 9: Counting number of different species
Each observation must have a sighting location. Right now the user has to register the location manually while submitting the observation online. Sighting location and species picture are non-negotiable features of the system. A location can be known and already registered, or can be derived from GPS coordinates. Locations are classified as super- and sub-locations. Pictures are used as convenient, for example only for plants and other immobile species candidates.
The customer wants the team to provide full mobility to the observers, with offline data storage and synchronization, and a thorough comparison of the possibilities of the iOS and Android platforms together with what they stand to gain or lose by adopting the technologies. The customer also suggested keeping all the functionality of the desktop solution, broken down appropriately for the mobile device.
During any sighting, the recommendation is to use an approximate count for common species, while being strictly accurate for rare species or not stating a number at all. Each observation is limited to one type of species, for example birds, plants or mammals. Different species groups will not be mixed within a single observation.
Figure 10: Observation can include rare and endangered species
4.7
Conclusions
In this chapter we have looked at different technologies and compared them to
each other. We have discussed and concluded on many important decisions for
the ongoing project. This pre-study will help us and our client make decisions
in current and future projects.
In summary, the team will use a modified version of the SCRUM methodology, which will give the team the flexibility to adapt to a changing requirements specification, and support continuous delivery and test-driven development. The application
will be developed using the PhoneGap framework, and it will be published on
multiple platforms with the help of PhoneGap build. For the user interface and
application behaviour, we will use jQuery Mobile.
For version control, the team will use git, and the main repository will be
published on GitHub, as an Open Source application. All documentation will
be written in LaTeX.
5
5.1
Requirement specification
Introduction
This section contains a detailed description of the functional and non-functional requirements for the system, use cases that describe the interactions the users will have with the application, and the security analysis.
The purpose of this section is to outline the needs of the application and to map the flow of use to specific requirements and guidelines. It is important for the group to get a good understanding of the requirements in order to deliver a good system to the customer. This makes it necessary to write good and clear documentation of these requirements, so that all group members can easily understand them and refer to them.
5.2
Requirements gathering methodology
In system development, understanding the customer's way of thinking, expectations and basic needs is the first major task any system developer has to tackle. Understanding the customer saves time, results in a happier customer and a better reputation, and boosts development productivity[26]. Eliciting the right requirements and following up on them should in general keep customers and developers on the same page. Requirements gathering techniques, if applied accurately, serve as a suitable preemptive measure against incomplete, unclear and constantly evolving requirements. They also hold both parties to the legal and contractual agreements between a customer and a developer. Below is a list of industry-proven requirements capturing techniques[31].
5.2.1
Background study
A developer can understand the internal operations, services and cooperation
of a company by reading about the background of the organization. Specialized
tasks, projects, cultures and improvement potential should be researched by
reading company reports, organizational charts, policy manuals, job descriptions
and documentation of existing systems.
5.2.2
Interviewing and questionnaires
The system analyst interviews the personnel of an organization to understand how they accomplish their day-to-day activities. It is very advantageous to gather firsthand information about priorities, objectives and potential improvements for the system from the perspective of all stakeholders, including employees and management. Even though the information gathered from an interview can be invaluable, interviews have a tendency to go off topic and be extremely expensive. Questionnaires can be an alternative way of providing a goal-oriented investigation into the operation of an existing system, its current satisfaction among users and suggestions on how to improve it.
5.2.3
Observation and document sampling
Observation and document sampling is a technique that’s effective for capturing
requirements that are impossible to understand through interviews and questionnaires. It will also enable analysts to gather quantitative data on how long
a task takes to complete in real time. However, observation can be problematic: requirements that involve sensitive information, such as private, medical, educational and other protected information, cannot be handed out to just anyone.
5.2.4
Summary
The customer already has a web site up and running, which gave the group insight into how the requirements were to be gathered. The team explored a set of well-known requirements gathering techniques, as described above, and decided to use background reading, observation, document sampling and, if needed, some email-based questions or clarifications.
The reason behind these decisions is that Artsdatabanken has a publicly available page that tells a great deal about how they make their content accessible to the public, and this represented a large part of the material for our document analysis. In addition, Artsdatabanken organized an actual observation session for the group, in which every one of us participated. Thus the requirements gathering process will strictly follow the procedures of the background reading, observation and document sampling techniques. We also found interviews and questionnaires to be a bit of a diversion for an already well-documented system.
5.3
5.3.1
General overview
The process
In our first meeting with the customer we got information about what they
do and what they want from us. Both functional and non-functional requirements were discussed in general. From this our group got an overview of our
objectives. We had some issues to take up with them about a non-functional
requirement; this is described in detail later in this section. After this meeting the group members made a preliminary study to get a better overview of the task.
Our task became clearer in the second customer meeting. This time the meeting was on their premises, where we conducted a field study and saw how the application was to be used. Here we got more details on the functional requirements. After the field study we had a meeting with the customer and discussed the requirements (functional and non-functional). We talked about which functions were most important, and assigned priority labels to them.
5.3.2
Project directive
The purpose of this project is to deliver an application that will satisfy the customer's expectations. This application is meant to be a facilitator for the users of Artsdatabanken, which will make the registration of an observation simpler
and more effective. At this point in time the observer has to make notes (in a
book) while observing, and then enter the noted data on Artsdatabanken’s web
page. The application’s purpose is to replace the notebook, so users can make
notes in the application. This way the observer doesn’t need to spend time
entering the observation data on the web page manually. This will be done by
the mobile application. Saved data on the mobile will be exported to a format
that is easily submittable to the website, whenever the mobile application gets
access to an Internet connection.
To reach this goal, the application needs to be simple to use and have sufficient
functions to be preferred to a notebook.
5.3.3
Target audience
The application is targeted both at professional users, who will "use whatever they're given" yet have different main focuses, and at more casual users.
The main target of this application will be a group that is already familiar with Artsdatabanken. In general they will have experience registering data on the web page, and will therefore not have difficulties using this application.
There is also potential for new users who might start using the services of Artsdatabanken if given an easy-to-use interface.
5.3.4
Project scope
The application will be used to enter observation data. An observation will have a sufficient set of data fields, and it will be possible to make multiple observations. Observations stored on the mobile device can be exported to the web page. Observations created earlier may also be edited and re-exported.
This project's scope ends at delivering the correct data to the web page; further processing after that is not covered by this project.
5.4
Functional requirements
The functional requirements are listed in this section. They are meant as a
short description of each requirement, and are further elaborated by the use
cases. This is a dynamic list and will be updated during the project work if the
customer wants more from the application.
5.4.1
Priority
The priorities are set according to the following description.
• H(High) - These requirements are essential for the prototype to work satisfactorily. These requirements will be prioritized first.
• M(Medium) - These requirements are important for the prototype to work
satisfactorily. They are important, but not critical for the prototype.
• L(Low) - These requirements are not important for the prototype. They
are ”nice to have”, but will be implemented last.
5.4.2
Complexity
The complexities are set according to the following description.
• H(High) - These requirements are hard to do, and will take a lot of time
to implement.
• M(Medium) - These requirements are possible, but will take some time to
implement.
• L(Low) - These requirements are easy, and can be implemented in a short
time frame.
F1 - User must be able to create a new observation (Priority: High, Complexity: Medium)
F2 - User must be able to add more information to a species that has been added to the observation (see use case for details) (Priority: High, Complexity: Low)
F3 - User must be able to add more species to an observation (Priority: High, Complexity: Low)
F4 - User must be able to export stored observations so they can be uploaded to Artsdatabanken later (Priority: High, Complexity: High)
F5 - User must be able to take a picture with the device's camera (Priority: Medium, Complexity: Medium)
F6 - User must be able to view observations that are stored on the device (Priority: Low, Complexity: Medium)
F7 - User must be able to edit observations that are stored on the device (Priority: Low, Complexity: Low)
F8 - User must be able to update the local database of species and locations (Priority: Low, Complexity: High)
F9 - An observation must contain GPS coordinates of where it was created (Priority: Medium, Complexity: Medium)
F10 - When typing in an observation the application must provide an auto-complete facility (Priority: Medium, Complexity: Medium)

Table 7: Functional requirements
5.5
Nonfunctional requirements
Non-functional requirements are system quality attributes or constraints that
describe how a system performs its prime functionality. They are characterized
by having no clear-cut criteria, or by criteria that change depending on the environment and development strategies[26]. Non-functional requirements state a series of qualities or constraints that can influence service delivery, development choices, and the needs of the system owner and end users. They specify the criteria used to judge the overall operation of a system, rather than specific behaviors. The non-functional requirements for this project are elicited in accordance with the ISO 9126 recommendation, which particularly focuses on the quality in use of a software system. A short summary of ISO 9126 is given in the development process section of this documentation.
5.5.1
Quality of service
Security, reliability and performance
• The system should not force users to wait for longer periods of time
• The system should not lose offline data
• The system should not breach the privacy of an observer
• The system should be durable
Accuracy
• The system should not cause unexpected alteration of input data
• The system should not cause decline of image quality
• The system should accurately export data
Interface
• The user interface should be consistent with Artsdatabanken’s style
• The user interface should have an intuitive design
• The user interface should be sufficiently fast
• The user interface should be optimized for mobile devices
• The user interface should operate smoothly across multiple platforms
5.5.2
Compliance
• The system should not violate the usage agreements for target platforms
• The system should comply with device platform recommendations where possible
• The system should comply with user privacy requirements
• The system should comply with security requirements of the software owners
5.5.3
Architectural design
• The architecture should be clearly defined
• The architecture should not hinder upgrades / updates
• The architecture should adhere to best practices in software development
• The system should be maintainable
• The system should have easily replaceable components
5.5.4
Development paradigm
• The development paradigm should encourage regression testing
• The system should be developed using test driven development
5.6
Use cases
We use one actor named ”Observer”. This refers to ordinary people observing wild-life for Artsdatabanken. For a detailed description of the traditional
observation process, see section 4.6.
Figure 11: Create New Observation

Requirement(s): F1
Preconditions: User wants to register an observation
Flow:
1. User taps the new observation button
2. User selects a species type (bird, bug, etc.)
3. User selects location from a list of close locations, or selects GPS location
4. User adds one species, helped by auto-complete
5. User saves the observation
Extensions:
1a. Add number observed to species
4a. Add more info to the species, see Use Case 2
4b. Add more species into observation, see Use Case 3
Postconditions: A new observation has been saved and the user is directed back to the main menu
Complexity: Medium
Priority: High
Figure 12: Add More Information to Species

Requirement(s): F2
Preconditions: User wants to specify more details about an observation
Flow:
1. User taps the species row to bring up the new detail window
2. User selects additional info to enter from categories such as Activity, Age, Sex, Start Date, Start Time, End Date, End Time, Comment and Picture
3. User taps OK to get back to the main observation window
Extensions: None
Postconditions: Additional information about an observation has been saved
Complexity: Low
Priority: High
Figure 13: Add another species to observation

Requirement(s): F3
Preconditions: User wants to add another species to the observation
Flow:
1. User taps the 'Add species' button
2. User optionally selects another location, otherwise the same one is kept
3. User selects a species, helped by auto-complete
4. User taps 'OK' to go back to the main observation window
Extensions: None
Postconditions: An additional species has been added to the observation
Complexity: Low
Priority: High
Figure 14: Export observations

Requirement(s): F4
Preconditions: User wants to export their observations
Flow:
1. User taps the 'Export' button on the main screen
2. User selects the observations to be exported
3. User taps the export button
Extensions: None
Postconditions: Observations are exported in Excel or XML format to the user's email so they can be imported into the online system later
Complexity: High
Priority: Medium
Figure 15: Take picture

Requirement(s): F5
Preconditions: User wants to take a picture to be attached to an observation; the device has a camera
Flow:
1. User taps the 'Take picture' button
2. User takes or selects a picture
Extensions: None
Postconditions: The picture is stored on the phone with an easily recognizable filename so it can be attached to an observation later
Complexity: Medium
Priority: Low
Figure 16: View observations

Requirement(s): F6
Preconditions: User wants to view locally stored observations
Flow:
1. User taps the 'View observations' button
2. User selects the observation from a list of stored observations
Extensions: None
Postconditions: None
Complexity: Medium
Priority: Low
Figure 17: Edit observations

Requirement(s): F7
Preconditions: User wants to edit a previously created observation; previously made observations exist
Flow:
1. User taps the 'View observations' button
2. User selects the observation from a list of stored observations
3. User taps the 'Edit' button
4. User makes the desired changes and/or additions
5. User taps the save button
Extensions: None
Postconditions: Changes are stored on the device
Complexity: Low
Priority: Low
Figure 18: Update Database

Requirement(s): F8
Preconditions: User wants to update the database because new species or locations have been added to the website
Flow:
1. User taps the options button
2. User selects the option to update the database
Extensions: None
Postconditions: New species and locations are stored on the phone to help the user with auto-complete and with correctly choosing locations for observations
Complexity: Medium
Priority: Low
6
System architecture
Figure 19: Overall system architecture
We use a layered architecture so platform-specific issues are handled using
PhoneGap as a platform-independent layer above the operating system. Following is a detailed description of each layer, starting at the bottom.
6.1
Layer 1 - Native platform
This layer represents the native operating system of each device (Android, iOS, BlackBerry, etc.). Services provided by this layer include data input/output operations, geolocation (for some devices), and so on. The operating system is
our interface to the hardware.
6.2
Layer 2 - Cross-platform framework
Layer 2 provides device independent abstractions for file operations, camera
access, geolocation and other multimedia provisions. The primary purpose of
this layer is to allow us to make portable code that can be used on Android,
iOS, and other operating systems with minimal (or no) changes to the code.
PhoneGap provides an HTML5, JavaScript, and CSS interface for the layer
above.
6.3
Layer 3 - Mobile app
Layer 3 is the actual app; this is where our implementation will be placed. This layer has an internal structure in which we use the MVC pattern combined with some layering. In addition to the components shown in the diagram, the jQuery family of frameworks and utilities is used throughout the model, view and controller (MVC) structure.
6.3.1
Data access
The data access sub-layer is responsible for all I/O-operations. This is where we
access local and remote storage. The data access layer provides domain centric
functions for accessing common data sources. E.g. ObservationDAO is used to
store and retrieve observations to or from local storage.
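A minimal sketch of this idea (illustrative only, with assumed function names; the real ObservationDao.js is described in the sprint chapters) is a small object that hides where the data actually lives:

var ObservationDAO = (function () {
    var stored = [];                       // stand-in for local storage / Web SQL
    return {
        // Persist an observation object
        saveObservation: function (observation, onSaved) {
            stored.push(observation);
            if (onSaved) { onSaved(observation); }
        },
        // Retrieve all locally stored observations
        getObservations: function (onLoaded) {
            onLoaded(stored.slice());
        }
    };
}());

Callers only deal with observations; swapping the storage mechanism later does not affect the rest of the app.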
6.3.2
Model
The model will be used to represent data used in the app. We can for example
create a class, Observation, to represent all data related to an observation.
6.3.3
Controller
The controller is responsible for coordinating different parts of the system, like
handling (binding) events.
6.3.4
View
This component represents the graphical user interface; it entails code for generating visual effects and updating the screen. To save work and reduce potential typing errors, we will be utilizing functionality (like auto-complete and common mobile UI widgets) from the jQuery family.
6.4
Communication
The app will be shipped with auto-complete data (primarily species names)
downloaded from Artsdatabanken’s web services to allow for convenient typing in offline mode. It will also be prepared to communicate directly with
Artsdatabanken’s web services with the purpose of uploading observations and
downloading species information (for the prototype we will use a communication format specified by Artsdatabanken, the actual communication will not be
prioritized as the API for uploading observations is not yet available).
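As a hedged sketch of what such an upload could look like once the API exists (the endpoint, format and field names below are assumptions, not Artsdatabanken's published interface):

// Hypothetical upload of one observation as JSON
$.ajax({
    url: "/api/observations",                                  // assumed endpoint
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify({ location: "Trondheim", species: ["Parus major"] }),
    success: function () { console.log("observation uploaded"); },
    error: function () { console.log("upload failed, keeping local copy"); }
});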
6.5
Conclusions
The architecture was selected with maximum care to take into consideration the
constraints of mobile work environments, which are characterized by a relatively
limited computing capacity, unpredictable disconnections and the mobility of users into and out of connectivity range. The architecture tries to balance the load
on the mobile device and acceptable performance of the mobile application.
7
7.1
Sprint 0
Sprint planning
For this sprint, we plan to get familiar with the project, and make important
decisions about technologies and development processes. We will conduct a field
study at Artsdatabanken where we will get an introduction into species observation and the process behind it. In addition to this, we will make a prototype of
the user interface in jQuery Mobile to show the customer what our plan for the
application is. This is done to clear up most of the misunderstandings about the
requirements as early as possible, and give the customer feedback about what
our vision of the application is.
Sprint 0 is an introductory sprint, and will not run for two full weeks.
7.1.1
Expected results
This sprint should provide significant insight into the problem domain, and how
we are going to solve it. The preliminary study is a very important part of
this sprint. In addition, the first demo should be provided, and all important
decisions about the technical and logical aspects of the project should be made.
7.1.2
Duration
Start of Sprint 0 is September 4th, and it will last until September 11th (week
36). During this sprint, the field study is scheduled for September 5th. The
scheduled customer meeting is on the same day, and the advisor meeting is on
September 6th.
7.2
7.2.1
User Interface
Overview
The choice of PhoneGap for developing a cross-compiled mobile application gave us the possibility to use HTML/CSS for the application design. The team was able to use existing libraries and templates (CSS, JavaScript, HTML, etc.). The JavaScript library jQuery, along with jQuery Mobile with its touch-optimized layouts, good design and simplicity, was found to be a suitable choice for the application development and was therefore chosen.
7.2.2
Choice of Template
Artsdatabanken has a web page optimized for mobile phones, which also uses jQuery Mobile. The customer wanted the application to have the same template as the web page. To achieve that, the team used the CSS file from the web page in the application. This way, the application looks very similar to the mobile web page.
Figure 20: Similarity of the application (left) and the webpage (right).
7.2.3
Preview of the UI with Screenshots
Main Screen Figure 21 shows the main screen of the application. The buttons are clear and the user can simply make a selection. When the user makes a selection, the selected button turns blue, so the user can be sure that the right button was pressed.
Figure 21: Main Screen, Right:Button Clicked
New Observation After selecting New Observation, the page transitions to the window displayed in figure 22. Here the user selects the species type. The names are displayed with pictures on the side. More types are listed further down; the user needs to slide the interface with a finger to see them.
The selection gives feedback to the user, as on the main screen.
There is a back button at the top left. This button leads to the previous window.
Figure 22: New Observation, Right:Button Clicked
Bird Observation After selecting the species type, in this case bird, the user is taken to a new window (figure 23). This is the window where the information about the observation is added.
The user can easily fill in the fields that are required. The selected field is marked with a blue shadow so the user knows which field is active. The species field has an auto-complete function, which makes it simpler for the user to type the correct name.
Figure 23: Bird Observation, Right:Auto-Complete field
Add More information There are several buttons here for the user. If the
user wants to add more information about the species, the “Add More Information” button is selected. This leads to a new window (shown in figure 24). To
go back after entering new data, the user selects the back button.
Figure 24: Add More Information, Right:New window after select
Add New Species If the user wants to add new species, the user selects the
“Add another species” button. This adds a new row in the same window; as new species are added, the window increases in height. The user needs to scroll vertically if many species are added.
7.3
Customer feedback
At the end of this sprint we held a meeting with the customer on September
13th. The team presented the conclusions obtained in the preliminary study,
in addition to the first demo of the application. The customer agreed upon
our choices and application design, and also gave some useful suggestions about
the colors and design guidelines for Artsdatabanken. PhoneGap proved to be a
good choice for development. As requested in the customer meeting, the next
sprint should focus on auto-completion of species names when making a new
observation.
7.4
Evaluation
Figure 25: Add Another Species, Right:Window with the new species line

This shorter, but no less important, sprint provided very valuable information about the problem domain. A lot of decisions were made that will continue to guide the rest of the development process. From the preliminary study, the
team now has a good understanding of the problem domain. A lot of important
issues were discussed in detail, such as development environment, programming
and scripting languages, the field study, etc. All members were involved in these discussions, including both the advisor and the customer. In addition, the sprint
included some programming, and laid the foundation for further development.
The customer agreed with most of our suggestions, and gave us guidelines for
further development and studies. Documentation was done by the whole team,
and the progress was very satisfactory. However, documenting a lot of things in
a short time led to lower document quality. After consultation with the advisor,
it was decided that a lot of key changes should be made in the next sprint, and
the document should be reorganized.
8
8.1
Sprint 1
Sprint planning
In sprint one, we started developing and laying the groundwork for further implementation. The customer wanted us to focus on getting auto-complete working
for the first sprint. This would allow us to get the basic functionality of the app
working as soon as possible. We also focused on creating a preliminary system
architecture.
8.1.1
Expected results
From this sprint we expect to have auto-complete working for species names.
In addition, we will continue improving the documentation.
8.1.2
Duration
The start of Sprint 1 is September 12th, and it will last until September 25th
(weeks 37 and 38). The customer meeting is on September 27th, just after the
completion of the sprint. The advisor meetings are on September 13th and
September 20th.
8.2
Requirements
Relevant requirements for this sprint are
F1 User must be able to create a new observation
F3 User must be able to add more species to an observation
F10 Autocomplete
8.3
Implementation
In this sprint, we started implementing the underlying classes and control logic, as the use of jQuery Mobile had already done most of our GUI work. The app consists primarily of main.js, Observation.js and ObsSpec.js.
main.js contains code related to startup and transitions between pages, as well as static functions. When the app transitions to the Observation page, a new Observation object is created, with a pointer to it stored as a global variable in main.js. This makes it easy to access through calls from the UI. When the app leaves the Observation page (unless the user transitions to the extended information page), its DOM elements are removed to counteract jQuery Mobile's caching of pages, in order to make sure a new observation is created each time that page is loaded.
Observation.js holds the functionality connected to the observation itself. It contains information such as when it was created, its unique id, GPS coordinates (not yet implemented) and other helper values, like its observed species (ObsSpec.js objects) and states. To add a species to an observation, a method in Observation is run which creates a new ObsSpec object, pushes it onto a stack stored in Observation and appends HTML code for it with jQuery.
ObsSpec.js holds functionality connected to one species within an observation. It contains the details stored about a species, along with an id that is unique within the observation.
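A simplified sketch of this structure is shown below. It is illustrative only, with assumed field names and a hypothetical #species-list selector; the real files contain more state and helper code.

function ObsSpec(id, name) {
    this.id = id;              // unique within the observation
    this.name = name;          // species name, later extended with age, sex, etc.
}

function Observation() {
    this.created = new Date();
    this.species = [];         // the ObsSpec objects belonging to this observation
}

Observation.prototype.addSpecies = function (name) {
    var spec = new ObsSpec(this.species.length + 1, name);
    this.species.push(spec);
    // Append a row for the new species; jQuery Mobile can then restyle the list
    $("#species-list").append("<li data-spec-id='" + spec.id + "'>" + name + "</li>");
    return spec;
};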
8.4
Testing
Most of the test related effort in this sprint was used to prepare the testing
framework, consisting of QUnit for unit testing and mockjax for mocking AJAX
requests / responses. See section 4.5.2. We decided to run the tests on normal
(desktop / laptop) computers as it was problematic to run directly on mobile
devices. Using techniques like mocking we should still be able to generate a
good test coverage.
We implemented unit tests for the filtering mechanism involved in our custom
auto-complete function. Tests can be found in the source code.
8.5
Customer feedback
The customer is happy with our current progress. However, the customer noticed that we used the wrong API for our species names generation. This API
was much slower, and was optimized for writing information instead of reading
it. In our next sprint, we will use the new API, and also make some modifications for performance reasons.
8.6
Evaluation
The work was completed on time, and all requirements in the sprint backlog
were completed. Both the advisor and customer were happy with our current
progress. The group is working well, and except for some sickness slowing down
progress somewhat, we completed everything in our workload for this sprint.
9
9.1
9.1.1
Sprint 2
Sprint planning
Expected results
From this sprint, we expect to have a working local storage on the phone. This
will provide persistence of observations in the phone. In addition, the document
will be ready for the pre-delivery 6th of October.
9.1.2
Duration
Start of Sprint 2 is September 26th, and it will last until October 9th (weeks
39 and 40). Customer meeting is on October 11th, just after the completion of
whole sprint. Advisor meetings are on September 27th and October 4th.
9.2
Requirements
F10 Autocomplete should be able to load data based on species category
F2 User must be able to add more information to a species observation
F6 User must be able to view observations stored on the device
F7 User must be able to edit a stored observation on the device
9.3
Implementation
A species in the observation can now have more information added to it via the 'Extended Information' page. A click on the details button is caught by jQuery, and the id of the species is found by walking up the button's parents in the DOM, where a div contains the id. The activeExtended attribute in Observation is set to that species, the application transitions to the Extended Information page, and the values from the species object are read and filled in with jQuery. When transitioning back to the Observation page, all information is saved to the object again and the row of the edited species is updated on the Observation page.
The user may now view stored observations; a button for this has been made functional on the main page.
ObservationList.js populates a list of saved observations. The user can select an observation, identified by which species category it is related to and when it was created, as well as its id. When an observation is selected, two global variables holding the id of the observation and its species group are set, and the user is redirected to the Observation page. The observation is read from storage and loaded into the DOM, editable in the same way as when it was first created, in order to provide a simple and easily recognizable interface.
This uses the same code as when a new observation is created; however, when a new one is created, the global variables are unset and a new observation object is created instead by main.js.
9.3.1
Auto-complete
Auto-complete has been extended to read files based on species category. Initially each category was included in a single file, but this caused some performance issues due to the large amount of data (the browser froze while reading the files). To circumvent this issue, the team decided to split each category into multiple files, keeping one folder per category and one file per prefix.
To achieve this, the following conventions / specifications will be used (a small loading sketch follows the list):
• Auto-Complete data is rooted in the directory WEBROOT/data/autocomplete
• Each species category will get a sub-folder under the auto-complete root.
This will be named after the category id (an integer).
• Each category directory should contain one index.js file that keeps an
overview of all the prefix-files for the category.
• Files in the category directory are named by the following rules:
– a.json contains only entries prefixed by a
– a_b.json contains entries prefixed by a or b (can be extended with more entries)
– a-z.json contains entries prefixed by a, b, c, ..., or z
– a_c-e_q_t.json contains entries prefixed by a, c, d, e, q, or t
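The sketch below shows how a prefix file could be located and loaded under these conventions. It is illustrative only: the index format, file names and function names are assumptions, not the project's actual loader.

function loadSuggestions(categoryId, term, onLoaded) {
    var prefix = term.charAt(0).toLowerCase();
    var root = "data/autocomplete/" + categoryId + "/";

    // The index file is assumed to map each prefix letter to its file name,
    // e.g. { "a": "a_b.json", "b": "a_b.json", "c": "c-e.json", ... }
    $.getJSON(root + "index.json", function (index) {
        var file = index[prefix];
        if (!file) { onLoaded([]); return; }
        $.getJSON(root + file, function (entries) {
            // Keep only the entries that actually start with the typed term
            onLoaded($.grep(entries, function (name) {
                return name.toLowerCase().indexOf(term.toLowerCase()) === 0;
            }));
        });
    });
}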
9.3.2
Storage
Storing observations locally on the phone is achieved using the PhoneGap local
storage API, which is based on the W3C Web SQL database specification and
W3C Web Storage API specification. In short, it uses the SQLite database
management system to store data.
The database consists of three tables:
• Observations, which contains the observation id, location, creation date, and the species group (bird, mammal, fish, etc.) identifying the type of observation
• Species, which contains the actual data of the observation: type of species, age, sex, etc.
• Pictures, which contains a URI for each image attached to a species (the image itself is stored elsewhere on the phone, depending on the device/camera app)
Figure 26: Diagram showing the database tables and their relations
The reason the database is split like this is to allow each observation to contain
many species, and allow each species to contain many pictures. See figure 26
for details. The two latter tables have the primary key(s) of the preceding table
as foreign key(s).
The storage is implemented in the ObservationDao.js file, which contains methods to initialize the database, as well as various methods to store and retrieve
entries for observations, species and pictures.
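The following minimal sketch shows the Web SQL calls involved; it is not the project's ObservationDao.js, and the column list is abbreviated compared to the real three-table schema in figure 26.

var db = window.openDatabase("adbmobile", "1.0", "ADB Mobile", 2 * 1024 * 1024);

db.transaction(function (tx) {
    // Create the (simplified) Observations table and insert one row
    tx.executeSql("CREATE TABLE IF NOT EXISTS Observations " +
                  "(id INTEGER PRIMARY KEY, location TEXT, created TEXT, speciesGroup INTEGER)");
    tx.executeSql("INSERT INTO Observations (location, created, speciesGroup) VALUES (?, ?, ?)",
                  ["Trondheim", new Date().toISOString(), 1]);
}, function (err) {
    console.log("SQL error: " + err.message);   // transaction error callback
}, function () {
    console.log("observation stored");          // transaction success callback
});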
9.4
Testing
We focused primarily on unit testing and informal testing in a ”sandbox” environment. Informal testing consisted of visual inspection of the user interface
and unstructured functional testing (i.e. functional testing that we did not document). The informal testing gave some ideas as to how we could develop and
document our functional tests later in the project; it was also helpful in verifying
that functionality in the frameworks worked as expected.
Module - Test description (number of assertions, result):

AutocompleteDao - should set categoryRoot to directory above index.js when loaded (1 assertion, PASS)
AutocompleteDao - should fail gracefully if loading file by category fails (1 assertion, PASS)
AutocompleteDao - should be able to load file based on prefix (6 assertions, PASS)
AutocompleteDao - should be able to determine if term can be completed with current prefix (5 assertions, PASS)
Autocomplete - should return 0 suggestions on no matches (2 assertions, PASS)
Autocomplete - should return 3 ordered suggestions on exactly 3 matches (5 assertions, PASS)
Autocomplete - should return 6 ordered suggestions on exactly 6 matches (16 assertions, PASS)
Autocomplete - should return 6 ordered suggestions on more than 6 matches (8 assertions, PASS)
Autocomplete - should be able to activate auto-complete for input element (2 assertions, PASS)

Table 8: Excerpt from unit testing
By "sandbox" environment we refer to testing on a PC, in addition to using a debug section of the app on mobile devices that will be removed before project completion. By re-running the unit tests from sprint 1 we had efficient regression testing. This was very helpful in the process of expanding functionality. We have developed a complete set of unit tests for the auto-complete and storage functionality. Table 8 contains an excerpt from the unit testing.
In conclusion, we used unit testing for functionality that has well-defined input and output parameters. Other functionality, like the user interface, was tested by visual inspection and informal functional testing.
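As a hedged illustration of the style of these tests, the sketch below shows roughly what the "should return 3 ordered suggestions on exactly 3 matches" case from Table 8 could look like in a Jasmine-like framework. The Autocomplete interface used here (a constructor taking a word list and a suggest method) is an assumption made for the example, not the project's actual API, and the sketch is simplified compared to the real test.

describe("Autocomplete", function () {
    it("should return 3 ordered suggestions on exactly 3 matches", function () {
        // Hypothetical constructor taking the candidate species names.
        var autocomplete = new Autocomplete(["grågås", "gråhegre", "gråmåke", "blåmeis"]);
        var suggestions = autocomplete.suggest("grå");

        expect(suggestions.length).toBe(3);
        expect(suggestions).toEqual(["grågås", "gråhegre", "gråmåke"]);
    });
});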
9.5 Customer feedback
The customer was happy with our current progress. As this was a big sprint for
us, we couldn’t fully complete all our items in the sprint backlog. The customer
stressed that we start thinking about how to do the export functionality for the
next sprint.
9.6 Evaluation
The implementation of the main part of the application went as expected, with the main focus on creating a good storage system and improving the auto-complete from the previous sprint. The creation of the storage system also made it possible to create sections required in the specifications, such as viewing and editing observations.
Auto-complete now works with satisfactory speed, even on the slower devices.
Splitting the files allowed even the largest files to load sufficiently fast. However,
it is not yet using the correct data. This will be rectified in a later sprint.
After this sprint the storage module allows for correctly and reliably storing
and retrieving observations and their attached species. PhoneGap’s storage
API made it a bit simpler than writing to file, which was our initial approach
to the problem.
10 Sprint 3
10.1 Sprint planning
In this sprint, the team worked on rectifying the technical errors in the implementation of the sprint 2 requirements. More work was required to finish the auto-complete, removing names that are not species. Being able to add new species when editing an observation and fixing storage on the phone were among the modifications required.
10.1.1 Expected results
The expected results are completed local storage and auto-complete functionality, and a decision on how to best export locally stored observations. The team is expected to divert more of its resources towards getting the system to work on the Android platform and discussing the terms of the usability testing with the customer.
10.1.2 Duration
Sprint 3 has a duration of two weeks, from October 10 (beginning of week 41) to October 23 (end of week 42). The customer meeting was held on Wednesday October 12, 2011, and the advisor meetings were held on October 11 and October 18.
10.2 Requirements
Some of the requirements are from sprint 2 and the goal is to make them free
from bugs and issues and focus more on exporting the observation.
F4 User must be able to export stored observation so they can be uploaded to
Artsdatabanken later
F7 User must be able to edit a stored observation on the device
10.3 Implementation
Editing of observations was undertaken in the previous sprint as this was a very
simple and quick extension of the view observations functional requirement.
10.3.1 Storage
We had an issue with storage not working correctly due to the browser implementation on newer versions of Android. This led to the application not working on many phones. In the end, the problem was that Android versions greater than 2.1 do not support null callbacks in the storage functions, which led to a fatal error while creating an observation.
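The sketch below illustrates the kind of change this required, assuming a db handle opened as in the storage sketch in section 9.3.2; the table and column names are illustrative.

db.transaction(function (tx) {
    tx.executeSql("INSERT INTO Observations (location) VALUES (?)", [location],
        function () {},              // success callback: must be a function, not null
        function (tx, err) {         // error callback: must be a function, not null
            console.log("Storage error: " + err.message);
        });
});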
10.3.2 Export
The main target of this sprint's implementation was to export the observations in a format readable by Artsdatabasen 1.0/2.0. This was achieved using a format similar to CSV, but with tab-separated values instead of commas. The data is sent in clear text to an email address specified by the user, where the user can easily copy and paste the data into a form in the Artsdatabanken system. A PhoneGap plugin, WebIntent [42], was used for triggering the email event on Android phones. No solution for iPhone devices has been implemented; however, we have observed that several solutions are available.
Our export system is based on a simple solution where the observation object retrieves all the fields for each of its species, prints them to a string and concatenates the results.
This string is then passed as a JSON object from JavaScript to the WebIntent plugin, which is written in Java and runs on the Android platform. The observation string is sent as meta-data with a "SEND" intent, which will prompt the user for an appropriate program to handle it, typically an email client, as suggested by the customer.
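The following sketch illustrates the idea. The field order in the tab-separated string is illustrative and must in practice match what the Artsdatabanken form expects, and the option names passed to the WebIntent plugin are assumptions about the plugin fork in use rather than a documented, stable API.

// Build one tab-separated line per species in the observation.
function buildExportString(observation) {
    var lines = [];
    for (var i = 0; i < observation.species.length; i++) {
        var s = observation.species[i];
        lines.push([observation.location, observation.createDate,
                    s.name, s.count, s.age, s.sex].join("\t"));
    }
    return lines.join("\n");
}

// Hand the string to an email client through a SEND intent (Android only).
window.plugins.webintent.startActivity({
    action: "android.intent.action.SEND",
    type: "text/plain",
    extras: { "android.intent.extra.TEXT": buildExportString(currentObservation) }
}, function () {}, function () { alert("Could not start an email client"); });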
By testing on mobile devices we have ruled out some performance issues and bugs; at this point the auto-complete is usable and performs within reasonable delay limits.
10.4 Testing
Previously we had done most of our testing using PCs; in this sprint we focused more on mobile devices. We uncovered some problems with the storage functionality (see section 10.3.1). Additionally, we discovered a bug in the auto-complete that caused it to load excessive amounts of data from the file system.
10.5 Customer feedback
The customer was satisfied with our work and progress. The customer also stressed that the team should make the export system work, and suggested that the team implement things in a less complex and more practical manner. The customer considers the export functionality the deal breaker (or maker) of the prototype, and the team was advised to complete it as soon as possible according to the schedule and to remove the existing errors in the implementation so far.
10.6 Evaluation
The sprint generally went well, but exporting required some additional time because our plugin was written for an earlier version of PhoneGap, and including images proved difficult. The team therefore had to put this on the sprint 4 backlog ahead of further work. Sprint 3 saw about eighty percent completion of the total requirements, and the team's progress is considered good by both the advisor and the customer.
11 Sprint 4
11.1 Sprint planning
This is the last sprint in this project. The group intended to finish all the requirements in the sprint backlog and formally close major functionality development. In this sprint, the team's focus is on rectifying the previous sprints' technical errors and completing the export functionality of the application. We will also look at the GPS and camera functionality.
11.1.1 Expected results
Sprint 4 is the last of our formal sprints, and all requirements will be completed
by the end of this sprint. The expected result is a working export functionality,
in addition to GPS coordinates and working camera functionality.
11.1.2 Duration
Sprint 4 began on October 24th in week 43 and lasts until November 6th in week 44. Sprint 4 has a duration of two weeks, just like the other sprints. The team held its customer meeting for this sprint on Tuesday October 25, 2011. The advisor meeting, scheduled for October 25th, was cancelled due to absence.
11.2 Requirements
Some of the requirements are from sprint 2 and the goal is to make them free
from bugs and issues and focus more on exporting the observation.
F5 User must be able to take a picture with the devices camera.
F8 User must be able to update the local database of species and locations
F9 An observation must contain GPS coordinates of where it was created.
11.3 Implementation
An observation can now be updated with GPS coordinates, which are generated by the native geolocation features exposed by the PhoneGap API and work on all supported PhoneGap platforms.
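A sketch of how such coordinates are obtained through the standard PhoneGap/W3C geolocation API follows; the handler that stores them on the observation object (currentObservation) is illustrative.

// Ask the device for its current position and store the coordinates
// on the observation currently being edited.
navigator.geolocation.getCurrentPosition(
    function (position) {
        currentObservation.latitude = position.coords.latitude;
        currentObservation.longitude = position.coords.longitude;
    },
    function (error) {
        alert("Could not get position: " + error.message);
    },
    { enableHighAccuracy: true, timeout: 10000 });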
The activity box has been filled with the provided list of valid activities, but because there are so many, we implemented a select plugin called Chosen [19]. This is a hybrid of a select box and autocomplete, but in order for it to function properly within the jQuery Mobile framework we had to create an invisible input field, append the HTML code for Chosen, and activate it after the page has loaded. We did it this way in order to avoid jQuery Mobile's restructuring of GUI elements when creating a page.
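The sketch below illustrates the workaround. The page id, placeholder element and activities array are illustrative, and the event binding assumes jQuery 1.7 or later; it is not the project's exact code.

// Activate the Chosen select only after jQuery Mobile has built the page,
// so that jQuery Mobile does not restyle or restructure the element.
$(document).on("pageshow", "#detailsPage", function () {
    var placeholder = $("#activity-placeholder");      // hidden container in the page markup
    if (placeholder.find("select").length === 0) {
        var select = $("<select id='activity'></select>");
        for (var i = 0; i < activities.length; i++) {   // 'activities' = list provided by the customer
            select.append($("<option></option>").text(activities[i]));
        }
        placeholder.append(select);
        select.chosen();                                // turn it into the hybrid select/autocomplete
    }
});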
In order to allow the user to attach pictures to the observations, two buttons were added to the details page of the observation. One opens the default camera app of the device, the other opens the default album app. Each returns a URI to the selected or captured image, which is stored in the database along with the observation/species IDs (see section 9.3.2 for details). The pictures attached to a species in an observation are shown on the details page. Pictures can be removed from a species by tapping them and confirming. However, the pictures themselves are not deleted from the device.
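A sketch of the underlying PhoneGap camera call is shown below; storeUri is an illustrative helper standing in for the database code from section 9.3.2.

// fromAlbum selects between the photo library and the camera; both paths
// return a URI which is stored together with the observation/species ids.
function attachPicture(fromAlbum, observationId, speciesId) {
    navigator.camera.getPicture(
        function (uri) { storeUri(uri, observationId, speciesId); },
        function (message) { alert("Could not get picture: " + message); },
        {
            quality: 50,
            destinationType: Camera.DestinationType.FILE_URI,
            sourceType: fromAlbum ? Camera.PictureSourceType.PHOTOLIBRARY
                                  : Camera.PictureSourceType.CAMERA
        });
}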
The export functionality was updated to include pictures: the Java plugin was extended to be able to receive URIs to pictures on the device. These pictures are sent along with the observation string so that the email client will include them.
There was a small issue with including several pictures rather than just one. It seems this is not supported by all email clients on Android and may cause issues when exporting an observation with pictures if the phone does not have such a client. This functionality has been tested and confirmed to work with the Gmail and Sony Ericsson email clients.
11.3.1 Auto-complete
We implemented tools for downloading and parsing data from the new API provided by Artsdatabanken. This required us to change some URLs and use literal names instead of IDs in requests. Using literal names was a bit problematic, as Artsdatabanken includes some special characters in the literal names that are not actually supported by the API (using commas in a request causes the API to return an SQL error).
We ran into some problems with the new API, however: we could only retrieve 500 species at a time. The logic behind the added requests and the storing of data on the phone as files is very complex, making it a project in itself. We therefore chose a different approach, giving the application periodic updates via the Android Market. We have created a script that can parse the API and create the appropriate JSON files for use in the application. This will also be included in the finished product.
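A hedged sketch of the idea behind such a script is shown below; the endpoint URL, query parameters and output layout are placeholders, not the real Artsdatabanken API or the project's actual script.

var http = require("http");
var fs = require("fs");

// Placeholder endpoint; the real URL and parameters differ.
var BASE_URL = "http://example.org/api/species?limit=500&offset=";

function fetchPage(offset, collected, done) {
    http.get(BASE_URL + offset, function (res) {
        var body = "";
        res.on("data", function (chunk) { body += chunk; });
        res.on("end", function () {
            var page = JSON.parse(body);                   // assume a JSON array of names
            collected = collected.concat(page);
            if (page.length === 500) {
                fetchPage(offset + 500, collected, done);  // the API caps each response at 500
            } else {
                done(collected);
            }
        });
    });
}

fetchPage(0, [], function (species) {
    var groups = {};
    species.forEach(function (name) {
        var prefix = name.charAt(0).toLowerCase();
        (groups[prefix] = groups[prefix] || []).push(name);
    });
    // One file per first letter; the real layout may combine several prefixes.
    Object.keys(groups).forEach(function (prefix) {
        fs.writeFileSync(prefix + ".json", JSON.stringify(groups[prefix].sort()));
    });
});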
11.4 Testing
Testing uncovered inconsistencies in the storage API. Initially we planned to use a function, Database.changeVersion(..), to migrate database schemas while updating the app; unfortunately this only works for some versions of Android. Due to this issue and time restrictions, we dropped support for database migration.
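For reference, a migration based on changeVersion would have looked roughly like the sketch below; the version numbers and the ALTER TABLE statement are illustrative only, and db is assumed to be opened as in section 9.3.2.

// Bump the database version and adjust the schema in one step.
if (db.version === "1.0") {
    db.changeVersion("1.0", "1.1", function (tx) {
        tx.executeSql("ALTER TABLE Species ADD COLUMN activity TEXT");
    }, function (err) {
        console.log("Migration failed: " + err.message);
    });
}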
We ran functional tests for each of the completed requirements (details in section 4.5). We tested each use case on different devices, and after each test bug fixing was performed. Some of the improvements can be seen by comparing run 1 and run 3; run 3 is a re-test after fixing bugs in version 0.9 (done on the same device).
In the end all tests passed. Test 5 uncovered a weakness in our user interface: we have a button named "Ta bilde" (capture image) that is used to choose images from the device album, and it does not allow users to actually capture an image. Image capturing is actually done from another section of the user interface (details in section 4.5). To accommodate this we created an additional test, "Test 5 Fixed", that verifies the actual image capturing functionality.
11.5 Customer feedback
We received positive feedback from the customer about our final prototype.
While we felt the application was not ready for production just yet, we had
laid the foundation and groundwork for a good application. In addition, the
customer applauded our research into cross-compiling frameworks.
The customer felt the project was a success, and instructed us to continue work
on our documentation for the remaining period.
11.6 Evaluation
At the end of sprint 4 we had completed all of the functional requirements
and exhausted our product backlog. The project was deemed a success by the
customer himself, and we felt that with the prototype now complete, we could
focus on the remaining documentation.
For the work in the sprint itself, we did not have many issues. Our biggest gripe was with multiple file attachments in certain phones' email clients: some simply did not have the functionality to include several attachments. No workaround was found, but it worked for the standard Gmail client installed on every phone.
12 Testing
One of the key tasks planned in this project was testing. In section 4.5, the purpose, plan and focus of our testing strategy were outlined in detail. Having finished the prototype, it was very important that each requirement was tested to make sure that no functionality misbehaves or fails to perform as expected. Each of the nine requirements was tested and the test results are documented in this section. This section also describes what has happened with the pilot prototype: a number of modifications were incorporated as the testing went on, in order to correct defects. Several tests were conducted on each functionality and code section during the sprints, but this section is the final test result documentation that declares each requirement as either working or not working.
The results of the usability testing are also included in this section. The usability test section tries to show how our prototype fared among actual Artsdatabanken users. It gives a preview of how the application has been received and criticized, and of the modification requests that arose from users.
12.1 Functionality test results
Test 1 (run 1)

Requirements: F1 and F10
Version: 0.9
Date: 2011-11-02
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: Clean install of app on mobile device, no observations stored
Post-conditions: A new observation has been saved and the user is directed back to the main menu
Result: PASS
Comments: Could only choose GPS coordinates in step 3, close locations missing

Table 9: Summary of test 1 (run 1)

Action | Expected outcome | Result
1. Tap the new observation button | Menu for selecting species group appears | PASS
2. Select species type "Fugl" (bird) | Menu for bird observations appears | PASS
3. Select location from list of close locations, or select GPS location | Location is set | PASS
4. Start writing "grågås" in the "Art" (species) field, ensure that auto-complete gives useful suggestions, choose "grågås" from the list of suggestions, and write 2 in the "Antall" (count) field | Auto-complete suggests bird names, "grågås" with a count of 2 is added to the observation | PASS
5. Tap "Lagre" | Observation is saved | PASS

Table 10: Execution of test 1 (run 1)
Test 2 (run 1)

Requirement: F2
Version: 0.9
Date: 2011-11-02
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: Test 1 completed in same test environment, app is still in observation view
Post-conditions: Additional information about an observation has been saved
Result: PASS

Table 11: Summary of test 2 (run 1)

Action | Expected outcome | Result
1. Tap "Detaljer" (details) in the row containing "grågås" | Detailed view for the "grågås" observation is displayed | PASS
2. Select "Rugende" in the "Aktivitet" (activity) field | "Aktivitet" field populated with "Rugende" | PASS
3. Tap back button | Main observation view is displayed | PASS
4. Tap save and go to main screen; go into the observation using "Lagrede Observasjoner" (stored observations) and verify that "grågås" still has "Aktivitet" set to "Rugende" in the details view | Grågås has "Aktivitet" set to "Rugende" | PASS

Table 12: Execution of test 2 (run 1)
Test 3 (run 1)

Requirement: F3
Version: 0.9
Date: 2011-11-02
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: Test 1 completed in same test environment, app is still in observation view
Post-conditions: Observation is stored with two entries, "grågås" ("Antall" of 2) and "blåmeis" ("Antall" of 1)
Result: FAIL
Comments: Could not perform step 2, no place to select different locations

Table 13: Summary of test 3 (run 1)

Action | Expected outcome | Result
1. Tap "Legg til ny art" | An empty species row is appended | PASS
2. Optionally select another location, otherwise the same one is kept | New location chosen for current row | FAIL
3. Select "blåmeis" with a count ("Antall") of 1 in the same manner as in test 1 | Row 2 is filled in with ("Artsnavn" = "blåmeis", "Antall" = 1) | PASS
4. Tap "Lagre" | Observation is saved with two species observations ("grågås" and "blåmeis") | PASS

Table 14: Execution of test 3 (run 1)
Test 4 (run 1)

Requirement: F4
Version: 0.9
Date: 2011-11-02
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: Tests 2 and 3 completed in same test environment, app is still in observation view
Post-conditions: All data from current observation is submitted to the native mail client
Result: PASS
Comments: This could be more detailed, the export format has not been verified

Table 15: Summary of test 4 (run 1)

Action | Expected outcome | Result
1. Tap "Eksporter" (export) in the observation view | Native email client is launched in "new email" mode, and data from the observation is placed in the message field | PASS

Table 16: Execution of test 4 (run 1)
Test 5 (run 1)

Requirement: F5
Version: 0.9
Date: 2011-11-02
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: App installed on mobile device
Post-conditions: Picture is stored on the phone with an easily recognizable filename
Result: FAIL
Comments: Unable to launch picture functionality

Table 17: Summary of test 5 (run 1)

Action | Expected outcome | Result
1. Tap "Ta Bilde" (capture image) in the main view | Native image capturing software is started | FAIL
2. Take picture using native software | Image is stored and success message is displayed in app | -

Table 18: Execution of test 5 (run 1)
Test 6 (run 1)

Requirement: F6
Version: 0.9
Date: 2011-11-02
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: Test 2 completed
Post-conditions: App is in same state as before test started
Result: PASS

Table 19: Summary of test 6 (run 1)

Action | Expected outcome | Result
1. Tap "Lagrede Observasjoner" from the main view | A list containing one observation (from test 2) is displayed | PASS
2. Select observation 1 from the list | The observation from test 2 is displayed in the same state as earlier (two species) | PASS
3. Tap "Detaljer" in the row containing "grågås" | Details view for the "grågås" observation is displayed, the field "Aktivitet" is filled in with "Rugende" | PASS

Table 20: Execution of test 6 (run 1)
Test 7 (run 1)

Requirement: F7
Version: 0.9
Date: 2011-11-02
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: Test 6 completed, still viewing stored observation
Post-conditions: Observation is stored on device with an additional row containing ("Art" = "grønnfink", "Antall" = 9)
Result: PASS

Table 21: Summary of test 7 (run 1)

Action | Expected outcome | Result
1. Add a new row with the same procedure as in test 2 | An empty row is appended to the observation | PASS
2. Fill in ("Art" = "grønnfink" and "Antall" = 9) in the new row using the same procedure as in test 2 | New row is populated with ("grønnfink", 9) | PASS
3. Tap "Lagre" | Observation stored with an additional row ("grønnfink", 9) | PASS

Table 22: Execution of test 7 (run 1)
Test 8 (run 1)

Requirement: F9
Version: 0.9
Date: 2011-11-02
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: Test 1 completed, still viewing stored observation
Post-conditions: App is in same state as before test execution
Result: PASS
Comments: It is possible the results came from another source than GPS; they were sufficiently close to the current location

Table 23: Summary of test 8 (run 1)

Action | Expected outcome | Result
1. Tap "GPS" | Longitude and latitude contain coordinates near the current location | PASS

Table 24: Execution of test 8 (run 1)
Test 1 (run 2)

Requirements: F1 and F10
Version: 1.0
Date: 2011-11-08
Tested by: Muhsin Günaydin
Test environment: HTC Desire running Android version 2.3.7 with kernel 2.6.37.6
Pre-conditions: Clean install of app on mobile device, no observations stored
Post-conditions: A new observation has been saved and the user is directed back to the main menu
Result: PASS
Comments: Could only choose GPS coordinates in step 3, close locations missing

Table 25: Summary of test 1 (run 2)

Action | Expected outcome | Result
1. Tap the new observation button | Menu for selecting species group appears | PASS
2. Select species type "Fugl" (bird) | Menu for bird observations appears | PASS
3. Select location from list of close locations, or select GPS location | Location is set | PASS
4. Start writing "grågås" in the "Art" (species) field, ensure that auto-complete gives useful suggestions, choose "grågås" from the list of suggestions, and write 2 in the "Antall" (count) field | Auto-complete suggests bird names, "grågås" with a count of 2 is added to the observation | PASS
5. Tap "Lagre" | Observation is saved | PASS

Table 26: Execution of test 1 (run 2)
Test 2 (run 2)

Requirement: F2
Version: 1.0
Date: 2011-11-08
Tested by: Muhsin Günaydin
Test environment: HTC Desire running Android version 2.3.7 with kernel 2.6.37.6
Pre-conditions: Test 1 completed in same test environment, app is still in observation view
Post-conditions: Additional information about an observation has been saved
Result: PASS

Table 27: Summary of test 2 (run 2)

Action | Expected outcome | Result
1. Tap "Detaljer" (details) in the row containing "grågås" | Detailed view for the "grågås" observation is displayed | PASS
2. Select "Rugende" in the "Aktivitet" (activity) field | "Aktivitet" field populated with "Rugende" | PASS
3. Tap back button | Main observation view is displayed | PASS
4. Tap save and go to main screen; go into the observation using "Lagrede Observasjoner" (stored observations) and verify that "grågås" still has "Aktivitet" set to "Rugende" in the details view | Grågås has "Aktivitet" set to "Rugende" | PASS

Table 28: Execution of test 2 (run 2)
Test 3 (run 2)

Requirement: F3
Version: 1.0
Date: 2011-11-08
Tested by: Muhsin Günaydin
Test environment: HTC Desire running Android version 2.3.7 with kernel 2.6.37.6
Pre-conditions: Test 1 completed in same test environment, app is still in observation view
Post-conditions: Observation is stored with two entries, "grågås" ("Antall" of 2) and "blåmeis" ("Antall" of 1)
Result: PASS
Comments: Step two not included

Table 29: Summary of test 3 (run 2)

Action | Expected outcome | Result
1. Tap "Legg til ny art" | An empty species row is appended | PASS
2. Optionally select another location, otherwise the same one is kept | New location chosen for current row | -
3. Select "blåmeis" with a count ("Antall") of 1 in the same manner as in test 1 | Row 2 is filled in with ("Artsnavn" = "blåmeis", "Antall" = 1) | PASS
4. Tap "Lagre" | Observation is saved with two species observations ("grågås" and "blåmeis") | PASS

Table 30: Execution of test 3 (run 2)
Test 4 (run 2)

Requirement: F4
Version: 1.0
Date: 2011-11-08
Tested by: Muhsin Günaydin
Test environment: HTC Desire running Android version 2.3.7 with kernel 2.6.37.6
Pre-conditions: Tests 2 and 3 completed in same test environment, app is still in observation view
Post-conditions: All data from current observation is submitted to the native mail client
Result: PASS
Comments: The exported text is not detailed

Table 31: Summary of test 4 (run 2)

Action | Expected outcome | Result
1. Tap "Eksporter" (export) in the observation view | Native email client is launched in "new email" mode, and data from the observation is placed in the message field | PASS

Table 32: Execution of test 4 (run 2)
Test 5 (run 2)

Requirement: F5
Version: 1.0
Date: 2011-11-08
Tested by: Muhsin Günaydin
Test environment: HTC Desire running Android version 2.3.7 with kernel 2.6.37.6
Pre-conditions: App installed on mobile device
Post-conditions: Picture is stored on the phone with an easily recognizable filename
Result: PASS
Comments: Can choose from earlier taken/saved photos, but cannot take a new picture. It will not be useful to take a picture which is not associated with an observation; the take picture button in the main window should be removed.

Table 33: Summary of test 5 (run 2)

Action | Expected outcome | Result
1. Tap "Ta Bilde" (capture image) in the main view | Native image capturing software is started | -
2. Take picture using native software | Image is stored and success message is displayed in app |

Table 34: Execution of test 5 (run 2)
Test 6 (run 2)

Requirement: F6
Version: 0.9
Date: 2011-11-08
Tested by: Muhsin Günaydin
Test environment: HTC Desire running Android version 2.3.7 with kernel 2.6.37.6
Pre-conditions: Test 2 completed
Post-conditions: App is in same state as before test started
Result: PASS

Table 35: Summary of test 6 (run 2)

Action | Expected outcome | Result
1. Tap "Lagrede Observasjoner" from the main view | A list containing one observation (from test 2) is displayed | PASS
2. Select observation 1 from the list | The observation from test 2 is displayed in the same state as earlier (two species) | PASS
3. Tap "Detaljer" in the row containing "grågås" | Details view for the "grågås" observation is displayed, the field "Aktivitet" is filled in with "Rugende" | PASS

Table 36: Execution of test 6 (run 2)
Test 7 (run 2)

Requirement: F7
Version: 0.9
Date: 2011-11-08
Tested by: Muhsin Günaydin
Test environment: HTC Desire running Android version 2.3.7 with kernel 2.6.37.6
Pre-conditions: Test 6 completed, still viewing stored observation
Post-conditions: Observation is stored on device with an additional row containing ("Art" = "grønnfink", "Antall" = 9)
Result: PASS

Table 37: Summary of test 7 (run 2)

Action | Expected outcome | Result
1. Add a new row with the same procedure as in test 3 | An empty row is appended to the observation | PASS
2. Fill in ("Art" = "grønnfink" and "Antall" = 9) in the new row using the same procedure as in test 2 | New row is populated with ("grønnfink", 9) | PASS
3. Tap "Lagre" | Observation stored with an additional row ("grønnfink", 9) | PASS

Table 38: Execution of test 7 (run 2)
Test 8 (run 2)

Requirement: F9
Version: 0.9
Date: 2011-11-08
Tested by: Muhsin Günaydin
Test environment: HTC Desire running Android version 2.3.7 with kernel 2.6.37.6
Pre-conditions: Test 1 completed, still viewing stored observation
Post-conditions: App is in same state as before test execution
Result: PASS
Comments: Got the fields updated although the GPS was off. Maybe the location was obtained using WLAN/3G.

Table 39: Summary of test 8 (run 2)

Action | Expected outcome | Result
1. Tap "GPS" | Longitude and latitude contain coordinates near the current location | PASS

Table 40: Execution of test 8 (run 2)
Test 1 (run 3)

Requirements: F1 and F10
Version: 1.0
Date: 2011-11-08
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: Clean install of app on mobile device, no observations stored
Post-conditions: A new observation has been saved and the user is directed back to the main menu
Result: PASS

Table 41: Summary of test 1 (run 3)

Action | Expected outcome | Result
1. Tap the new observation button | Menu for selecting species group appears | PASS
2. Select species type "Fugl" (bird) | Menu for bird observations appears | PASS
3. Select location from list of close locations, or select GPS location | Location is set | PASS
4. Start writing "grågås" in the "Art" (species) field, ensure that auto-complete gives useful suggestions, choose "grågås" from the list of suggestions, and write 2 in the "Antall" (count) field | Auto-complete suggests bird names, "grågås" with a count of 2 is added to the observation | PASS
5. Tap "Lagre" | Observation is saved | PASS

Table 42: Execution of test 1 (run 3)
Test 2 (run 3)

Requirement: F2
Version: 1.0
Date: 2011-11-08
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: Test 1 completed in same test environment, app is still in observation view
Post-conditions: Additional information about an observation has been saved
Result: PASS

Table 43: Summary of test 2 (run 3)

Action | Expected outcome | Result
1. Tap "Detaljer" (details) in the row containing "grågås" | Detailed view for the "grågås" observation is displayed | PASS
2. Select "Rugende" in the "Aktivitet" (activity) field | "Aktivitet" field populated with "Rugende" | PASS
3. Tap back button | Main observation view is displayed | PASS
4. Tap save and go to main screen; go into the observation using "Lagrede Observasjoner" (stored observations) and verify that "grågås" still has "Aktivitet" set to "Rugende" in the details view | Grågås has "Aktivitet" set to "Rugende" | PASS

Table 44: Execution of test 2 (run 3)
Test 3 (run 3)

Requirement: F3
Version: 1.0
Date: 2011-11-08
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: Test 1 completed in same test environment, app is still in observation view
Post-conditions: Observation is stored with two entries, "grågås" ("Antall" of 2) and "blåmeis" ("Antall" of 1)
Result: PASS
Comments: We decided to remove step 2 from the use case

Table 45: Summary of test 3 (run 3)

Action | Expected outcome | Result
1. Tap "Legg til ny art" | An empty species row is appended | PASS
2. Optionally select another location, otherwise the same one is kept | New location chosen for current row | -
3. Select "blåmeis" with a count ("Antall") of 1 in the same manner as in test 1 | Row 2 is filled in with ("Artsnavn" = "blåmeis", "Antall" = 1) | PASS
4. Tap "Lagre" | Observation is saved with two species observations ("grågås" and "blåmeis") | PASS

Table 46: Execution of test 3 (run 3)
Test 4 (run 3)

Requirement: F4
Version: 1.0
Date: 2011-11-08
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: Tests 2 and 3 completed in same test environment, app is still in observation view
Post-conditions: All data from current observation is submitted to the native mail client
Result: PASS

Table 47: Summary of test 4 (run 3)

Action | Expected outcome | Result
1. Tap "Eksporter" (export) in the observation view | Native email client is launched in "new email" mode, and data from the observation is placed in the message field | PASS

Table 48: Execution of test 4 (run 3)
Test 5 (run 3)

Requirement: F5
Version: 1.0
Date: 2011-11-08
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: App installed on mobile device
Post-conditions: Picture is stored on the phone with an easily recognizable filename
Result: PASS
Comments: Uncovered a weakness in our user interface. The "Ta bilde" button is not used to capture images; we have an option for this under "Detaljer" for each observation. Could rename the "Ta bilde" button, or better yet, remove it.

Table 49: Summary of test 5 (run 3)

Action | Expected outcome | Result
1. Tap "Ta Bilde" (capture image) in the main view | Native image capturing software is started | PASS
2. Take picture using native software | Image is stored and success message is displayed in app | -

Table 50: Execution of test 5 (run 3)
Test 6 (run 3)

Requirement: F6
Version: 1.0
Date: 2011-11-08
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: Test 2 completed
Post-conditions: App is in same state as before test started
Result: PASS

Table 51: Summary of test 6 (run 3)

Action | Expected outcome | Result
1. Tap "Lagrede Observasjoner" from the main view | A list containing one observation (from test 2) is displayed | PASS
2. Select observation 1 from the list | The observation from test 2 is displayed in the same state as earlier (two species) | PASS
3. Tap "Detaljer" in the row containing "grågås" | Details view for the "grågås" observation is displayed, the field "Aktivitet" is filled in with "Rugende" | PASS

Table 52: Execution of test 6 (run 3)
Test 7 (run 3)

Requirement: F7
Version: 1.0
Date: 2011-11-08
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: Test 6 completed, still viewing stored observation
Post-conditions: Observation is stored on device with an additional row containing ("Art" = "grønnfink", "Antall" = 9)
Result: PASS

Table 53: Summary of test 7 (run 3)

Action | Expected outcome | Result
1. Add a new row with the same procedure as in test 2 | An empty row is appended to the observation | PASS
2. Fill in ("Art" = "grønnfink" and "Antall" = 9) in the new row using the same procedure as in test 2 | New row is populated with ("grønnfink", 9) | PASS
3. Tap "Lagre" | Observation stored with an additional row ("grønnfink", 9) | PASS

Table 54: Execution of test 7 (run 3)
Test 8 (run 3)

Requirement: F9
Version: 1.0
Date: 2011-11-08
Tested by: Stian Liknes
Test environment: Sony Ericsson Xperia X10 running Android 2.1.1.A.0.6 with kernel 2.6.29
Pre-conditions: Test 1 completed, still viewing stored observation
Post-conditions: App is in same state as before test execution
Result: PASS

Table 55: Summary of test 8 (run 3)

Action | Expected outcome | Result
1. Tap "GPS" | Longitude and latitude contain coordinates near the current location | PASS

Table 56: Execution of test 8 (run 3)
Test 5 Fixed (run 1)

Requirement: F5
Version: 1.0
Date: 2011-11-08
Tested by: Muhsin Günaydin
Test environment: HTC Desire running Android version 2.3.7 with kernel 2.6.37.6
Pre-conditions: App installed on mobile device
Post-conditions: Picture is stored on the phone with an easily recognizable filename
Result: PASS

Table 57: Summary of test 5 fixed (run 1)

Action | Expected outcome | Result
1. Tap "Detaljer" (details) in the row containing "grågås" | Detailed view for the "grågås" observation is displayed | PASS
2. Tap "Ta bilde" (take picture) at the bottom | Native image capturing software is started | PASS
3. Take picture using native software | Image is captured and displayed in the details for the species | PASS
4. Tap "Hent Bilde" (get picture) | Native file explorer software is started | PASS
5. Select picture from the native software | The selected picture is displayed under the captured picture | PASS

Table 58: Execution of test 5 fixed (run 1)
12.2 Usability test results
Artsdatabanken contracted a company to carry out professional usability testing of the prototype we shipped. Artsdatabanken preferred to conduct the usability test by itself and report the results back to us. However, our customer managed to sign up only 3 participants for the usability test, a number significantly lower than our minimum specification of 20 participants. The plan was to analyse the results using standard statistical analysis methods, showing the preference, skill and acceptance of the application, which would have been reflected by a reasonable sample size. Since, in statistics, a bigger sample size gives better results [38], deriving any conclusions from a sample size of three is unrealistic, and any conclusion we draw will be difficult to rely on. Below are some of the result summaries we managed to collect from the three participants. The usability form can be found at http://bit.ly/rHDBop
Our usability test was divided into sections covering the background of the participants and their overall experience with the application.
In the first section, we tried to understand who the participants were, so we could divide and analyse them by differences in age and in mobile app and smartphone experience. We found out that all three participants are in the age group 33-44, that all of them are expert users of general web applications and own Android-based smartphones, but that there is some variation in their mobile app experience.
Figure 27: Mobile application Experience.
The figure shows that the mobile app experience of the participants is split equally between the Moderate, Good and Expert levels. The fact that all of them are in the same age group makes it harder to forecast how the application will fare among the general user population. The usability test was conducted by participants who owned or had only used Android-based devices, so nothing can be said about how the application would have behaved or been received by users on Apple's iOS platform.
The second section of our usability test was intended to assess the prototype's productivity, performance, intention to use, learnability, user friendliness, clarity and the popularity of its functionalities.
All of the participants found the application easy to learn and easy to use, and said that they would recommend it to any new user. Using the questionnaire items, we noticed a contradiction: the application was reported as easy to set up, yet 2 out of 3 participants found it troublesome to install. We would need a bigger sample to clarify this small contradiction, which would actually help to improve the application.
Figure 28: I found the App difficult to install
In general, the results seem to show that the application has fared positively on average. Although it is very difficult to draw any conclusion with certainty from these figures, the next figures show the different opinions of the usability test participants.
Figure 29: The App is easy to set up (left), Majority neutral for the App has
met my expectation (right)
Figure 30: The App has user-friendly interface
In terms of productivity and intention to use, the majority seem to think that the application will make them productive, and they also intend to use it in the future.
Figure 31: Majority result showing positive result for productivity and intention to use
While all three of our participants liked the New observation functionality, each seems to have a least favourite one: Export, Take picture and Store observation each had one participant not favouring them.
Figure 32: Least favourite functionality
For the questionnaire item stating that the application is flawless and runs smoothly, two participants disagreed and one remained neutral.
Figure 33: Majority disagree on the application being flawless and running smoothly
This was the only question we phrased as an inverse of the rest of the questionnaire items, to see if the application was found confusing to use. The participants disagreed with the application being confusing, which means the application was not confusing to use.
Figure 34: The application was found to be not confusing
12.2.1 Summary
With a larger number of participants, we could have split the participants by age, experience and device to figure out how each group behaves, using the Friedman [41] and Wilcoxon [47] tests. User-generation-related concerns, device- and platform-specific problems, technology acceptance problems and a variety of other problems would be easier to zone in on with an adequate sample size. The results from such an analysis are much more reliable for suggesting improvements and future work for the prototype, and they would also help us understand the users, gain insight into how they perceived the application, and identify missing functionality that could easily have been incorporated to make the application much more robust.
13 Discussion and evaluation
13.1 Group dynamics
In this section we will discuss how the group has worked throughout the project,
and look at some evaluations done within the group about motivation and work
environment.
13.1.1 Week 8 internal group evaluation
By week 8 we had an internal group evaluation of how the group felt the project
was going, the motivation of the group, and an evaluation of each members
work task. We will look at the results from this questionnaire and what steps
we took to improve these results.
Figure 35: Week 8: Question 1 results
In Figure 35 we can see that the group agrees that the project is successful,
meaning that we feel the project process is going as scheduled, and that the
overall motivation is high.
Figure 36: Week 8: Question 2 results
In Figure 36 we see that most of the group members are also happy with their
work tasks so far. We have had pretty static roles with a documentation team
and a programming team so far, and these results tell us that this model has
worked for our group. However, for the last 3 weeks, all team members will be
on the documentation team.
Figure 37 shows that most group members feel that they have contributed to the group. At our internal review meeting of the results, it was also discussed that the team members felt that all members of the group had contributed, which helped motivate them.
Figure 37: Week 8: Question 3 results
Figure 38: Week 8: Question 4 results
Figure 39: Week 8: Question 5 results
The most important finding of our mid-project review was by far the results
from Figure 39. The consensus of the group was that we had too few meetings,
which halted communication between the team members. This meant that
motivation dropped, or that one part of the group didn’t know what the other
part was doing, leading to the results in Figure 43. This also had an impact
on motivation and progress for the documentation team. From now on we will have weekly meetings each Monday, with a more thorough update from the week before and a plan for the next week.
Figure 40: Week 8: Question 6 results
Figure 41: Week 8: Question 7 results
Figure 42: Week 8: Question 8 results
Figure 43: Week 8: Question 9 results
Figures 42 and 43 show some interesting results. Most of the group is still motivated about the project, but almost everyone feels that everyone else is less motivated than themselves. While a loss of motivation as the project progresses is to be expected, this was an interesting finding. We came to the conclusion that this was because of the lack of weekly updates between the teams, and sometimes even internally in the teams.
Figure 44 shows the results of the most important question of the questionnaire.
Figure 44: Week 8: Question 10 results
As we can see, the results are pretty spread, but for the most part, the group
would like to continue as we do now. After a group discussion, everyone agreed
that with the conclusions from this questionnaire, they would be happy with
our work the rest of the project.
Conclusion Based on our findings and an internal group meeting, we concluded that having weekly meetings every Monday where we recap the previous
week and discuss the plan for the next week would boost the motivation of the
group. This would also help the documentation team better understand what
the programmers were doing, and vice versa. It was also recognized that we
should have had a similar questionnaire at an earlier time, so the necessary
steps needed were taken sooner in the project.
However, we also noted that in general, motivation was high, and the group felt
the project was going well.
13.2 People involved and the course
13.2.1 Customer
One of the things that has been very helpful in this course was the customer we had. He went the extra mile to make sure we had a good understanding of what his requirements were, he invested his time in training some of our group members in the trickier system of taxonomy (the scientific species naming system), and the organization arranged field work for us so we could experience the task in the requirements first hand.
There was very little ambiguity in our communication with the customer. This allowed us to define our requirements relatively quickly, and they saw only minor alterations during the project. He gave us the independence to test our ideas and select our own technology and methodology, only intervening when necessary and when explicitly requested. In addition to that, he answered any questions we had by email.
The customer also had a deeper understanding of their existing system, including the weaknesses, strengths and the semantics of the business areas of
Artsdatabanken. He provided helpful insight into using the existing back-end
systems, such as the open API for species offered by Artsdatabanken.
Since we had to do every form of communication in English, and none of us were native speakers of the language, language was one of the things we had to take into consideration to guarantee a reliable and clear understanding between everyone involved. Fortunately, our customer had no problems using English, so we had little difficulty in our communication.
We held our customer meeting at the beginning of every sprint in which our
customer actively promoted ideas he seemed to like and suggested improvements. His guidance was very helpful to avoid misguided approaches and fuzzy
implementation procedures.
13.2.2 Advisor
We were lucky to have a responsible advisor whose advice helped us stay on track, be goal oriented, keep up with our own and our customer's expectations, and stay on top of the amount of work to be done each week. His advice was very honest, professional and comprehensive. He spoke his mind, and we used the honest remarks to improve our work strategy and progress.
The weekly advisor meetings have been key reference points for gauging our progress, document quality and milestones. They also helped us stay motivated throughout the project, and his experience helped us avoid the biggest pitfalls of the project.
13.2.3 Team
The team consists of 7 fourth year computer science students. Five students
are enrolled in the Master of Computer Science program here at the Norwegian
University of Science and Technology, and two are enrolled in the International
Master in Computer Science. We feel that the group dynamics have worked out well; we have had very few internal conflicts. This is likely at least in part due
to our ability to communicate well among ourselves, despite language barriers.
All group members feel that the workload has been evenly distributed, and
we have carefully taken into consideration other classes and distributed work
accordingly, so that no one fell behind in other classes because of the project.
This has proved to be key in the success of the project, because motivation was
high throughout the entire process, making us more effective in completing our
weekly goals.
13.2.4 The course
This course is special in that it required not only skills in computer science, but also a range of skills that we have garnered throughout our education at the university. This course helped us test our fitness for a professional software engineering atmosphere. We have been able to employ our skills in everything from requirements elicitation and engineering to software design and technical writing. The course had such extensive coverage of subjects in computer science that it gave us a means to summarize and reinforce our previous knowledge of information systems.
The course had its own share of challenges. Managing groups, customer communication, advisor guidelines and the actual work itself were things that tested our skill and patience. At times, we found the course work intimidating and more than what we were used to. In our case, we were fortunate to have the right number of group members to keep our workload manageable throughout the project. We experienced that even though some group members fell ill or were busy with other courses, we still managed to keep the workload reasonable. This was due to our ability to redistribute our human resources.
The support the course provides, in the form of weekly advisor meetings, the compendiums at the beginning of the course, and the archive of previous projects available for reference, has helped us cope with the challenges. We feel that the course has
helped motivate us to continue our work in this field, and we have gained a lot
of experience in handling internal and external sources of conflict throughout
the project.
13.3 The application
13.3.1 Deliverables
When the group set out on this project, we were all well aware that we were very likely to meet challenges along the way. To plan for the worst and take pre-emptive action, we sat together and designed the project plan, a risk analysis and mitigation strategy in case anything went wrong, a mutual agreement on implementation technology, a decision on the time schedule, and other key administrative routines.
Planning administrative and procedural routines ahead of any actual work contributed considerably to keeping everyone on the same page. We actually managed to finish 90 percent of the requirements we elicited together with the customer and that the customer approved and accepted.
The group thought it would be best to let people work and focus in the areas where they felt most confident, so those who felt most comfortable programming focused on the programming part of the project, and so on. We felt this contributed to the overall quality of our final product.
The collaboration and common understanding between the group members enabled us to finish a beta application version early and send it for testing to our
customer, the result of which will be presented in another section.
According to our agreement with our customer, the application was to be delivered as a prototype with certain functionalities being a non-negotiable part of the prototype. We managed to implement all the non-negotiable functionalities and test them. In our opinion, it is safe to say that we completed this application quite successfully, with 90 percent of our functional requirements implemented. Growing time pressure and the relatively high complexity and low priority of the final requirement caused us to decide to document an implementation strategy for it and leave it at that. The customer agreed that this was the better approach.
13.3.2 Final version and deployment
The application we delivered to the customer is only a prototype. It needs a
certain degree of performance tuning for general public use, and also to bring
it up to a professional par.
The final version of the application we delivered to our customer implements the functionality specified in the requirements. The application is a cross-platform mobile application that can run on Android- and iOS-based mobile devices. We have not actually been able to test it on iOS devices, because none of us had such a device and the assumption was that the customer would provide an iOS testing device. However, the customer was unable to provide the device and recommended that we focus our testing effort on Android-based devices and forgo iOS-based testing.
13.3.3 Customer response
The customer was actively involved in the process and was consulted from the beginning to the end of the project. We made sure he knew what was going on regarding the technology, the mode of implementation and our schedule, in order to eliminate unexpected surprises.
All along, the customer’s feedback was positive and encouraging. At this point
we are unaware of anything that might have gone against the expectations of
the customer. We have included his suggestions and recommendations and any
changes and modifications took place with his opinions factored in.
We believe that we managed to satisfy our customer’s expectations and that he
had no reservations with our final prototype.
13.4 Development process, methodology and work flow
We started this project as part of the course work for TDT4290 Customer Driven Project. In this project, we have been able to walk through the software development process an average software engineer would go through in an industrial setting. We have also tried to use our own experiences and practices to improve upon industry-accepted standards. This gave us invaluable insight into how we handle, schedule, manage, trace and carry out tasks in software development projects, and below is an evaluation of our experience in this project.
13.4.1 Development methodology—Scrum
Scrum being a widely used software development methodology these days, plus the fact that all team members had some experience with it, saved a considerable amount of time that we would otherwise have spent learning the basics of Scrum. However, adopting pure Scrum was not reasonable, since all team members had responsibilities in other courses with demanding projects, and having all group members consistently attend stand-up meetings five days a week was unrealistic. Scrum is also administratively complex, and we felt that adopting pure Scrum would take focus away from the actual project.
To make sure such things went smoothly, we had to get together early and
decide on what methodology to use, or alternatively, what parts we would like
to employ from a given methodology.
We decided to hold a stand-up meeting, a time slot in which everyone is able to inform the other members of completed tasks and plan for the day ahead, to keep everyone up to speed with the project's progress. We also fixed sprint lengths to a maximum of two weeks. We found this to be very effective in terms of motivation and progress for the project. We borrowed the terms product owner and Scrum master from the Scrum methodology, and negotiated with the customer to have just one person making the calls about the project.
One of the most challenging things to do was to wake up early and show up every
Monday to Thursday morning at 9 to work. Some group members work late
into the night and making it early in the morning was a bit of an inconvenience.
However, we decided to stick to the plan and that has helped us complete a
greater portion of requirements gathering and analysis task early for approval
with our customer.
We did not use a Scrum board for our project. Instead we depended on oral communication, and work tasks for a given day were handed out at each stand-up meeting. At times, we had to explicitly assign tasks if we felt that some unpopular tasks were being bounced around. These were often given a deadline for completion, and each team member committed to completing the work tasks they set for themselves each day.
In general, the flexible division of work and the consistent stand-up meeting
worked out great, and we did not feel that the methodology took time from
actual development and documentation.
13.4.2 Implementation
The implementation would have been simpler and perhaps faster had it been an application deployed on either the Android or the iOS platform as a native application. But the requirement was that the application should have one code base and run on both platforms; it is a cross-platform application. None of us had any experience with that sort of thing, so the natural thing to do was to make sure every one of us evaluated some framework for the implementation of the requirement. A number of frameworks came on the table and we had to weigh them against one another. We selected the relatively new PhoneGap for many reasons, as specified in detail in earlier sections of this document. Learning the new framework, getting to know the existing APIs from our customer, and working out how we were going to integrate our solutions with the customer's requirements provided their own challenges. We also wanted to develop an application that not only provided the functionality, but also had an elegant interface that is less confusing and more appealing to end users.
The use of PhoneGap in combination with jQuery Mobile made the design of our UI very simple and gave us additional time for shaping the application itself. Our customer also provided us with a style sheet used in their online touch-device portal, so that the designs would match. Although it is a great framework for our purpose, PhoneGap also has its shortcomings. Because JavaScript is a scripting language and PhoneGap runs in the web view of the mobile device, its performance is limited compared to a standalone native application. This downside is also observable in our application: the responsiveness and the page transitions are noticeably slower than what would have been the case for a native application. However, we believe this is within reasonable limits, and further improvements could possibly make it faster.
The implementation took place in line with our requirements elicitation, working first on the highest-priority requirements and starting early on the high-complexity ones. The implementation required collaboration between the group members investing the most effort in the code, and we used GitHub for listing issues and tracking changes.
In total we had exactly 10 requirements, of which four were high priority and one had high complexity. Nine of the ten requirements were successfully implemented according to the customer's specifications. The final requirement, updating the internal database of species names, was quite complex. It also arguably goes against mobile application conventions, so it was not feasible to implement within our scheduled time. The requirement was not completely overlooked, however: a completed tool that serves its purpose has been created, so the database update can be made by the app publisher. Individual phones can then update their database by updating the app in the Android Market or its equivalent on other platforms (see "Use of API" in section 14.1.1).
13.4.3 Time estimation
This project lasted from week 35 until week 47, nearly 13 weeks, with each person expected to contribute 25 person-hours per week. The course requires a workload of between 1625 and 1950 person-hours, including everything from project conception to final delivery and presentation. Our plan was to use 2275 person-hours, with each of us expected to put in 325 hours for the project (13 weeks at 25 hours per week, times seven group members). During this time, we divided the work into three blocks: pre-study, implementation and documentation.
For the first three weeks, we spent most of our time on requirements gathering and background study of the problem domain, and on deciding which methodologies and technologies to use for the project.
During the next eight weeks, we ran four sprints of two weeks each. For each sprint we created a sprint backlog, completed it and documented our efforts. We had planned for 350 person-hours per sprint, but found that we used less time than that completing our sprints.
For the final two weeks we reflected on our work, wrote the discussion and evaluation chapter, and considered how the project could be improved from here. We allotted 350 person-hours for this reflection, and found that we needed most of it to fully reflect on and document our efforts. We front-loaded the schedule, allocating 175 hours to each half-sprint week early in the project and later adjusting the time allocation based on the work remaining in the backlog. In total, the group has invested about 1934 person-hours. Our time management paid off well.
Figure 45: Weekly time consumption
Day-to-day work-flow Our usual work-flow started with a stand-up meeting every morning at 09.15 at Drivhuset. We spent about 15 minutes discussing the previous day and the tasks ahead of us. After this, we sat and worked together most of the time, letting group members who needed to attend other lectures leave when necessary. Each group member committed to completing the work tasks set at that day's stand-up meeting, but how and when he did so was up to him. On average, each day consisted of 6 hours of work, with Fridays off. We found this way of working very efficient, and no one felt overworked or demotivated because of it.
Seminars and meetings In addition to the project, we also had to attend
seminars and meetings. Every Tuesday at 14.00, we had seminars in group
dynamics and project management. These seminars took an average of 2 hours
each week during the project.
We also had a mandatory advisor meeting at 13.00 every Tuesday. This proved invaluable to the project's success, and our advisor helped us greatly.
13.4.4 Risk evaluation
One of the things we were very concerned about was time management. It is particularly noticeable from this evaluation section that we were somewhat fixated on time. We realized from the beginning that our time was of the essence, and knew what would happen if we wasted it on inadequate planning, scheduling and other complications. To reduce this risk, we analyzed our risks so that we could deal with time-wasting situations as early as possible. Group members were encouraged to take over a lagging task and finish it at any time, even if it was assigned to someone else. We used our time in the most efficient way, and this helped us deal with the stress that came later as the demands of other courses reached their climax, as they naturally do in the later part of every semester.
Failure to provide an Apple testing device was one of the risks that materialized; it had a risk rating of "Medium", which is far from a catastrophe. In general, we focused on averting risks rather than letting them happen and dealing with the consequences.
The other risk which materialized in the project concerned our usability test, and it was beyond our capacity and means to prevent it. The plan was for the customer either to provide us with test users or, if they preferred, to conduct the usability test themselves and report the results back to us. Neither of these happened, but the risk quantification for the usability test was Low, and it did not disrupt anything during the project.
13.4.5 Seminars and study process
The group attended a number of seminars organized by IDI for this course. Some of the most important were:
Group dynamics - one of the first, and perhaps the most important, seminars. It showed the value of good teamwork and the importance of proper communication within the team. It helped group members understand their mutual differences and achieve better communication without knowing each other beforehand. It also explained nicely how teamwork should lead to better efficiency and higher productivity, and it is always good to hear about that from experienced people who have worked with large groups for a long time.
Project management - this seminar revolved around the efficient management of a large project like this one. We learned how a project should be defined, how to identify the goals and objectives, and how to find starting points for the development process.
Scrum, an agile development method - as Scrum has been one of the most widely used methods for developing software in recent years, everyone had basic knowledge about it. The group felt that this seminar came a little too late to allow any major change to our plan.
Technical writing - This seminar, held by prof. Nancy Lea Eik-Ness, taught us about the flow of the articles and documents for this project. The lecture was held after our preliminary delivery, and proved valuable for the quality of our final report. In addition, we also received valuable feedback on our own report.
Presentation techniques - This seminar focused on presentation technique, and proved valuable in preparing for the final presentation of the project.
These seminars were invaluable to the success of our project. They motivated us and gave us important insight into the group dynamics and the goals of the project.
14 Conclusion and Further Work
14.1 Developers guide
This section is intended to help maintaining developers understand and use the code we have produced during this project. It also includes comments about our ideas for further development of the app.
14.1.1 Further work
At this time, this is a simple application for creating and submitting observations. Improvements in design and functionality should not be an issue with the current framework, and the potential is there. A first priority might be the incorporation of a direct submission API towards Artsdatabanken's services, to avoid having to export observations through e-mail and import them again.
The export function is an Android plugin for PhoneGap; it works by sending the contents of the observation, along with any attached images, to an e-mail client on the phone. In the current version, if images are added to an observation that is to be exported, the phone will only list e-mail clients that are able to receive multiple attachments. This can be circumvented by zipping the images before sending them to the e-mail client, or by using a direct API as previously mentioned. A different but less preferable solution is to offer an option to disable the export of images. In order for exporting to work on the iPhone or other devices, an equivalent plugin for those devices needs to be implemented. We have confirmed that such plugins exist, but have not investigated them further.
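As a rough illustration, the sketch below shows how such an export plugin could be invoked from the JavaScript side through the PhoneGap (Cordova) plugin bridge. The service name "ExportPlugin", the action name "export" and the payload fields are assumptions made for the example, not the names used in our code, and an equivalent native plugin has to be registered on each platform.

// Hypothetical invocation of an export plugin through the PhoneGap
// JavaScript bridge (cordova.exec in the 1.x releases).
function exportObservation(observation) {
    var payload = {
        xml: observation.toXml(),        // observation serialized for import (assumed helper)
        images: observation.imagePaths   // paths of attached pictures (assumed field)
    };
    cordova.exec(
        function ()    { console.log("Export handed off to e-mail client"); },
        function (err) { console.log("Export failed: " + err); },
        "ExportPlugin",   // assumed service (native plugin) name
        "export",         // assumed action implemented by the plugin
        [payload]
    );
}

On Android the native side hands the payload to an e-mail intent; an iPhone equivalent would need to wrap the corresponding mail composer behind the same JavaScript interface.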
Optimizing screen use might also be an idea; at the moment, UI elements are fairly large and space-consuming, but this is more a design philosophy of jQuery Mobile and is debatable.
The current version is also largely focused on the interface preferred by bird observers. While it tries to be universal, it might be necessary to further customize the layouts or stored fields for observations of different species groups.
Adding a section of the application for viewing species and obtaining more information about them was discussed from the beginning of the project, but it was of lower importance and was quickly deemed outside the scope of the original application.
At the current time, most object methods are implemented functionally rather than by using the "prototype" construct of JavaScript. Since there are not many concurrent objects in use at the same time, this should not be much of an issue at this point. However, as the application grows, this should most likely be changed to improve performance.
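To make the distinction concrete, here is a minimal sketch; the Observation type and its method are illustrative only and not taken from our code base. With methods created inside the constructor, every object carries its own copy of each function, whereas prototype methods are created once and shared by all instances, which reduces memory use and object-creation cost.

// Per-instance methods: each new Observation gets its own copy of describe().
function Observation(species, count) {
    this.species = species;
    this.count = count;
    this.describe = function () {
        return this.count + " x " + this.species;
    };
}

// Prototype methods: describe() is defined once and shared by all instances.
function ObservationProto(species, count) {
    this.species = species;
    this.count = count;
}
ObservationProto.prototype.describe = function () {
    return this.count + " x " + this.species;
};

var obs = new ObservationProto("Parus major", 3);
console.log(obs.describe()); // "3 x Parus major"

As long as only a handful of objects exist at a time the difference is negligible, which is why this change is listed as further work rather than as a defect.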
Use of API
We used the TaxonomyPropertySearch part of Artsdatabanken's webtjenesterbeta API [6] to download species names for the auto-complete. By passing species categories to the API (using speciesGrouping=NAME, where NAME is a Norwegian category name, such as "fugl"), we were able to download lists of species names, which were returned in XML format. To simplify the app (and save storage space), we parsed the XML into JSON and shipped the data as part of the app. We used the following XSLT transformation to parse the data:
<xsl:template match="/">
  <xsl:apply-templates
    select="//adb:scientificName |
            //adb:vernacularName"
  />
</xsl:template>

<xsl:template
  match="//adb:scientificName |
         //adb:vernacularName"
>'<xsl:value-of select="."/>',
</xsl:template>
The implementation of our final functional requirement, being able to update the internal database of auto-complete names, is also high on the list of further work, so that users can "sync" the species listings at their own convenience. To make this less demanding on the phone and less of an implementation challenge, we strongly suggest that Artsdatabanken supplies a refined API for this purpose, that is, one that offers exactly the data needed so that no re-parsing of large XML files is required locally.
If not, an obvious extension is to perform the parsing in JavaScript in the app itself; it is a fairly simple transformation that can be implemented using jQuery selectors. Currently we store one file per category, and each file contains a function named "autocompleteData" that returns a list of names. These files could be updated using PhoneGap's File API [36].
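As a sketch of this idea, the function below turns an XML response into the same kind of "autocompleteData" file we ship today, using jQuery selectors as suggested above. The namespace handling (escaping the colon in adb:scientificName) is an assumption that would have to be verified against the real API response in the target web view, and the surrounding download and File API plumbing is omitted.

// Sketch: build the contents of a per-category auto-complete file
// from the raw XML returned by TaxonomyPropertySearch.
function buildAutocompleteFile(xmlText) {
    // $.parseXML turns the raw response into an XML document that
    // jQuery selectors can traverse.
    var $xml = $($.parseXML(xmlText));
    var names = [];
    // Namespaced tags are matched here by escaping the colon; some
    // web views may instead require matching on the local name only.
    $xml.find("adb\\:scientificName, adb\\:vernacularName").each(function () {
        names.push($(this).text());
    });
    // Same shape as the files currently shipped with the app: a function
    // named autocompleteData that returns the list of names.
    return "function autocompleteData() { return " +
           JSON.stringify(names) + "; }";
}

The resulting string could then be written over the existing per-category file with a FileWriter from PhoneGap's File API [36], so that the species listings can be refreshed without publishing a new version of the app.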
Code repository
The code and documentation from this project can be found in their entirety at the following URL: https://github.com/cdproject8/Artsdatabanken. The repository includes the complete revision history for the project and all of our documentation, excluding meeting minutes and agendas, in addition to the app itself. The application is ready to compile from this source.
For further work, we recommend you fork this project. The project is licensed
under Creative Commons Attribution-ShareAlike 3.0 Unported, and all further
work should also be licensed under the same or similar licenses.
Maintenance
The primary challenge is to keep the local repository of species names updated. When new names arrive, the app needs to be updated using the methods mentioned in "Use of API". This can be avoided by implementing download and parsing functionality in the app itself, so that users can update the local repository without changes to the app.
The app is not strictly dependent on any APIs at this stage, so changes to Artsdatabanken's backend should not be a big issue. Developers should still keep this in mind: the export feature should adhere to Artsdatabanken's import functionality.
The app uses a native wrapper for the image functionality. This is dependent on Android, and should be considered when new versions of the platform arrive. The same applies to the export functionality, as the app uses the native e-mail client for sending observations.
To stay current, the frameworks (jQuery, jQuery UI, PhoneGap, jQuery Mobile) should be updated on a regular basis. As jQuery, PhoneGap and jQuery UI are in a more or less stable state, upgrading these should not affect the app much. Upgrading jQuery Mobile may prove difficult, as the framework is in a beta state and big changes can be expected. Take care to read the change logs and do extensive testing if you decide to update jQuery Mobile; you may have to change parts of the app to work with later versions.
14.2 Conclusions
The goal of this project has been to teach us software engineering and teamwork
skills in the context of a development project to make a realistic prototype of
an information system for a real world customer. Overall we feel the project
has been quite successful in this regard.
We feel the project has been helpful in preparing us for a real software development environment. We had very few problems in our communication with our customer, and in the end we left the customer satisfied with our work, particularly with our pre-study and research into the various cross-compiling frameworks that were available.
The course also provided valuable experiences in group dynamics. Our group
has been quite diverse in both culture and experience, which has required us to
deal with issues like role allocation, work load management, and other group
management skills. We have had very few internal conflicts in our group, something that no doubt has been a big advantage.
The result of our work is a fully functional prototype. We managed to fulfill all but one of our functional requirements. However, our main non-functional challenge was whether our app would be able to compete with the previous best method of observing species: using a pen and notebook in the field, and then manually entering the information into the artsobservasjoner.no website.
Unfortunately, the frameworks that allowed us to quickly develop a cross-platform app (PhoneGap, jQuery Mobile) also made our app quite sluggish at times. However, our app offers several advantages over the notebook, such as auto-completion of species names, simple entry of dates and times, geolocation data and built-in picture functionality. The data is also exported in a format
that is easily imported to the website, significantly reducing the amount of time
needed in that part of the observation process.
In the end, we are convinced that with some additional work on optimizing
our frameworks and perhaps with a future possibility of uploading observations
directly or otherwise streamlining the export/import process, our app could
substantially ease and speed up the process of species observation.
A Project directive and templates
A.1 Contact information
A.1.1 Customer
• Askild Olsen
Telephone: 73 59 21 93
Mobile: 91 78 34 89
Fax: 73 59 22 40
E-mail: [email protected]
• Helge Sandmark
E-mail: [email protected]
• Nils Valland
Telephone: 73 59 23 01
Mobile: 92 41 20 37
Fax: 73 59 22 40
E-mail: [email protected]
A.1.2 Supervisor
• Muhammad Asif
Telephone: 73 59 36 71
E-mail: [email protected]
A.1.3 Team members
• Anders Søbstad Rye
E-mail: [email protected]
• Andreas Berg Skomedal
E-mail: [email protected]
• Dag-Inge Aas
E-mail: [email protected]
• Muhsin Günaydin
E-mail: [email protected]
• Nikola Djoric
E-mail: [email protected]
• Stian Liknes
E-mail: [email protected]
• Yonathan Redda
E-mail: [email protected]
A.2 Meeting agendas
Figure 46: Meeting agenda
A.3 Meeting minutes
Figure 47: Meeting minutes
A.4 Weekly status report
Figure 48: Weekly status report
B User guide
This portion of the document contains an installation guide for the user, in addition to a how-to for the application. This should provide the user with adequate information about how to install and use the application.
B.1 Installation guide
The application can be downloaded from the Android Market under the name "Artsdatabanken". It can also be installed directly from an executable installation package (an .apk file) by going to the following URL: http://stuff.daginge.com/artsdatabanken.apk.
Figure 49: The program icon after installation
B.2 How-to
B.2.1 Creating an observation
The user creates an observation by following the ”Create an observation/Ny
Observasjon” link from the front page of the application. From here, the user
selects which species group to observe. This will lead the user to an observation
table, where the user can input species name, which is auto-completed, and
the number of individuals found. Both common names and scientific names are
applicable.
Figure 50: Front page (left). Select Species (right)
Figure 51: Observation window (Auto Complete)
B.2.2 Gather location from GPS
From the observation page, press the "GPS" button and update the longitude and latitude by pressing "Update GPS/Oppdater GPS".
Figure 52: Gather location from GPS
B.2.3 Add additional information
From the observation page, the user can choose to add more information about
the species observation by selecting ”Add more information/Detaljer”.
All fields are available for editing. Special input boxes for time and date selection
will open if date or time fields are edited.
Figure 53: Observation page (left). Details page (right)
B.2.4 Add pictures of a species
At the bottom of the extended information page, the user may also include images from disk or camera. If the user clicks on an already existing image, he/she can remove the picture from the observation again. The added picture will be displayed in the window.
Figure 54: Add pictures of a species
B.2.5 Adding additional species to an observation
The user may also add additional species to an observation via the ”Add additional species/Legg til ny art” button in the observation window. This will
create a new row for that species.
Figure 55: Adding additional species to an observation
B.2.6 Storing an observation
To store an observation simply click the ”Save/Lagre” button at the bottom of
the observation.
B.2.7 Editing a stored observation
To edit an existing observation the user can select ”Saved observations/Lagrede
Observasjoner” from the front page. This will list all saved observations on
the device. Each observation is identifiable by its species type, date of creation and an ID assigned to it.
Figure 56: Front page (left). Stored Observations (right)
B.2.8 Exporting an observation
To export an observation the user can click on the ”Export/Eksporter” button
at the bottom of the observation page. This will open up a menu to select
which application to use for exporting. The recommended choice is the Gmail application.
The user is free to send the observation to any e-mail address, and the received e-mail can be used on the Artsdatabanken web page by copying the entire e-mail body and pasting it into the import from XSL section. Any images linked to the observation will be attached to the e-mail.
B.2.9 Deleting an observation
An observation can be deleted by clicking the "Delete/Slett" button at the bottom of an observation.
C Glossary
API Application Programming Interface, an interface with specifications for programs to communicate with each other.
COTS Commercial off-the-shelf.
div A local grouping of content in an HTML document.
DOM Document Object Model, an object representation of HTML elements, used to interact with and manipulate them.
GPS Global Positioning System, a satellite system for determining the position and/or speed of an object.
GUI Graphical User Interface, the visible part of the application, created for interacting with the user.
HTML HyperText Markup Language, the markup language of web pages; HTML elements are the building blocks of web pages.
NYI Not Yet Implemented.
SQL Structured Query Language, a programming language for communicating with database systems that store information.
References
[1] https://market.android.com/details?id=com.USBirdingChecklistDemo&feature=search_result.
[2] Appcelerator. Training and documentation. http://www.appcelerator.
com, 2011.
[3] appendto. jquery-mockjax. https://github.com/appendto/jquery-mockjax, 2011.
[4] Artsdatabanken. http://www.artsdatabanken.no.
[5] Artsdatabanken. About - artsdatabanken. http://www.artsdatabanken.
no/Article.aspx?m=5&amid=89, 2011.
[6] Artsdatabanken. TaxonPropertySearch - Artsdatabanken. http://meis.artsdatabanken.no/webtjenesterbeta/databank.asmx?op=TaxonPropertySearch, 2011.
[7] Creative Commons. Creative commons attribution-sharealike 3.0 unported
license. http://creativecommons.org/licenses/by-sa/3.0/.
[8] The World Wide Web Consortium. Xsl transformations. http://www.w3.
org/TR/xslt, 1999.
[9] Corona. Corona resources. http://developer.anscamobile.com/resources/docs, 2011.
[10] crockford.com. Code conventions for the javascript programming language.
http://javascript.crockford.com/code.html, unknown.
[11] Emam Hossain, Muhammad Ali Babar, Hye-young Paik. Using scrum in global software development: A systematic literature review. 4th IEEE International Conference on Global Software Engineering, 2009.
[12] F.D. Davis, R.P. Bagozzi and P.R. Warshaw. User acceptance of computer technology: a comparison of two theoretical models. 1989.
[13] International Organization for Standardization. Iso 9126.
[14] Richard Gaywood. The gpl, the app store, and you. http://www.tuaw.com/, 2011.
[15] George H. Forman, John Zahorjan. The challenges of mobile computing, 1994.
[16] Rohit Ghatol. Automated testcases for phonegap. http://wiki.phonegap.com/w/page/36886195/Automated%20TestCases%20for%20PhoneGap, 2011.
[17] Social Coding GitHub. Repository model. https://github.com/explore,
2011.
[18] Mark H. Goadrich and Michael P. Rogers. Smart smartphone development: ios versus android. Technical report, Mathematics and Computer
Science Centenary College of Louisiana and Computer Science Information
Systems, Northwest Missouri State University, 2011.
[19] Harvest. Javascript select box plugin. http://harvesthq.github.com/
chosen/, 2011.
[20] IDI. Compendium: Introduction to course tdt4290 customer driven project,
autumn 2011.
[21] Pankaj Jalote. A concise introduction to software engineering. Springer,
2008.
[22] Dan Chisnell, Jeffrey Rubin. Handbook of usability testing - how to plan, design and conduct effective tests, 2nd edition. 2008.
[23] jQuery. jquery documentation, 2010.
[24] jQuery. Qunit. http://docs.jquery.com/Qunit, 2011.
[25] Tricia Oberndorf Lisa Brownsword, David Carney. The opportunities and
complexities of applying commercial-off-the-shelf components. Crosstalk,
1998.
[26] Leszek A. Maciaszek. Requirements analysis and system design, 2007.
[27] Brian Still, Michael J. Albers. Usability of complex information systems - evaluation of user interaction. 2011.
[28] Inc National Audubon Society. http://www.audubon.org/, 2011.
[29] Project NOAH. About ebird. http://ebird.org/content/ebird/about.
[30] Project NOAH. About noah. http://www.projectnoah.org/.
[31] Mike O’Docherty. Object oriented analysis and design, understanding system development with uml2.0, 2007.
[32] Members of Wikipedia. Apache subversion. http://en.wikipedia.org/
wiki/Apache_Subversion, 2011.
[33] Members of Wikipedia. Git (software). http://en.wikipedia.org/wiki/
Git_(software), 2011.
[34] Members of Wikipedia. Github. http://en.wikipedia.org/wiki/GitHub, 2011.
[35] PhoneGap. About - phonegap. http://www.phonegap.com/about, 2011.
[36] PhoneGap. File api - phonegap. http://docs.phonegap.com/en/1.2.0/
phonegap_file_file.md.html#File, 2011.
[37] Target Process. Scrum. http://www.targetprocess.com/scrum.aspx,
2011.
[38] S. S. Shapiro, M. B. Wilk. An analysis of variance test for normality
(Complete Samples). Biometrica Trust, 52(3)(4) pp. 591, 1965.
[39] SeleniumHQ. What is selenium. http://seleniumhq.org, 2011.
[40] Hung-Pin Shih. Extended technology acceptability model of internet utilization behaviour. 2003.
[41] Shlomo Sawilowsky, Gail Fahoome. Encyclopedia of Statistics in Behavioural Science. John Wiley and Sons, Ltd, 2005.
[42] Boris Smus. Webintent, a phonegap intent plugin for android. http://smus.com/android-phonegap-plugins, 2010.
[43] SQA. Software quality assurance. http://www.sqa.com.
[44] Tony Merna, Faisal F. Al-Thani. Corporate risk management, 2008.
[45] Kshirasagar Naik, Priyadarshi Tripathy. Software testing and quality assurance: Theory and practice. Wiley, 2011.
[46] ITU:International Telecommunications Union. http://www.itu.int.
[47] Vilijandas Bagdonavičius, Julius Kruopis, Mikhail Nikulin. Nonparametric Tests. John Wiley and Sons, 2010.
[48] Wen-Chih Chiou, Chin-Chao Lin, Chyuan Perng. The relationship between technology acceptance model and usability test. 2009.
[49] Wikipedia. Android (operating system). http://en.wikipedia.org/
wiki/Android_%28operating_system%29, 2011.
[50] Wikipedia. ios. http://en.wikipedia.org/wiki/IOS, 2011.
[51] Wikipedia. Latex. http://en.wikipedia.org/wiki/LaTeX, 2011.
[52] Wikipedia. Software development process. http://en.wikipedia.org/
wiki/Software_development_process, 2011.
[53] Wikipedia. Software testing. http://en.wikipedia.org/wiki/Software_
testing, 2011.
[54] Wikipedia. The waterfall model. http://en.wikipedia.org/wiki/File:
Waterfall_model.png, 2011.
[55] Graham M. Winch. Managing construction projects. Wiley-Blackwell,
2010.
