For the second year in a row I’ll be speaking at the Mobile Dev+Test conference in San Diego, 24-28 April 2017. The conference is a week-long event about learning how to develop and test for mobile devices.
In my tutorials I’m focusing on REST API testing. The Application Programming Interface (API) is a service and a crucial element in the success of any mobile application. The key question is: “How can I test the API behind an application?”
In the morning we start with an introduction: what REST APIs are and how to test them. The afternoon session assumes that you already know what APIs are, and builds on those foundation concepts to improve your testing skills.
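To give a taste of what API testing looks like in practice, here is a minimal sketch of a response contract check. The endpoint shape and field names (`id`, `name`, `email`) are hypothetical assumptions for illustration, not taken from any specific API:

```python
import json

# Minimal sketch of a REST API response contract check. The endpoint shape
# and field names ("id", "name", "email") are hypothetical -- adapt them to
# the API behind your own app.
def check_user_response(status_code, body):
    """Return a list of contract violations for a GET /users/{id} response."""
    errors = []
    if status_code != 200:
        errors.append(f"expected HTTP 200, got {status_code}")
    try:
        user = json.loads(body)
    except json.JSONDecodeError:
        return errors + ["body is not valid JSON"]
    for field in ("id", "name", "email"):
        if field not in user:
            errors.append(f"missing field: {field}")
    return errors

assert check_user_response(200, '{"id": 1, "name": "Ada", "email": "a@b.c"}') == []
```

In a real run the status code and body would come from an HTTP client; keeping the checks in a pure function makes them easy to reuse across endpoints.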
I hope to see you at the conference, and if you’d like a discount, use my code MDT17MV40 to get $200 off.
How to test wearable apps
Every time I present about testing wearable apps, people look at me as if it is an idea of the future, not yet on their horizon. Many organizations are also still working on the transition from traditional websites to responsive web and mobile apps.
But if I look around, there are already many experiments with this new category of devices. In this blog I explain how to get started with testing wearable apps. It starts with some examples of experiments, then in three steps (Ready, Set, Go) I take you through my wearable test approach, and I finish with my most important lesson in the conclusion: wearable app testing, and testing wearables in general, starts with testing accessibility.
Examples of wearables
Here are some examples of experiments that are now running:
Another cool example is Run-N-Read from Weartrons, on the Dragon Innovation startup platform. This wearable device tracks your head movements and moves the text on the screen in real time so it is always in sync with your eyes.
The purpose of wearable technology is to create constant, convenient, seamless, portable, and mostly hands-free access to electronics and computers, see wearabledevices.com (http://wearabledevices.com/what-is-a-wearable-device/)
I started testing with a first-generation Moto 360 smartwatch from Motorola and a first-generation Apple Watch. Because I’m a runner, I can use them every day to gain more experience. Besides this personal motivation, I also think the smartwatch will become the digital hub for all the wearables on my body. As I have learned, wearables are sensors on my body that collect data, which is sent via an API to the cloud and controlled in a mobile app. With the smartwatch I get real-time feedback while I’m running.
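That sensor-to-cloud chain is itself a test object. Here is a small sketch of validating a sensor payload before the watch uploads it; the field names and the plausible heart-rate range are my own assumptions for illustration, not a real wearable schema:

```python
# Sketch: validate a (hypothetical) heart-rate sample before the watch
# uploads it to the cloud API. Field names and the plausible range are
# assumptions for illustration, not a real wearable schema.
def validate_sample(sample):
    """Return a list of problems found in a sensor payload."""
    problems = []
    for field in ("device_id", "timestamp", "heart_rate"):
        if field not in sample:
            problems.append(f"missing {field}")
    hr = sample.get("heart_rate")
    if hr is not None and not 30 <= hr <= 230:
        problems.append(f"heart_rate out of plausible range: {hr}")
    return problems

good = {"device_id": "watch-1", "timestamp": 1487000000, "heart_rate": 72}
assert validate_sample(good) == []
```

Checks like this belong at the API boundary, because by the time the data reaches the mobile app a broken sample is much harder to trace back to the sensor.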
Ready, Set …
Questions that I had were: “What are wearables?”, “What is the difference between mobile apps and wearable apps?” and, most important, “How can I test a wearable?”
Wearables testing is about validating if, and how simply, the wearable extends the user’s digital experience, gives them more capabilities to control their environment, and can monitor their vital state. This gives a starting point on what the value is for the end user.
If you look at the criteria from Apple and Google on how to develop a wearable app, it comes down to the following list:
But I found many “lists” that point out important factors for wearable apps. See for example:
Ready, Set, Go … or not
When using my Moto 360 and Apple Watch I experienced that most apps were too complex to use in 3 seconds, required too much attention, and were not helping me during my current activity. For example: “please stand up” while I’m driving (monitoring); four steps to add a task in Trello, but it never asks me which project (extending the digital experience); turning down the volume of the music player can only be done in the general music app (controlling the environment).
When using the smartwatches it irritated me that I needed to give too much attention to the app. Aligning the wearable app with the required attention level is the hardest thing to do, because it depends on many factors from the end user, like:
The action that I need to do must be aligned with an already existing automated task. Then it is a routine, and I can only multi-task if one of the tasks is a routine task; see Theo Compernolle. As Theo says: “People work better if they can process information in blocks instead of multiple tasks at the same time. In other words, people are more productive if they have focus.”
An example of this routine-versus-attention distinction is: “please don’t talk to the bus driver”. The bus driver can’t multi-task between giving attention to the road and talking, because both tasks are not routine. Driving is a routine, but watching the traffic is not. Using a wearable should be a routine, like driving or checking the time. If it is not a routine, it demands too much attention.
So depending on the different use cases, people use different devices, and the amount of time spent executing a task differs.
Depending on the type of wearable and the type of task, the test criteria change. As a tester I like to use a model to structure all the wishes and needs, validate them, and then set priorities when I execute my tests. I call this model the wearable test approach.
Wearable test approach
In my mobile app testing I use the ISLICEDUPFUN heuristic model of Jonathan Kohl. With this model I can perform a risk analysis and select the relevant perspectives. A perspective like Network conditions tests the mobile app with WiFi, mobile carriers, movement, buildings, the impact of switching between different network types, speed, and online/offline. For more background information about this heuristic, see http://kohl.ca/b/ISLICEDUPFUN.pdf.
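To make a perspective like Network conditions concrete, such test ideas can be written down as checkable cases. The states and expected behaviours below are my own assumptions for illustration, not rules from the heuristic itself:

```python
# Sketch of the "Network conditions" perspective as checkable test cases:
# given a connection state, what should the app do with queued data?
# The states and expected behaviours below are assumptions for illustration.
def sync_action(network, queued_items):
    """Decide what a (hypothetical) app should do with pending uploads."""
    if network == "offline":
        return "queue locally"
    if network in ("2g", "3g") and queued_items > 100:
        return "defer bulk sync"  # avoid draining the battery on a slow link
    return "sync now"

# Each (network, queue size, expectation) triple is one test case:
cases = [
    ("offline", 5,   "queue locally"),
    ("3g",      500, "defer bulk sync"),
    ("wifi",    500, "sync now"),
]
for network, queued, expected in cases:
    assert sync_action(network, queued) == expected
```

Writing the expectations down first also exposes the gaps: what should happen when the network switches mid-upload is a case the table above doesn’t cover yet.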
Looking at the type of use case, the size of the action, the attention level, and whether it is a routine or not, I compared wearables with mobile, and the big difference is motion. Say I’m riding a bike and receive a phone call: how do I pick up? How do I answer? I’m driving a car and get the activity update to focus on my standing goal; how do I dismiss this without my hands? Extending the digital experience, monitoring vital state, and controlling the environment change the form of interaction, when to interact, and how long to interact.
Conclusion: Wearable and accessibility
The user interaction starts with being able to interact with the wearable app. Accessibility is usually seen as a property for people with disabilities, but when I was giving voice input and didn’t know how to switch between controlling the app and giving input, I felt disabled. When the screen is so small that my big fingers keep touching the wrong thing, I feel disabled. When I’m running and want to change a podcast, I’m constantly double-tapping and swiping between screens.
Using a wearable starts with testing accessibility. Wearables are new sensors, and to be able to use them, people need to access them without friction. See http://appqualityalliance.org and the Mobile Accessibility Handbook for more information about accessibility.
After this, the testing can focus on perspectives like Ergonomics, Usability, and User scenarios.
Wearable examples references
Testing in perspectives references
That was the closing question of a post by Parimala Hariprasad on her blog “Curious Tester”.
GOOB stands for the Go Out Of the Building heuristic. This sounds simple, but every time I speak to other mobile app testers it is amazing how hard it is to do. The excuses: I don’t have access to anything other than the internal WiFi, the devices are not allowed to leave the floor, I need my desk to report bugs, we don’t have SIM cards, I’m working in my cubicle. Parimala points out that the conditions that are important for the success of the app are not inside. Follow the users, meet them, and learn where and how the app is used.
See the post on her blog: Mobile App Study using GOOB Heuristic
The subtitle of this video is “The Walking Tester”. The video compares the daily tester’s job with walking the trail El Caminito del Rey in Spain. To me, testing is all about showing the stakeholders what the road is, how ‘easy’ it is to use, and trying to give the most relevant information as soon as possible. What is good about this video is that it makes you feel, in your stomach, what the road looks like. How much risk are we willing to take? ‘Normal’ users don’t have a safety guidewire when using the software.
The footage was shot by the German trekker Daniel Ahnen, who traversed the trail without clipping into a safety guidewire. Look at this video and enjoy the ride:
To see more photos and videos of five of the world’s most dangerous hiking trails, read more.
“Take your App testing from Mobile to Wearable”. This is the title of a presentation I’m going to give at a new EuroSTAR conference, Mobile Deep Dive (EuroSTAR Deep Dive program).
Here is my promo
Meaning is determined by its context. This may sound very logical: if you are talking face-to-face, the emotions give relative weight to the words. But it is not only the context of the moment that determines what a word means; the history of both the speaker and the receiver does too. This is not only the case in ordinary life, but also when you are testing. An example where the context is not understood can make clear why every test should start with the question: “What do you mean?”
This weekend I had a beautiful example of context and meaning. My son is five years old and is very interested when grown-ups are talking. It looks like he’s playing in his own world, but he hears everything, and everything he hears he places in his 5-year-old context. This weekend we were talking about cleaning the house and the closets for the new year (spring cleaning). My mother-in-law was telling us how a big box had fallen on the floor and that almost all her bottles of eau de toilette were broken. The way she said it triggered my son to stop playing, jump up from his chair, and run to the bathroom. He came back with a big smile on his face and said: “No, granny, the toilet is working just fine, it is not broken.”
We looked at him and at first had no idea, but then we understood his interpretation, and it took half an hour of laughing and wiping away our tears. We understood why he ran to the toilet: he heard “oh, the toilet is broken” instead of “my eau de toilette is broken”. It is only since a few weeks that he goes to the toilet fully by himself and doesn’t have little ‘accidents’ anymore, so the toilet is very important to him at this point in his life. He is growing up and becoming ‘a big boy’. But I think he also wants to show that he is really listening to us, and wants to understand what he is hearing.
His meaning and our context make a beautiful example of what we as testers do every day. We hear something and give meaning to it, ‘our’ meaning I should say. Without the correct context it can become very funny.
If you are testing a mobile app, you are probably not sitting behind a desk with all your standard reporting tools, and you don’t have all the test cases written down. So how can you collect your mobile app test results?
My goal when testing is to give insight into where the defects are and to make that as clear as possible to all the stakeholders. With this information the possible risks can be evaluated. When testing, I can’t explain all the steps I take on a mobile device. I could of course prepare all my test cases during a test analysis period and refer to those steps during execution. I try to add screenshots and videos to the test results to help a developer find solutions for the defects found.
Less than ideal
The first problem I have is that the design and the software are developed at the same time (Agile/Scrum team), so there is no separate preparation period. The second problem is that I usually can’t prepare every test case in detail, because the number of situations and the relations between all the variables are far too complex to grasp in simple steps. How can I verify a defect found on one device against all the other devices? Besides the platform and manufacturer differences, there are many more factors that determine whether there is a defect and/or whether the defect manifests. Think about the type of network and its speed, location, movement, the number of apps running, the amount of free memory, who is executing the test, and his or her emotional state. And there are many more.
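One way to keep a grip on all those factors is to capture them in a fixed defect-context record, so a defect report always carries the same context fields. A minimal sketch; all field names and example values here are my own choice:

```python
from dataclasses import dataclass, asdict

# Sketch: record the context a defect was found in, so another tester can
# try to reproduce it on another device. All field names are my own choice.
@dataclass
class DefectContext:
    device: str
    os_version: str
    network: str        # e.g. "wifi", "4G", "offline"
    location: str       # e.g. "inside - sitting", "outside - driving"
    free_memory_mb: int
    apps_running: int
    tester: str

ctx = DefectContext("Nexus 5", "5.0.1", "4G", "outside - driving", 180, 12, "tester-1")
report_line = ", ".join(f"{k}={v}" for k, v in asdict(ctx).items())
```

A one-line summary like `report_line` can be pasted straight into a defect report, and the fixed field list acts as a checklist so no factor is forgotten in the heat of testing.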
One versus a group
If you are testing an app, or part of an app, with a group, then the results need to be clustered and generalized. This is more a process and quality item than a tooling or statistical issue. I would organize sessions with the group, slice the app, and prepare tours with different perspectives (for more info about tours, see Cem Kaner’s article). A tour focuses on the tasks a user can perform with the app. Use a technique like equivalence classes to spread the setup over the tours.
After a product risk analysis, the chosen priorities and areas become clear. I translate these results into a setup, which is used during the preparation of the test cases/tours. An example of a setup is shown below. Every column in the table contains the chosen variations.
| Device | Tester | Network | Location | Tour | Perspective | Platform | Priority |
|---|---|---|---|---|---|---|---|
| iPhone 5s – 8.0.2 | Product owner | Offline | Inside – sitting | App in its lifetime | User scenarios | iOS | 1 |
| iPhone 5s – 8.1 | Internal tester | WiFi | Inside – walking | Change your mind | Interruptions and interactions | Android | 2 |
| iPhone 6 – 8.1 | Persona – Daan | 3G | Outside – walking | Comparison | Function | Cross platform | 3 |
| iPhone 6p – 8.1 | Persona – Lieke | 4G | Outside – driving | Connectivity | Store submission | | 4 |
| HTC One M7 – 4.4.3 | External user | Multiple | | Feature | Data | | 5 |
| Nexus 5 – 5.0.1 | | | | FedEx or CRUD tour | Network conditions | | 6 |
| Sony Xperia Arc HD – 4.1.2 | | | | Guidebook or Competitor tour | Communication | | |
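Spreading the setup columns over the tours can also be sketched in code. This is a minimal round-robin spread, not a full pairwise algorithm, and the values are a small subset of the table above:

```python
import itertools

# Sketch: spread the setup columns over the tours with a simple round-robin,
# so every value appears at least once without running the full cartesian
# product. The values are a subset of the setup table.
devices  = ["iPhone 5s - 8.1", "iPhone 6 - 8.1", "HTC One M7 - 4.4.3", "Nexus 5 - 5.0.1"]
networks = ["offline", "wifi", "3G", "4G"]
tours    = ["Feature", "FedEx or CRUD", "Comparison", "Connectivity"]

# One run per tour; devices and networks cycle so each value is used.
runs = [{"device": d, "network": n, "tour": t}
        for d, n, t in zip(itertools.cycle(devices), itertools.cycle(networks), tours)]
```

With four values per column this covers every value exactly once in four runs; with unequal column lengths the cycling still guarantees each tour gets a device and a network, which is the equivalence-class idea in its simplest form.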
A developer needs all the context you can report to create a solution. Here is a list of the tools I use to record the context of a defect:
– Product risk analysis
– A test setup
– (Logical) test cases at a high level
– Multiple devices with SIM cards and a data bundle
– Multiple persons who execute tests
– Test data in all sorts and flavors that can be created or searched for on the spot
– Software to report, record, and analyse: Xmind for test ideas and coverage, Mobizen and Reflector for mirroring, Shou for screen recording, the Postman plugin for API analysis, ADB and iTools for log files, Jira for defect reporting, HipChat for informal communication.
Before I report a defect I need a reference situation to compare my situation to. Usually this is the functional perspective. From this point you start varying, using the test setup, experience, hints from the team, and your gut feeling.
Reporting the test results is more than using an application to make videos. I like the combination of high-level preparation and exploratory testing. I’m in direct contact with the developers and can create my own test data. I have a physical test lab and test inside and outside. I use multiple devices and my laptop so I can repeat the scenario, analyse, and compare the results.
For note taking I’m still looking into the iTester app, but it is only available on iOS. More info about note taking and exploratory testing:
It’s not only handy to mirror the screen of your smartphone to a laptop or projector for a demo; it is also needed for all different kinds of mobile app testing, like user tests, reporting a defect, or working with a distributed team. Streaming is mirroring the screen to another screen in real time, with as little delay as possible and preferably wireless. Handy to know: mirroring is also called streaming, casting, or sharing.
Most problems I have are with mirroring an Android device; there is not one solution for all Android devices. The options that work best for me are described below. If you are looking for a solution yourself, look at the list of choices below:
If you want a wireless connection, the device should be on the same WiFi network as the laptop, and more precisely on the same access point. Otherwise the client (read: smartphone) cannot find the server (read: laptop). Also, for a wireless connection the port numbers should be open in the firewall and a two-way connection should be allowed.
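When mirroring fails, a quick first check is whether the receiver’s port is reachable at all. A minimal sketch; the host and port you pass in are examples, since the actual port depends on the mirroring software:

```python
import socket

# Sketch: a quick TCP reachability check to debug mirroring setups. The
# host and port are whatever your mirroring receiver uses; these are not
# fixed values.
def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run it from the same network segment as the device; if it returns False for the receiver’s port, look at the firewall rules or access point isolation before blaming the mirroring app.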
If you want a wired connection, use a connection cable for iOS from Apple (Lightning to VGA or HDMI) or, for Android, a Micro-USB to VGA/HDMI cable (like http://www.slimportconnect.com/). Also make sure you have a projector with an HDMI port.
My preferred option is a TV or monitor with a tuner and a Google Chromecast dongle. With this setup the Nexus smartphone mirroring is wireless, sharp, and real-time. If the Chromecast dongle isn’t working with the local WiFi, I choose a cable between the smartphone and the HDMI port of the projector.
If you look further, the following wireless setups also work, but with a small presentation delay:
There is a new solution from AirServer with a Miracast receiver for Windows. This makes a Windows laptop a receiver for any Miracast-supported device. I have not yet been able to get my Windows laptop configured correctly. For more info about AirServer and Miracast, see http://www.airserver.com/.
For iOS the solution is not as complex as with Android. The best option is AirPlay, which is built into the platform for mirroring an iPhone to an external display like an Apple TV or a Mac laptop. AirPlay can be used from the iPhone 4 onward (http://en.wikipedia.org/wiki/List_of_iOS_devices). The iPhone is the sender (client) and the TV or laptop is the receiver (server). On the laptop an AirPlay receiver such as AirServer should be running (http://www.airserver.com/).
Problem and an alternative
Most of the time the problem is that there is no wireless connection between the client (device) and the server (laptop). This can be because the needed port is not open or the required protocol is not supported. Another problem is that in many offices the wireless network consists of many access points, so it’s possible that the client is connected to a different access point than the server.
So if the wireless access point is like Fort Knox, the alternative is to set up a personal hotspot with your smartphone and connect your laptop to it. This is a costly option, and your phone constantly needs power and gets very hot.
The main message is that there is no one solution for all devices, not even within a single platform. Also, try every option in the actual room yourself, to see if the configuration works, the speed is okay, and all the scenarios in the mobile app work. My advice: always have a backup plan in case a connection is not working at the crucial moment.