Analysis of factors related to the satisfaction of using the Android, iOS and Windows Phone operating systems

Marcos Antonio Alves

marcosalves@ufmg.br

Graduate Program in Electrical Engineering, Federal University of Minas Gerais, Belo Horizonte, MG, Brazil


ABSTRACT

The objective of this research was to analyze the factors related to the satisfaction of using smartphones with the Android, iOS and Windows Phone operating systems. The target audience was undergraduates and graduates in courses in the areas of Engineering, Computing and Administration. The database was created through an online questionnaire based on the Questionnaire for User Interaction Satisfaction - QUIS, version 7.0. The interviewees expressed their comfort with and acceptance of these systems through grades from 1 to 9. The evaluated items were: screen, terminology, learning and system capabilities. A multivariate analysis of the data was applied to obtain the results. The factor analysis and the dimensionality reduction by principal components pointed out four new factors that explain 69.2% of the variance of the 21 features initially studied. Questions related to system error messages were the variables with the highest correlation (0.884 and 0.889). The results of this research serve as a guide for new evaluations of operating systems and as support for the analysis and improvement of usage satisfaction with mobile applications. As a recommendation, it is suggested to include new groups of respondents in order to understand which factors do or do not interfere with the final satisfaction of the users of these platforms.

Keywords: Usage satisfaction; Smartphones; Questionnaire for User Interaction Satisfaction.


INTRODUCTION

According to the National Telecommunications Agency (Anatel, 2016), the number of active cell lines in Brazil surpassed the 258.1 million mark in February 2016. Even with a decrease in sales (Anatel, 2016), teledensity in the same period averaged 125.6 devices per 100 inhabitants.

A few years ago, mobile devices were used only for making calls and sending text messages. Today, they have become so powerful and feature-rich that they rival personal computers.

In order to win over and retain smartphone users, the companies that develop the operating systems seek to progressively improve the technologies used to build such devices, as pointed out by Wasserman (2010) and Choi et Lee (2012). Increasingly complex applications were developed along this evolutionary process, while the number of mobile devices grew exponentially.

Several brands and models of smartphones are available in the market. Thus, robust operating systems seek to meet the overall expectations of their users and attract new ones.

In this context, research has been carried out with the purpose of improving current software in pursuit of its consumers' satisfaction of use (Parada et al., 2015). Among the aspects investigated in these studies, satisfaction is one of the most frequent, as stated by Kronbauer et Santos (2013).

In addition, many studies related to the development, evaluation and/or validation of mobile applications (Wasserman, 2010; Treeratanapon, 2012; Gresse Von Wangenheim et al., 2014; Moumane et al., 2016) or to their acceptance (Silva et Dias, 2007; França et al., 2016) have been reported in the literature. However, there is a lack of studies that assess users' confidence in the operating systems themselves, independently of application evaluations.

One of the ways to measure users' expectations is through the application of questionnaires. This approach is used to obtain data or information about characteristics, actions or opinions of a target population (Freitas et al., 2000; Moumane et al., 2016), so that the researcher obtains quantitative descriptions through a pre-defined instrument. In addition, according to Kronbauer et Santos (2013), questionnaires have been widely used in articles aimed at investigating the use of mobile applications.

The Questionnaire for User Interaction Satisfaction - QUIS, the basis of this work, is commonly used to measure subjective user satisfaction regarding the usability of an interface (Chin et al., 1988). Since its conception and validation, many studies have been developed using or drawing on this tool, including evaluations of apps for mobile devices running different operating systems (Hussain et Kutar, 2012; Gresse Von Wangenheim et al., 2014; Naeini et Mostowfi, 2015; Moumane et al., 2016).

In order to keep the target population representative, the participants were selected so as to be relatively similar among themselves (Freitas et al., 2000). The group of interviewees was composed of undergraduates and graduates in Engineering, Computing or Administration courses. It was believed that extending the survey to a very diverse and heterogeneous audience, although also representative of mobile users, could compromise the fidelity of the answers, since different groups may have different expectations, which affects satisfaction of use. An example was highlighted by Kronbauer et Santos (2013), who pointed out that the number of errors in the use of a given application varies with the users' socioeconomic condition.

This work aims to study the comfort and acceptability criteria of smartphone users of the Android, iOS and Windows Phone operating systems. These operating systems were chosen because they are the most widely used and best known in the market (IDC, 2015).

This research is relevant in several respects: it fills the gap regarding the evaluation of different and important mobile operating systems; it is based on a valid and recognized questionnaire for interface evaluation; and it contributes both managerially and scientifically, since it makes it possible to understand the differences between the systems and the points where they contrast most. In addition, it indicates to operating system developers which aspects need more attention and points out approaches that can guide students of human-computer interaction (HCI) and mobile devices. From these results, improvements to the systems can be proposed, as well as a better understanding of how applications relate to such devices.

The number of active cell phone lines in Brazil exceeds the number of inhabitants. Moreover, of the more than 206 million inhabitants (IBGE, 2016), not all own a mobile device. Thus, if the figure of 125.6 cell phones per 100 inhabitants (Anatel, 2016) were computed only over the group that already owns handsets, it would be even higher. This points to two opportunities for the mobile telephony sector: building loyalty among current customers and the chance to win new ones.

Growth in smartphone sales is slowing. Research conducted by EMarketer (2015) pointed out that market dynamics will shift toward replacing current mobile devices with handsets offering more technological resources.

A study conducted by Gartner (2012) indicated that the Android and iOS operating systems lead in market presence. Together they were present in more than 80% of the handsets sold worldwide, with Android holding more than 60% of the total, ranking first and second respectively. Windows Mobile came in fifth, preceded by Symbian, Research in Motion and Bada. Symbian and Bada, however, were discontinued by their developers, Nokia and Samsung; Research in Motion continues in the market as BlackBerry.

Research conducted by IDC (2015) revealed that Android, iOS and Windows Phone maintain their hegemony in the market, with 82.8%, 13.9% and 2.6% of market share, respectively. It is evident that, to sustain this supremacy, continuous maintenance of these platforms is necessary. Therefore, studies focusing on the development of the systems and the audience for which they are intended are indispensable.

It is worth noting, at this point, that Android is an operating system for mobile devices developed by Google and based on Linux. Although the system is used by more than one device manufacturer, Samsung holds the largest share and aims to remain in the lead with a focus on low-cost smartphones (IDC, 2015; Gartner, 2012). iOS, in turn, is developed and distributed by Apple Inc. and is dedicated and certified exclusively for products of that brand. Windows Phone is developed by Microsoft and, through a partnership with Nokia, has become the main operating system of that brand's devices (IDC, 2015).

The different forms of interaction between user and operating system, combined with different usage situations, make this evaluation essential and distinctive. In this scenario, Usability Engineering gains noticeable prominence. It describes verifiable and measurable usability criteria and specifies quantifiable parameters for a product's performance with respect to the adopted measures, such as usage satisfaction, which includes the frequency of user complaints and remarks (Nielsen, 1994; Abreu, 2005).

There are several research opportunities in this area of knowledge. Table 1 presents a survey of works related to the evaluation of interfaces and/or applications and the evaluation methods used. These studies had as their main objective to measure satisfaction of use on mobile devices or to elaborate approaches for their evaluation.

Table 1. Researches that investigate mobile devices and evaluation methods


Source: Authors

It is possible to notice that the application of questionnaires is a commonly used technique in the literature. This approach allows us to measure the users' expectations (Freitas et al., 2000; Padilha, 2004; Moumane et al., 2016). Padilha (2004) and Kronbauer et Santos (2013) indicated that questionnaires have been widely used in research that investigates the use of mobile applications and that they allow both qualitative and quantitative evaluation of an interface. For Freitas et al. (2000), Abreu (2005) and Naeini et Mostowfi (2015), questionnaires are useful for collecting subjective information, assessing interface quality and gathering data about the users' profile and possible problems.

The literature survey pointed to studies that used the QUIS as a tool for users' subjective evaluation. This questionnaire is reliable and appropriate for this purpose, and it has been revised over time to adapt it to new market needs and keep it up to date (Harper et al., 1997; Naeini et Mostowfi, 2015; Moumane et al., 2016).

Table 1 indicates a tendency toward evaluative studies of applications on mobile devices and toward proposals of approaches for evaluating mobile applications. However, there is a lack of studies on the satisfaction of using the operating systems themselves, the fundamental basis that supports those applications.

RESEARCH METHODOLOGY

Type of research

This research can be characterized as descriptive, with a quantitative approach (Freitas et al., 2000; Padilha, 2004; Cervo, 2007; Corrar, 2007). Its applied nature lies in the generation of knowledge focused on specific problems. The descriptive objective is to investigate the factors related to satisfaction of use of smartphone operating systems and to analyze the users' opinions obtained from the questionnaire based on the QUIS. The quantitative approach allows quantifying and analyzing the interviewees' level of satisfaction, transforming a subjective measure into a numerical one.

Method of data collection

The first step was to customize the questionnaire, based on QUIS version 7.0, so that the questions of interest were focused on the operating systems of smartphones. The QUIS was developed at the University of Maryland's Laboratory for Automation Psychology and Decision Processes (LAPDP). It is fundamentally used to measure subjective user satisfaction regarding the usability of an interface. In addition, this questionnaire presents known and quantifiable estimates of reliability and validity (Chin et al., 1988; Harper et al., 1997; Naeini et Mostowfi, 2015; Moumane et al., 2016).

The questionnaire sent to the candidates was organized into sections. Relative to the original, demographic questions and questions about the operating system used by the candidates were added. It was not within the scope of this work to formulate a new questionnaire or to make significant modifications to the reference questionnaire.

The questionnaire applied in this paper can be broadly divided into two parts: general satisfaction with the system and specific interface factors. The first part covers the general context of the interviewees, their experience with the system, their experience with similar devices and their overall reactions to it, corresponding to sections 1, 2 and 3 of Table 2, respectively. The second part focuses on evaluating the operating system with respect to screen, terminology, learning and system capabilities, corresponding to sections 4, 5, 6 and 7 of Table 2, respectively. A question from section 12 of the original QUIS was added to section 7 of the applied questionnaire in order to evaluate satisfaction with the installation of new software on the platforms studied.

Table 2. QUIS factors considered in the questionnaire


Source: Authors

Respondents expressed their opinions by specifying their level of satisfaction with each of the evaluated criteria. For this, the tool was based on a 9-point Likert scale, ranging from 1 to 9, for each question in these sections, with all points labeled with this numbering. The closer the grade to 1, the more dissatisfied the interviewee; the closer to 9, the more satisfied. In addition, as in the QUIS, neutrality was maintained in the wording of the questions to avoid the acquiescence bias described by Presser et Schuman (1981).

The second stage was the application of the questionnaire to a group of people and subsequent analysis of the results. The application of questionnaires can be considered as a prospective technique (Padilha, 2004), which involves the opinion of users and serves to evaluate the interaction between them and the interface (Freitas et al., 2000; Naeini et Mostowfi, 2015). According to Freitas et al. (2000) and Padilha (2004), the questionnaire has a great advantage as it is an instrument capable of being applied to a large number of users at the same time.

In order to collect the data and avoid bias in the answers, data collection focused on a group formed by candidates who were undergraduates or graduates in courses in the areas of Engineering, Computing or Administration. The degree programs in these areas are linked to technology and management.

The goal was to collect answers only about the Android, iOS and Windows Phone operating systems. For the purpose of analysis, respondents who, on any of the criteria, fell outside this control group had their answers excluded. Candidates with very old versions of the operating systems also had their responses disregarded. The versions considered were: Android 4.0 or higher, iOS 7 or higher and Windows Phone 7 or higher.

The data collected were coded and loaded into the statistical software Statistical Package for the Social Sciences (SPSS Statistics), version 24.0 Trial for Windows 10. Subsequently, the data were processed and the descriptive and factor analyses were performed. The first part consisted of understanding the profile of the interviewees and checking whether the data obtained were in agreement with the proposal; it also made it possible to capture and measure the relationship between users and systems, together with other demographic data. The second part consisted of the factor analysis of the data. At this stage, multivariate analysis techniques were used to examine the relationships between the features studied and the operating systems as a whole. The objective was to treat the data and seek a reduced representation in which the new variables explain the data well and make the model more parsimonious (Corrar, 2007; James et al., 2013).
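
Since the statistical processing in this work was done in SPSS, the sketch below is only an illustration of how the coding, filtering and descriptive step described above could be reproduced in Python with pandas; the file name and column names are hypothetical.

```python
# Illustrative sketch only (the study itself used SPSS): loading the coded
# questionnaire answers and applying the filtering rules described above.
# "quis_responses.csv" and the column names are hypothetical.
import pandas as pd

df = pd.read_csv("quis_responses.csv")

# Keep only the three operating systems and the minimum versions considered
min_version = {"Android": 4.0, "iOS": 7.0, "Windows Phone": 7.0}
valid = df[df.apply(
    lambda r: r["os"] in min_version and r["os_version"] >= min_version[r["os"]],
    axis=1,
)]

# Descriptive step: respondent profile (share per system, age statistics)
print(valid["os"].value_counts(normalize=True))
print(valid["age"].agg(["mean", "min", "max"]))
```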

For this study, a 95% confidence level was adopted, implying a significance level of α = 0.05, in order to extract the maximum information from the data and to evaluate whether or not the key factors and their interactions influence the value of the observed response variable.

RESULTS AND DISCUSSION

Reliability of the applied questionnaire

The questionnaire used in this study was adapted from the QUIS, a reliable tool validated by Chin et al. (1988), with version 7.0 validated by Harper et al. (1997). Due to the changes in the applied questionnaire, it was considered important to measure its reliability as well. For this, Cronbach's alpha coefficient was used. This parameter is commonly used to measure the internal consistency of questionnaires, especially when the survey questions use a Likert scale. Equation 1 is used to calculate this coefficient. The expected result is a value between 0 and 1, and a score equal to or greater than 0.7 is considered acceptable (Park et Chen, 2007; Choi et Lee, 2012; Naeini et Mostowfi, 2015; França et al., 2016). The analysis provided an overall Cronbach's alpha value of 0.9448 (Table 3), confirming the reliability of the data obtained in this research.

α = [k / (k − 1)] · [1 − (Σ σ_i²) / σ_T²]    (1)

where k is the number of items, σ_i² is the variance of item i and σ_T² is the variance of the total score.
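
A minimal numerical sketch of Equation 1, assuming the answers are arranged as a respondents-by-items matrix of 1-9 scores (variable names are illustrative):

```python
# Minimal sketch of Equation 1 (Cronbach's alpha). `items` is a hypothetical
# (n_respondents x k) array of 1-9 Likert scores for the k questionnaire items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # sigma_i^2 of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # sigma_T^2 of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```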

Table 3. Reliability statistics


Source: Authors

Demographic factors and overall system satisfaction

The questionnaire was applied between October 2014 and December 2015, and a total of 306 valid answers were obtained. Answers from respondents who used an operating system and/or version different from those surveyed were disregarded, as were answers from respondents who were not part of the studied group.

Most of the answers came from male respondents, as shown in Table 4. Ages ranged from 18 to 73 years, with both mean and mode of 30 years. In terms of courses, 14% of the respondents were from the Administration area, 45% from Computing and 41% from Engineering.

Android users were the majority in the group of respondents, with 70%, against 21% for iOS and 9% for Windows Phone (WP). These results reflect the surveys of Gartner (2012) and IDC (2015), which pointed to the predominance of these systems in the market and showed that the first has a market share much higher than the others.

Table 4. Description of users interviewed


Source: Authors

Regarding experience with the system, 14% stated that they had used it for less than 6 months; 18% between 6 months and one year; and 68% for over one year. Regarding average daily usage time, 9% use the smartphone for less than one hour per day; 36% between 1 and 4 hours; and the remaining 55% reported more than 4 hours. The latter does not necessarily mean uninterrupted usage time. This suggests an opportunity to study the frequency of use of mobile devices, interaction with applications and, perhaps, ergonomics issues.

All participants reported having some familiarity with other devices, software or computer systems. These data are satisfactory for the research, since the surveyed group has experience with this type of interaction.

As for the overall reactions, all questions averaged over 5 points. For the Terrible-Wonderful item, 86% of the answers contained scores between 6 and 9, with mode equal to 7, accounting for 42% of this total and 36% of all responses. For Frustrating-Satisfying, 80% were between 6 and 9, with mode equal to 8, representing 34% of this total and 27% of the overall total. For the Difficult-Easy question, 86% scored between 6 and 9, with mode equal to 8, representing 36% of this group and 31% of the overall total. The fact that 86% consider the system easy to use may be related to their familiarity with other interfaces. In the evaluation of the Dull-Stimulating item, 72% of the responses were between 6 and 9, with mode 7 representing 36% of these responses and 26% of all responses.

Factor analysis between groups of systems

The Kaiser-Meyer-Olkin (KMO) Measure of Sampling Adequacy and Bartlett's Test of Sphericity were used to verify whether the original data were suitable for factor analysis. The results obtained, with KMO higher than 0.5 and a p-value lower than 0.05 in Bartlett's sphericity test, are favorable to the use of factor analysis in this research (Corrar, 2007; França et al., 2016). These results are presented in Table 5.

Table 5. KMO and Bartlett tests


Source: Authors

The analysis of the measures of sampling adequacy (MSA) was performed to verify the explanatory power of the factors for each of the model variables. Each variable is evaluated on the diagonal of the anti-image matrix, and values above 0.5 indicate adequate variables (Corrar, 2007). The measures of sampling adequacy obtained in this study ranged between 0.819 and 0.962.
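
As an illustration of these adequacy checks outside SPSS, the sketch below assumes the third-party factor_analyzer package and a hypothetical file with the 21 Likert-scored features; it is not the procedure actually run in the study.

```python
# Illustrative adequacy checks (the study used SPSS). `factor_analyzer` is a
# third-party package assumed here; "quis_items.csv" is a hypothetical file
# with one column per feature (21 columns of 1-9 scores).
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

X = pd.read_csv("quis_items.csv")

chi_square, p_value = calculate_bartlett_sphericity(X)   # favorable if p < 0.05
kmo_per_item, kmo_total = calculate_kmo(X)               # per-item values play the role of the MSA
print(p_value, kmo_total, kmo_per_item.min(), kmo_per_item.max())
```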

Next, the communality of each variable was calculated. It represents how much of each variable can be explained by the extracted factors (Hair Jr. et al., 2005; Corrar, 2007). Values greater than 0.5 are considered valid: the closer to 1.0, the greater the amount of information the variable contributes to the model and, thus, the greater the percentage of the variable explained in the factor analysis. In this work, TER521 reached the highest coefficient (0.883) and REC711 the lowest (0.544). As all values were greater than 0.5, no variables were excluded from the model.

Principal Component Analysis (PCA) was used as the technique for the extraction of the factors and the dimensionality reduction of the model. The criterion considered was the eigenvalue, which is used to determine the number of factors that explain the greatest variability of the model. Figure 1 shows the tradeoff between the eigenvalues and the number of principal components that could be selected. We chose to use the four components that explain most of the model.

According to James et al. (2013), each principal component (PC) points in a direction along which the data have the highest remaining variance. The first component minimizes the perpendicular distances (projections) between each observation and the line it defines; the second captures, among the directions orthogonal to the first, the information the first PC has not captured, and so on. In the end, there is a tradeoff between the number of PCs used in the model and the total variance explained.

Figure 1. Principal Component Matrix


Source: Authors
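
The eigenvalue criterion just described can be illustrated directly from the correlation matrix of the items. The sketch below is only an illustration, not the SPSS procedure used in the study; the file name is hypothetical.

```python
# Sketch of the eigenvalue criterion behind Figure 1: eigenvalues of the
# correlation matrix of the 21 features and the cumulative share of variance
# explained by the leading components. "quis_items.csv" is hypothetical.
import numpy as np
import pandas as pd

X = pd.read_csv("quis_items.csv")
corr = np.corrcoef(X.to_numpy(), rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]   # in decreasing order
explained = eigenvalues / eigenvalues.sum()
print(eigenvalues[:4])          # components retained by the eigenvalue criterion
print(explained[:4].cumsum())   # about 0.69 cumulative for four components in this study
```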

The dimensionality reduction allowed going from 21 variables to 4 factors. These four factors account for 69.19% of the variability of the model (Table 6). The table shows that a single factor alone is responsible for explaining 50% of the model. Such an index, as explained by James et al. (2013), suggests a strong linear structure in the data captured by this component.

Table 6. Total variance explained


Source: Authors

Then, through the component matrix rotated by the Varimax method with Kaiser normalization, it was possible to analyze the indicators within each of the factors extracted by the PCA. Factor 1 was composed of the variables APR61, APR611, APR612, APR62, APR621, APR622, APR63, APR64 and APR641; Factor 2 of TEL41, TEL42, TEL421, TEL422 and TEL511; and Factor 3 of REC71, REC711, REC73 and REC74. One variable, TER51, was excluded from the model because it reached a coefficient lower than 0.5. Factor 4 gathered the variables TER52 and TER521. The relationship between the factors and variables of the new model is presented in Table 7.

Table 7. Component Matrix


Source: Authors
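
As an illustration of this extraction and rotation step outside SPSS, the sketch below again assumes the third-party factor_analyzer package; it is a hedged example of a principal-component extraction of four factors with Varimax rotation, not the exact procedure used in the study.

```python
# Illustrative extraction and Varimax rotation (the study used SPSS).
# `factor_analyzer` is a third-party package assumed here.
import pandas as pd
from factor_analyzer import FactorAnalyzer

X = pd.read_csv("quis_items.csv")    # hypothetical file with the 21 features

fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
fa.fit(X)

loadings = fa.loadings_                                  # rotated loadings, cf. Table 7
communalities = fa.get_communalities()                   # explanatory power of each variable
variance, prop_var, cum_var = fa.get_factor_variance()   # variance explained, cf. Table 6
```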

The component analysis shows that Factor 1 was composed of the variables related to learning, with coefficients between 0.655 and 0.778. This section focuses on exploring new features and how challenging this is for users. Factor 2 was composed of the variables related to the screen and one terminology variable, TER511, which in this case correlated more strongly with the screen section than with terminology; the coefficients reached values between 0.523 and 0.766. This new factor aims to measure the ordering and layout of the screens that the operating systems offer to users. Factor 3 was composed of system resource variables, with coefficients between 0.617 and 0.780; this factor is related to the solutions that the operating systems offer to users during use. Factor 4 brought together two strongly correlated variables, TER52 and TER521, both from terminology, with coefficients of 0.884 and 0.889, respectively. These variables relate to error messages, and the high correlation between them indicates that users' experience with these two characteristics is very similar.

With the four new factors obtained by the dimensionality reduction, the internal consistency of the data of each factor was validated again. For this, the Cronbach's alpha coefficient presented earlier was used. The values obtained were: Factor 1: 0.9411, Factor 2: 0.8840, Factor 3: 0.7851 and Factor 4: 0.9124. These results indicate the reliability of the new factors found after the data treatment.

The factor analysis of this work allowed us to understand which criteria have a greater influence on the satisfaction of using mobile devices. The dimensionality reduction of the data using PCA pointed out four new factors that explain much of the data obtained, making the model more parsimonious. With four dimensions, the new model becomes less complex to analyze.

CONCLUSION

This research focused on the evaluation and identification of factors that positively or negatively affect the satisfaction of using mobile devices. The objects of study were the Android, iOS and Windows Phone operating systems, chosen because they are the best known and most used in the market, according to IDC (2015). The target public interviewed consisted of undergraduates and graduates in courses in the areas of Engineering, Computing and Administration; this public was chosen because it gathers candidates with similar reasoning and abilities. The tool used was a questionnaire based on the QUIS, which proved to be reliable and allowed the evaluation of the factors related to the use of the operating systems.

The questionnaire was applied online and a total of 306 valid answers were obtained. Of these, 69% came from users of the Android system, 21% from iOS and 10% from Windows Phone. These data are in agreement with the research of IDC (2015) and Gartner (2012), which pointed to the hegemony of these systems in the market, with Android holding the largest market share, higher than the sum of the other two.

The initial model had 21 features, grouped among the categories of screen, terminology, learning and system capabilities. Analyzing many factors across more than one operating system indicated the need for multivariate data analysis. At this point, factor analysis and dimensionality reduction of the model were the techniques used.

The principal component analysis allowed reducing from 21 variables to 4 factors. These new factors account for most of the data variance and have satisfactory coefficients. Factor 1 was composed of the variables related to learning the operating system. Factor 2 brought together the variables related to the screen and one terminology variable, given its greater power of association with that group. Factor 3 was composed of the questions related to the resources the operating system offers to users. Factor 4 gathered two questions about error messages that showed a strong correlation between them. This indicates that, besides the strong association between these variables, error messages should be a point of attention, which is justified by the fact that only two variables give rise to a whole new factor.

The study was limited to investigating the satisfaction of using mobile device operating systems within a specific group, even though this software is present in almost all devices in use. The main challenges of this research were reaching a significant number of respondents and narrowing the questionnaire so that it focused on the evaluation of the operating system rather than leading the candidates toward the use of applications.

The results of this work open up possibilities for further research. It is hoped that the analysis developed will support evaluations of interfaces and mobile applications. In addition, it can serve as a guide for operating system developers and for the academic community interested in HCI. It is possible to extend this research to new groups of interviewees and to analyze possible differences between them. This work also suggests new lines of research, such as proposals for effective error messages in the smartphone universe and evaluations of new operating systems. Finally, it is possible to expand the control group and compare factors that may or may not interfere with the satisfaction of users of these platforms.


REFERENCES

Abreu, L. M. (2005), Usabilidade de telefones celulares com base em critérios ergonômicos, Dissertação de Mestrado em Design, Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro, RJ.

Agência Nacional de Telecomunicações - Anatel (2016), “Estatísticas de celulares no Brasil”, disponível em http://www.teleco.com.br/ncel.asp (acesso em 02 abr. 2016).

Cervo, A. L. (2007), Metodologia científica, 6 ed., Pearson Prentice Hall, São Paulo.

Chin, J. P. et al. (1988), “Development of an instrument measuring user satisfaction of the human-computer interface”, artigo apresentado em CHI '88 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Portland, OR, 26-28 set. 1988, disponível em: http://dl.acm.org/citation.cfm?id=57203 (acesso em 15 jan. 2016).

Choi, J. H.; Lee, H. J. (2012), “Facets of simplicity for the smartphone interface: A structural model”, International Journal of Human-Computer Studies, Vol.70, No.2, disponível em: http://www.sciencedirect.com/science/article/pii/S1071581911001261 (acesso em 15 jan. 2016).

Corrar, L. J. et al. (2007), Análise multivariada para os cursos de administração, ciências contábeis e economia, 1 ed., Atlas, São Paulo.

EMarketer (2015), “Nearly 400 Million in Latin America Used Mobile Phones in 2014”, disponível em http://www.emarketer.com/Article/Nearly-400-Million-Latin-America-Used-Mobile-Phones-2014/1011818 (acesso em 02 mar. 2016).

França, V. et al. (2016), “Fatores favoráveis à aceitação de aplicativos móveis: um estudo com Alunos de uma instituição pública de ensino”, Sistemas & Gestão, Vol. 11, No. 1, disponível em: http://www.revistasg.uff.br/index.php/sg/article/view/1045 (acesso em 13 jul. 2016).

Freitas, H. et al. (2000), “O método de pesquisa survey”, Revista de Administração da USP – RAUSP, Vol.35, No.3, pp. 105-12.

Gartner (2012), “Gartner Says Worldwide Sales of Mobile Phones Declined 2.3 Percent in Second Quarter of 2012”, disponível em http://www.gartner.com/newsroom/id/2120015 (acesso em 20 jan. 2018).

Gresse Von Wangenheim et al. (2014), “Sure: uma proposta de questionário e escala para avaliar a usabilidade de aplicações para smartphones pós-teste de usabilidade”, artigo apresentado em ISA 14: 6ta. Conferencia Latinoamericana de Diseño de Interacción, Buenos Aires, BA, 19 a 22 nov. 2014, disponível em: bibliotecadigital.uca.edu.ar/repositorio/ponencias/sure-proposta-questionario-escala.pdf (acesso em 13 jul. 2016).

Harper, B. et al. (1997), “Questionnaire administration via the WWW: A validation and reliability study for a user satisfaction questionnaire”, artigo apresentado em WebNet 97, Association for the Advancement of Computing in Education, Toronto, Canadá, 31 out. – 05 nov. 1997, disponível em: http://www.lap.umd.edu/quis/ (acesso em 13 jul. 2016).

Hussain, A.; Kutar, M. (2012), “Usability evaluation of SatNav application on mobile phone using mGQM”, International Journal of Computer Information Systems and Industrial Management Applications, Vol.4, No. 2012, disponível em: https://doaj.org/article/37263dce87964e90b4d261e1da14cc81 (acesso em 13 jul. 2016).

IDC (2015), “Smartphone os market share, 2015 q2”, disponível em http://www.idc.com/prodserv/smartphone-os-market-share.jsp (acesso em 03 mar. 2016).

Instituto Brasileiro de Geografia e Estatística – IBGE (2016), “Projeção da população do Brasil e das Unidades da Federação”, disponível em http://www.ibge.gov.br/apps/populacao/projecao/ (acesso em 03 mar. 2016).

James, G. et al. (2013), An introduction to statistical learning: with applications in R, Springer, New York, NY, disponível em: http://www-bcf.usc.edu/~gareth/ISL/ISLR%20First%20Printing.pdf (acesso em 03 de março de 2016).

Kronbauer, A. H.; Santos, C. A. S. (2013), “Avaliação da Influência de Aspectos Contextuais na Interação com Aplicativos para Smartphones”, artigo apresentado em WebMedia '13: 19th Symposium on Multimedia and the Web, Salvador, BA, 05-08 nov. 2013, disponível em: http://www.academia.edu/11302187/Evaluation_of_the_influence_of_contextual_factors_on_the_interactions_with_applications_for_smartphones (acesso em 06 jul. 2016).

Moumane, K. et al. (2016), “Usability evaluation of mobile applications using ISO 9241 and ISO 25062 standards”, SpringerPlus, Vol.5, No.1, disponível em: https://www.researchgate.net/publication/301720411_Usability_evaluation_of_mobile_applications_using_ISO_9241_and_ISO_25062_standards (acesso em 28 jul. 2016).

Naeini, H. S.; Mostowfi S. (2015), “Using QUIS as a Measurement Tool for User Satisfaction Evaluation (Case Study: Vending Machine)”, International Journal of Information Science, Vol.5, No.1, disponível em: http://article.sapub.org/10.5923.j.ijis.20150501.03.html (acesso em 28 jan. 2016).

Nielsen, J. (1994), Usability Engineering, 1 ed., Elsevier, San Francisco, California.

Padilha, A. V. (2004), Usabilidade na web: uma proposta de questionário para avaliação do grau de satisfação de usuários do comércio eletrônico, Dissertação de Mestrado em Ciência da Computação, Universidade Federal de Santa Catarina, Santa Catarina, SC.

Parada, A. G. et al. (2015), “Automating mobile application development: UML-based code generation for Android and Windows Phone”, Revista de Informática Teórica e Aplicada, Vol. 22, No. 2, disponível em: http://seer.ufrgs.br/index.php/rita/article/view/RITA-VOL22-NR2-3150 (acesso em 28 jan. 2016).

Park, Y.; Chen, J. V. (2007), “Acceptance and adoption of the innovative use of smartphone”, Industrial Management & Data Systems, Vol.107, No.9, disponível em: http://www.emeraldinsight.com/doi/abs/10.1108/02635570710834009 (acesso em 30 jan. 2016).

Presser, S.; Schuman, H. (1981), Questions and answers in attitude surveys: Experiments on question form, wording, and context, Sage, San Diego, California.

Silva, P. M.; Dias, G. A. (2007), “Teorias sobre Aceitação de Tecnologia: por que os usuários aceitam ou rejeitam as tecnologias de informação”, Brazilian Journal of Information Science, Vol.1, No.2, disponível em: www2.marilia.unesp.br/revistas/index.php/bjis/article/download/35/34 (acesso em 13 jul. 2016).

Treeratanapon, T. (2012), “Design of the Usability Measurement Framework for Mobile Applications”, artigo apresentado em ICCIT'2012: International Conference on Computer and Information Technology, Bangkok, Tailândia, 16 - 17 jun. 2012, disponível em: psrcentre.org/images/extraimages/19%20612045.pdf (acesso em 13 jul. 2016).

Wasserman, A. I. (2010), “Software engineering issues for mobile application development”, artigo apresentado em FoSER '10 Proceedings of the FSE/SDP workshop on Future of software engineering research, Santa Fé, Novo México, 07-11 nov. 2010, disponível em: http://dl.acm.org/citation.cfm?id=1882362&picked=prox (acesso em 13 jul. 2016).


Received: Mar 03, 2017

Approved: Jan 18, 2018

DOI: 10.20985/1980-5160.2018.v13n1.1269

How to cite: Alves, M. A. (2018), “Analysis of factors related to the satisfaction of using the Android, iOS and Windows Phone operating systems”, Sistemas & Gestão, Vol. 13, No. 1, pp. 97-106, available from: http://www.revistasg.uff.br/index.php/sg/article/view/1269 (access day month year).