Purpose
Visual analytics is increasingly becoming a prominent technology for organizations seeking to gain knowledge and actionable insights from heterogeneous big data to support decision-making. While a broad range of visual analytics platforms exists, limited research has explored the specific factors that influence their adoption in organizations. The purpose of this paper is to develop a framework for visual analytics adoption that synthesizes the factors related to the specific nature and characteristics of visual analytics technology.

Design/methodology/approach
This study applies a directed content analysis approach to online evaluation reviews of visual analytics platforms to identify the salient determinants of visual analytics adoption in organizations from the standpoint of practitioners. The online reviews were gathered from Gartner.com and comprised a sample of 1,320 reviews for six widely adopted visual analytics platforms.

Findings
Based on the content analysis of online reviews, 34 factors emerged as key predictors of visual analytics adoption in organizations. These factors were synthesized into a conceptual framework of visual analytics adoption based on the diffusion of innovations theory and the technology-organization-environment framework. The findings of this study demonstrate that the decision to adopt visual analytics technologies is not based merely on technological factors; various organizational and environmental factors also have a significant influence on visual analytics adoption in organizations.

Research limitations/implications
This study extends previous work on technology adoption by developing an adoption framework that is aligned with the specific nature and characteristics of visual analytics technology and the factors involved in increasing the utilization and business value of visual analytics in organizations.
Practical implications
This study highlights several factors that organizations should consider to facilitate the broad adoption of visual analytics technologies among IT and business professionals.

Originality/value
This study is among the first to use online evaluation reviews to systematically explore the main factors involved in the acceptance and adoption of visual analytics technologies in organizations. Thus, it has the potential to provide theoretical foundations for further research in this important and emerging field. The development of an integrative model synthesizing the salient determinants of visual analytics adoption in enterprises should ultimately allow both information systems researchers and practitioners to better understand how and why users form perceptions to accept and engage in the adoption of visual analytics tools and applications.
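The directed content analysis described above can be pictured as coding review text against predefined categories. The sketch below is a minimal, hypothetical illustration of that idea: it maps review sentences onto the three technology-organization-environment (TOE) dimensions using a tiny keyword lexicon invented for demonstration, not the study's actual codebook or factor list.

```python
# Hypothetical sketch of directed content analysis: counting how often a
# review touches each technology-organization-environment (TOE) dimension.
# The coding lexicon below is illustrative only, not the authors' codebook.
import re
from collections import Counter

CODING_LEXICON = {
    "technological": ["dashboard", "visualization", "integration", "performance"],
    "organizational": ["training", "management", "budget", "culture"],
    "environmental": ["vendor", "competitor", "regulation", "support"],
}

def code_review(text: str) -> Counter:
    """Count lexicon hits per TOE dimension in one review."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for dimension, keywords in CODING_LEXICON.items():
        counts[dimension] = sum(tokens.count(k) for k in keywords)
    return counts

review = "Great dashboard and visualization, but vendor support and training lag."
result = code_review(review)
```

In practice such automated keyword counts would only assist human coders, who assign the final categories; the study's 34 factors emerged from that manual, theory-directed coding process.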
The Internet of Things (IoT) has the potential to revolutionize agriculture by providing real-time data on crop and livestock conditions. This study aims to evaluate the performance scalability of wireless sensor networks (WSNs) in agriculture in two scenarios: monitoring olive tree farms and stables for horse training. The study proposes a new classification approach for IoT in agriculture based on several factors and introduces performance assessment metrics for stationary and mobile scenarios in 6LoWPAN networks. The study uses COOJA, a realistic WSN simulator, to model and simulate the performance of 6LoWPAN and the Routing Protocol for Low-Power and Lossy Networks (RPL) in the two farming scenarios. The simulation settings for fixed and mobile nodes are identical, the main difference being node mobility. The study characterizes different aspects of the performance requirements in the two farming scenarios by comparing average power consumption, radio duty cycle, and sensor network graph connectivity degrees. A new approach is proposed to model and simulate moving animals within the COOJA simulator, adopting the random waypoint (RWP) model to represent horse movements. The results show the advantages of using the RPL protocol for routing in mobile and fixed sensor networks, as it supports dynamic topologies and improves overall network performance. The proposed framework is validated and tested through simulation, demonstrating its suitability for both fixed and mobile scenarios with efficient communication performance and low latency. The results have several practical implications for precision agriculture by providing an efficient monitoring and management solution for agricultural and livestock farms.
Overall, this study provides a comprehensive evaluation of the performance scalability of WSNs in the agriculture sector, offering a new classification approach and performance assessment metrics for stationary and mobile scenarios in 6LoWPAN networks.
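The random waypoint (RWP) model used above to represent horse movement is a standard mobility model: a node picks a random destination, moves toward it at a random speed, and picks a new destination on arrival. The sketch below implements that core loop in plain Python; the field size, speed range, and zero pause time are illustrative assumptions, not the study's simulation parameters.

```python
# Minimal random waypoint (RWP) mobility sketch for one mobile node,
# as used conceptually to model horse movement in the COOJA scenario.
# Field size, speed range, and time step are illustrative assumptions.
import math
import random

def rwp_trace(steps, field=(100.0, 100.0), speed=(0.5, 2.0), dt=1.0, seed=42):
    """Generate (x, y) positions: move toward a random waypoint at a
    random speed, then pick a new waypoint on arrival (no pause time)."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, field[0]), rng.uniform(0, field[1])
    wx, wy = rng.uniform(0, field[0]), rng.uniform(0, field[1])
    v = rng.uniform(*speed)
    trace = [(x, y)]
    for _ in range(steps):
        dx, dy = wx - x, wy - y
        dist = math.hypot(dx, dy)
        if dist <= v * dt:  # waypoint reached: snap to it, choose a new one
            x, y = wx, wy
            wx, wy = rng.uniform(0, field[0]), rng.uniform(0, field[1])
            v = rng.uniform(*speed)
        else:               # advance v*dt along the segment to the waypoint
            x += v * dt * dx / dist
            y += v * dt * dy / dist
        trace.append((x, y))
    return trace

positions = rwp_trace(200)
```

In a COOJA experiment, a trace like this would be fed to the simulator as a mobility script so that link quality and RPL parent selection react to the node's changing position.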
This study presents a systematic approach that integrates the information adoption model (IAM) with topic modeling to analyze the digital voice of users in online open innovation communities (OOICs) and empirically examines the usefulness of user-generated content (UGC) with large amounts of redundant information and varying content quality across two dimensions: information quality and information source credibility. A total of 61,227 bug comments were collected from the OOIC of Huawei EMUI and analyzed using binary logistic regression. The results show that information timeliness and completeness have a positive effect on the usefulness of UGC in OOICs; conversely, information semantics have a negative effect on the usefulness of UGC. Prior user experience has no influence on the usefulness of UGC in OOICs, while active user contribution has a positive effect on the usefulness of UGC. The results of this study offer several implications for researchers and practitioners, and thus could serve as a pivotal reference source for further investigation of potential determinants of UGC usefulness in OOICs.
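The binary logistic regression used above models the probability that a comment is judged useful as a function of content features. A minimal sketch of that estimation in pure Python follows; the two features (timeliness, completeness) echo the study's predictors, but the toy data and gradient-descent fit are fabricated for illustration, not the study's 61,227-comment dataset or its estimation software.

```python
# Illustrative binary logistic regression fit by batch gradient descent.
# Features mimic UGC timeliness and completeness scores; the toy data
# below is fabricated for demonstration only.
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=3000):
    """Fit weights (w[0] is the intercept) by averaged gradient descent."""
    n, d = len(X), len(X[0])
    w = [0.0] * (d + 1)
    for _ in range(epochs):
        grad = [0.0] * (d + 1)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

# Toy data: [timeliness, completeness] -> comment judged useful (1) or not (0)
X = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.7], [0.2, 0.3], [0.1, 0.2], [0.3, 0.1]]
y = [1, 1, 1, 0, 0, 0]
w = fit_logistic(X, y)
```

The signs of the fitted coefficients correspond to the direction of each effect reported in the abstract (e.g. positive weights for timeliness and completeness).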
This study presents a data analytics framework that aims to analyze topics and sentiments associated with COVID-19 vaccine misinformation in social media. A total of 40,359 tweets related to COVID-19 vaccination were collected between January 2021 and March 2021. Misinformation was detected using multiple predictive machine learning models. A Latent Dirichlet Allocation (LDA) topic model was used to identify dominant topics in COVID-19 vaccine misinformation. The sentiment orientation of misinformation was analyzed using a lexicon-based approach. An independent-samples t-test was performed to compare the number of replies, retweets, and likes of misinformation with different sentiment orientations. Based on the data sample, the results show that COVID-19 vaccine misinformation included 21 major topics. Across all misinformation topics, the average number of replies, retweets, and likes of tweets with negative sentiment was 2.26, 2.68, and 3.29 times higher, respectively, than those with positive sentiment.
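The independent-samples comparison above can be sketched with Welch's t statistic, which does not assume equal variances across the two sentiment groups. The retweet counts below are made up to illustrate the mechanics of the test, not the study's data, and the abstract does not state which t-test variant was used.

```python
# Welch's t statistic for comparing engagement of negative- vs
# positive-sentiment tweets. The sample counts are fabricated
# solely to illustrate the independent-samples comparison.
import math
import statistics

def welch_t(a, b):
    """Return Welch's t statistic and approximate degrees of freedom."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

negative_retweets = [12, 15, 9, 20, 18, 14]
positive_retweets = [4, 6, 3, 7, 5, 4]
t, df = welch_t(negative_retweets, positive_retweets)
```

A large positive t here corresponds to the abstract's finding that negative-sentiment misinformation attracts more engagement than positive-sentiment misinformation.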
The explosive increase in educational data and information systems has led to new teaching practices, challenges, and learning processes. To effectively manage and analyze this information, it is crucial to adopt innovative methodologies and techniques. Recommender systems (RSs) offer a solution for advising students and guiding their learning journeys by utilizing statistical methods such as machine learning (ML) and graph analysis to analyze program and student data. This paper introduces an RS for advisors and students that analyzes student records to develop personalized study plans over multiple semesters. The proposed system integrates ideas from graph theory, performance modeling, ML, explainable recommendations, and an intuitive user interface. The system implicitly implements many academic rules through network analysis. Accordingly, a systematic and comprehensive review of different students' plans was possible using metrics from mathematical graph theory. The proposed system systematically assesses and measures the relevance of a particular student's study plan. Experiments on datasets collected at the University of Dubai show that the model presented in this study outperforms similar ML-based solutions in terms of different metrics. Accuracy and recall of up to 86% were achieved. Additionally, the lowest mean square regression (MSR) rate, 0.14, was attained compared with other state-of-the-art regressors.
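One way academic rules can be encoded "implicitly through network analysis", as described above, is to treat prerequisites as edges of a directed graph and validate a multi-semester plan against it. The sketch below shows that idea with hypothetical course codes and rules; the actual system's graph metrics and rule set are not specified in the abstract.

```python
# Sketch of graph-based study-plan validation: prerequisites form a
# directed graph, and a plan is valid if every course appears only after
# all its prerequisites. Course codes and rules are hypothetical examples.
PREREQS = {            # course -> set of prerequisite courses
    "CS101": set(),
    "CS201": {"CS101"},
    "CS301": {"CS201"},
    "MA101": set(),
    "CS210": {"CS101", "MA101"},
}

def plan_is_valid(plan) -> bool:
    """plan: list of semesters, each a list of courses. A course may only
    be taken once all its prerequisites appear in earlier semesters."""
    completed = set()
    for semester in plan:
        for course in semester:
            if not PREREQS[course] <= completed:
                return False
        completed.update(semester)
    return True

good_plan = [["CS101", "MA101"], ["CS201", "CS210"], ["CS301"]]
bad_plan = [["CS201"], ["CS101"]]
```

Beyond validity checking, graph metrics such as path lengths and node centrality over the same prerequisite network could score how well a plan sequences a student toward graduation.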
The increasing popularity of self-service analytics (SSA) is empowering business users to analyze data and generate actionable insights autonomously. While there are many benefits to SSA tools, there is a scarcity of research on the factors influencing their adoption in business organizations. This article presents an extended technology acceptance model (TAM) that incorporates task-technology fit (TTF), compatibility, and user empowerment as critical antecedents of users' intention to adopt SSA tools for reporting and analytics tasks. To test the proposed model, data were collected through a questionnaire survey of 211 business users working in different industries in Jordan. The collected data were analyzed using structural equation modeling (SEM). The results of this study demonstrate that task-technology fit, compatibility, and user empowerment are significant predictors of users' perceptions of the usefulness and ease of use of SSA tools. Both perceived usefulness and perceived ease of use have a positive effect on users' intention to adopt SSA tools. Collectively, all these factors account for 51.6 percent of the variance in behavioral intention. The findings of this study provide several key implications for research and practice, and thus should contribute to the design and adoption of more user-accepted SSA tools and applications.
With the increasing growth of published literature, classification methods based on bibliometric information and traditional machine learning approaches encounter performance challenges related to overly coarse classifications and low accuracy. This study presents a deep learning approach for scientometric analysis and classification of scientific literature based on convolutional neural networks (CNN). Three dimensions, namely publication features, author features, and content features, were divided into explicit and implicit features to form a set of scientometric terms through explicit feature extraction and implicit feature mapping. The weighted scientometric term vectors are fitted into a CNN model to achieve dual-label classification of literature based on research content and methods. The effectiveness of the proposed model is demonstrated using an application example from the data science and analytics literature. The empirical results show that the scientometric classification model proposed in this study performs better than comparable machine learning classification methods in terms of precision, recall, and F1-score. It also exhibits higher accuracy than deep learning classification based solely on explicit and dominant features. This study provides a methodological guide for the fine-grained classification of scientific literature and a foundation for its practical application.
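The "weighted scientometric term vectors" above require a term-weighting step before the CNN. The sketch below uses generic TF-IDF weighting as a stand-in; the paper's exact weighting scheme is not given in the abstract, so the formula, smoothing, and toy documents here are all assumptions for illustration.

```python
# Illustrative TF-IDF weighting of scientometric terms prior to
# classification. The smoothed-IDF formula and toy documents are generic
# assumptions, not the paper's exact weighting scheme; the CNN is omitted.
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of term lists. Return one {term: weight} dict per doc."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({
            t: (tf[t] / len(doc)) * math.log((1 + n) / (1 + df[t]))
            for t in tf
        })
    return vectors

docs = [
    ["regression", "survey", "analytics"],
    ["neural", "network", "analytics"],
    ["neural", "network", "regression"],
]
vecs = tfidf_vectors(docs)
```

Terms that appear in fewer documents receive larger weights, so distinctive method or content terms dominate the vector that a downstream classifier sees.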
With the proliferation of big data and business analytics practices, data storytelling has gained increasing importance as an effective means for communicating analytical insights to the target audience to support decision-making and improve business performance. However, there is limited empirical understanding of the relationship between data storytelling competency, decision-making quality, and business performance. Drawing on the resource-based view (RBV), this study develops and validates the concept of data storytelling competency as a multidimensional construct consisting of data quality, story quality, storytelling tool quality, storyteller skills, and storyteller domain knowledge. It also develops a mediation model to examine the relationship between data storytelling competency and business performance, and whether this relationship is mediated by decision-making quality. Based on an empirical analysis of data collected from business analytics practitioners, the results of this study reveal that data storytelling competency is positively linked to business performance and that this relationship is partially mediated by decision-making quality. These results provide a theoretical basis for further investigation of possible antecedents and consequences of data storytelling competency. They also offer guidance for practitioners on how to leverage data storytelling capabilities in business analytics practices to improve decision-making and business performance.
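The mediation model above can be illustrated by the classic product-of-paths logic: the indirect effect of competency (X) on performance (Y) through decision-making quality (M) is the product of the X-to-M and M-to-Y path coefficients. The sketch below uses closed-form simple OLS slopes on fabricated data and deliberately simplifies (the M-to-Y path should control for X in a full mediation analysis, and the study itself likely used a latent-variable method).

```python
# Toy sketch of mediation logic: indirect effect = a * b, where
# a is the X -> M path and b the M -> Y path. Data is fabricated and
# the b path here omits the control for X (a deliberate simplification).
def slope(x, y):
    """OLS slope of y on x (simple regression with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

X = [1, 2, 3, 4, 5, 6]               # storytelling competency scores
M = [1.2, 2.1, 2.9, 4.2, 5.1, 5.8]   # decision-making quality
Y = [1.0, 2.3, 3.1, 4.0, 5.2, 6.1]   # business performance

a = slope(X, M)          # X -> M path
b = slope(M, Y)          # M -> Y path (simplified)
indirect = a * b         # indirect effect of X on Y through M
```

Partial mediation, as reported in the abstract, means the direct X-to-Y path remains significant even after this indirect path is accounted for.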