WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS

An efficient privacy-preserving blockchain storage method for internet of things environment
Jia D, Yang G, Huang M, Xin J, Wang G and Yuan GY
Blockchain is a key technology for realizing decentralized trust management. In recent studies, sharding-based blockchain models have been proposed and applied to the resource-constrained Internet of Things (IoT) scenario, and machine-learning-based models have been presented to improve the query efficiency of sharding-based blockchains by classifying hot data and storing them locally. However, in some scenarios these blockchain models cannot be deployed because the block features used as input to the learning method are private. In this paper, we propose an efficient privacy-preserving blockchain storage method for the IoT environment. The new method classifies hot blocks with a federated extreme learning machine and stores the hot blocks via ElasticChain, one of the sharded blockchain models. The features of hot blocks cannot be read by other nodes in this method, so user privacy is effectively protected. Meanwhile, hot blocks are saved locally and data query speed is improved. Furthermore, in order to evaluate a hot block comprehensively, five features of hot blocks are defined: objective feature, historical popularity, potential popularity, storage requirements and training value. Finally, experimental results on synthetic data demonstrate the accuracy and efficiency of the proposed blockchain storage model.
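A minimal sketch of the hot-block classification step, assuming the five block features are available as a numeric matrix; the federated aggregation across nodes and the ElasticChain integration are out of scope, and all names and thresholds here are illustrative:

```python
import numpy as np

def train_elm(X, y, hidden=64, seed=0):
    """Fit a basic extreme learning machine: random hidden layer,
    output weights solved in closed form via the pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))   # random input weights (never trained)
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                # closed-form output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Each block is described by the paper's five features: objective feature,
# historical popularity, potential popularity, storage requirements, training value.
X = np.random.rand(200, 5)
y = (X[:, 1] + X[:, 2] > 1.0).astype(float)     # toy "hot block" label
W, b, beta = train_elm(X, y)
hot = predict_elm(X, W, b, beta) > 0.5
```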
Transfer learning based cascaded deep learning network and mask recognition for COVID-19
Li F, Wang X, Sun Y, Li T and Ge J
COVID-19 is still spreading today and has caused great harm to human beings. Systems at the entrances of public places such as shopping malls and stations should check whether pedestrians are wearing masks. However, pedestrians often pass such inspections by wearing cotton masks, scarves, etc. Therefore, the detection system needs to check not only whether pedestrians are wearing masks, but also which type of mask they wear. Based on the lightweight network architecture MobilenetV3, this paper proposes a cascaded deep learning network based on transfer learning, and then designs a mask recognition system built on it. By modifying the activation function of the MobilenetV3 output layer and the structure of the model, two MobilenetV3 networks suitable for cascading are obtained. By introducing transfer learning into the training of the two modified MobilenetV3 networks and a multi-task convolutional neural network, the ImageNet-pretrained underlying parameters of the network models are obtained in advance, which reduces the computational load of the models. The cascaded deep learning network consists of a multi-task convolutional neural network cascaded with these two modified MobilenetV3 networks. The multi-task convolutional neural network detects faces in images, and the two modified MobilenetV3 networks serve as the backbone for extracting mask features. Compared with the classification results of the modified MobilenetV3 network before cascading, the classification accuracy of the cascaded network improves by 7%, demonstrating the excellent performance of the cascaded network.
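The transfer-learning step can be illustrated with torchvision: start from ImageNet-pretrained MobilenetV3 weights, freeze the backbone, and replace the output layer for mask-type classification. The number and names of the mask classes are our assumption, and the paper's upstream face-detection stage (a multi-task CNN) is omitted here:

```python
import torch.nn as nn
from torchvision import models

NUM_MASK_CLASSES = 4  # assumed classes, e.g. surgical / cotton mask / scarf / no mask

# Start from ImageNet-pretrained weights, as in transfer learning.
model = models.mobilenet_v3_small(
    weights=models.MobileNet_V3_Small_Weights.IMAGENET1K_V1)

# Freeze the pretrained backbone to cut the training cost.
for p in model.features.parameters():
    p.requires_grad = False

# Replace the output layer for the mask-type task.
in_features = model.classifier[-1].in_features
model.classifier[-1] = nn.Linear(in_features, NUM_MASK_CLASSES)
```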
A DRL-based online VM scheduler for cost optimization in cloud brokers
Li X, Pan L and Liu S
The virtual machine (VM) scheduling problem in cloud brokers that support cloud bursting is fraught with uncertainty due to the on-demand nature of Infrastructure as a Service (IaaS) VMs. Until a VM request is received, the scheduler knows neither when it will arrive nor what configuration it demands; even once received, the scheduler does not know when the VM's lifecycle will expire. Existing studies have begun to use deep reinforcement learning (DRL) to solve such scheduling problems, but they do not address how to guarantee the QoS of user requests. In this paper, we investigate a cost optimization problem for online VM scheduling in cloud brokers for cloud bursting, minimizing the cost spent on public clouds while satisfying specified QoS restrictions. We propose DeepBS, a DRL-based online VM scheduler in a cloud broker that learns from experience to adaptively improve scheduling strategies in environments with non-smooth and uncertain user requests. We evaluate the performance of DeepBS under two request arrival patterns, based respectively on Google and Alibaba cluster traces, and the experiments show that DeepBS has a significant advantage over other benchmark algorithms in terms of cost optimization.
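DeepBS itself is a deep RL scheduler; the sketch below deliberately shrinks the idea to a tabular Q-learner over a toy cost model, just to show the shape of the online learning loop (load states, bursting actions, negative-cost rewards). Every quantity here is illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 10, 2      # coarse load buckets; 0 = private cluster, 1 = public cloud
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1

def cost(action, load):
    # Illustrative cost model: bursting to the public cloud has a fixed
    # price, while overloading the private cluster risks QoS penalties.
    return 1.0 if action == 1 else (5.0 if load >= 8 else 0.2)

state = 0
for vm_request in range(10_000):          # online stream of VM requests
    a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[state].argmax())
    nxt = min(state + 1, N_STATES - 1) if a == 0 else max(state - 1, 0)
    r = -cost(a, state)                   # reward = negative cost paid
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
    state = nxt
```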
PerHeFed: A general framework of personalized federated learning for heterogeneous convolutional neural networks
Ma L, Liao Y, Zhou B and Xi W
In conventional federated learning, each device is restricted to training a network model of the same structure. This greatly hinders the application of federated learning where data and devices are quite heterogeneous because of different hardware equipment and communication networks. At the same time, existing studies have shown that transmitting all of the model parameters not only carries heavy communication costs but also increases the risk of privacy leakage. We propose a general framework for personalized federated learning (PerHeFed), which enables devices to design their local model structures autonomously and share sub-models without structural restrictions. In PerHeFed, a simple-but-effective mapping relation and a novel personalized sub-model aggregation method are proposed so that heterogeneous sub-models can be aggregated. By dividing aggregation into two primitive types (i.e., inter-layer and intra-layer), PerHeFed is applicable to any combination of heterogeneous convolutional neural networks, which we believe satisfies the personalized requirements of heterogeneous models. Experiments show that, compared to a state-of-the-art method (e.g., FLOP) on non-IID data, our method compresses roughly 50% of the shared sub-model parameters with only a 4.38% drop in accuracy on the SVHN dataset, while on CIFAR-10 PerHeFed even achieves a 0.3% improvement in accuracy. To the best of our knowledge, our work is the first general personalized federated learning framework for heterogeneous convolutional networks, even across different networks, addressing the single-model-structure restriction of conventional federated learning.
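PerHeFed's mapping relation and its two aggregation primitives are more involved than the abstract spells out; the sketch below shows only the general intra-layer idea, under our own simplifying assumption that heterogeneous conv kernels are averaged over their shared (overlapping) sub-tensor:

```python
import numpy as np

def aggregate_overlap(weights_list):
    """Average heterogeneous conv kernels over their shared sub-tensor.
    Each entry may have a different number of filters/channels; only the
    overlapping slice (the smallest extent along each axis) is averaged."""
    min_shape = tuple(min(w.shape[d] for w in weights_list)
                      for d in range(weights_list[0].ndim))
    sl = tuple(slice(0, s) for s in min_shape)
    mean = np.mean([w[sl] for w in weights_list], axis=0)
    out = []
    for w in weights_list:
        merged = w.copy()
        merged[sl] = mean        # shared part is replaced, private part kept
        out.append(merged)
    return out

# Two clients with conv layers of different widths: (filters, channels, k, k)
a = np.random.rand(16, 3, 3, 3)
b = np.random.rand(32, 3, 3, 3)
a2, b2 = aggregate_overlap([a, b])
```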
Modeling the social influence of COVID-19 via personalized propagation with deep learning
Liu Y, Cao J, Wu J and Pi D
Social influence prediction has permeated many domains, including marketing, behavior prediction, and recommendation systems. However, traditional methods of predicting social influence not only require domain expertise but also rely on extracting user features, which can be very tedious. Additionally, graph convolutional networks (GCNs), which deal with graph data in non-Euclidean space, are not directly applicable in Euclidean space. To overcome these problems, we extended DeepInf so that it can predict the social influence of COVID-19 via transition probabilities from the PageRank domain. Furthermore, our implementation gives rise to a deep-learning-based personalized propagation algorithm, called DeepPP. The resulting algorithm combines the personalized propagation of a neural prediction model with its approximate variant derived from PageRank analysis. Four social networks from different domains as well as two COVID-19 datasets were used to analyze the proposed algorithm's efficiency and effectiveness. Compared to other baseline methods, DeepPP provides more accurate social influence predictions. Further, experiments demonstrate that DeepPP can be applied to real-world prediction tasks for COVID-19.
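The propagation DeepPP builds on can be illustrated with the standard approximate personalized-PageRank iteration (APPNP-style), which repeatedly smooths per-node predictions over the graph while teleporting back to the original predictions. The toy graph and features below are placeholders:

```python
import numpy as np

def appnp_propagate(A, H, alpha=0.1, k=10):
    """Approximate personalized propagation: iterate
    Z <- (1 - alpha) * A_hat @ Z + alpha * H, where A_hat is the
    symmetrically normalized adjacency with self-loops."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    Z = H.copy()
    for _ in range(k):
        Z = (1 - alpha) * A_hat @ Z + alpha * H   # teleport back to predictions H
    return Z

A = np.random.rand(5, 5) > 0.5                    # toy adjacency
A = ((A | A.T) & ~np.eye(5, dtype=bool)).astype(float)
H = np.random.rand(5, 2)                          # per-node neural predictions
Z = appnp_propagate(A, H)
```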
EAGS: An extracting auxiliary knowledge graph model in multi-turn dialogue generation
Ning B, Zhao D, Liu X and Li G
Multi-turn dialogue generation is an essential and challenging subtask of text generation in question answering systems. Existing methods focus on extracting latent topic-level relevance or utilizing relevant external background knowledge. However, they tend to ignore the fact that relying too much on latent aspects loses subjective key information, and there is often neither sufficient relevant external knowledge for referencing nor a graph with complete entity links. A dependency tree is a special structure that can be extracted from sentences and covers their explicit key information. Therefore, in this paper we propose the EAGS model, which combines the subjective pivotal information from the explicit dependency tree with the implicit semantic information of sentences. The EAGS model is a knowledge-graph-enabled multi-turn dialogue generation model that needs no extra external knowledge: it not only extracts and builds a dependency knowledge graph from existing sentences, but also promotes the node representations, which are shared with the Bi-GRU word embeddings at each time step at the node-semantic level. We store the domain-specific subgraphs built by EAGS, which can be retrieved as an external knowledge graph in future multi-turn dialogue generation tasks. We design a multi-task training approach to enhance the extraction of local semantic and structural features and balance them with global features. Finally, we conduct experiments on the large-scale Ubuntu English multi-turn dialogue community dataset and the English DailyDialog dataset. The results show that our EAGS model performs well in both automatic and human evaluation compared with existing baseline models.
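The dependency-tree extraction that EAGS builds on can be sketched with spaCy and networkx (the model's node-representation sharing with the Bi-GRU is not reproduced here; the en_core_web_sm model is assumed to be installed):

```python
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")

def dependency_graph(sentence):
    """Build a dependency knowledge graph from one utterance: nodes are
    tokens, edges follow head -> child dependency arcs."""
    g = nx.DiGraph()
    for tok in nlp(sentence):
        g.add_node(tok.i, text=tok.text)
        if tok.head.i != tok.i:                   # skip the root's self-loop
            g.add_edge(tok.head.i, tok.i, dep=tok.dep_)
    return g

g = dependency_graph("The server rejected my SSH key after the upgrade.")
print([(g.nodes[u]["text"], d["dep"], g.nodes[v]["text"])
       for u, v, d in g.edges(data=True)])
```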
Event prediction from news text using subgraph embedding and graph sequence mining
Cekinel RF and Karagoz P
Event detection from textual content using text mining concepts is a well-researched field in the literature. On the other hand, graph modeling and graph embedding techniques developed in recent years provide an opportunity to represent textual content as graphs. Text can be enriched with additional attributes in graphs, and complex relationships can be captured within them. In this paper, we focus on news prediction and model the problem as subgraph prediction. More specifically, we aim to predict the news skeleton in the form of a subgraph. To this end, graph-based representations of news articles are constructed and a graph-mining-based pattern extraction method is proposed. The proposed method consists of three main steps. Initially, a graph representation of the news text is constructed. Afterwards, frequent subgraph mining and sequential rule mining algorithms are adapted for pattern prediction on graph sequences. We consider that a subgraph captures the main story of the content, and the sequential rules indicate the temporal relationships among subgraph patterns. Finally, the extracted sequential patterns are used to predict the future news skeleton (i.e., the main features of the news). To measure similarity, graph embedding techniques are also employed. The proposed method is analyzed against baseline methods on both a collection of news from an online newspaper and a benchmark news dataset.
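A toy version of the first step, constructing a graph representation of a news item, might look as follows; we assume entity nodes linked by sentence co-occurrence, which is one plausible reading of the abstract rather than the paper's exact construction:

```python
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")

def news_graph(text):
    """Toy graph representation of a news item: named entities become
    nodes, and entities co-occurring in a sentence are linked."""
    g = nx.Graph()
    for sent in nlp(text).sents:
        ents = [e.text for e in sent.ents]
        g.add_nodes_from(ents)
        g.add_edges_from((a, b) for i, a in enumerate(ents) for b in ents[i + 1:])
    return g

g = news_graph("The ministry met union leaders in Ankara. The union rejected the offer.")
```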
Automated post scoring: evaluating posts with topics and quoted posts in online forum
Yang R, Cao J, Wen Z and Shen J
Online forum post evaluation is an effective way for instructors to assess students' knowledge understanding and writing mechanics. Manually evaluating massive numbers of posts costs a lot of time, so automatically grading online posts could significantly alleviate instructors' burden. Similar text assessment tasks, such as Automated Text Scoring, evaluate the writing quality of independent texts or the relevance between a text and a prompt, while Automatic Short Answer Grading measures the semantic match of short answers to given problems and correct answers. Different from these existing tasks, we propose a novel task, Automated Post Scoring (APS), which grades each student's online discussion posts in each thread against given topics and quoted posts. APS automatically evaluates not only the writing quality of posts but also their relevance to topics. To measure relevance, we model the semantic consistency between posts and topics, and supporting arguments are extracted from quoted posts to enhance post evaluation. Specifically, we propose a mixture model comprising a hierarchical text model to measure writing quality, a semantic matching model to capture topic relevance, and a semantic representation model to integrate quoted posts. We also construct a new dataset, the Online Discussion Dataset, containing 2,542 online posts from 694 students of a social science course. The proposed models are evaluated on the dataset with correlation- and residual-based evaluation metrics. Compared with measuring posts alone, experimental results demonstrate that incorporating topics and quoted posts improves the performance of APS by a large margin, more than 9 percent in QWK.
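The topic-relevance component can be illustrated with embedding cosine similarity; the paper's mixture model is trained end-to-end, so the pretrained sentence-transformers model and the scoring below are stand-ins:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def topic_relevance(post, topic, quoted_posts=()):
    """Score a post's relevance to the discussion topic (and, optionally,
    to the posts it quotes) by embedding cosine similarity."""
    texts = [post, topic, *quoted_posts]
    emb = model.encode(texts, convert_to_tensor=True)
    topic_sim = util.cos_sim(emb[0], emb[1]).item()
    quote_sim = (util.cos_sim(emb[0], emb[2:]).mean().item()
                 if quoted_posts else 0.0)
    return topic_sim, quote_sim

print(topic_relevance("Urbanization raises rents near transit hubs.",
                      "Effects of urbanization on housing"))
```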
Identifying informative tweets during a pandemic via a topic-aware neural language model
Gao W, Li L, Tao X, Zhou J and Tao J
Every epidemic affects the real lives of many people around the world and leads to terrible consequences. Recently, many tweets about the COVID-19 pandemic have been shared publicly on social media platforms. Analyzing these tweets helps emergency response organizations prioritize their tasks and make better decisions. However, most of these tweets are non-informative, which makes it challenging to establish an automated system for detecting useful information on social media. Furthermore, existing methods ignore unlabeled data and topic background knowledge, which can provide additional semantic information. In this paper, we propose a novel Topic-Aware BERT (TABERT) model to address these challenges. TABERT first leverages a topic model to extract the latent topics of tweets. Secondly, a flexible framework is used to combine topic information with the output of BERT. Finally, we adopt adversarial training to achieve semi-supervised learning, so that a large amount of unlabeled data can be used to improve the model's internal representations. Experimental results on a dataset of COVID-19 English tweets show that our model outperforms classic and state-of-the-art baselines.
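One simple instantiation of "combining topic information with the output of BERT" is to concatenate the [CLS] vector with the tweet's topic distribution before classification; TABERT's actual fusion framework is described only as flexible, and its adversarial semi-supervised training is omitted here:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TopicAwareClassifier(nn.Module):
    """Concatenate the BERT [CLS] vector with a tweet's topic-model
    distribution, then classify informative vs. non-informative."""
    def __init__(self, n_topics, n_classes=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.head = nn.Linear(self.bert.config.hidden_size + n_topics, n_classes)

    def forward(self, input_ids, attention_mask, topic_dist):
        cls = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.head(torch.cat([cls, topic_dist], dim=-1))

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["free vaccines at the stadium today"], return_tensors="pt")
model = TopicAwareClassifier(n_topics=10)
logits = model(batch["input_ids"], batch["attention_mask"], torch.rand(1, 10))
```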
Leverage knowledge graph and GCN for fine-grained-level clickbait detection
Zhou M, Xu W, Zhang W and Jiang Q
Clickbait is the use of an enticing title as bait to deceive users into clicking, while the corresponding content is often disappointing, infuriating or even deceitful. This practice has seriously damaged social trust, especially in online media, which is one of the most important channels for information acquisition in our daily life. Currently, clickbait is spreading across the internet and causing serious harm to society, yet clickbait detection has not yet been thoroughly studied. Almost all existing research treats clickbait detection as a binary classification task and uses only the title as input. This shallow use of information and detection technology not only suffers from low performance in real detection (e.g., it is easy to bypass) but is also difficult to build on in further research (e.g., potential empirical studies). In this work, we propose a novel clickbait detection model that incorporates a knowledge graph, a graph convolutional network and a graph attention network to conduct fine-grained clickbait detection. In experiments on a real dataset, our model outperformed classical and state-of-the-art baselines. In addition, a degree of explainability is achieved in our model through the graph attention network, and our fine-grained results can provide a measurement foundation for future empirical studies. To the best of our knowledge, this is the first attempt to incorporate a knowledge graph and deep learning techniques to detect clickbait and achieve explainability.
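The GCN building block the model relies on is standard; a single propagation layer over a toy title/entity graph looks like this (numpy, symmetric normalization with self-loops):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: H = ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ X @ W, 0.0)

# Toy graph over title tokens and linked knowledge-graph entities
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
X = np.random.rand(3, 8)          # node features
H = gcn_layer(A, X, np.random.rand(8, 4))
```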
Explainable recommendation: when design meets trust calibration
Naiseh M, Al-Thani D, Jiang N and Ali R
Human-AI collaborative decision-making tools are increasingly applied in critical domains such as healthcare. However, these tools are often seen as closed and opaque by human decision-makers. An essential requirement for their success is the ability to provide explanations about themselves that are understandable and meaningful to users. While explanations generally have positive connotations, studies have shown that users' interaction and engagement with these explanations can introduce trust calibration errors, such as facilitating irrational or less thoughtful agreement or disagreement with the AI recommendation. In this paper, we explore how to support trust calibration through explanation interaction design. Our research method included two main phases. We first conducted a think-aloud study with 16 participants to reveal the main trust calibration errors concerning explainability in human-AI collaborative decision-making tools. We then conducted two co-design sessions with eight participants to identify design principles and techniques for explanations that help trust calibration. In conclusion, we provide five design principles: design for engagement, challenging habitual actions, attention guidance, friction, and supporting training and learning. Our findings are meant to pave the way towards a more integrated framework for designing explanations with trust calibration as a primary goal.
CrowdMed-II: a blockchain-based framework for efficient consent management in health data sharing
Hu C, Li C, Zhang G, Lei Z, Shah M, Zhang Y, Xing C, Jiang J and Bao R
The healthcare industry faces serious problems with health data. Firstly, health data is fragmented and its quality needs to be improved: fragmentation makes it difficult to integrate the patient data stored by multiple health service providers, and the quality of these heterogeneous data also needs to be improved for better utilization. Secondly, data sharing among patients, healthcare service providers and medical researchers is inadequate. Thirdly, while sharing health data, patients' right to privacy must be protected, and patients should have authority over who can access their data. In traditional health data sharing systems, centralized management means data can easily be stolen or manipulated, and these systems also ignore patients' authority and privacy. Researchers have proposed blockchain-based health data sharing solutions in which blockchain is used for consensus management, enabling multiple parties who do not fully trust each other to exchange data. However, the smart contracts supporting these solutions have not been studied in detail. We propose CrowdMed-II, a blockchain-based health data management framework that addresses the above-mentioned problems. We study the design of the major smart contracts in our framework, propose two smart contract structures, and introduce a novel search contract for locating patients in the framework. We evaluate their efficiency based on execution costs on Ethereum. Our design improves on those previously proposed, lowering the computational costs of the framework, which allows it to operate at scale and makes widespread adoption more feasible.
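The invariant such consent contracts enforce can be modeled in a few lines of plain Python; this is a conceptual stand-in for the paper's Solidity contract structures, not their actual design:

```python
class ConsentRegistry:
    """Plain-Python model of the consent logic a CrowdMed-style smart
    contract would enforce on-chain; Ethereum/Solidity specifics omitted."""
    def __init__(self):
        self.grants = {}                       # patient -> set of authorized parties

    def grant(self, patient, grantee):
        self.grants.setdefault(patient, set()).add(grantee)

    def revoke(self, patient, grantee):
        self.grants.get(patient, set()).discard(grantee)

    def can_access(self, patient, requester):
        return requester in self.grants.get(patient, set())

reg = ConsentRegistry()
reg.grant("patient-42", "hospital-A")
assert reg.can_access("patient-42", "hospital-A")
assert not reg.can_access("patient-42", "researcher-B")
```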
Improving medical experts' efficiency of misinformation detection: an exploratory study
Nabożny A, Balcerzak B, Morzy M, Wierzbicki A, Savov P and Warpechowski K
Fighting medical disinformation in the era of the pandemic is an increasingly important problem. Today, automatic systems for assessing the credibility of medical information do not offer sufficient precision, so human supervision and the involvement of medical expert annotators are required. Our work aims to optimize the use of medical experts' time and to equip them with tools for semi-automatic initial verification of the credibility of annotated content. We introduce a general framework for filtering out medical statements that do not require manual evaluation by medical experts, thus focusing annotation efforts on non-credible medical statements. Our framework is based on constructing filtering classifiers adapted to narrow thematic categories. This allows medical experts to fact-check and identify more than twice as many non-credible medical statements in a given time interval, without any changes to the annotation flow. We verify our results across a broad spectrum of medical topic areas. We perform quantitative as well as exploratory analysis of our output data, and we point out how these filtering classifiers can be modified to provide experts with different types of feedback without any loss of performance.
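A filtering classifier for one narrow thematic category could be as simple as a TF-IDF plus logistic regression pipeline; the feature set and model below are our assumptions, since the abstract only specifies that classifiers are adapted per category:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def build_topic_filter(statements, labels):
    """One filtering classifier for a narrow thematic category: flags
    statements likely credible so experts can focus on the rest."""
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                        LogisticRegression(max_iter=1000))
    clf.fit(statements, labels)       # labels: 1 = credible, 0 = non-credible
    return clf

# In the framework's spirit, one classifier per topic area, e.g.
# {"vaccines": build_topic_filter(...), "diets": build_topic_filter(...), ...}
```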
Auxiliary signal-guided knowledge encoder-decoder for medical report generation
Li M, Liu R, Wang F, Chang X and Liang X
Medical reports have significant clinical value to radiologists and specialists, especially during a pandemic like COVID-19. However, beyond the common difficulties faced in natural image captioning, medical report generation specifically requires the model to describe a medical image with a fine-grained and semantically coherent paragraph that satisfies both medical common sense and logic. Previous works generally extract global image features and attempt to generate a paragraph similar to the referenced reports; however, this approach has two limitations. Firstly, the regions of primary interest to radiologists are usually located in a small area of the image, meaning that the remaining parts could be considered irrelevant noise in the training procedure. Secondly, many similar sentences are used in each medical report to describe the normal regions of the image, which causes serious data bias; this deviation is likely to teach models to generate these inessential sentences on a regular basis. To address these problems, we propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) that mimics radiologists' working patterns. Specifically, auxiliary patches are explored to expand the widely used visual patch features before they are fed to the Transformer encoder, while external linguistic signals help the decoder better master prior knowledge during pre-training. Our approach performs well on common benchmarks, including CX-CHR, IU X-Ray, and the COVID-19 CT Report dataset (COV-CTR), demonstrating that combining auxiliary signals with a Transformer architecture can bring significant improvements to medical report generation. The experimental results confirm that auxiliary-signal-driven Transformer-based models are able to outperform previous approaches on both medical terminology classification and paragraph generation metrics.
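The "expand the visual patch features before the encoder" idea reduces, in its simplest form, to concatenating auxiliary patch tokens onto the standard token sequence; the dimensions and the plain PyTorch encoder below are illustrative, not ASGK's configuration:

```python
import torch
import torch.nn as nn

d_model = 256
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=4)

global_patches = torch.randn(1, 49, d_model)   # standard visual patch features
aux_patches = torch.randn(1, 8, d_model)       # auxiliary region-of-interest patches

# Widen the token sequence with auxiliary patches before the encoder sees it.
tokens = torch.cat([global_patches, aux_patches], dim=1)
memory = encoder(tokens)                        # (1, 57, d_model)
```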
Sentiment analysis and topic modeling for COVID-19 vaccine discussions
Yin H, Song X, Yang S and Li J
The outbreak of the novel coronavirus disease (COVID-19) has been ongoing for almost two years and has had an unprecedented impact on the daily lives of people around the world. More recently, the emergence of the Delta variant of COVID-19 has once again put the world at risk. Fortunately, many countries and companies have developed vaccines for the coronavirus; as of 23 August 2021, more than 20 vaccines had been approved by the World Health Organization (WHO), bringing light to people besieged by the pandemic. The global rollout of COVID-19 vaccines has sparked much discussion on social media platforms, for example about the effectiveness and safety of the vaccines. However, there has not been much systematic analysis of public opinion on the COVID-19 vaccine. In this study, we conduct an in-depth analysis of the discussions related to the COVID-19 vaccine on Twitter. We analyze the hot topics discussed and the corresponding emotional polarity from the perspectives of countries and vaccine brands. The results show that most people trust the effectiveness of vaccines and are willing to get vaccinated. In contrast, negative tweets tend to be associated with news reports of post-vaccination deaths, vaccine shortages, and post-injection side effects. Overall, this study uses popular Natural Language Processing (NLP) technologies to mine people's opinions on the COVID-19 vaccine on social media and to analyze and visualize them objectively. Our findings can improve the readability of the confusing information on social media platforms and provide effective data support for the government and policy makers.
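A typical toolchain for this kind of study pairs a topic model with a lexicon-based sentiment scorer. The abstract does not name its exact tools, so LDA (gensim) plus VADER below is one standard combination, shown on toy tokenized tweets:

```python
from gensim import corpora
from gensim.models import LdaModel
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

tweets = [["pfizer", "dose", "arm", "sore"],
          ["vaccine", "shortage", "appointment", "cancelled"]]

# Topic modeling: discover what people are discussing
dictionary = corpora.Dictionary(tweets)
corpus = [dictionary.doc2bow(t) for t in tweets]
lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)

# Sentiment: score each raw tweet's polarity
sia = SentimentIntensityAnalyzer()
score = sia.polarity_scores("Got my second dose, feeling great!")["compound"]
```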
Explainable depression detection with multi-aspect features using a hybrid deep learning model on social media
Zogan H, Razzak I, Wang X, Jameel S and Xu G
The ability to explain why a model produced its results is an important problem, especially in the medical domain: model explainability is important for building trust by providing insight into the model prediction. However, most existing machine learning methods provide no explainability, which is worrying. For instance, in automatic depression prediction, most machine learning models yield predictions that are obscure to humans. In this work, we propose an explainable multi-aspect depression detection model with a hierarchical attention network for the automatic detection of depressed users on social media, together with explanations of its predictions. We consider user posts augmented with additional features from Twitter. Specifically, we encode user posts using two levels of attention mechanisms, applied at the tweet level and the word level, calculate the importance of each tweet and word, and capture semantic sequence features from the user timelines (posts). Our hierarchical attention model is developed in such a way that it can capture patterns that lead to explainable results. Our experiments show that the model outperforms several popular and robust baseline methods, demonstrating the effectiveness of combining deep learning with multi-aspect features. We also show that our model helps improve predictive performance when detecting depression in users who post messages publicly on social media, achieving excellent performance while providing adequate evidence to explain its predictions.
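The reusable piece of such a hierarchical attention network is additive attention pooling, applied once over words within a tweet and again over tweets within a timeline; the sketch below shows one level, with sizes chosen arbitrarily:

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Additive attention pooling used at both levels of a hierarchical
    attention network: word -> tweet vectors, tweet -> user vector.
    The weights alpha are what make the prediction inspectable."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.ctx = nn.Linear(dim, 1, bias=False)

    def forward(self, h):                       # h: (batch, seq, dim)
        alpha = torch.softmax(self.ctx(torch.tanh(self.proj(h))), dim=1)
        return (alpha * h).sum(dim=1), alpha    # pooled vector + importances

words = torch.randn(32, 20, 64)                 # a tweet as 20 word vectors
tweet_vec, word_attn = AttentionPool(64)(words)
```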
A generalized multi-skill aggregation method for cognitive diagnosis
Zhang S, Huang S, Yu X, Chen E, Wang F and Huang Z
Online education brings more possibilities for personalized learning, in which identifying the cognitive state of learners is conducive to better providing learning services. Cognitive diagnosis is an effective measurement for assessing the cognitive state of students through their response data on problems (e.g., right or wrong). Generally, a cognitive diagnosis framework models the mastery of the skills required by a specified problem and the aggregation of those skills. Current multi-skill aggregation methods are mainly divided into conjunctive and compensatory methods and generally assume that each skill has the same effect on the correct response. However, in practical learning situations there may be more complex interactions between skills, in which each skill carries a different weight in the final result. To this end, this paper proposes a generalized multi-skill aggregation method based on the Sugeno integral (SI-GAM) and introduces fuzzy measures to characterize the complex interactions between skills. We also provide a new idea for modeling multi-strategy problems. The cognitive diagnosis process is implemented by a more general and interpretable aggregation method. Finally, the feasibility and effectiveness of the model are verified on synthetic and real-world datasets.
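For skill-mastery scores f(x_i) in [0,1] and a fuzzy measure mu, the Sugeno integral is SI = max_i min(f(x_(i)), mu({x_(i), ..., x_(n)})) with f sorted ascending. A direct implementation follows; the fuzzy measure below is a toy, not the one SI-GAM learns:

```python
import numpy as np

def sugeno_integral(mastery, mu):
    """Sugeno integral of skill-mastery scores w.r.t. a fuzzy measure.
    mastery: f(x_i) in [0,1]; mu must be monotone with mu(all skills)=1."""
    order = np.argsort(mastery)                 # sort skills by mastery, ascending
    best = 0.0
    for k in range(len(order)):
        tail = frozenset(order[k:].tolist())    # skills with mastery >= f(x_(k))
        best = max(best, min(mastery[order[k]], mu(tail)))
    return best

# Toy fuzzy measure: capped sum of per-skill weights (could encode interactions)
weights = {0: 0.5, 1: 0.3, 2: 0.4}
mu = lambda s: min(1.0, sum(weights[i] for i in s))
print(sugeno_integral(np.array([0.9, 0.2, 0.7]), mu))   # -> 0.7
```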
A lightweight automatic sleep staging method for children using single-channel EEG based on edge artificial intelligence
Zhu L, Wang C, He Z and Zhang Y
With the development of telemedicine and edge computing, edge artificial intelligence (AI) is becoming a new trend in smart medicine. Meanwhile, nearly one-third of children suffer from sleep disorders, yet existing sleep staging methods are designed for adults. Therefore, we adopted edge AI to develop a lightweight automatic sleep staging method for children using single-channel EEG. The trained sleep staging model is deployed to edge smart devices so that staging can be performed on the edge, which greatly saves network resources and improves the performance and privacy of the sleep staging application. The results and hypnogram are then uploaded to the cloud server for further analysis by physicians, who provide sleep disease diagnosis reports and treatment opinions. We utilized 1D convolutional neural networks (1D-CNN) and long short-term memory (LSTM) to build our sleep staging model, named CSleepNet. We tested the model on our children's sleep (CS) dataset and on the Sleep-EDFX dataset. For the CS dataset, we experimented with F4-M1 channel EEG using four different loss functions, and the log-cosh loss performed best, with an overall accuracy of 83.06% and an F1-score of 76.50%. We used Fpz-Cz and Pz-Oz channel EEG to train our model on the Sleep-EDFX dataset and achieved an accuracy of 86.41% without manual feature extraction. The experimental results show that our method has great potential: it not only plays an important role in sleep-related research, but can also be widely used for classifying other time-series physiological signals.
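A minimal 1D-CNN + LSTM stager in the spirit of CSleepNet, with the log-cosh loss the paper reports as best; all layer sizes here are our guesses, not the published architecture:

```python
import torch
import torch.nn as nn

class TinySleepStager(nn.Module):
    """Lightweight 1D-CNN feature extractor followed by an LSTM over the
    resulting feature sequence; classifies one EEG epoch into a stage."""
    def __init__(self, n_stages=5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=50, stride=6), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(16, 32, kernel_size=8), nn.ReLU(),
            nn.MaxPool1d(4))
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, n_stages)

    def forward(self, x):                   # x: (batch, 1, samples) raw EEG epoch
        z = self.cnn(x).transpose(1, 2)     # -> (batch, time, channels)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])        # classify from the last time step

def logcosh_loss(pred, target):
    # The log-cosh loss the paper found best among the four it tried;
    # here target would be, e.g., one-hot stage labels.
    return torch.mean(torch.log(torch.cosh(pred - target)))

logits = TinySleepStager()(torch.randn(4, 1, 3000))   # e.g. 30 s epochs at 100 Hz
```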
Multi-task hourglass network for online automatic diagnosis of developmental dysplasia of the hip
Xu J, Xie H, Tan Q, Wu H, Liu C, Zhang S, Mao Z and Zhang Y
Developmental dysplasia of the hip (DDH) is one of the most common diseases in children. Because the necessary medical image analysis requires considerable experience, online automatic diagnosis of DDH has intrigued researchers. Traditional implementations of online diagnosis face challenges in reliability and interpretability. In this paper, we establish an online diagnosis tool based on a multi-task hourglass network that can accurately extract landmarks to detect the extent of hip dislocation and predict the age of the femoral head. Our method trains an encoder-decoder network to regress the landmarks and predict the developmental age for online DDH diagnosis. With the support of precise image analysis and fast GPU computing, our method can help overcome the shortage of medical resources and enable telehealth for DDH diagnosis. Applying this approach to a dataset of DDH X-ray images, we achieve a mean landmark-detection error of 4.64 pixels relative to the annotations of human experts, and we improve the accuracy of femoral-head age prediction to 89%. Our online automatic diagnosis system has provided service to 112 patients, and the results demonstrate the effectiveness of our method.
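The multi-task structure, one shared encoder-decoder with a landmark-heatmap head and an age-regression head, can be sketched as follows; a real stacked hourglass adds skip connections and repeated blocks, and every size here is illustrative:

```python
import torch
import torch.nn as nn

class MultiTaskHourglass(nn.Module):
    """Sketch of the multi-task idea: one shared encoder-decoder, with a
    heatmap head for landmarks and a regression head for bone age."""
    def __init__(self, n_landmarks=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU())
        self.heatmaps = nn.Conv2d(32, n_landmarks, 1)   # one heatmap per landmark
        self.age_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(64, 1))  # femoral-head age

    def forward(self, x):
        z = self.encoder(x)
        return self.heatmaps(self.decoder(z)), self.age_head(z)

maps, age = MultiTaskHourglass()(torch.randn(1, 1, 256, 256))
```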
A customisable pipeline for the semi-automated discovery of online activists and social campaigns on Twitter
Primo F, Romanovsky A, de Mello R, Garcia A and Missier P
Substantial research is available on detecting influencers on social media platforms. In contrast, comparatively few studies exist on the role of activists, defined informally as users who actively participate in socially minded online campaigns. Automatically discovering activists who could be approached by organisations that promote social campaigns is important but not easy, as they are typically active only locally and, unlike influencers, are not central to large social media networks. We hypothesize that such users can be found on Twitter within temporally and spatially localised contexts, defined as small but topical fragments of the network containing interactions about social events or campaigns with a significant online footprint. To explore this hypothesis, we have designed an iterative discovery pipeline consisting of two alternating phases: user discovery and context discovery. Multiple iterations of the pipeline result in a growing dataset of activist user profiles as well as a growing set of online social contexts. This mode of exploration differs significantly from prior techniques that focus on influencers, and presents unique challenges because of the weak online signal available for detecting activists. The paper describes the design and implementation of the pipeline as a customisable software framework in which user-defined operational definitions of online activism can be explored. We present an empirical evaluation on two extensive case studies, one concerning healthcare-related campaigns in the UK during 2018, the other related to online activism in Italy during the COVID-19 pandemic.
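The alternating two-phase loop can be written down directly; the two *_of functions below are placeholders for the framework's user-defined operational definitions (in practice, Twitter queries):

```python
# Stand-ins for the framework's user-defined operational definitions.
def active_participants_of(context):
    return {f"user-in-{context}"}        # placeholder: query Twitter here

def campaign_contexts_of(user):
    return {f"context-of-{user}"}        # placeholder: hashtags/events the user joins

def discover(seed_contexts, iterations=3):
    """Skeleton of the pipeline's alternating phases: contexts yield
    candidate activists, and activists yield new contexts."""
    users, contexts = set(), set(seed_contexts)
    for _ in range(iterations):
        # Phase 1: user discovery inside the current social contexts
        users |= {u for c in contexts for u in active_participants_of(c)}
        # Phase 2: context discovery from the discovered users' activity
        contexts |= {c for u in users for c in campaign_contexts_of(u)}
    return users, contexts

users, contexts = discover({"#NHS70"})
```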
Clustering-enhanced stock price prediction using deep learning
Li M, Zhu Y, Shen Y and Angelova M
In recent years, artificial intelligence technologies have been successfully applied to time series prediction and analytics. At the same time, much attention has been paid to financial time series prediction, which targets the development of novel deep learning models or the optimization of forecasting results. To improve the accuracy of stock price prediction, in this paper we propose a clustering-enhanced deep learning framework that predicts stock prices with three mature deep learning forecasting models: Long Short-Term Memory (LSTM), Recurrent Neural Network (RNN) and Gated Recurrent Unit (GRU). The proposed framework treats clustering as a forecasting pre-processing step that can improve the quality of the training models. To achieve effective clustering, we propose a new similarity measure, called Logistic Weighted Dynamic Time Warping (LWDTW), which extends Weighted Dynamic Time Warping (WDTW) to capture the relative importance of return observations when calculating distance matrices. In particular, based on the empirical distributions of stock returns, the cost weight function of WDTW is modified with the logistic probability density function. We then implement the clustering-based forecasting framework with the above three deep learning models. Finally, extensive experiments on daily US stock price datasets show that our framework achieves excellent forecasting performance, with the combination of LWDTW clustering and the LSTM model giving the overall best results across 5 different evaluation metrics.
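Weighted DTW penalizes alignments far from the diagonal with a logistic weight; a direct dynamic-programming implementation is below. LWDTW's additional reweighting by the logistic pdf of returns is noted in the abstract but not reproduced here:

```python
import numpy as np

def wdtw(x, y, g=0.05, w_max=1.0):
    """Weighted DTW: the cost of aligning x[i] with y[j] is scaled by a
    logistic weight that grows with the phase difference |i - j|."""
    n, m = len(x), len(y)
    mc = max(n, m) / 2                    # midpoint of the logistic weight curve
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            w = w_max / (1.0 + np.exp(-g * (abs(i - j) - mc)))
            cost = w * (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

d = wdtw(np.random.randn(50), np.random.randn(60))   # toy return series
```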