ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 September 2020, Volume 57 Issue 9
Multi-Modality Fusion Perception and Computing in Autonomous Driving
Zhang Yanyong, Zhang Sha, Zhang Yu, Ji Jianmin, Duan Yifan, Huang Yitong, Peng Jie, Zhang Yuxiang
2020, 57(9):  1781-1799.  doi:10.7544/issn1000-1239.2020.20200255
Abstract ( 2091 )   HTML ( 56 )   PDF (7063KB) ( 894 )
The goal of autonomous driving is to provide a safe, comfortable, and efficient driving environment for people. Widespread deployment of autonomous driving systems requires processing the sensory data from multiple streams in a timely and accurate fashion. The challenges that arise are thus two-fold: leveraging the multiple sensors available on autonomous vehicles to boost perception accuracy, and jointly optimizing the perception models and the underlying computing models to meet real-time requirements. To address these challenges, this paper surveys the latest research on sensing and edge computing for autonomous driving and presents our own autonomous driving system, Sonic. Specifically, we propose a multi-modality perception model, ImageFusion, which combines lidar and camera data for 3D object detection, and a computational optimization framework, MPInfer.
Edge Computing in Smart Homes
Huang Qianyi, Li Zhiyang, Xie Wentao, Zhang Qian
2020, 57(9):  1800-1809.  doi:10.7544/issn1000-1239.2020.20200253
Abstract ( 2281 )   HTML ( 99 )   PDF (2403KB) ( 1168 )
In recent years, smart speakers and robotic vacuum cleaners have played important roles in many people's daily lives. With further technological development, more and more intelligent devices will become part of the home infrastructure, making life more convenient and comfortable for residents. When different types of specialized intelligent devices are connected and operated over the Internet, how to minimize network latency and guarantee data privacy are open issues. To solve these problems, edge computing in smart homes is the future trend. In this article, we present our research along this direction, covering edge sensing, communication, and computation. For sensing, we focus on the pervasive sensing capability of the edge node and present our work on contactless breath monitoring; for communication, we work on the joint design of sensing and communication, so that sensing and communication systems can work harmoniously on limited spectrum resources; for computation, we devote our efforts to personalized machine learning at the edge, building a personalized model for each individual while guaranteeing their data privacy.
CATS: Cost Aware Task Scheduling in Multi-Tier Computing Networks
Liu Zening, Li Kai, Wu Liantao, Wang Zhi, Yang Yang
2020, 57(9):  1810-1822.  doi:10.7544/issn1000-1239.2020.20200198
Abstract ( 1645 )   HTML ( 29 )   PDF (2103KB) ( 892 )
With more data and increasingly powerful computing and algorithms, IoT (Internet of things) applications are becoming more intelligent, shifting from simple data sensing, collection, and representation tasks towards complex information extraction and analysis. This continuing trend requires multi-tier computing resources and networks. Multi-tier computing networks involve collaboration among cloud computing, fog computing, edge computing, and sea computing technologies, which have been developed for regional, local, and device levels, respectively. However, due to the different features of these computing technologies and the diverse requirements of tasks, how to effectively schedule tasks is a key challenge in multi-tier computing networks. How to motivate multi-tier computing resources is another key problem, and a premise for the formation of such networks. To address these challenges, in this paper we propose a multi-tier computing network and a computation offloading system with hybrid cloud and fog, define a weighted cost function consisting of delay, energy, and payment, and formulate a cost aware task scheduling (CATS) problem. Furthermore, we propose a computation-load-based payment model to motivate cloud and fog, and include the payment-related cost in the overall cost. Specifically, based on the different features and requirements of cloud and fog, we propose a static payment model for cloud and a dynamic payment model for fog, which together constitute the hybrid payment model. To solve the CATS problem, we propose a potential-game-based analytic framework and develop a distributed task scheduling algorithm called the CATS algorithm. Numerical simulation results show that the CATS algorithm offers near-optimal performance in system average cost and achieves a larger number of beneficial UEs (user equipment) than the centralized optimal method. Moreover, the results show that the dynamic payment model may help fog obtain more income than the static payment model.
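The weighted cost function and min-cost tier selection described in the abstract can be sketched in a few lines; the weights, tier values, and the greedy selection rule below are invented for illustration and are not the paper's actual formulation:

```python
# Toy sketch of cost-aware task scheduling: each tier offers a (delay, energy,
# payment) triple, and the scheduler picks the tier minimizing a weighted sum.
# Weights and per-tier numbers are hypothetical.

def weighted_cost(delay, energy, payment, w_delay=0.5, w_energy=0.3, w_pay=0.2):
    """Combine delay, energy, and payment into one scalar cost."""
    return w_delay * delay + w_energy * energy + w_pay * payment

def schedule(tiers):
    """Pick the tier (local / fog / cloud) with the lowest weighted cost."""
    return min(tiers, key=lambda t: weighted_cost(t["delay"], t["energy"], t["payment"]))

tiers = [
    {"name": "local", "delay": 0.8, "energy": 1.0, "payment": 0.0},
    {"name": "fog",   "delay": 0.3, "energy": 0.2, "payment": 0.4},
    {"name": "cloud", "delay": 0.6, "energy": 0.1, "payment": 0.7},
]
best = schedule(tiers)
```

With these made-up numbers the fog tier minimizes the weighted cost, matching the intuition that fog trades a modest payment for low delay and energy.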
Dynamic Task Offloading for Mobile Edge Computing with Green Energy
Ma Huirong, Chen Xu, Zhou Zhi, Yu Shuai
2020, 57(9):  1823-1838.  doi:10.7544/issn1000-1239.2020.20200184
Abstract ( 1381 )   HTML ( 30 )   PDF (2693KB) ( 903 )
Mobile edge computing (MEC) has recently emerged to fulfill the computation demands of richer applications and to provide a better experience for resource-hungry Internet-of-Things (IoT) devices at the edge of mobile networks. Edge infrastructures, however, are less capable of improving power usage efficiency (PUE) and integrating renewable energy. Besides, due to the limited battery capacities of IoT devices, task execution is interrupted when the battery runs out, so it is crucial to use green energy to prolong battery lifetime. Moreover, IoT devices can share computation and communication resources dynamically and beneficially with each other. We therefore develop an efficient task offloading strategy to improve the PUE of the edge server and achieve green computing, and propose a green task offloading framework that leverages energy harvesting (EH) and device-to-device (D2D) communication. The framework aims to minimize the long-term grid power consumption of the edge server and the cloud resource rental costs for task executions of all EH IoT devices. Meanwhile, incentive constraints that prevent over-exploiting behaviors must be considered, since such behaviors harm devices' motivation for collaboration. To address uncertain future system information, such as the availability of renewable energy, we resort to the Lyapunov optimization technique and propose an online task offloading algorithm whose decisions depend only on the current system state. The algorithm only requires solving a deterministic problem in each time slot: the core idea is to transform the task offloading problem of each slot into a graph matching problem and obtain an approximately optimal solution by calling Edmonds's Blossom algorithm. Rigorous theoretical analysis and extensive evaluations demonstrate the superior performance of the proposed scheme.
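The per-slot matching step mentioned above can be illustrated with a brute-force stand-in: the paper calls Edmonds's Blossom algorithm for maximum-weight matching, whereas the exhaustive search and the edge weights below are only an illustration on a toy device-pairing graph:

```python
from itertools import combinations

# Stand-in for the per-slot D2D pairing step: find the maximum-weight matching
# on a tiny graph by exhaustive search (the paper uses Edmonds's Blossom
# algorithm; weights here are hypothetical "offloading gains").

def max_weight_matching(edges):
    """edges: {(u, v): weight}. Return (matching, total_weight) of the best matching."""
    best, best_w = [], 0.0
    edge_list = list(edges.items())
    for k in range(1, len(edge_list) + 1):
        for subset in combinations(edge_list, k):
            nodes = [n for (u, v), _ in subset for n in (u, v)]
            if len(nodes) == len(set(nodes)):          # no device matched twice
                w = sum(w for _, w in subset)
                if w > best_w:
                    best, best_w = [e for e, _ in subset], w
    return best, best_w

edges = {("d1", "d2"): 3.0, ("d2", "d3"): 2.0, ("d1", "d3"): 1.0, ("d3", "d4"): 2.5}
matching, gain = max_weight_matching(edges)
```

On this toy graph the best pairing is d1-d2 plus d3-d4; a real implementation would use a polynomial-time matching routine rather than exhaustive search.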
TensorFlow Lite: On-Device Machine Learning Framework
Li Shuangfeng
2020, 57(9):  1839-1853.  doi:10.7544/issn1000-1239.2020.20200291
Abstract ( 3155 )   HTML ( 141 )   PDF (1882KB) ( 3356 )
TensorFlow Lite (TFLite) is a lightweight, fast, cross-platform open source machine learning framework specifically designed for mobile and IoT. It is part of TensorFlow and supports multiple platforms such as Android, iOS, embedded Linux, and MCUs. It greatly reduces the barrier for developers, accelerates the development of on-device machine learning (ODML), and makes ML run everywhere. This article introduces the trends, challenges, and typical applications of ODML; the origin and system architecture of TFLite; best practices and tool chains suitable for ML beginners; and the roadmap of TFLite.
Robot 4.0: Continual Learning and Spatial-Temporal Intelligence Through Edge
Wang Zhigang, Wang Haitao, She Qi, Shi Xuesong, Zhang Yimin
2020, 57(9):  1854-1863.  doi:10.7544/issn1000-1239.2020.20200254
Abstract ( 1296 )   HTML ( 38 )   PDF (2293KB) ( 717 )
With the expansion of the global robot market, robotics is moving from the robot 3.0 era to the robot 4.0 era. In the robot 4.0 era, robots should not only have the capabilities of perception and collaboration, but also understand the environment and make decisions by themselves, just like human beings, so that they can provide services to people autonomously. Despite many breakthroughs in deep learning, making robots understand the environment and make decisions like human beings remains a very challenging goal. This paper explores three key technologies expected to address these problems: continual learning, spatial-temporal intelligence, and edge computing. Continual learning enables robots to quickly transfer knowledge from old tasks to new tasks without catastrophic forgetting; spatial-temporal intelligence enables robots to establish a bottom-up knowledge representation of the environment and to share knowledge and solve problems at different levels. Through edge computing, robots can obtain more cost-effective computation resources and easily integrate a variety of intelligence and knowledge, which is very useful for large-scale deployment. These technologies are still on the rise, and this paper offers a preliminary analysis.
Internet Data Transfer Protocol QUIC: A Survey
Li Xuebing, Chen Yang, Zhou Mengying, Wang Xin
2020, 57(9):  1864-1876.  doi:10.7544/issn1000-1239.2020.20190693
Abstract ( 2205 )   HTML ( 91 )   PDF (929KB) ( 1353 )
QUIC is an Internet data transfer protocol proposed by Google as an alternative to TCP (transmission control protocol). Compared with TCP, QUIC introduces many new features that, in theory, let it outperform TCP in many respects. For example, it supports multiplexing to solve the problem of head-of-line blocking, introduces 0-RTT handshakes to reduce handshake latency, and supports connection migration to be mobility-friendly. However, QUIC's real-world performance may not be as good as expected, because network environments and network devices are diverse and the protocol's security is challenged by potential attackers. Therefore, evaluating QUIC's impact on existing network services is quite important. This paper presents a comprehensive survey of QUIC. We first introduce the development history and main characteristics of QUIC. Second, taking the two most widely used application scenarios, Web browsing and video streaming, as examples, we summarize domestic and international research on the data transmission performance of QUIC under different network environments. Third, we enumerate existing QUIC-enhancement work from the aspects of protocol design and system design. Fourth, we summarize existing work on the security analysis of QUIC, enumerating the security issues currently recognized by the academic community as well as researchers' efforts to address them. Lastly, we propose several potential improvements on existing research outcomes and look forward to new research topics and challenges brought by QUIC.
Multi-Source Remote Sensing Based Accurate Landslide Detection Leveraging Spatial-Temporal-Spectral Feature Fusion
Chen Shanjing, Xiang Chaocan, Kang Qing, Wu Tao, Liu Kai, Feng Liang, Deng Tao
2020, 57(9):  1877-1887.  doi:10.7544/issn1000-1239.2020.20190582
Abstract ( 1036 )   HTML ( 19 )   PDF (6438KB) ( 367 )
Accurate landslide detection is extremely important in emergency rescue. Current landslide remote sensing detection fuses and utilizes spatial, temporal, and spectral features poorly in the target detection model, and its recognition accuracy is unsatisfactory. To address these problems, in this paper we propose an accurate landslide detection method based on multi-source remote sensing images that leverages the fusion of spatial, temporal, and spectral features. Specifically, we construct a new multi-band remote sensing image dataset by registering the spectral and scale spaces of remote sensing images taken before and after the landslide. Moreover, we combine the features of temporal variation, spectrum, and spatial shape into a spectral representational model, and then use the support vector machine (SVM) algorithm to identify landslide objects based on this new image dataset and representational model. Furthermore, we extract typical shape features, such as axial aspect ratio, area, and invariant moments, to build fundamental shape models of landslides and further classify the landslide objects. Finally, we conduct extensive experiments to evaluate the proposed method against baseline methods. The experimental results show that our method outperforms the baseline algorithms, achieving up to 95% accuracy in landslide detection.
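The fusion step, combining temporal-change, spectral, and spatial-shape features into one vector before classification, can be sketched as below; the feature names and region values are hypothetical, and the paper feeds the fused representation to an SVM rather than using it directly:

```python
# Illustrative spatial-temporal-spectral feature fusion for one candidate
# region. Feature names and values are made up; the fused vector would be the
# input to a classifier such as an SVM.

def fuse_features(region):
    temporal = [region["ndvi_after"] - region["ndvi_before"]]   # vegetation change before/after
    spectral = region["band_means"]                             # per-band mean reflectance
    spatial  = [region["aspect_ratio"], region["area"]]         # shape descriptors
    return temporal + spectral + spatial

region = {"ndvi_before": 0.62, "ndvi_after": 0.18,
          "band_means": [0.31, 0.45, 0.52], "aspect_ratio": 2.4, "area": 1800.0}
vec = fuse_features(region)
```

A sharp NDVI drop (here 0.62 to 0.18) is the kind of temporal signal that, fused with spectral and shape evidence, distinguishes fresh landslides from bare ground.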
Non-Time-Switching Full-Duplex Relay System with SWIPT and Self-Energy Recycling
Zhou Yening, Li Taoshen, Wang Zhe, Xiao Nan
2020, 57(9):  1888-1897.  doi:10.7544/issn1000-1239.2020.20190590
Abstract ( 720 )   HTML ( 9 )   PDF (2131KB) ( 184 )
Adopting simultaneous wireless information and power transfer (SWIPT) technology to transmit information and energy via radio frequency (RF) signals, we propose a non-time-switching full-duplex relay system with SWIPT and self-energy recycling for RF networks. In this system, multiple idle wireless devices with surplus energy serve as additional energy access points (EAPs), the energy-limited relay adopts the power splitting (PS) scheme, and information transmission, energy harvesting, and cooperative transmission are completed simultaneously within one slot. Taking system throughput maximization as the optimization objective, we transform the original multivariate non-convex problem into a semi-definite programming problem via quadratic optimization, semi-definite relaxation (SDR), and variable reduction, and solve it with the Lagrange method. System performance is improved by jointly optimizing the relay transmit power, the relay transmit beamforming vector, and the power splitting ratio. Simulation experiments show that the throughput of the proposed system under the decode-and-forward (DF) protocol is better than that under the amplify-and-forward (AF) protocol, and that when the energy harvested by the source node is limited, stable and efficient operation can be promoted by increasing the number of EAPs to increase the energy obtained by the system. Experimental results also verify that, compared with half-duplex SWIPT and full-duplex non-SWIPT relay systems, the proposed system achieves significant gains in system performance.
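To give a rough feel for how the power-splitting ratio trades off the two hops of a DF relay, the toy model below grid-searches a single ratio rho; the channel constants, harvesting efficiency, and the min-rate model are invented and far simpler than the paper's joint SDR formulation:

```python
import math

# Toy single-variable view of power splitting: a fraction rho of the received
# signal power is harvested to power the relay, the rest carries information.
# All constants (SNRs, efficiency eta) are hypothetical.

def throughput(rho, snr1=10.0, snr2=8.0, eta=0.6):
    r1 = math.log2(1 + (1 - rho) * snr1)   # source -> relay information rate
    p_relay = eta * rho                    # power harvested from the rho branch
    r2 = math.log2(1 + p_relay * snr2)     # relay -> destination rate
    return min(r1, r2)                     # DF: end-to-end rate is the bottleneck hop

best_rho = max((i / 1000 for i in range(1, 1000)), key=throughput)
```

The optimum balances the two hops: harvesting too little starves the relay's transmission, harvesting too much starves the information branch.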
An Adaptive Repair Algorithm for AODV Routing Based on Decision Region
Liu Si, Zhang Degan, Liu Xiaohuan, Zhang Ting, Wu Hao
2020, 57(9):  1898-1910.  doi:10.7544/issn1000-1239.2020.20190508
Abstract ( 707 )   HTML ( 15 )   PDF (4490KB) ( 237 )
The significant advantages of ad hoc on-demand distance vector (AODV) routing in control overhead, energy consumption, and bandwidth occupation make it widely used in mobile ad hoc networks (MANETs). However, in high-speed mobile environments such as emergency rescue and disaster relief, which also place high demands on delay, the self-repair of AODV routing suffers from long delays. To solve this problem and make the improved AODV routing protocol better suited to rescue and relief environments, an adaptive repair algorithm for AODV routing based on decision regions (AR-AODV) is proposed. Firstly, according to the characteristic of such networks that nodes are uniformly deployed, a search formula is proposed and its optimal solution is obtained. Then, the condition threshold for initiating the self-repair process is determined. Finally, to reduce the control cost, an algorithm for determining the optimization area is given. Simulation results show that the repair algorithm improves routing efficiency. Uniformly deployed mobile devices such as vehicles are taken as network nodes, and the adaptive repair algorithm is tested in an actual rescue and relief scene. The results are consistent with the simulations, and the overall performance is improved significantly.
HYBRID-D2D-MIMO (HDM) Network Architecture and Hybrid Data Distributing Strategy (HDDS)
Zhou Yuxuan, Yang Xu, Qin Chuanyi, Yang Zhiwei, Zhu Yifeng, Duan Jin
2020, 57(9):  1911-1927.  doi:10.7544/issn1000-1239.2020.20190530
Abstract ( 769 )   HTML ( 8 )   PDF (7871KB) ( 77 )
With the advent of 5G, data transmission has begun moving toward high data volumes and low delay, enabling more IoT technologies and applications. The new operating systems (OS) and IoT information service systems (ISS) that will carry massive numbers of future terminal devices have become a hot topic for researchers. As key technologies of such systems, the information service network (ISN) architecture and the data transferring strategy (DTS) determine system performance. Based on research on hybrid development technology and DSP heterogeneous information processing systems, this paper proposes a HYBRID-D2D-MIMO (HDM) network architecture and a hybrid data distributing strategy (HDDS). The ISN and the related distributing strategy can integrate into the Internet application ecosystem, simplify the operation of terminal device access to the network, and improve information transmission efficiency. The experimental results show that the data indicators perform well when HDM is combined with HDDS, and that the system has research and practical value.
Heterogeneous Information Networks Embedding Based on Multiple Meta-Graph Fusion
Wu Yao, Shen Derong, Kou Yue, Nie Tiezheng, Yu Ge
2020, 57(9):  1928-1938.  doi:10.7544/issn1000-1239.2020.20190553
Abstract ( 903 )   HTML ( 26 )   PDF (1701KB) ( 430 )
Network embedding methods based on meta-structures (such as meta-paths or meta-graphs) can effectively utilize heterogeneous network structures. Compared with meta-paths, meta-graphs can capture more complex structural information and help improve the accuracy of similar-node matching in heterogeneous information networks. However, existing meta-graph-based embedding methods typically have the following limitations: 1) most meta-graph types are specified by experts, which is not applicable in large complex networks; 2) although multiple meta-graphs are integrated for embedding, the weights of the meta-graphs are not considered; 3) some models use the user's expected semantic relationships to generate combinations of meta-graphs that preserve specific semantics, but such models over-rely on meta-pattern selection and the samples used for supervised learning, and thus lack versatility. This paper therefore proposes a heterogeneous network embedding method based on multiple meta-graph fusion. The method has two parts. The first is meta-graph discovery, whose purpose is to mine important meta-graphs representing the structural and semantic features of the current network. The second is node embedding based on multiple meta-graph fusion, whose main content is a general meta-graph-based similarity measure between nodes and a neural network that embeds the meta-graph features of nodes. Experimental results show that the proposed method achieves higher accuracy and efficiency than other network embedding methods.
Discovering Consistency Constraints for Associated Data on Heterogeneous Schemas
Du Yuefeng, Li Xiaoguang, Song Baoyan
2020, 57(9):  1939-1948.  doi:10.7544/issn1000-1239.2020.20190570
Abstract ( 931 )   HTML ( 15 )   PDF (2320KB) ( 270 )
Data consistency is a central issue of data quality management. With the capability of expressing data relationships abstractly and formally, constraints are a key technique for data consistency management. However, the diversity of heterogeneous schemas from multiple sources brings great challenges to data consistency management, especially for constraint fusion. Besides, data from both single sources and multiple sources are related, and these relationships can strengthen the semantic expressiveness of constraints, which helps probe potential data errors. In practice, CINDs (conditional inclusion dependencies) and CCFDs (content-related conditional functional dependencies) are two effective techniques for attribute matching under heterogeneous schemas and for consistency maintenance on content-related data, respectively. On this basis, we study how to discover consistency constraints for associated data on heterogeneous schemas. We first investigate the three fundamental problems related to CCFD discovery, and show that the implication, satisfiability, and validation problems are NP-complete, coNP-complete, and in PTIME, respectively. To search the whole CCFD space, we present a 2-level lattice according to the division between the conditional attribute set and the variable attribute set of CCFDs. We then propose an incremental method for discovering fused constraints over CINDs and CCFDs, which combines CCFDs on heterogeneous schemas via CINDs. Finally, experiments on two real-life datasets verify that our method is effective and scalable.
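The basic validation task behind such constraints can be sketched with a minimal conditional-functional-dependency check; the attribute names and the sample constraint are invented, and the paper's CCFDs additionally relate the content of several tuples and relations:

```python
# Minimal conditional functional dependency check on a small relation:
# among rows matching `condition`, equal LHS values must imply equal RHS values.
# Attribute names and the example constraint are hypothetical.

def violates_cfd(rows, lhs, rhs, condition):
    seen = {}
    for row in rows:
        if all(row.get(k) == v for k, v in condition.items()):
            key = tuple(row[a] for a in lhs)
            val = tuple(row[a] for a in rhs)
            if seen.setdefault(key, val) != val:
                return True                      # same key, different value: violation
    return False

rows = [
    {"country": "CN", "zip": "100080", "city": "Beijing"},
    {"country": "CN", "zip": "100080", "city": "Shanghai"},   # conflicting city
]
bad = violates_cfd(rows, lhs=["zip"], rhs=["city"], condition={"country": "CN"})
```

Here the dependency "within country CN, zip determines city" is violated by the second tuple, which is exactly the kind of inconsistency constraint discovery aims to surface.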
Efficient Methods for Label-Constraint Reachability Query
Du Ming, Yang Yun, Zhou Junfeng, Chen Ziyang, Yang Anping
2020, 57(9):  1949-1960.  doi:10.7544/issn1000-1239.2020.20190569
Abstract ( 784 )   HTML ( 7 )   PDF (1604KB) ( 137 )
The label-constraint reachability query s→_L t asks whether there is a directed path from s to t in a given graph such that every edge label on the path belongs to L. Since existing approaches suffer from long index construction time, large index size, and long query time, we first propose constructing a bidirectional path-label index based on the k nodes with the largest degrees, so that we can efficiently build a smaller index that still covers a large portion of the reachability information. For this large-node-based index, we propose several optimization techniques to reduce the index size, which in turn speeds up the processing of reachability queries. However, even though the large-node-based bidirectional path-label index is small, it does not cover all reachability queries and cannot avoid graph traversal during query processing. To this end, we further propose a bidirectional path-label index constructed over all nodes, which therefore covers all reachability information. Based on this index, a label-constraint reachability query s→_L t can be answered by comparing the labels of the two query nodes s and t, completely avoiding traversal of the graph during query processing. Finally, we conduct an extensive experimental study. In terms of index size, index construction time, and query response time, results on multiple real datasets show that our approaches achieve smaller index sizes and can construct indexes and answer label-constraint reachability queries more efficiently than the state-of-the-art approaches.
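For reference, the query semantics can be checked directly by a label-filtered BFS; this index-free traversal is exactly what the paper's bidirectional path-label index is built to avoid (the example graph below is made up):

```python
from collections import deque

# Direct evaluation of the label-constraint reachability query s ->_L t:
# BFS that only follows edges whose label belongs to the allowed set L.

def lcr(graph, s, t, labels):
    """graph: {u: [(v, label), ...]}. True iff t is reachable from s via `labels` only."""
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v, lab in graph.get(u, []):
            if lab in labels and v not in seen:
                seen.add(v)
                queue.append(v)
    return False

g = {"s": [("a", "x"), ("b", "y")], "a": [("t", "x")], "b": [("t", "z")]}
```

In this graph, s reaches t under L = {x} (via a) but not under L = {y}, since the y-path stalls at b whose outgoing edge is labeled z.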
Structural Integrity Checking Based on Logically Independent Fragment of Metadata
Zhao Xiaofei, Shi Zhongzhi, Liu Jianwei
2020, 57(9):  1961-1970.  doi:10.7544/issn1000-1239.2020.20190493
Abstract ( 707 )   HTML ( 6 )   PDF (1813KB) ( 164 )
Efficiently checking structural integrity is one of the research hotspots in the field of MOF (meta object facility) repository system consistency. In this paper, we propose an efficient, automatic approach for checking structural integrity by means of description logics. Firstly, according to the characteristics of the MOF architecture, we study how to transform different levels of metadata into an SROIQ(D) knowledge base. Then we study how to extract metadata to improve the efficiency of the checking process. We propose the concept of a logically independent fragment of metadata, and by extracting the property deductive fragment and the classification deductive fragment respectively, we present an algorithm that generates the minimum logically independent fragment. Since such a fragment is the closure of logical implication for a given metadata element, all relevant information about that element is completely preserved, and checking can therefore be performed on a smaller set of metadata rather than on the entire repository. Finally, we study how to perform checking based on the logically independent fragment. Experimental results show that the average size of the metadata fragment generated by our approach is significantly smaller than the original size, and that checking on the metadata fragment is 1.47 to 3.31 times faster. A time performance comparison with related approaches also shows the effectiveness of our approach.
Interpretation and Understanding in Machine Learning
Chen Kerui, Meng Xiaofeng
2020, 57(9):  1971-1986.  doi:10.7544/issn1000-1239.2020.20190456
Abstract ( 3354 )   HTML ( 136 )   PDF (1315KB) ( 2401 )
In recent years, machine learning has developed rapidly, especially deep learning, which has achieved remarkable results in image, voice, natural language processing, and other fields. The expressive ability of machine learning algorithms has greatly improved; however, as model complexity increases, the interpretability of machine learning algorithms has deteriorated. So far, the interpretability of machine learning remains a challenge. Models trained by these algorithms are regarded as black boxes, which seriously hampers the use of machine learning in fields such as medicine and finance. At present, only a few works emphasize the interpretability of machine learning. Therefore, this paper aims to classify, analyze, and compare the existing interpretability methods. On the one hand, it expounds the definition and measurement of interpretability; on the other hand, for different objects of interpretation, it summarizes and analyzes various interpretability techniques of machine learning from three aspects: model understanding, prediction result interpretation, and mimic model understanding. Moreover, the paper discusses the challenges and opportunities faced by machine learning interpretability methods and possible future directions. The surveyed interpretation methods should also help put many open research questions in perspective.
Keyword-Based Source Code Summarization
Zhang Shikun, Xie Rui, Ye Wei, Chen Long
2020, 57(9):  1987-2000.  doi:10.7544/issn1000-1239.2020.20190179
Abstract ( 1479 )   HTML ( 37 )   PDF (2189KB) ( 478 )
A source code summary is a brief natural language description of the source code. The purpose of code summarization is to assist program understanding by automatically generating documentation, and it has potential in many software engineering activities. The challenge of code summarization is that it resembles both machine translation and text summarization: the difficulty lies in how to better model code, which is highly structured and has an unlimited token vocabulary, and how to better filter key information from long code token sequences. Inspired by how humans write summaries and by related work, we propose a novel model called KBCoS (keyword-based source code summarization), which uses the method signature and API calls as keywords so that the model can focus on the key information in source code at each decoding step when generating summaries. In addition, to address the out-of-vocabulary (OOV) problem, we propose an algorithm called partial splitting, which splits a token into sub-tokens only when it is out of vocabulary. The algorithm is simple and effective, and mitigates the conflict between the length of the code token sequence and the number of OOV tokens. We use an attention-based sequence-to-sequence model as the baseline and evaluate our approach on a public dataset of Java methods with corresponding API call sequences and summaries. The results show that both the keyword-based attention mechanism and partial splitting improve the baseline in terms of BLEU-4, METEOR, and ROUGE-L; similar results hold on another Python dataset. Furthermore, when KBCoS is combined with TL-CodeSum, one of the state-of-the-art models for code summarization, it achieves the state-of-the-art result on that dataset, indicating that our approach can help improve other models as well. Both the experimental results and the heat maps of attention weights demonstrate the effectiveness of the proposed model KBCoS.
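The partial-splitting rule described above can be sketched directly; the camelCase splitting regex and the toy vocabulary below are assumptions for illustration:

```python
import re

# Sketch of partial splitting: a code token is split into sub-tokens (here by
# camelCase) only when it is out of vocabulary. Vocabulary and splitting rule
# are simplified; a real system would use the training corpus vocabulary.

def partial_split(token, vocab):
    if token in vocab:
        return [token]                               # in-vocabulary: keep whole
    return [t.lower() for t in re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", token)]

vocab = {"getValue", "size"}
tokens = [partial_split(t, vocab) for t in ["getValue", "parseHttpRequest"]]
```

An in-vocabulary token like `getValue` stays intact (keeping the sequence short), while the OOV `parseHttpRequest` is broken into known sub-tokens, which is the trade-off the algorithm balances.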
Defending Against Dimensional Saddle Point Attack Based on Adaptive Method with Dynamic Bound
Li Dequan, Xu Yue, Xue Sheng
2020, 57(9):  2001-2008.  doi:10.7544/issn1000-1239.2020.20190462
Abstract ( 713 )   HTML ( 12 )   PDF (1861KB) ( 187 )
With the advent of the big data era, distributed machine learning has been widely applied to process massive data. The most commonly used approach is the distributed stochastic gradient descent algorithm, but it is vulnerable to different types of Byzantine attacks. To maximize the elastic limit for defending against attacks and to optimize the objective function in a distributed dimensional Byzantine environment based on the gradient update rule, this paper first proposes a new Byzantine attack method: the saddle point attack. In contrast to adaptive and non-adaptive methods, the adaptive method with dynamic bound escapes saddle points quickly when the objective function is stuck at a saddle point; a comparative experiment is conducted on dataset classification. Secondly, an aggregation rule Saddle(·) for filtering out Byzantine agents is proposed, and it is proved to be dimensional Byzantine resilient. Therefore, in a distributed dimensional Byzantine environment, the adaptive optimization method with dynamic bound, combined with the aggregation rule Saddle(·), can effectively defend against the saddle point attack. Finally, the error rates of dataset classification in the experimental results are compared to analyze the advantages and disadvantages of the adaptive method with dynamic bound over adaptive and non-adaptive methods. The results show that the adaptive method with dynamic bound combined with the aggregation rule Saddle(·) is less affected by the saddle point attack in the distributed dimensional Byzantine environment.
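The dynamic-bound idea (clipping an Adam-style per-step learning rate into a band that narrows toward a fixed rate over time) can be sketched on a one-dimensional quadratic; all constants below are illustrative and are not the paper's:

```python
import math

# Toy AdaBound-style update: an Adam-like step whose effective learning rate is
# clipped into [lower, upper] bounds that both converge to final_lr, so the
# adaptive step can neither vanish nor blow up late in training.

def adabound_step(x, g, m, v, t, lr=0.1, b1=0.9, b2=0.999, final_lr=0.1, gamma=1e-3):
    m = b1 * m + (1 - b1) * g                        # first-moment estimate
    v = b2 * v + (1 - b2) * g * g                    # second-moment estimate
    lower = final_lr * (1 - 1 / (gamma * t + 1))     # dynamic lower bound (rises to final_lr)
    upper = final_lr * (1 + 1 / (gamma * t))         # dynamic upper bound (falls to final_lr)
    step = min(max(lr / (math.sqrt(v) + 1e-8), lower), upper)
    return x - step * m, m, v

# Minimize f(x) = x^2 (gradient 2x) from x = 5.
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 501):
    x, m, v = adabound_step(x, 2 * x, m, v, t)
```

Late in the run, when the gradients (and hence v) are small, the raw Adam step lr/sqrt(v) would grow large; the upper bound caps it, which is the stabilizing behavior the dynamic-bound method relies on near saddle points.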