The Journal of Information Processing Systems
(JIPS) is the official international journal of the Korea Information Processing Society.
As information processing systems are progressing at a rapid pace, the Korea Information Processing Society is committed to providing researchers and other professionals with the academic information and resources they need to keep abreast of ongoing developments. The JIPS aims to be a premier source that enables researchers and professionals
all over the world to promote, share, and discuss all major research issues and developments in the field of information processing systems and other related fields.
ISSN: 1976-913X (Print), ISSN: 2092-805X (Online)
[April 23, 2019] We announced the 2nd JIPS Survey Paper Awards. Please see here for details.
[Jan. 23, 2018] The call for papers for the JIPS Future Topic Track Special Section scheduled in 2019 has been posted. Please see here for details.
[Nov. 16, 2018] The JIPS committee has made a decision on the article processing charge (APC); the new policy applies to all papers published after January 1, 2019. For more information, click here.
Journal of Information Processing Systems, Vol. 15, No. 2, 2019
Internet of Things (IoT) technology has recently been utilized in diverse fields. The smart city is one such IoT application domain: it offers many research topics and is operated through integrated IoT applications. This paper introduces diverse solutions, processes, and frameworks that address existing challenges in information technology. These solutions span various future track topics, including blockchain, security, steganography, optimization, machine learning, and smart systems. In the subsequent paragraphs, we summarize each topic in terms of the existing challenges and their solutions. Specifically, this paper introduces 18 novel and enhanced research studies from different countries around the world, covering diverse research areas such as the IoT and the smart city.
An individual’s health data are highly sensitive and private. Such data are usually stored on a private or community-owned cloud, where access is not restricted to the owners of that cloud: anyone within the cloud can access the data. Moreover, the data are not read-only, and multiple parties can write to them. Thus, any unauthorized modification of health-related data can lead to incorrect diagnosis and mistreatment, yet semi-public access to these data cannot simply be restricted. Existing security mechanisms in e-health systems are competent in dealing with the issues associated with these systems, but only up to a certain extent, and these indigenous technologies need to be complemented with current and future ones. We put forward a method that complements such technologies by incorporating the concept of blockchain to ensure both the integrity of the data and its provenance.
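To make the integrity idea concrete, here is a minimal hash-chain sketch in Python; the record fields and the `make_block`/`verify` helpers are hypothetical illustrations, not the authors' blockchain design:

```python
import hashlib, json, time

def make_block(record, prev_hash):
    """Create a chain entry whose hash covers the record and the previous hash."""
    block = {'ts': time.time(), 'record': record, 'prev': prev_hash}
    block['hash'] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute every hash; tampering with any record breaks all later links."""
    for prev, cur in zip(chain, chain[1:]):
        body = {k: v for k, v in cur.items() if k != 'hash'}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if cur['prev'] != prev['hash'] or cur['hash'] != expected:
            return False
    return True

chain = [make_block({'patient': 'P001', 'bp': '120/80'}, '0' * 64)]
chain.append(make_block({'patient': 'P001', 'bp': '118/79'}, chain[-1]['hash']))
print(verify(chain))  # True; becomes False as soon as any stored record is altered
```

Any modification of an earlier record invalidates every subsequent hash, which is the provenance property the abstract relies on.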
To ensure that second-order multi-agent systems (MAS) reach consensus more quickly within a limited time, a new protocol is proposed. In this protocol, the gradient algorithm of the overall cost function is introduced into the original protocol to strengthen the connections between adjacent agents and to increase the moving speed of each agent in the MAS. Using Lyapunov stability theory, graph theory, and homogeneity theory, sufficient conditions and a detailed proof for achieving finite-time consensus of the MAS are given. Finally, a MAS with three follower agents and one leader agent is simulated. The simulation results indicate that the new protocol makes the system more stable and more robust and converge faster than other protocols.
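For orientation, the baseline that such protocols improve on can be sketched as a plain linear second-order consensus simulation; the topology, gains, and leader choice below are assumptions, and the paper's finite-time, gradient-augmented protocol is not reproduced:

```python
import numpy as np

# Baseline linear protocol: u_i = sum_j a_ij * [(x_j - x_i) + g * (v_j - v_i)].
A = np.array([[0, 0, 0, 0],      # node 0 acts as the leader
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
x, v = rng.normal(size=4), rng.normal(size=4)   # positions and velocities
dt, g = 0.01, 1.5

for _ in range(20_000):
    u = (A * (x[None, :] - x[:, None] + g * (v[None, :] - v[:, None]))).sum(axis=1)
    u[0] = 0.0                  # the leader applies no consensus control
    x, v = x + dt * v, v + dt * u

print(np.ptp(x), np.ptp(v))    # spreads shrink toward 0 as consensus is reached
```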
Steganography algorithms are applied to protect secret digital documents against vulnerabilities during communication; they protect a digital file from unauthorized access by hiding its entire content. Pixel-value difference, a spatial-domain steganography method, exploits the difference gap between neighboring pixels to the same end. The proposed approach is a block-wise embedding process in which blocks of variable size are chosen from the cover image and a stream of secret digital content is hidden within them. The least significant bit (LSB) substitution method is applied as an adaptive mechanism, and the optimal pixel adjustment process (OPAP) is used to minimize the error rate. The proposed application maintains good hiding capacity and a better signal-to-noise ratio than other existing methods. Any means of digital communication, especially e-Governance applications, could benefit greatly from this approach.
The synchronization scheme based on a moving average is robust and allows the same rule to be adopted for embedding the watermark and the synchronization code, but its imperceptibility and search efficiency are seldom reported. This study aims to improve the original scheme for robust audio watermarking. Firstly, the algorithm's survival of desynchronization attacks is improved. Secondly, the inaudibility of the scheme is improved: the objective difference grade (ODG) of the marked audio changes significantly. Thirdly, the imperceptibility of the scheme is analyzed, and the derived result is close to the experimental result. Fourthly, the selection of parameters is optimized based on experimental data. Fifthly, the search efficiency of the scheme is compared with those of other synchronization code schemes. The experimental results show that the proposed watermarking scheme preserves high audio quality and is robust to common attacks such as additive white Gaussian noise, requantization, resampling, low-pass filtering, random cropping, MP3 compression, jitter attacks, and time-scale modification. Moreover, the algorithm has high search efficiency and a low false alarm rate.
The growth of telemedicine-based wireless communication for images, such as magnetic resonance imaging (MRI) and computed tomography (CT), makes it necessary to understand image compression. Over the years, transform-based and spatial-based compression techniques have attracted much research and achieved good results, at the cost of high computational complexity. To overcome this, optimization techniques have been combined with existing image compression techniques; however, these fail to preserve the original diagnostic content and cause artifacts at high compression ratios. In this paper, the concept of histogram-based multilevel thresholding (HMT) using entropy is combined with an optimization algorithm to compress medical images effectively. Measuring the randomness of image pixel groups, however, is time-consuming and thus unsuitable for medical applications. Hence, this paper develops an HMT-based image compression method that uses the opposition-based improved harmony search algorithm (OIHSA) as the optimization technique together with entropy. Furthermore, the significant information present in the medical images is better preserved through the proper selection of the entropy and of the number of thresholds used to reconstruct the compressed image.
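As a rough illustration of the entropy criterion involved, the single-threshold case (Kapur's maximum-entropy thresholding) can be found by exhaustive search; the paper instead optimizes several thresholds with OIHSA, which this sketch does not reproduce:

```python
import numpy as np

def kapur_threshold(image):
    """Single gray-level threshold maximizing Kapur's entropy, by brute force."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, 256):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0 = p[:t][p[:t] > 0] / p0          # normalized class distributions
        q1 = p[t:][p[t:] > 0] / p1
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
print(kapur_threshold(img))
```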
Predicting the total number of containers is very important in the field of container transport, and many influencing factors can affect the prediction results. These factors are usually composed of many variables whose composition is often very complex. In this paper, we use gray relational analysis to set up a proper forecast index system for predicting the total number of containers in foreign trade. To address the low accuracy of traditional prediction models and the difficulty of fully considering all the factors, this paper puts forward a prediction model that combines a back-propagation (BP) neural network with a support vector machine (SVM). First, the normalized data are fed to the BP neural network to generate a preliminary forecast. Second, an SVM performs a residual correction calculation on the results based on the preliminary data. Practical examples show that the overall relative error of the combined prediction model is no more than 1.5%, which is less than the relative error of the single prediction models. We hope this research can provide a useful reference for the prediction of container totals and related studies.
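The BP-then-SVM residual-correction idea can be sketched with scikit-learn on synthetic stand-in data; the feature vectors, network size, and kernel below are assumptions, and the paper's gray-relational forecast index system is not reproduced:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler

# Hypothetical data: indicator vectors X and container totals y.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + 0.3 * rng.normal(size=60)

Xs = StandardScaler().fit_transform(X)          # normalize the inputs

bp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                  random_state=0).fit(Xs, y)
prelim = bp.predict(Xs)                         # preliminary BP forecast

svr = SVR(kernel='rbf').fit(Xs, y - prelim)     # SVM learns the residuals
final = prelim + svr.predict(Xs)                # corrected combined forecast
print(np.abs((final - y) / y).mean())
```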
This study analyzed changes in sociality and democratic citizenship among elementary school students in the
information class and the science class at the Science Education Institute for the Gifted, who were divided into
an experimental group and a control group. The experimental group engaged in the Learning Together (LT)
cooperative form of learning for which the remix function of Scratch, an educational programming language,
was applied, while the control group was given general instructor-led lessons. Members in the experimental
group were able to modify processes during projects through the usage of the remix function, thereby actively
participating in the projects and eventually generating team-based results. The post-class t-tests showed a greater degree of improvement in sociality and democratic citizenship for the experimental group, which was offered the remix-function-based cooperative learning, than for the control group. Statistically significant differences were present between the two groups, particularly in the “cooperative spirit” sub-domain of sociality and the “community” and “responsibility” sub-domains of democratic citizenship.
Dynamic thermal rating of the overhead transmission lines is affected by many uncertain factors. The ambient
temperature, wind speed and wind direction are the main sources of uncertainty. Measurement uncertainty is
an important parameter for evaluating the reliability of measurement results. This paper presents an uncertainty analysis based on the Monte Carlo method. After establishing the mathematical model and setting the probability density functions of the input parameters, the probability density function of the output value is determined by random sampling from the input probability distributions. Through the calculation and analysis of the transient and steady-state thermal balance equations, the steady-state current-carrying capacity, the transient current-carrying capacity, the standard uncertainty, and the probability distributions of the minimum and maximum conductor values at the 95% confidence level are obtained. The simulation results indicate that the Monte Carlo method can decrease the computational complexity, speed up the calculation, and increase the validity and reliability of the uncertainty evaluation.
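The Monte Carlo propagation step itself is simple to sketch. In the snippet below, the heat-balance function is a crude placeholder, not the IEEE 738/CIGRE model the paper uses, and all distribution parameters and constants are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Assumed input distributions (placeholders for field measurements):
T_amb = rng.normal(25.0, 2.0, N)                       # ambient temperature [C]
wind = np.clip(rng.normal(1.5, 0.5, N), 0.1, None)     # wind speed [m/s]

def ampacity(t_amb, v_wind):
    # Placeholder steady-state heat balance; a real study would use the
    # IEEE 738 / CIGRE equations with conductor-specific constants.
    t_max, r_ac = 70.0, 7.3e-5       # max conductor temp [C], AC resistance [ohm/m]
    q_c = 18.0 * np.sqrt(v_wind) * (t_max - t_amb)      # convective cooling [W/m]
    q_r, q_s = 4.0, 10.0             # radiative cooling, solar heating [W/m], fixed
    return np.sqrt(np.maximum(q_c + q_r - q_s, 0.0) / r_ac)

I = ampacity(T_amb, wind)
lo, hi = np.percentile(I, [2.5, 97.5])
print(f"mean {I.mean():.0f} A, std {I.std():.0f} A, 95% interval [{lo:.0f}, {hi:.0f}] A")
```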
Surveillance cameras have been installed in many places because security and safety are becoming increasingly important in modern society. Installed surveillance cameras allow us to respond to incidents and prevent accidents. However, watching surveillance videos and judging accident situations is very labor-intensive, so the need for research into analyzing surveillance videos is growing. This study proposes an algorithm to track multiple persons using SURF and background subtraction. While the SURF algorithm, as a person-tracking algorithm, is robust to scaling, rotation, and different viewpoints, it makes tracking errors when videos change suddenly. To resolve such tracking errors, we combined SURF with a background subtraction algorithm and showed that the proposed approach increases the tracking accuracy. In addition, the background subtraction algorithm can detect persons in videos, and SURF can initialize tracking targets with these detected persons; thus, the proposed algorithm can automatically detect the entry and exit of persons.
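The detect-then-seed combination can be sketched with OpenCV. Note that SURF is patented and lives in opencv-contrib (`cv2.xfeatures2d.SURF_create`), so the sketch uses ORB as a freely available stand-in; the file name, thresholds, and blob-size heuristic are assumptions, not the authors' pipeline:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture('surveillance.mp4')        # hypothetical input video
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
orb = cv2.ORB_create()    # stand-in for SURF (available via opencv-contrib)
kernel = np.ones((3, 3), np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = cv2.morphologyEx(bg.apply(frame), cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:              # large foreground blob ~ person
            x, y, w, h = cv2.boundingRect(c)
            roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
            kp, des = orb.detectAndCompute(roi, None)   # features to seed a tracker
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('detections', frame)
    if cv2.waitKey(1) == 27:                      # Esc to quit
        break
cap.release()
```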
For clustering large-scale data, which cannot be loaded into memory entirely, incremental clustering algorithms
are very popular. Usually, these algorithms consider only the within-cluster compactness and ignore the
between-cluster separation. In this paper, we propose two incremental fuzzy compactness and separation (FCS)
clustering algorithms, Single-Pass FCS (SPFCS) and Online FCS (OFCS), based on a fuzzy scatter matrix.
Firstly, we introduce two incremental clustering methods called single-pass and online fuzzy C-means
algorithms. Then, we combine these two methods separately with the weighted fuzzy C-means algorithm, so
that they can be applied to the FCS algorithm. Afterwards, we optimize the within-cluster matrix and the between-cluster matrix simultaneously to obtain the minimum within-cluster distance and the maximum between-cluster distance. As a result, large-scale datasets can be clustered well within limited memory. We conducted experiments on artificial and real datasets, and the results show that, compared with SPFCM and OFCM, our SPFCS and OFCS are more robust to the value of the fuzzy index m and to noise.
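The single-pass mechanics can be sketched around a weighted fuzzy c-means core: each data chunk is clustered together with the previous centers, which carry condensed weights (SPFCM-style). The FCS between-cluster separation term is deliberately omitted here, and the synthetic stream is an assumption:

```python
import numpy as np

def weighted_fcm(X, c, w, m=2.0, iters=100, seed=0):
    """One weighted fuzzy c-means run; returns centers V and memberships U."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)
    for _ in range(iters):
        Um = (U ** m) * w                             # weight each point's influence
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)  # center update
        D = np.linalg.norm(X[None] - V[:, None], axis=2) + 1e-12
        inv = D ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0, keepdims=True)      # membership update
    return V, U

rng = np.random.default_rng(1)
stream = rng.normal(size=(10_000, 2)) + rng.choice([-4, 0, 4], (10_000, 1))
V, wV = None, None
for chunk in np.array_split(stream, 10):              # data arrives chunk by chunk
    data = chunk if V is None else np.vstack([V, chunk])
    w = np.ones(len(chunk)) if V is None else np.concatenate([wV, np.ones(len(chunk))])
    V, U = weighted_fcm(data, c=3, w=w)
    wV = ((U ** 2.0) * w).sum(axis=1)                 # condensed center weights (m = 2)
print(V)
```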
This paper presents an optimal implementation of a Daubechies-based pipelined discrete wavelet packet
transform (DWPT) processor using finite impulse response (FIR) filter banks. The feed-forward pipelined (FFP)
architecture is exploited for implementation of the DWPT on the field-programmable gate array (FPGA). The
proposed DWPT is based on an efficient transpose-form structure, thereby halving the computational complexity of the system. Moreover, the efficiency of the design is further improved by using a canonical-signed
digit-based binary expression (CSDBE) and advanced functional sharing (AFS) methods. In this work, the AFS
technique is proposed to optimize the convolution of FIR filter banks for DWPT decomposition, which reduces
the hardware resource utilization by not requiring any embedded digital signal processing (DSP) blocks. The
proposed AFS and CSDBE-based DWPT system is embedded on the Virtex-7 FPGA board for testing. The
proposed design is implemented as an intellectual property (IP) logic core that can easily be integrated into DSP
systems for sub-band analysis. The results show that the proposed method is very efficient at improving hardware resource utilization while maintaining the accuracy of the DWPT results.
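For readers who want to experiment with the underlying transform in software (the hardware architecture being the paper's actual contribution), PyWavelets computes a Daubechies wavelet packet decomposition directly; the signal and depth below are arbitrary:

```python
import numpy as np
import pywt  # PyWavelets

x = np.random.default_rng(0).normal(size=1024)          # example 1-D signal
wp = pywt.WaveletPacket(data=x, wavelet='db4', mode='symmetric', maxlevel=3)

# The level-3 leaves are the 8 sub-bands of the packet decomposition.
for node in wp.get_level(3, order='freq'):
    print(node.path, len(node.data))
```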
The transmission capacity of transmission lines is affected by environmental parameters such as ambient
temperature, wind speed, wind direction and so on. The environmental parameters can be measured by the
installed measuring devices. However, it is impossible to install environmental measuring devices along the entire line, especially considering the economic cost to the power grid. Taking into account the limited number of measuring devices and the distribution characteristics of the environmental parameters and transmission lines, this paper first studies two environmental parameter estimation methods: inverse distance weighted interpolation and ordinary Kriging interpolation. The dynamic thermal rating of transmission lines based on the IEEE and CIGRE standard thermal equivalent equations is then investigated, and the key parameters that affect the load capacity of overhead lines are identified. Finally, the distributed thermal rating of a transmission line is realized using data obtained from the China meteorological data network. The cost of the environmental measurement devices is reduced, and the accuracy of the dynamic rating is improved.
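Of the two estimators, inverse distance weighting is the simpler and fits in a few lines; ordinary Kriging would additionally model the spatial covariance. The station coordinates and readings below are invented for illustration:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation at the query points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Hypothetical weather stations near a line corridor, and tower locations:
stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 5.0]])
temps = np.array([24.1, 26.3, 25.0])
towers = np.array([[2.0, 1.0], [7.5, 2.0]])
print(idw(stations, temps, towers))   # estimated ambient temperature per tower
```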
Three-dimensional (3D) human pose reconstruction from a single-view image is a difficult and challenging topic. Existing approaches mostly process each frame independently, although consecutive frames in a sequence are highly correlated. In contrast, we introduce a novel spatial-temporal 3D human pose reconstruction framework that leverages both intra- and inter-frame relationships in consecutive 2D pose sequences. The orthogonal matching pursuit (OMP) algorithm, pre-trained pose-angle limits, and temporal models have been implemented. Several quantitative comparisons between our proposed framework and recent works have been conducted on the CMU motion capture dataset and on Vietnamese traditional dance sequences. Our framework outperforms the others, achieving 10% lower Euclidean reconstruction error and greater robustness against Gaussian noise. Additionally, our reconstructed 3D pose sequences are more natural and smoother than the others.
Distributed compressed sensing (DCS) states that sparse signals can be recovered from very few linear measurements, and various studies on DCS have been carried out recently. In many practical applications there is no prior information beyond standard sparsity. A typical example is sparse signals with a block-sparse structure, whose non-zero coefficients occur in clusters while the cluster pattern is unavailable as prior information. To address this issue, a new algorithm, called backtracking-based adaptive orthogonal matching pursuit for block distributed compressed sensing (DCSBBAOMP), is proposed. In contrast to existing block methods, which consider single-channel signal reconstruction, DCSBBAOMP addresses multi-channel signal reconstruction. Moreover, the algorithm is iterative, consisting of forward selection and backward removal stages in each iteration. An advantage of this method is that perfect reconstruction performance can be achieved without prior information on the block-sparsity structure. Numerical experiments are provided to illustrate the desirable performance of the proposed method.
A hybrid kernel function for the support vector machine is proposed to improve the classification performance of power quality disturbances. The mathematical model of the support vector machine's kernel function directly affects the classification performance: different types of kernel functions have different generalization and learning abilities, and no single kernel function performs well in both. To overcome this problem, we propose a hybrid kernel function composed of two single kernel functions to improve both the generalization and the learning ability. In simulations, we used single and multiple power quality disturbances to test the classification performance of the support vector machine algorithm with the proposed hybrid kernel function. Compared with other support vector machine algorithms, the improved algorithm performs better in classifying power quality signals with single and multiple disturbances.
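Since scikit-learn's SVC accepts a callable kernel, a convex combination of two single kernels is easy to prototype; the mixing weight, kernel parameters, and synthetic data are assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

LAM = 0.6   # assumed mixing weight between the two single kernels

def hybrid_kernel(X, Y):
    # Convex combination of a local (RBF) and a global (polynomial) kernel.
    return LAM * rbf_kernel(X, Y, gamma=0.5) + (1 - LAM) * polynomial_kernel(X, Y, degree=2)

# Hypothetical feature vectors of disturbance signals with class labels:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)

clf = SVC(kernel=hybrid_kernel).fit(X, y)
print(clf.score(X, y))
```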
The artificial bee colony (ABC) algorithm is a strong global search algorithm with excellent exploration ability. The conventional ABC algorithm uses employed bees, onlooker bees, and scouts that cooperate with each other; however, its one-dimensional, greedy search strategy causes slow convergence. To enhance its performance, we abandon the greedy selection method in this paper and propose an artificial bee colony algorithm with special division and intellective search (ABCIS). To search food sources more efficiently, different search strategies are adopted for different employed bees and onlooker bees. Experimental results on a series of benchmark functions demonstrate its effectiveness.
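For context, a minimal conventional ABC (including the greedy selection the paper abandons) shows the employed/onlooker/scout division of labor; all parameter values are arbitrary choices for the sketch:

```python
import numpy as np

def abc_minimize(f, dim, bounds, n_food=20, limit=50, iters=200, seed=0):
    """Minimal conventional ABC with greedy acceptance (baseline, not ABCIS)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_food, dim))
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_food, int)

    def neighbor(i):
        # Perturb one random dimension toward/away from a random partner.
        j = rng.integers(n_food)
        while j == i:
            j = rng.integers(n_food)
        d = rng.integers(dim)
        cand = X[i].copy()
        cand[d] += rng.uniform(-1, 1) * (X[i, d] - X[j, d])
        return np.clip(cand, lo, hi)

    def try_improve(i):
        cand = neighbor(i)
        fc = f(cand)
        if fc < fit[i]:
            X[i], fit[i], trials[i] = cand, fc, 0   # greedy acceptance
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                     # employed bee phase
            try_improve(i)
        prob = fit.max() - fit + 1e-12              # onlookers prefer better sources
        prob /= prob.sum()
        for _ in range(n_food):                     # onlooker bee phase
            try_improve(rng.choice(n_food, p=prob))
        for i in np.where(trials > limit)[0]:       # scout phase: abandon sources
            X[i] = rng.uniform(lo, hi, dim)
            fit[i], trials[i] = f(X[i]), 0
    return X[fit.argmin()], fit.min()

best_x, best_f = abc_minimize(lambda x: (x ** 2).sum(), dim=10, bounds=(-5.0, 5.0))
print(best_f)   # approaches 0 on the sphere benchmark
```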
The Internet of Things (IoT) is one of the main enablers for situation awareness needed in accomplishing smart
cities. IoT devices, especially for monitoring purposes, have stringent timing requirements which may not be
met by cloud computing. This deficiency can be overcome by fog computing, in which fog nodes are placed close to IoT devices. Because of the low capabilities of fog nodes compared to cloud data centers,
fog nodes may not be deployed with all the services required by IoT devices. Thus, in this article, we focus on
the issue of fog service placement and present the recent research trends in this issue. Most of the literature on
fog service placement deals with determining an appropriate fog node satisfying the various requirements like
delay from the perspective of one or more service requests. In this article, we aim to effectively place fog services
in accordance with the pre-obtained service demands, which may have been collected during the prior time
interval, instead of on-demand service placement for one or more service requests. The concept of the logical
fog network is newly presented for the sake of the scalability of fog service placement in a large-scale smart city.
The logical fog network is formed in a tree topology rooted at the cloud data center. Based on the logical fog
network, a service placement approach is proposed so that services can be placed on fog nodes in a resource-effective manner.
The SMART home is one of the most popular applications of Internet-of-Things (IoT) technologies, and its range of applications keeps expanding. SMART home technology provides convenience at home by connecting household appliances to a single network for control and management. However, many ordinary home appliances do not yet support network functions, so enjoying this convenient technology can be difficult, and building the framework can be expensive at the outset. In addition, even when products with SMART home technologies are purchased, the control systems can differ from device to device. Thus, in this paper, we propose a SMART home framework, called S-mote, that can operate all the IoT functions in a single application by adding an infrared or radio frequency module to ordinary home appliances. The
proposed framework is analyzed using four types of performance tests by five evaluators. The results of the
experiment show that the SMART home environment was implemented successfully and that it functions
appropriately, without any operational issues, with various home appliances, including the latest IoT devices,
and even those equipped with an infrared or radio frequency module.
The 2nd Journal of Information Processing Systems Awards
"Block-VN: A Distributed Blockchain Based Vehicular Network Architecture in Smart City"
Pradip Kumar Sharma, Seo Yeon Moon and Jong Hyuk Park (Seoul National University of Science and Technology, Korea)
Publication (Corresponding Author)
Chengyou Wang (Shandong University, China)
Quorum-based algorithms are widely used for solving several problems in mobile ad hoc networks (MANET) and wireless sensor networks (WSN). Several quorum-based protocols have been proposed for multi-hop ad hoc networks, each with its own pros and cons. The quorum-based protocol QEC (or QPS) was the first study of asynchronous sleep scheduling protocols, and at the time most of the proposed protocols were non-adaptive. Nowadays, however, adaptive quorum-based protocols are gaining increasing attention, because we need protocols that can change their quorum size adaptively with network conditions. In this paper, we first introduce the most popular quorum systems and explain quorum system properties and their performance criteria. Then, we present a comparative and comprehensive survey of non-adaptive and adaptive quorum-based protocols, which are subsequently discussed in depth. We also compare different quorum systems in terms of the expected quorum overlap size (EQOS) and active ratio. Finally, we summarize the pros and cons of current adaptive and non-adaptive quorum-based protocols.
The significant advances in information and communication technologies are changing the way information is accessed. The internet is a very important source of information, and it influences the development of other media. Moreover, the growth of digital content poses a major challenge for academic digital libraries, and similar tools can be applied in this scope to provide users with access to information. Given its importance, we have reviewed and analyzed several proposals that improve the processes of disseminating information in university digital libraries and that promote access to information of interest. These proposals adapt a user's access to information according to his or her needs and preferences. As seen in the literature, one of the techniques with the best results is the application of recommender systems: tools whose objective is to evaluate and filter the vast amount of digital information accessible online in order to help users in their processes of accessing information. In particular, we focus on the analysis of fuzzy linguistic recommender systems (i.e., recommender systems that use fuzzy linguistic modeling tools to manage the user's preferences and the uncertainty of the system in a qualitative way). Thus, in this work, we analyze some proposals based on fuzzy linguistic recommender systems that help researchers, students, and teachers access resources of interest and thus improve and complement the services provided by academic digital libraries.
Associative and bidirectional associative memories are examples of associative structures studied intensively in the literature. The underlying idea is to realize associative mapping so that the recall processes (one-directional and bidirectional ones) are realized with minimal recall errors. Associative and fuzzy associative memories have been studied in numerous areas yielding efficient applications for image recall and enhancements and fuzzy controllers, which can be regarded as one-directional associative memories. In this study, we revisit and augment the concept of associative memories by offering some new design insights where the corresponding mappings are realized on the basis of a related collection of landmarks (prototypes) over which an associative mapping becomes spanned. In light of the bidirectional character of mappings, we have developed an augmentation of the existing fuzzy clustering (fuzzy c-means, FCM) in the form of a so-called collaborative fuzzy clustering. Here, an interaction in the formation of prototypes is optimized so that the bidirectional recall errors can be minimized. Furthermore, we generalized the mapping into its granular version in which numeric prototypes that are formed through the clustering process are made granular so that the quality of the recall can be quantified. We propose several scenarios in which the allocation of information granularity is aimed at the optimization of the characteristics of recalled results (information granules) that are quantified in terms of coverage and specificity. We also introduce various architectural augmentations of the associative structures.
Artificial intelligence, especially deep learning technology, is penetrating the majority of research areas, including the field of bioinformatics. However, deep learning has some limitations, such as the complexity of parameter tuning, architecture design, and so forth. In this study, we analyze these issues and challenges with regard to its applications in bioinformatics, particularly genomic analysis and medical image analytics, and give the corresponding approaches and solutions. Although these solutions are mostly rules of thumb, they can effectively handle the issues connected to training learning machines. As such, we explore the tendency of deep learning technology by examining several directions, such as automation, scalability, individuality, mobility, integration, and intelligence warehousing.
This survey paper explores the application of multimodal feedback in automated systems for motor learning. In this paper, we review the findings shown in recent studies in this field using rehabilitation and various motor training scenarios as context. We discuss popular feedback delivery and sensing mechanisms for motion capture and processing in terms of requirements, benefits, and limitations. The selection of modalities is presented via our having reviewed the best-practice approaches for each modality relative to motor task complexity with example implementations in recent work. We summarize the advantages and disadvantages of several approaches for integrating modalities in terms of fusion and frequency of feedback during motor tasks. Finally, we review the limitations of perceptual bandwidth and provide an evaluation of the information transfer for each modality.
The recent advent of increasingly affordable and powerful 3D scanning devices capable of capturing high resolution range data about real-world objects and environments has fueled research into effective 3D surface reconstruction techniques for rendering the raw point cloud data produced by many of these devices into a form that would make it usable in a variety of application domains. This paper, therefore, provides an overview of the existing literature on surface reconstruction from 3D point clouds. It explains some of the basic surface reconstruction concepts, describes the various factors used to evaluate surface reconstruction methods, highlights some commonly encountered issues in dealing with the raw 3D point cloud data and delineates the tradeoffs between data resolution/accuracy and processing speed. It also categorizes the various techniques for this task and briefly analyzes their empirical evaluation results demarcating their advantages and disadvantages. The paper concludes with a cross-comparison of methods which have been evaluated on the same benchmark data sets along with a discussion of the overall trends reported in the literature. The objective is to provide an overview of the state of the art on surface reconstruction from point cloud data in order to facilitate and inspire further research in this area.
Gene identification is at the center of genomic studies. Although the first phase of the Encyclopedia of DNA Elements (ENCODE) project has been claimed to be complete, the annotation of the functional elements is far from being so. Computational methods in gene identification continue to play important roles in this area and other relevant issues. So far, a lot of work has been performed on this area, and a plethora of computational methods and avenues have been developed. Many review papers have summarized these methods and other related work. However, most of them focus on the methodologies from a particular aspect or perspective. Different from these existing bodies of research, this paper aims to comprehensively summarize the mainstream computational methods in gene identification and tries to provide a short but concise technical reference for future studies. Moreover, this review sheds light on the emerging trends and cutting-edge techniques that are believed to be capable of leading the research on this field in the future.
In this paper we present some research results on computing-intensive applications using modern high-performance architectures, from the perspective of high computational needs. Computing-intensive applications are an important family of applications in the distributed computing domain, and they have been the object of study using different distributed computing paradigms and infrastructures. Such applications are distinguished by their demanding need for CPU computing, independently of the amount of data associated with the problem instance. Among computing-intensive applications are simulation-based applications, which aim to maximize system resources for processing large computations. In this research work, we consider an application that simulates scheduling and resource allocation in a Grid computing system using Genetic Algorithms. In this application, a rather large number of simulations is needed to extract meaningful statistical results about the behavior of the simulation results. We study the performance of Oracle Grid Engine for this application running on a cluster with high computing capacity. Several scenarios were generated to measure the response time and queuing time under different workloads and numbers of nodes in the cluster.
The accuracy of training-based activity recognition depends on the training procedure and the extent to which the training dataset comprehensively represents the activity and its varieties. Additionally, training incurs substantial cost and effort in the process of collecting training data. To address these limitations, we have developed a training-free activity recognition approach based on a fuzzy logic algorithm that utilizes a generic activity model and an associated activity semantic knowledge. The approach is validated through experimentation with real activity datasets. Results show that the fuzzy logic based algorithms exhibit comparable or better accuracy than other training-based approaches.
Recent technological advances provide the opportunity to use large amounts of multimedia data from a multitude of sensors with different modalities (e.g., video, text) for the detection and characterization of criminal activity. Their integration can compensate for sensor and modality deficiencies by using data from other available sensors and modalities. However, building such an integrated system at the scale of neighborhood and cities is challenging due to the large amount of data to be considered and the need to ensure a short response time to potential criminal activity. In this paper, we present a system that enables multi-modal data collection at scale and automates the detection of events of interest for the surveillance and reconnaissance of criminal activity. The proposed system showcases novel analytical tools that fuse multimedia data streams to automatically detect and identify specific criminal events and activities. More specifically, the system detects and analyzes series of incidents (an incident is an occurrence or artifact relevant to a criminal activity extracted from a single media stream) in the spatiotemporal domain to extract events (actual instances of criminal events) while cross-referencing multimodal media streams and incidents in time and space to provide a comprehensive view to a human operator while avoiding information overload. We present several case studies that demonstrate how the proposed system can provide law enforcement personnel with forensic and real time tools to identify and track potential criminal activity.
The confinement problem was first noted four decades ago. Since then, a huge amount of effort has been spent on defining and mitigating the problem. The evolution of technologies from traditional operating systems to mobile and cloud computing brings about new security challenges. It is perhaps timely that we review the work that has been done. We discuss the foundational principles from classical works, as well as the efforts towards solving the confinement problem in three domains: operating systems, mobile computing, and cloud computing. While common issues exist across all three domains, unique challenges arise for each of them, which we discuss.
Since a social network is by definition diverse, the problem of estimating the preferences of its users is becoming increasingly essential for personalized applications, which range from service recommender systems to the targeted advertising of services. However, unlike traditional estimation problems where the underlying target distribution is stationary, estimating a user's interests typically involves non-stationary distributions. The consequent time-varying nature of the distribution to be tracked imposes stringent constraints on the "unlearning" capabilities of the estimator used. Therefore, resorting to strong estimators that converge with a probability of 1 is inefficient, since they rely on the assumption that the distribution of the user's preferences is stationary. In this vein, we propose to use a family of stochastic-learning based weak estimators for learning and tracking a user's time-varying interests. Experimental results demonstrate that our proposed paradigm outperforms some of the traditional legacy approaches that represent the state of the art.
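The distinction from strong estimators can be illustrated with a linear multiplicative update in the spirit of the binomial weak-estimator family; the learning rate and the synthetic interest stream are assumptions, and this is not the paper's exact estimator:

```python
import numpy as np

def weak_estimate(observations, lam=0.9):
    """Weak (non-convergent) estimator of a drifting Bernoulli parameter."""
    p, history = 0.5, []
    for x in observations:
        p = lam * p + (1 - lam) * x   # never 'locks in', so it can unlearn
        history.append(p)
    return np.array(history)

rng = np.random.default_rng(0)
# A user's interest switches abruptly halfway through the stream:
stream = np.concatenate([rng.random(500) < 0.8, rng.random(500) < 0.2]).astype(int)
est = weak_estimate(stream)
print(est[495], est[600])   # tracks ~0.8 before the switch, re-adapts toward ~0.2 after
```

A strong estimator (the running sample mean) would converge to the pre-switch value and adapt only sluggishly afterward, which is exactly the inefficiency the abstract points out.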
The most important criterion for achieving the maximum performance in a wireless mesh network (WMN) is to limit the interference within the network. For this purpose, especially in a multi-radio network, the best option is to use non-overlapping channels among different radios within the same interference range. Previous works that have considered non-overlapping channels in IEEE 802.11a as the basis for performance optimization have assumed the link quality across all channels to be uniform. In this paper, we present a measurement-based study of link quality across all channels in an IEEE 802.11a-based indoor WMN test bed. Our results show that the generalized assumption of uniform performance across all channels does not hold in practice for an indoor environment, and signal quality depends on the geometry around the mesh routers.
This paper describes different aspects of a typical RFID implementation. Section 1 provides a brief overview of the concept of Automatic Identification and compares the use of different technologies while Section 2 describes the basic components of a typical RFID system. Section 3 and Section 4 deal with the detailed specifications of RFID transponders and RFID interrogators respectively. Section 5 highlights different RFID standards and protocols and Section 6 enumerates the wide variety of applications where RFID systems are known to have made a positive improvement. Section 7 deals with privacy issues concerning the use of RFIDs and Section 8 describes common RFID system vulnerabilities. Section 9 covers a variety of RFID security issues, followed by a detailed listing of countermeasures and precautions in Section 10.
Granular Computing has emerged as a unified and coherent framework for the design, processing, and interpretation of information granules. Information granules are formalized within various frameworks such as sets (interval mathematics), fuzzy sets, rough sets, shadowed sets, and probabilities (probability density functions), to name several of the most visible approaches. In spite of the apparent diversity of the existing formalisms, there are some underlying commonalities articulated in terms of the fundamentals, algorithmic developments, and ensuing application domains. In this study, we introduce two pivotal concepts: a principle of justifiable granularity and a method of optimal information allocation where information granularity is regarded as an important design asset. We show that these two concepts are relevant to various formal setups of information granularity and offer constructs supporting the design of information granules and their processing. A suite of applied studies is focused on knowledge management, in which case we identify several key categories of schemes present there.
In earlier days, most of the data carried on communication networks was textual data requiring limited bandwidth. With the rise of multimedia and network technologies, the bandwidth requirements of data have increased considerably. If a network link cannot meet the minimum bandwidth requirement of the data at any time, data transmission along that path becomes difficult, which leads to network congestion. This causes delays in data transmission and may also lead to packet drops in the network; the retransmission of these lost packets would aggravate the situation and jam the network. In this paper, we aim to provide a solution to the problem of network congestion in mobile ad hoc networks [1, 2] by designing a protocol that performs routing intelligently and minimizes the delay in data transmission. Our objective is to move the traffic away from the shortest path obtained by a suitable shortest-path algorithm to a less congested path, so as to minimize the number of packet drops during data transmission and to avoid unnecessary delay. For this we propose a protocol named Congestion Aware Selection Of Path With Efficient Routing (CASPER). Here, a router runs the shortest-path algorithm after pruning those links that violate a given set of constraints. The proposed protocol has been compared with two link-state protocols, namely OSPF [3, 4] and OLSR [5, 6, 7, 8]. The results show that our protocol performs better in terms of network throughput and transmission delay in the case of bulky data transmission.
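The prune-then-route step described here is straightforward to sketch with networkx; the topology, edge attributes, and congestion threshold below are invented, and CASPER's actual constraint set is richer than this single utilization check:

```python
import networkx as nx

# Hypothetical topology: edges carry a delay (routing weight) and a utilization.
G = nx.Graph()
G.add_edge('A', 'B', delay=1, util=0.9)
G.add_edge('B', 'D', delay=1, util=0.2)
G.add_edge('A', 'C', delay=2, util=0.3)
G.add_edge('C', 'D', delay=2, util=0.4)

MAX_UTIL = 0.8   # assumed congestion constraint

# Prune links violating the constraint, then run shortest path on what remains.
H = nx.Graph((u, v, d) for u, v, d in G.edges(data=True) if d['util'] <= MAX_UTIL)
print(nx.shortest_path(H, 'A', 'D', weight='delay'))   # ['A', 'C', 'D']
```

Without pruning, the shortest path would be A-B-D; the congested A-B link is avoided at the cost of a slightly longer route.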
Vehicular networks are a promising application of mobile ad hoc networks. In this paper, we introduce an efficient broadcast technique, called CB-S (Cell Broadcast for Streets), for vehicular networks with occlusions such as skyscrapers. In this environment, the road network is fragmented into cells such that nodes in a cell can communicate with any node within a two-cell distance. Each mobile node is equipped with a GPS (Global Positioning System) unit and a map of the cells. The cell map has information about the cells, including their identifiers and the coordinates of the upper-right and lower-left corners of each cell. CB-S has the following desirable property: broadcast of a message is performed by rebroadcasting it from every other cell in the terrain. This characteristic allows CB-S to achieve efficient performance. Our simulation results indicate that messages always reach all nodes in the wireless network. This perfect coverage is achieved with minimal overhead; that is, CB-S uses a small number of nodes to disseminate the data packets as quickly as probabilistically possible. This efficiency gives it the advantage of low delay. To show these benefits, we present simulation results comparing CB-S with four other broadcast techniques. In practice, CB-S can be used for information dissemination or to reduce the high cost of destination discovery in routing protocols. By also specifying the radius of the affected zone, CB-S is more efficient when broadcasting to a subset of the nodes is desirable.
Cryptographic hash functions reduce inputs of arbitrary or very large length to a short string of fixed length. All hash function designs start from a compression function with fixed-length inputs. The compression function itself is designed from scratch or derived from a block cipher or a permutation. The most common procedure to extend the domain of a compression function in order to obtain a hash function is a simple linear iteration; however, some variants use multiple iterations or a tree structure that allows for parallelism. This paper presents a survey of 17 extenders in the literature. It considers the natural question of whether these extenders preserve the security properties of the compression function, in particular collision resistance, second preimage resistance, preimage resistance, and the pseudo-random oracle property.
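The "simple linear iteration" is the classical Merkle-Damgård construction, sketched below with a toy compression function derived from SHA-256; the block size, IV, and padding constants are arbitrary choices for the illustration, not any standardized design:

```python
import hashlib

BLOCK = 32  # bytes per message block (toy parameter)

def compress(h, block):
    # Toy fixed-input-length compression function built from SHA-256.
    return hashlib.sha256(h + block).digest()

def md_hash(msg, iv=b'\x00' * 32):
    # Merkle-Damgard strengthening: append 0x80, zero-pad, append the length.
    msg += b'\x80' + b'\x00' * (-(len(msg) + 9) % BLOCK) + len(msg).to_bytes(8, 'big')
    h = iv
    for i in range(0, len(msg), BLOCK):
        h = compress(h, msg[i:i + BLOCK])   # plain linear iteration over blocks
    return h

print(md_hash(b'hello world').hex())
```

The survey's question is precisely whether iterations like this one (and its 16 sibling extenders) inherit collision resistance and the other properties from `compress`.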
This paper proposes a novel reversible data hiding scheme based on a Vector Quantization (VQ) codebook. The proposed scheme uses the principal component analysis (PCA) algorithm to sort the codebook and to find two similar codewords of an image block. According to the secret to be embedded and the difference between those two similar codewords, the original image block is transformed into a difference number table. Finally, this table is compressed by entropy coding and sent to the receiver. The experimental results demonstrate that the proposed scheme can achieve greater hiding capacity, about five bits per index, with an acceptable bit rate. At the receiver end, after the compressed code has been decoded, the image can be recovered to a VQ compressed image.
The interconnection of mobile devices in urban environments can open up many vistas for collaboration and content-based services. This requires setting up a network in an urban environment that not only provides the necessary services to users but also ensures that the network is secure and energy efficient. In this paper, we propose a secure, energy-efficient dynamic routing protocol for heterogeneous wireless sensor networks in urban environments. Every node makes a decision based on various parameters, such as longevity, distance, and battery power, which measure the node and link quality, to decide the next hop in the route. This ensures that the total load is distributed evenly while conserving the energy of battery-constrained nodes. The protocol also maintains a trusted population for each node through a Dynamic Trust Factor (DTF), which ensures secure communication in the environment by gradually isolating malicious nodes. The results obtained show that the proposed protocol, when compared with an energy-efficient protocol (MMBCR) and a widely accepted protocol (DSR), gives far better results in terms of energy efficiency. Similarly, it also outdoes a secure protocol (QDV) when it comes to detecting malicious nodes in the network.
The trend of Next Generation Networks' (NGN) evolution is toward providing multiple and multimedia services to users through ubiquitous networks. The aim of the IP Multimedia Subsystem (IMS) is to integrate mobile communication networks and computer networks. The IMS plays an important role in NGN services, which can be delivered over heterogeneous networks and different access technologies. IMS can be used to manage all service-related issues, such as Quality of Service (QoS), charging, access control, and user and services management. Nowadays, internet technology is changing with each passing day, and new technologies have new impacts on IMS. In this paper, we survey IMS and discuss the impacts of new technologies on IMS, such as P2P, SCIM, and Web Services, together with its security issues.
Due to the convergence of voice, data, and video, today's telecom operators face the complexity of service and network management in offering differentiated value-added services that meet customer expectations. Without the operational support of a well-developed Business Support System/Operations Support System (BSS/OSS), it is difficult to provide competitive services upon customer request in a timely and effective manner. In this paper, a suite of NGOSS-based Telecom OSS (TOSS) is developed to support the fulfillment and assurance operations of telecom services and IT services. Four OSS groups, TOSS-P (intelligent service provisioning), TOSS-N (integrated large-scale network management), TOSS-T (trouble handling and resolution), and TOSS-Q (end-to-end service quality management), are organized and integrated following the standard telecom operation processes (i.e., eTOM). We use IPTV and IP-VPN operation scenarios to show how these OSS groups work together to support daily business operations with the benefits of cost reduction and revenue acceleration.
By providing ubiquitous Internet connectivity, wireless networks offer more convenient ways for users to surf the Internet. However, wireless networks face more technological challenges than wired networks, such as bandwidth, security problems, and handoff latency; this paper proposes new technologies to solve these problems. First, a Security Access Gateway (SAG) is proposed to address the security issue. Mobile terminals are originally unable to perform heavy security calculations because of their low computing power. The SAG not only offers high computing power to handle the encryption demand of the SAG's domain, but also helps mobile terminals establish multiple secure tunnels to maintain a secure domain. Second, Robust Header Compression (RoHC) technology is adopted to increase the utilization of bandwidth. Instead of the Access Point (AP), an Access Gateway (AG) handles packet header compression and decompression on the wireless side; the AG's high computing power reduces the load on the AP, which in the original architecture had to handle a large number of header compression/decompression demands from mobile terminals. Finally, wireless networks must offer users "Mobility" and "Roaming," which can be achieved with Mobile IPv6 (MIPv6) technology. Nevertheless, such technology can cause latency, and how the security tunnel and header compression established before the handoff can be reused by mobile terminals after handoff is another great challenge. Thus, this paper proposes to solve these problems by using Early Binding Updates (EBU) together with the SAG to offer a complete mechanism with low latency, a low handoff calculation burden, and high security.
Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery. In this paper, an overview of some of the well-known methods in each of these categories is provided and some of the benefits and drawbacks of the schemes mentioned therein are examined. Furthermore, a discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has also been provided. This paper also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology.
With regard to ethical standards, the JIPS takes plagiarism very seriously and thoroughly checks all articles.
The JIPS defines research ethics as securing objectivity and accuracy in the execution of research and in the conclusion of its results, without any unintentional errors resulting from negligence or incorrect knowledge and without any intentional misconduct such as falsification or plagiarism. When an author submits a paper to the JIPS online submission and peer-review system, he/she should also upload a separate "author check list" file containing a statement that all of his/her research has been performed in accordance with ethical standards.
Among the JIPS editorial board members, there are four associate manuscript editors who support the JIPS by dealing with any ethical problems associated with the publication process and who give advice on how to handle cases of suspected research and publication misconduct. When the JIPS managing editor looks over submitted papers and checks that they are suitable for further processing,
the managing editor also routes them to the CrossCheck service powered by iThenticate. Based on the results provided by the CrossCheck service, the JIPS associate manuscript editors inform the JIPS editor-in-chief of any plagiarism that is detected in a paper.
Then, the JIPS editor-in-chief communicates such detection to the author(s) while rejecting the paper.
Since 2005, all papers published in the JIPS have been subjected to peer review and, upon acceptance, are immediately and permanently made available free of charge for everyone worldwide to read and download from the journal's homepage (http://www.jips-k.org) without any subscription fee or personal registration. All articles are Open Access, distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. The KIPS waives paper processing charges for submissions from international authors as well as society members. This waiver policy supports and encourages the publication of quality papers, making the journal an international forum for the exchange of different ideas and experiences.