The Journal of Information Processing Systems
(JIPS) is the official international journal of the Korea Information Processing Society.
As information processing systems are progressing at a rapid pace, the Korea Information Processing Society is committed to providing researchers and other professionals
with the academic information and resources they need to keep abreast of ongoing developments. The JIPS aims to be a premier source that enables researchers and professionals
all over the world to promote, share, and discuss all major research issues and developments in the field of information processing systems and other related fields.
ISSN: 1976-913X (Print), ISSN: 2092-805X (Online)
[April 23, 2019] We announced the 2nd JIPS Survey Paper Awards. Please refer here for details.
[Jan. 23, 2018] A call for papers for the JIPS Future Topic Track - Special Section, scheduled for 2019, has been posted. Please refer here for details.
[Nov. 16, 2018] The JIPS committee has decided on a new article processing charge (APC) policy, which applies to all papers published after January 1, 2019. For more information, click here.
Journal of Information Processing Systems, Vol. 15, No. 3, 2019
Blockchain and cryptocurrency have become essential components of communication networks in recent years. Through communication networking, we browse the Internet, make VoIP phone calls, hold video conferences, and check e-mail on our computers. A great deal of research is being conducted to address blockchain and cryptocurrency challenges in communication networking and to provide corresponding solutions. In this paper, a diverse set of novel research works, in terms of mechanisms, techniques, architectures, and frameworks, is presented to offer possible solutions to existing challenges in communication networking. These works involve thermal load capacity techniques, intelligent sensing mechanisms, secure cloud computing, communication algorithms for wearable healthcare systems, sentiment analysis, and optimized resource use.
With the sustained and rapid development of new energy sources, the demand for electric energy is increasing day by day. However, China's energy distribution is unbalanced, and the construction of transmission lines lags seriously behind the growth of generating capacity, so there is an urgent need to increase the utilization of transmission capacity. Transmission capacity is limited mainly by the maximum allowable operating temperature of the conductor. At present, the evaluation of transmission capacity mostly adopts the static thermal rating (STR) method under severe environmental assumptions. The dynamic thermal rating (DTR) technique can improve the utilization of transmission capacity to a certain extent. In this paper, the meteorological parameters affecting conductor temperature are analyzed with the IEEE standard thermal equivalent equation for overhead transmission lines, and the real load capacity of a 220 kV transmission line is calculated from 7 years of actual meteorological data in Weihai. Finally, the thermal load capacity of DTR relative to STR under a given confidence level is analyzed. By identifying the key parameters that affect the thermal rating and analyzing the relevant environmental parameters that affect conductor temperature, this paper provides a theoretical basis for wind power grid integration and grid intelligence. The results show that DTR can effectively exploit the thermal load potential of transmission lines, providing a theoretical basis for improving the absorptive capacity of the power grid.
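To make the rating idea concrete: IEEE Std 738-style thermal ratings balance conductor heat gain against heat loss at the temperature limit. The sketch below is a minimal illustration of that steady-state balance, not the paper's calculation; all numeric values are placeholders, not the Weihai data.

```python
import math

def ampacity(q_c: float, q_r: float, q_s: float, r_ac: float) -> float:
    """Steady-state heat balance: q_c + q_r = q_s + I^2 * R(T_c).

    q_c: convective heat loss (W/m), q_r: radiative heat loss (W/m),
    q_s: solar heat gain (W/m), r_ac: AC resistance at the maximum
    allowable conductor temperature (ohm/m). Returns the current (A)
    that holds the conductor exactly at its temperature limit.
    """
    return math.sqrt(max(q_c + q_r - q_s, 0.0) / r_ac)

# Illustrative values only: a mild-weather hour (DTR) vs. the
# conservative worst-case assumptions behind a static rating (STR).
print(ampacity(q_c=30.0, q_r=10.0, q_s=8.0, r_ac=8e-5))   # DTR for this hour
print(ampacity(q_c=12.0, q_r=10.0, q_s=15.0, r_ac=8e-5))  # STR-style worst case
```

Because convective cooling varies with wind and ambient temperature, the hourly DTR usually exceeds the fixed STR, which is the headroom the paper quantifies.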
This work develops a monitoring system for people with health concerns. A belt integrated with an on-body circuit and sensors measures a wearer's selected vital signals. Electrocardiogram sensors monitor heart condition, and an accelerometer assesses the level of physical activity. Sensed signals are transmitted to the circuit module through digital yarns and forwarded to a mobile device via Bluetooth. An interactive application installed on the mobile device processes the received signals and provides users with real-time feedback about their status. Persuasive functions are designed and implemented in the interactive application to encourage users' physical activity. Two signal processing algorithms are developed to analyze the data regarding heart and activity. A user study is conducted to evaluate the performance and usability of the proposed system.
HEVC is the High Efficiency Video Coding standard, which provides better coding efficiency than earlier video coding standards, but at the cost of drastically increased computational complexity. Thirty-five intra-prediction modes are defined in HEVC, compared with nine in H.264/AVC. This paper proposes a fast rough mode decision (RMD) algorithm that exploits the smoothness of the up-reference and left-reference pixels to decrease the computational complexity. A three-step search method is implemented in the RMD process. Experimental results compared with HM13.0 indicate that the proposed algorithm saves 39.7% of the encoding time, while the Bjontegaard delta bitrate (BDBR) increases only slightly, by 1.35%, and the Bjontegaard delta peak signal-to-noise ratio (BDPSNR) loss is negligible.
The blooming of social media has stimulated interest in sentiment analysis. Sentiment analysis aims to determine, from a specific piece of content, the overall attitude of its author toward a specific item, product, brand, or service. In sentiment analysis, the focus is on subjective sentences. Hence, in order to discover and extract subjective information from a given text, researchers have applied various methods from computational linguistics, natural language processing, and text analysis. The aim of this paper is to provide an in-depth, up-to-date study of sentiment analysis algorithms in order to familiarize readers with other work done on the subject. The paper focuses on the main tasks and applications of sentiment analysis. State-of-the-art algorithms, methodologies, and techniques are categorized and summarized to facilitate future research in this field.
This paper proposes a system that can detect data leakage patterns using a convolutional neural network, based on defined data-leaking behaviors. In this approach, a data leakage detection scenario is composed of the occurrence patterns of administrative security logs and the related patterns between security logs identified by association relationship analysis. The proposed system then detects whether data is leaked by applying the convolutional neural network to an insider malicious behavior graph. Since each graph is drawn according to the leakage detection scenario, the system can identify the criminal insider along with the source of the malicious behavior from the results of the convolutional neural network. The results of a performance experiment using a virtual scenario show that even if a new malicious pattern that has not been previously defined is input into the data leakage detection system, it is possible to determine whether the data has been leaked. In addition, compared with other data leakage detection systems, the proposed system detects data leakage more flexibly.
Cloud computing is the concept of providing information technology services over the Internet, such as software, hardware, networking, and storage. These services can be accessed anywhere at any time on a pay-per-use basis. However, securely storing data on servers is a challenging aspect of cloud computing. This paper utilizes cryptography and access control to ensure the confidentiality, integrity, and proper control of access to sensitive data. We propose a model that can protect data in cloud computing, designed around an enhanced RSA encryption algorithm and a role-based access control model combined with the extensible access control markup language (XACML) to facilitate security and govern data access. The model stores data in the cloud using these cryptographic concepts and mediates data access through the access control model, with minimal time and cost for encryption and decryption.
We propose approaches for improving the Bloom filter in terms of false positive probability and membership query speed. To reduce the false positive probability, we propose a special type of additional Bloom filter that handles the false positives produced by the original Bloom filter. Implementing the proposed approach for a routing table lookup, we show that it reduces the routing table lookup time by up to 28% compared to the original Bloom filter by handling most false positives within the fast memory. We also introduce an approach for improving the membership query speed. Taking a hash table-like approach while storing only values, the proposed approach achieves much faster membership queries than the original Bloom filter (e.g., 34 times faster with 10 subsets). Even compared to a hash table, our approach reduces the routing table lookup time by up to 58%.
The Event-B design pattern is an excellent way to quickly develop a formal model of a system. Researchers have proposed a number of Event-B design patterns, but they all lack formal behavioral semantics. This makes the analysis, verification, and simulation of the behavior of an Event-B model very difficult, especially for control-intensive systems. In this paper, we propose a novel method to transform the Event-B synchronous control flow design pattern into a labeled transition system (LTS) behavior model. We then map the design pattern instantiation process of Event-B to the instantiation process of the LTS model and obtain the LTS behavioral semantic model of the Event-B model of a multi-level complex control system. Finally, we verify linear temporal logic behavior properties on the LTS model. The experimental results show that, after the Event-B model is converted to an LTS model, the analysis and simulation of system behavior become easier and the verification of the system's behavioral properties becomes convenient.
In wearable healthcare systems, sensor devices can be deployed at places around the human body such as the stomach, back, arms, and legs. The sensors use tiny batteries with limited capacity, and old sensor batteries must be replaced with new ones. Because it is difficult to access sensor devices deployed on or in the human body, extending the lifetime of sensor devices is more practical than replacing their batteries. A transmission power control (TPC) algorithm is a representative technique for increasing the lifetime of sensor devices. Sensor devices using a TPC algorithm adjust their transmission power level (TPL) to reduce battery energy consumption. The TPC algorithm operates as a closed-loop mechanism between two parties: the sensor and sink devices. Most previous research considered only one side of this closed loop. If both the sensor and sink parts of the closed-loop mechanism are considered, sensor devices can reduce energy consumption further than in previous systems that consider only one side. In this paper, we propose a new approach that considers both the sensor and the sink as parts of a closed-loop mechanism for efficient energy management of sensor devices. Our proposed approach judges the current channel condition based on the values of various body sensors. If the current channel is not optimal, sensor devices maintain their current TPL without communicating, to save battery; otherwise, they search for an optimal TPL. To compare performance with other TPC algorithms, we implemented the algorithm and embedded it into sensor devices. Our experimental results show that the new algorithm outperforms other TPC algorithms such as linear, binary, hybrid, and ATPC.
Single-user spectrum sensing is susceptible to multipath effects, shadowing, hidden terminals, and other unfavorable factors, leading to misjudgment of the perceived results. In order to increase detection accuracy and reduce spectrum sensing cost, we propose an adaptive cooperative sensing strategy based on an estimated signal-to-noise ratio (SNR), which adaptively selects between sensing strategies during the local sensing phase. When the estimated SNR is higher than the selection threshold, an adaptive double-threshold energy detector (ED) is used; otherwise, a cyclostationary feature detector is applied. Because only the better-suited sensing strategy is executed in each period, detection accuracy is improved under low-SNR conditions at low complexity. Each local sensing node transmits its perceived result over the control channel to the fusion center (FC), which uses a voting rule to make the hard decision, so transmission bandwidth is effectively saved. Simulation results show that the proposed scheme can effectively improve the system detection probability and shorten the average sensing time, and that it offers better robustness without greatly increasing the cost of the sensing system.
With the rapid increase of information on the World Wide Web, finding useful information on the Internet has become a major problem. Recommendation systems help users make decisions in complex data areas where the amount of available data is large. Many methods have been proposed for recommender systems, and collaborative filtering is a popular and widely used one. However, collaborative filtering methods still suffer from problems, notably the cold-start problem. In this paper, we propose a movie recommendation system that uses social network analysis together with collaborative filtering to solve this problem. We use personal attributes of users, such as age, gender, and occupation, to build a relationship matrix between users, and this matrix is used to cluster users through community detection based on edge betweenness centrality. The system then suggests to new users the movies in which users in their group previously showed interest. We show, using mean absolute error, that the proposed method is very efficient.
A remote monitoring and warning system for dangerous chemicals is designed with the concept of a Cyber-Physical System (CPS) in this paper. The real-time perception, dynamic control, and information services for major hazardous chemicals are realized in this CPS system. The CPS system architecture, comprising the physical layer and the application layer, is designed in this paper. The terminal node mainly consists of field collectors, which complete the data acquisition from sensors and video in the physical layer, while the application layer makes the CPS system safer and more reliable for monitoring hazardous chemicals. The cloud application layer completes risk identification and the prediction of major hazard sources. Early intelligent warning of major dangerous chemicals is realized, and security risk images are generated, in the cloud application layer. With CPS technology, the remote networking of hazardous chemicals has been completed, and an online system for major hazard monitoring and accident warning is formed. Experiments on the terminal node prove that it can complete mass data collection and classification, and show that the CPS system is safe and effective. To verify feasibility, multi-risk warning based on the CPS is simulated, and the results show that the system solves the safety management problems of hazardous chemical enterprises.
The purpose of this study was to collect and analyze personal bio data and social network service (SNS) data, derive user preferences and user life patterns, and propose intuitive and precise user modeling. This study conducted eye tracking experiments on various smart devices to lay the groundwork for a recommendation system that considers the attributes of smart devices, and derived classification preferences by analyzing the eye tracking data together with the collected bio data and SNS data. In addition, this study combined and analyzed preferences over the common classifications of the two types of data, derived a final preference for each smart device, and, based on the user life pattern extracted from the final preference and the collected bio data (amount of activity, sleep), computed the similarity between users using the Pearson correlation coefficient. The derivation of preferences that considers the attributes of smart devices shows that users are influenced by the smart device they use. With user modeling based on user behavior patterns, eye tracking, and user preferences, this study aims to contribute to research on recommendation systems that precisely reflect user tendencies.
Supplier evaluation is of great significance in green supply chain management. Influenced by factors such as economic globalization and sustainable development, a holistic index framework is difficult to establish for the green supply chain. Furthermore, the initial index values of candidate suppliers are often characterized by uncertainty and incompleteness, and the index weights are variable. To solve these problems, an index framework is established after comprehensive consideration of the major factors. An adaptive-weight D-S theory model is then put forward, and a fuzzy-rough-sets-AHP method is proposed to solve for the adaptive weights in the index framework. A case study and a comparison with TOPSIS show that the adaptive-weight D-S theory model in this paper is feasible and effective.
Orthogonal frequency division multiplexing (OFDM) is a system that encodes data on multiple carriers instead of the traditional single carrier. This method improves spectral efficiency (optimal use of bandwidth) and lessens the effects of fading and intersymbol interference (ISI). In 1995, digital audio broadcasting (DAB) became the first standard to adopt OFDM; it was adopted for digital video broadcasting (DVB) in 1997 and is currently used in the WiMAX and LTE standards. In this project, a Verilog design is employed to implement an OFDM transmitter (DAC block) and receiver (FFT and ADC blocks). OFDM generally uses the FFT and IFFT for demodulation and modulation. In this paper, a 16-point FFT using the radix-2 decimation-in-frequency (DIF) algorithm and the direct summation method is analyzed, along with the ADC and DAC used in OFDM to convert the signal between analog and digital form. All designs are simulated in Verilog on the ModelSim simulator, and the results generated by the FFT block after Verilog simulation are verified against MATLAB.
Traditional classification methods mostly assume that the class distribution of the data is balanced, whereas imbalanced data is widely found in the real world, so solving the problem of classification with imbalanced data is important. In the Mahalanobis-Taguchi system (MTS) algorithm, the data classification model is constructed from a reference space and measurement reference scale derived from a single normal group, which makes it suitable for handling imbalanced data. In this paper, an improved method, MTS-CBPSO, is constructed by introducing chaotic mapping and the binary particle swarm optimization algorithm in place of the orthogonal array and signal-to-noise ratio (SNR) for selecting the valid variables, with G-means, F-measure, and dimensionality reduction as the classification optimization targets. The proposed method is applied to financial distress prediction for Chinese listed companies. Compared with traditional MTS and common classification methods such as SVM, C4.5, and k-NN, the results show that MTS-CBPSO achieves better prediction accuracy and dimensionality reduction.
In this study, we applied the long short-term memory (LSTM) model to classify cryptocurrency price time series. We collected historical cryptocurrency price time series data and preprocessed them to make them clean for use as training and target data. After such preprocessing, the price time series data were systematically encoded into a three-dimensional price tensor representing the past price changes of cryptocurrencies. We also present our LSTM model structure as well as how to use such a price tensor as input data for the LSTM model. In particular, a grid search-based k-fold cross-validation technique was applied to find the most suitable LSTM model parameters. Lastly, through a comparison of F1-score values, our study shows that the LSTM model outperforms the gradient boosting (GB) model, a general machine learning model known to have relatively good prediction performance, for time series classification of the cryptocurrency price trend. With the LSTM model, we obtained a performance improvement of about 7% compared with the GB model.
In this paper, we present a certificate management platform, using blockchain, for performance assessment during recruitment. Applicants are awarded certificates according to predetermined levels of progress based on their performance. All certificates are stored on a recruitment management platform that serves as an environment for storing and presenting all awarded certificates. The hashed information of every certificate is stored in the blockchain and, once stored, cannot be tampered with. Therefore, anyone can check the validity of the certificates using this blockchain. Our proposed platform will be useful for recruitment and application management, career management, and personal history maintenance.
The 2nd Journal of Information Processing Systems Awards
"Block-VN: A Distributed Blockchain Based Vehicular Network Architecture in Smart City"
Pradip Kumar Sharma, Seo Yeon Moon and Jong Hyuk Park (Seoul National University of Science and Technology, Korea)
Publication (Corresponding Author)
Chengyou Wang (Shandong University, China)
Quorum-based algorithms are widely used for solving several problems in mobile ad hoc networks (MANETs) and wireless sensor networks (WSNs). Several quorum-based protocols have been proposed for multi-hop ad hoc networks, each with its pros and cons. The quorum-based protocol QEC (or QPS) was the first study among the asynchronous sleep scheduling protocols. At the time, most of the proposed protocols were non-adaptive; nowadays, however, adaptive quorum-based protocols are gaining increasing attention, because protocols are needed that can change their quorum size adaptively with network conditions. In this paper, we first introduce the most popular quorum systems and explain quorum system properties and performance criteria. Then, we present a comparative and comprehensive survey of non-adaptive and adaptive quorum-based protocols, which are subsequently discussed in depth. We also compare different quorum systems in terms of the expected quorum overlap size (EQOS) and active ratio. Finally, we summarize the pros and cons of current adaptive and non-adaptive quorum-based protocols.
The significant advances in information and communication technologies are changing the way information is accessed. The Internet is a very important source of information, and it influences the development of other media. Furthermore, the growth of digital content is a big challenge for academic digital libraries, so similar tools can be applied in this scope to provide users with access to information. Given its importance, we have reviewed and analyzed several proposals that improve the processes of disseminating information in university digital libraries and that promote access to information of interest. These proposals manage to adapt a user's access to information according to his or her needs and preferences. As seen in the literature, one of the techniques with the best results is the application of recommender systems: tools whose objective is to evaluate and filter the vast amount of digital information accessible online in order to help users in their processes of accessing information. In particular, we focus on the analysis of fuzzy linguistic recommender systems (i.e., recommender systems that use fuzzy linguistic modeling tools to manage the user's preferences and the uncertainty of the system in a qualitative way). Thus, in this work, we analyze some proposals based on fuzzy linguistic recommender systems that help researchers, students, and teachers access resources of interest and thus improve and complement the services provided by academic digital libraries.
Associative and bidirectional associative memories are examples of associative structures studied intensively in the literature. The underlying idea is to realize associative mapping so that the recall processes (one-directional and bidirectional) are realized with minimal recall errors. Associative and fuzzy associative memories have been studied in numerous areas, yielding efficient applications in image recall and enhancement and in fuzzy controllers, which can be regarded as one-directional associative memories. In this study, we revisit and augment the concept of associative memories by offering some new design insights, where the corresponding mappings are realized on the basis of a related collection of landmarks (prototypes) over which an associative mapping becomes spanned. In light of the bidirectional character of the mappings, we have developed an augmentation of existing fuzzy clustering (fuzzy c-means, FCM) in the form of so-called collaborative fuzzy clustering, in which the interaction in the formation of prototypes is optimized so that the bidirectional recall errors can be minimized. Furthermore, we generalize the mapping into its granular version, in which the numeric prototypes formed through the clustering process are made granular so that the quality of the recall can be quantified. We propose several scenarios in which the allocation of information granularity is aimed at the optimization of the characteristics of the recalled results (information granules), quantified in terms of coverage and specificity. We also introduce various architectural augmentations of the associative structures.
Artificial intelligence, especially deep learning technology, is penetrating the majority of research areas, including the field of bioinformatics. However, deep learning has some limitations, such as the complexity of parameter tuning, architecture design, and so forth. In this study, we analyze these issues and challenges with regard to its applications in bioinformatics, particularly genomic analysis and medical image analytics, and give the corresponding approaches and solutions. Although these solutions are mostly rules of thumb, they can effectively handle the issues connected with training learning machines. As such, we explore the tendency of deep learning technology by examining several directions, such as automation, scalability, individuality, mobility, integration, and intelligence warehousing.
This survey paper explores the application of multimodal feedback in automated systems for motor learning. We review the findings of recent studies in this field, using rehabilitation and various motor training scenarios as context. We discuss popular feedback delivery and sensing mechanisms for motion capture and processing in terms of their requirements, benefits, and limitations. The selection of modalities is presented by reviewing best-practice approaches for each modality relative to motor task complexity, with example implementations from recent work. We summarize the advantages and disadvantages of several approaches for integrating modalities in terms of fusion and frequency of feedback during motor tasks. Finally, we review the limitations of perceptual bandwidth and provide an evaluation of the information transfer for each modality.
The recent advent of increasingly affordable and powerful 3D scanning devices capable of capturing high resolution range data about real-world objects and environments has fueled research into effective 3D surface reconstruction techniques for rendering the raw point cloud data produced by many of these devices into a form that would make it usable in a variety of application domains. This paper, therefore, provides an overview of the existing literature on surface reconstruction from 3D point clouds. It explains some of the basic surface reconstruction concepts, describes the various factors used to evaluate surface reconstruction methods, highlights some commonly encountered issues in dealing with the raw 3D point cloud data and delineates the tradeoffs between data resolution/accuracy and processing speed. It also categorizes the various techniques for this task and briefly analyzes their empirical evaluation results demarcating their advantages and disadvantages. The paper concludes with a cross-comparison of methods which have been evaluated on the same benchmark data sets along with a discussion of the overall trends reported in the literature. The objective is to provide an overview of the state of the art on surface reconstruction from point cloud data in order to facilitate and inspire further research in this area.
Gene identification is at the center of genomic studies. Although the first phase of the Encyclopedia of DNA Elements (ENCODE) project has been claimed to be complete, the annotation of functional elements is far from being so. Computational methods for gene identification continue to play important roles in this area and other relevant issues. So far, a great deal of work has been performed in this area, and a plethora of computational methods and avenues have been developed. Many review papers have summarized these methods and other related work, but most of them focus on the methodologies from a particular aspect or perspective. Different from these existing bodies of research, this paper aims to comprehensively summarize the mainstream computational methods for gene identification and to provide a short but concise technical reference for future studies. Moreover, this review sheds light on the emerging trends and cutting-edge techniques that are believed to be capable of leading research in this field in the future.
In this paper, we present research results on computing-intensive applications using modern high performance architectures, from the perspective of high computational needs. Computing-intensive applications are an important family of applications in the distributed computing domain and have been studied using different distributed computing paradigms and infrastructures. Such applications are distinguished by their demanding need for CPU computing, independently of the amount of data associated with the problem instance. Among computing-intensive applications are simulation-based applications, which aim to maximize system resources for processing large computations. In this work, we consider an application that simulates scheduling and resource allocation in a Grid computing system using genetic algorithms. In such an application, a rather large number of simulations is needed to extract meaningful statistical results about the behavior of the simulated system. We study the performance of Oracle Grid Engine for this application running on a cluster of high computing capacity. Several scenarios were generated to measure the response time and queuing time under different workloads and numbers of nodes in the cluster.
The accuracy of training-based activity recognition depends on the training procedure and the extent to which the training dataset comprehensively represents the activity and its varieties. Additionally, training incurs substantial cost and effort in the process of collecting training data. To address these limitations, we have developed a training-free activity recognition approach based on a fuzzy logic algorithm that utilizes a generic activity model and associated activity semantic knowledge. The approach is validated through experimentation with real activity datasets. Results show that the fuzzy logic based algorithms exhibit comparable or better accuracy than other training-based approaches.
Recent technological advances provide the opportunity to use large amounts of multimedia data from a multitude of sensors with different modalities (e.g., video, text) for the detection and characterization of criminal activity. Their integration can compensate for sensor and modality deficiencies by using data from other available sensors and modalities. However, building such an integrated system at the scale of neighborhood and cities is challenging due to the large amount of data to be considered and the need to ensure a short response time to potential criminal activity. In this paper, we present a system that enables multi-modal data collection at scale and automates the detection of events of interest for the surveillance and reconnaissance of criminal activity. The proposed system showcases novel analytical tools that fuse multimedia data streams to automatically detect and identify specific criminal events and activities. More specifically, the system detects and analyzes series of incidents (an incident is an occurrence or artifact relevant to a criminal activity extracted from a single media stream) in the spatiotemporal domain to extract events (actual instances of criminal events) while cross-referencing multimodal media streams and incidents in time and space to provide a comprehensive view to a human operator while avoiding information overload. We present several case studies that demonstrate how the proposed system can provide law enforcement personnel with forensic and real time tools to identify and track potential criminal activity.
The confinement problem was first noted four decades ago. Since then, a great deal of effort has been spent on defining and mitigating the problem. The evolution of technologies from traditional operating systems to mobile and cloud computing brings about new security challenges, so it is perhaps timely to review the work that has been done. We discuss the foundational principles from classical works, as well as the efforts toward solving the confinement problem in three domains: operating systems, mobile computing, and cloud computing. While common issues exist across all three domains, unique challenges arise for each of them, which we discuss.
Since a social network is by definition diverse, the problem of estimating the preferences of its users is becoming increasingly essential for personalized applications, which range from service recommender systems to the targeted advertising of services. However, unlike traditional estimation problems where the underlying target distribution is stationary, estimating a user's interests typically involves non-stationary distributions. The consequent time-varying nature of the distribution to be tracked imposes stringent constraints on the "unlearning" capabilities of the estimator used. Therefore, resorting to strong estimators that converge with a probability of 1 is inefficient, since they rely on the assumption that the distribution of the user's preferences is stationary. In this vein, we propose to use a family of stochastic-learning based weak estimators for learning and tracking a user's time-varying interests. Experimental results demonstrate that our proposed paradigm outperforms some of the traditional legacy approaches that represent the state-of-the-art technology.
The most important criterion for achieving maximum performance in a wireless mesh network (WMN) is to limit the interference within the network. For this purpose, especially in a multi-radio network, the best option is to use non-overlapping channels among different radios within the same interference range. Previous works that considered non-overlapping channels in IEEE 802.11a as the basis for performance optimization assumed the link quality across all channels to be uniform. In this paper, we present a measurement-based study of link quality across all channels in an IEEE 802.11a-based indoor WMN testbed. Our results show that the generalized assumption of uniform performance across all channels does not hold in practice for an indoor environment, and that signal quality depends on the geometry around the mesh routers.
This paper describes different aspects of a typical RFID implementation. Section 1 provides a brief overview of the concept of automatic identification and compares the use of different technologies, while Section 2 describes the basic components of a typical RFID system. Sections 3 and 4 deal with the detailed specifications of RFID transponders and RFID interrogators, respectively. Section 5 highlights different RFID standards and protocols, and Section 6 enumerates the wide variety of applications where RFID systems are known to have made a positive impact. Section 7 deals with privacy issues concerning the use of RFID, and Section 8 describes common RFID system vulnerabilities. Section 9 covers a variety of RFID security issues, followed by a detailed listing of countermeasures and precautions in Section 10.
Granular computing has emerged as a unified and coherent framework for the design, processing, and interpretation of information granules. Information granules are formalized within various frameworks such as sets (interval mathematics), fuzzy sets, rough sets, shadowed sets, and probabilities (probability density functions), to name several of the most visible approaches. In spite of the apparent diversity of the existing formalisms, there are some underlying commonalities articulated in terms of the fundamentals, algorithmic developments, and ensuing application domains. In this study, we introduce two pivotal concepts: the principle of justifiable granularity and a method of optimal information allocation in which information granularity is regarded as an important design asset. We show that these two concepts are relevant to various formal setups of information granularity and offer constructs supporting the design of information granules and their processing. A suite of applied studies focuses on knowledge management, for which we identify several key categories of schemes.
In earlier days, most of the data carried on communication networks was textual data requiring limited bandwidth. With the rise of multimedia and network technologies, the bandwidth requirements of data have increased considerably. If a network link at any time cannot meet the minimum bandwidth requirement of the data, transmission along that path becomes difficult, which leads to network congestion. This causes delays in data transmission and may also lead to packet drops in the network; the retransmission of these lost packets aggravates the situation and jams the network. In this paper, we aim to provide a solution to the problem of network congestion in mobile ad hoc networks [1, 2] by designing a protocol that performs routing intelligently and minimizes the delay in data transmission. Our objective is to move the traffic away from the shortest path, obtained by a suitable shortest path calculation algorithm, to a less congested path so as to minimize the number of packet drops during data transmission and to avoid unnecessary delay. For this we propose a protocol named Congestion-Aware Selection of Path with Efficient Routing (CASPER). Here, a router runs the shortest path algorithm after pruning those links that violate a given set of constraints. The proposed protocol is compared with two link-state protocols, namely OSPF [3, 4] and OLSR [5, 6, 7, 8]. The results show that our protocol performs better in terms of network throughput and transmission delay in the case of bulky data transmission.
Vehicular networks are a promising application of mobile ad hoc networks. In this paper, we introduce an efficient broadcast technique, called CB-S (Cell Broadcast for Streets), for vehicular networks with occlusions such as skyscrapers. In this environment, the road network is fragmented into cells such that nodes in a cell can communicate with any node within a two-cell distance. Each mobile node is equipped with a GPS (Global Positioning System) unit and a map of the cells. The cell map has information about the cells, including their identifiers and the coordinates of the upper-right and lower-left corners of each cell. CB-S has the following desirable property: broadcast of a message is performed by rebroadcasting the message from every other cell in the terrain. This characteristic allows CB-S to achieve efficient performance. Our simulation results indicate that messages always reach all nodes in the wireless network. This perfect coverage is achieved with minimal overhead; that is, CB-S uses a small number of nodes to disseminate the data packets as quickly as probabilistically possible. This efficiency gives it the advantage of low delay. To show these benefits, we give simulation results comparing CB-S with four other broadcast techniques. In practice, CB-S can be used for information dissemination or to reduce the high cost of destination discovery in routing protocols. By also specifying the radius of the affected zone, CB-S is more efficient when broadcast to a subset of the nodes is desirable.
Cryptographic hash functions reduce inputs of arbitrary or very large length to a short string of fixed length. All hash function designs start from a compression function with fixed-length inputs. The compression function itself is designed from scratch or derived from a block cipher or a permutation. The most common procedure to extend the domain of a compression function in order to obtain a hash function is a simple linear iteration; however, some variants use multiple iterations or a tree structure that allows for parallelism. This paper presents a survey of 17 extenders in the literature. It considers the natural question of whether these preserve the security properties of the compression function, in particular collision resistance, second preimage resistance, preimage resistance, and the pseudo-random oracle property.
This paper proposes a novel reversible data hiding scheme based on a vector quantization (VQ) codebook. The proposed scheme uses the principal component analysis (PCA) algorithm to sort the codebook and to find two similar codewords for an image block. According to the secret to be embedded and the difference between those two similar codewords, the original image block is transformed into a difference number table. Finally, this table is compressed by entropy coding and sent to the receiver. The experimental results demonstrate that the proposed scheme achieves greater hiding capacity, about five bits per index, with an acceptable bit rate. At the receiver end, after the compressed code has been decoded, the image can be recovered to a VQ compressed image.
The interconnection of mobile devices in urban environments can open up many vistas for collaboration and content-based services. This requires setting up a network in an urban environment that not only provides the necessary services to the user but also ensures that the network is secure and energy efficient. In this paper, we propose a secure, energy-efficient dynamic routing protocol for heterogeneous wireless sensor networks in urban environments. Every node makes a decision based on various parameters, such as longevity, distance, and battery power, which measure node and link quality, to decide the next hop in the route. This ensures that the total load is distributed evenly while conserving the energy of battery-constrained nodes. The protocol also maintains a trusted population for each node through a Dynamic Trust Factor (DTF), which ensures secure communication in the environment by gradually isolating malicious nodes. The results obtained show that the proposed protocol, when compared with another energy-efficient protocol (MMBCR) and a widely accepted protocol (DSR), gives far better results in terms of energy efficiency. Similarly, it also outdoes a secure protocol (QDV) when it comes to detecting malicious nodes in the network.
The trend of Next Generation Network (NGN) evolution is towards providing multiple multimedia services to users through ubiquitous networks. The aim of the IP Multimedia Subsystem (IMS) is to integrate mobile communication networks and computer networks. The IMS plays an important role in NGN services, which can be delivered over heterogeneous networks and different access technologies. IMS can be used to manage all service-related issues, such as quality of service (QoS), charging, access control, and user and service management. Nowadays, Internet technology is changing with each passing day, and new technologies bring new impacts to IMS. In this paper, we survey IMS and discuss the impacts of new technologies on IMS, such as P2P, SCIM, and Web Services, along with its security issues.
Due to the convergence of voice, data, and video, today's telecom operators face complex service and network management challenges in offering differentiated value-added services that meet customer expectations. Without the operations support of a well-developed Business Support System/Operations Support System (BSS/OSS), it is difficult to provide competitive services in a timely and effective manner upon customer request. In this paper, a suite of NGOSS-based Telecom OSS (TOSS) is developed to support the fulfillment and assurance operations of telecom services and IT services. Four OSS groups, TOSS-P (intelligent service provisioning), TOSS-N (integrated large-scale network management), TOSS-T (trouble handling and resolution), and TOSS-Q (end-to-end service quality management), are organized and integrated following the standard telecom operation processes (i.e., eTOM). We use IPTV and IP-VPN operation scenarios to show how these OSS groups work together to support daily business operations with the benefits of cost reduction and revenue acceleration.
By providing ubiquitous Internet connectivity, wireless networks offer more convenient ways for users to surf the Internet. However, wireless networks face more technological challenges than wired networks, such as bandwidth, security problems, and handoff latency. This paper therefore proposes new technologies to solve these problems. First, a Security Access Gateway (SAG) is proposed to address the security issue. Mobile terminals cannot perform heavy security calculations because of their low computing power; the SAG not only offers the computing power to meet the encryption demands of its domain, but also helps mobile terminals establish multiple security tunnels to maintain a secure domain. Second, Robust Header Compression (RoHC) technology is adopted to increase the utilization of bandwidth. Instead of the Access Point (AP), an Access Gateway (AG) handles packet header compression and decompression from the wireless end; the AG's greater computing power reduces the load on the AP, which in the original architecture had to handle a large number of compression/decompression demands from mobile terminals. Finally, wireless networks must offer users mobility and roaming, which can be achieved with Mobile IPv6 (MIPv6) technology. Nevertheless, such technology can introduce latency, and how the security tunnel and header compression established before a handoff can be reused by mobile terminals after the handoff is another great challenge. Thus, this paper proposes to solve these problems by using Early Binding Updates (EBU) together with the SAG to offer a complete mechanism with low latency, low handoff calculation overhead, and high security.
Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery. In this paper, an overview of some of the well-known methods in each of these categories is provided and some of the benefits and drawbacks of the schemes mentioned therein are examined. Furthermore, a discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has also been provided. This paper also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology.
With regard to ethical standards, the JIPS takes plagiarism very seriously and thoroughly checks all articles.
The JIPS defines research ethics as securing objectivity and accuracy in the execution of research and the conclusion of results without any unintentional errors resulting from negligence or incorrect knowledge, etc.
and without any intentional misconduct such as falsification, plagiarism, etc. When an author submits a paper to the JIPS online submission and peer-review system,
he/she should also upload the separate file "author check list" which contains a statement that all his/her research has been performed in accordance with ethical standards.
Among the JIPS editorial board members, there are four associate manuscript editors who support the JIPS by dealing with any ethical problems associated with the publication process
and give advice on how to handle cases of suspected research and publication misconduct. When the JIPS managing editor looks over submitted papers and checks that they are suitable for further processing,
the managing editor also routes them to the CrossCheck service provided by iThenticate. Based on the results provided by the CrossCheck service, the JIPS associate manuscript editors inform the JIPS editor-in-chief of any plagiarism that is detected in a paper.
Then, the JIPS editor-in-chief communicates such detection to the author(s) while rejecting the paper.
Since 2005, all papers published in the JIPS have been subject to peer review and, upon acceptance, are immediately made
permanently available free of charge for everyone worldwide to read and download from the journal’s homepage (http://www.jips-k.org)
without any subscription fee or personal registration. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. The KIPS waives paper processing charges for submissions from international authors as well as society members. This waiver policy supports and encourages the publication of quality papers, making the journal an international forum for the exchange of different ideas and experiences.