The Journal of Information Processing Systems
(JIPS) is the official international journal of the Korea Information Processing Society.
As information processing systems are progressing at a rapid pace, the Korea Information Processing Society is committed to providing researchers and other professionals
with the academic information and resources they need to keep abreast of ongoing developments. The JIPS aims to be a premier source that enables researchers and professionals
all over the world to promote, share, and discuss all major research issues and developments in the field of information processing systems and other related fields.
ISSN: 1976-913X (Print), ISSN: 2092-805X (Online)
[April 23, 2019] We have announced the 2nd JIPS Survey Paper Awards. Please see here for details.
[Jan. 23, 2018] The call for papers for the JIPS Future Topic Track (Special Section), scheduled for 2019, has been posted. Please see here for details.
[Nov. 16, 2018] The JIPS committee has decided on a new article processing charge (APC) policy, which applies to all papers published after January 1, 2019. For more information, click here.
Journal of Information Processing Systems, Vol. 15, No. 4, 2019
Smart systems and services aim to support growing urban populations and their prospects of virtual-real social behaviors, gig economies, factory automation, knowledge-based workforces, integrated societies, modern living, and much more. To satisfy these objectives, smart systems and services must comprise a complex set of features such as security, ease of use and user friendliness, manageability, scalability, adaptivity, intelligent behavior, and personalization. Recently, artificial intelligence (AI) has emerged as a data-driven technology that provides efficient knowledge representation and semantic modeling and can support the cognitive behavior of a system. In this paper, an integration of AI with smart systems and services is presented to mitigate the existing challenges. Several novel research works, in terms of frameworks, architectures, paradigms, and algorithms, are discussed as possible solutions to the existing challenges in AI-based smart systems and services. These works involve efficient shape image retrieval, speech signal processing, dynamic thermal rating, advanced persistent threat tactics, user authentication, and so on.
Dynamic thermal rating technology can effectively improve the thermal load capacity of transmission lines.
However, its availability is limited by the quantity and high cost of the hardware facilities. This paper proposes
a new dynamic thermal rating technology based on global/regional assimilation and prediction system
(GRAPES) and geographic information system (GIS). The paper also explores a method of obtaining meteorological data at any point along the transmission line by using GRAPES and GIS, and provides a strategy for extracting and decoding the meteorological data. The accuracy of the numerical weather prediction is verified from the perspectives of time and space, and a 750-kV transmission line in Shaanxi Province is analyzed as an example. The results indicate that dynamic thermal rating based on GRAPES and GIS can fully exploit the power transmission potential of a line without additional hardware cost, saving substantial investment.
Shape description is an important and fundamental issue in content-based image retrieval (CBIR), and a
number of shape description methods have been reported in the literature. For shape description, both global
information and local contour variations play important roles. In this paper, a new included-angular ternary pattern (IATP) based shape descriptor is proposed for shape image retrieval. For each point on the shape contour, the IATP is derived from its neighbor points, and it has good properties for shape description: it is intrinsically invariant to rotation, translation, and scaling. To enhance the description capability, a multiscale IATP histogram is presented to describe both the local and global information of a shape. The multiscale IATP histogram is then combined with the included-angular histogram for efficient shape retrieval. In the matching stage, the cosine distance is used to measure the similarity of shape features. Image retrieval experiments are conducted on the standard MPEG-7 shape database and the Swedish leaf database, and the retrieval performance of the proposed method is compared with that of other shape descriptors using standard evaluation methods. The experimental results indicate that the proposed method achieves higher precision at the same recall value than the other description methods.
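The abstract does not spell out the exact IATP construction, but the two ingredients it names, an included angle computed at each contour point from its neighbors and cosine-distance matching of histograms, can be sketched as below. The function names, the neighbor offset `k`, and the histogram binning are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def included_angles(contour, k=5):
    """Included angle at each contour point, formed with its k-th
    neighbors on either side (contour: (N, 2) array of points).
    The angle is invariant to rotation, translation, and scaling."""
    c = np.asarray(contour, float)
    prev, nxt = np.roll(c, k, axis=0), np.roll(c, -k, axis=0)
    v1, v2 = prev - c, nxt - c
    cosang = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def cosine_similarity(h1, h2):
    """Cosine similarity between two feature histograms (1 = identical)."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    denom = np.linalg.norm(h1) * np.linalg.norm(h2)
    return float(h1 @ h2 / denom) if denom else 0.0
```

For a circle sampled at N points, the included angle at every point comes out to pi minus k times the angular step, which is a quick sanity check on the geometry.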
To address the filtering delay problem of the least mean square (LMS) adaptive filter noise reduction algorithm and the musical noise problem of the spectral subtraction algorithm in speech signal processing, we combine the two algorithms and propose a novel noise reduction method that performs on par with or better than state-of-the-art methods. We first use the LMS algorithm to reduce the average intensity of the noise, and then apply spectral subtraction to reduce the remaining noise. Experiments show that applying spectral subtraction after the LMS adaptive filter overcomes the shortcomings of both algorithms. The proposed method also increases the signal-to-noise ratio of the original speech data and improves the final noise reduction performance.
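As a rough illustration of the two-stage pipeline (LMS filtering followed by spectral subtraction), the sketch below assumes a separate reference noise channel for the LMS stage and fixed frame parameters for the subtraction stage; the function names and parameter values are illustrative choices, not the paper's.

```python
import numpy as np

def lms_filter(noisy, reference, order=8, mu=0.01):
    """LMS adaptive filter: learns to predict the noise from a
    reference noise channel and subtracts the prediction."""
    w = np.zeros(order)
    out = np.zeros(len(noisy))
    for n in range(order - 1, len(noisy)):
        x = reference[n - order + 1:n + 1][::-1]  # current and past samples
        e = noisy[n] - w @ x                      # error = cleaned sample
        w += mu * e * x                           # stochastic-gradient update
        out[n] = e
    return out

def spectral_subtraction(signal, noise_frames=4, frame=256):
    """Frame-wise magnitude spectral subtraction; the noise spectrum is
    estimated from the first few (assumed speech-free) frames."""
    n_frames = len(signal) // frame
    frames = signal[:n_frames * frame].reshape(n_frames, frame)
    spec = np.fft.rfft(frames, axis=1)
    noise_mag = np.abs(spec[:noise_frames]).mean(axis=0)
    mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # floor at zero
    cleaned = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), axis=1)
    return cleaned.reshape(-1)
```

Running the LMS stage first lowers the broadband noise floor, so the subsequent subtraction needs a smaller noise estimate and produces less musical noise, which matches the motivation given in the abstract.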
The smart city is one of the most promising, prominent, and challenging applications of the Internet of Things
(IoT). Smart cities rely on everything being connected to everything else, which in turn depends heavily on technology. Technology literacy is essential to transform a city into a smart, connected, sustainable, and resilient city where
information is not only available but can also be found. The smart city vision combines emerging technologies
such as edge computing, blockchain, artificial intelligence, etc. to create a sustainable ecosystem by dramatically
reducing latency, bandwidth usage, and power consumption of smart devices running various applications. In
this research, we present a comprehensive survey of emerging technologies for a sustainable smart city network.
We discuss the requirements and challenges for a sustainable network and the role of heterogeneous integrated
technologies in providing smart city solutions. We also discuss different network architectures from a security
perspective to create an ecosystem. Finally, we discuss the open issues and challenges of the smart city network
and provide suitable recommendations to resolve them.
Due to variations in viewpoint, illumination, personal gait, and background, person re-identification across cameras has been a challenging task in video surveillance. To address this problem, a novel method called Joint Bayesian across different cameras for person re-identification (JBR) is proposed. Motivated by the superior measurement ability of Joint Bayesian, a set of Joint Bayesian matrices is obtained by learning with different camera pairs. With the global Joint Bayesian matrix, the proposed method combines the characteristics of multi-camera shooting and person re-identification, and can improve the calculation precision of the similarity between two individuals by learning the transition between two cameras. The proposed method is evaluated on two large-scale re-ID datasets, Market-1501 and DukeMTMC-reID. The rank-1 accuracy increases by about 3% and 4%, and the mean average precision (mAP) improves by about 1% and 4%, respectively.
Internet of Things (IoT) is the paradigm of network of Internet-connected things as objects that constantly
sense the physical world and share the data for further processing. At the core of IoT lies the early technology
of radio frequency identification (RFID), which provides accurate location tracking of real-world objects. With
its small size and convenience, RFID tags can be attached to everyday items such as books, clothes, furniture
and the like as well as to animals, plants, and even humans. This phenomenon is the beginning of new
applications and services for industry and the consumer market. IoT is regarded as a fourth industrial revolution because of its massive coverage of services around the world, from smart homes to artificial intelligence-enabled smart driving cars, Internet-enabled medical equipment, and more. It is estimated that tens of billions of IoT devices will be deployed and operating around the world by 2020. Despite these growing numbers, however, IoT has security vulnerabilities that must be addressed appropriately to avoid causing damage in the future; accordingly, we mention some fields of study as future topics at the end of the survey.
Consequently, in this comprehensive survey of IoT, we will cover the architecture of IoT with various layered
models, security characteristics, potential applications, and related supporting technologies of IoT such as 5G,
MEC, cloud, WSN, etc., including the economic perspective of IoT and its future directions.
Estimation of accurate blood volume flow in ultrasound Doppler blood flow spectrograms is extremely
important for clinical diagnostic purposes. Blood volume flow measurements require the assessment of both
the velocity distribution and the cross-sectional area of the vessel. Unfortunately, the existing volume flow
estimation algorithms by ultrasound lack the velocity space distribution information in cross-sections of a
vessel and suffer from low accuracy and poor stability. In this paper, a new robust ultrasound volume flow estimation method based on multigate (RMG) is proposed, in which the multigate technology provides detailed information on the local velocity distribution. In this method, an accurate double iterative flow velocity estimation algorithm (DIV) is used to estimate the mean velocity, and it has been tested on in vivo data from the carotid artery. The results from experiments indicate a mean standard deviation of less than 6% in flow velocities
when estimated for a range of SNR levels. The RMG method is validated in a custom-designed experimental
setup, Doppler phantom and imitation blood flow control system. In vitro experimental results show that the
mean error of the RMG algorithm is 4.81%. Low errors in blood volume flow estimation make the prospect of
using the RMG algorithm for real-time blood volume flow estimation possible.
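The basic relation the method estimates, volume flow as mean velocity times vessel cross-sectional area, can be illustrated in its simplest form below. This assumes a circular cross-section and treats the multigate samples as a plain average; the actual RMG/DIV algorithms estimate the full velocity distribution and are far more involved.

```python
import numpy as np

def volume_flow(gate_velocities, diameter):
    """Blood volume flow (m^3/s) from velocity samples across the
    vessel: mean velocity times the circular cross-sectional area.
    gate_velocities: velocities (m/s) at the range gates; diameter in m."""
    v_mean = float(np.mean(gate_velocities))
    area = np.pi * (diameter / 2.0) ** 2   # assumes circular cross-section
    return v_mean * area
```

In practice, the point of the multigate approach is that the velocities differ from gate to gate (e.g., a parabolic profile), so averaging real gate data captures the spatial distribution that single-gate estimators miss.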
In real time applications, due to their effective cost and small size, wireless networks play an important role in
receiving particular data and transmitting it to a base station for analysis, a process that can be easily deployed.
Due to various internal and external factors, networks can change dynamically, which impacts the localisation
of nodes, delays, routing mechanisms, geographical coverage, cross-layer design, the quality of links, fault
detection, and quality of service, among others. Conventional methods were programmed for static networks, which made it difficult for networks to respond dynamically. Here, machine learning strategies can be applied to dynamic networks, enabling self-learning and the development of tools that react quickly and efficiently, with less human intervention and reprogramming. In this paper, we present a survey of wireless networks based on
different machine learning algorithms and network lifetime parameters, and include the advantages and
drawbacks of such a system. Furthermore, we present learning algorithms and techniques for congestion,
synchronisation, energy harvesting, and for scheduling mobile sinks. Finally, we present a statistical evaluation
of the survey, the motive for choosing specific techniques to deal with wireless network problems, and a brief
discussion on the challenges inherent in this area of research.
Microblogging services (such as Twitter) are the representative information communication networks during
the Web 2.0 era, which have gained remarkable popularity. Weibo has become a popular platform for
information dissemination in online social networks due to its large number of users. In this study, a microblog
information dissemination model is presented. Related concepts are introduced and analyzed based on the
dynamic model of infectious disease, and new influencing factors are proposed to improve the susceptible-infective-removal (SIR) information dissemination model. Correlation analysis is conducted on the existing
information dissemination risk and the rumor dissemination model of microblogs, which is then used to model rumor dissemination. Finally, the experimental results illustrate the effectiveness of the method
in reducing the rumor dissemination of microblogs.
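The SIR dynamics underlying the model can be illustrated with a minimal discrete-time simulation. The rates `beta` and `gamma` and the population-fraction formulation below are generic textbook choices, not the paper's fitted parameters or its proposed influencing factors.

```python
def simulate_sir(beta=0.3, gamma=0.1, i0=0.01, steps=200):
    """Discrete-time SIR dynamics over population fractions.
    beta: spreading (infection) rate; gamma: removal rate, i.e. the
    rate at which users lose interest and stop forwarding a rumor."""
    s, i, r = 1.0 - i0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i   # users newly exposed to the rumor
        new_rem = gamma * i      # users who stop spreading it
        s, i, r = s - new_inf, i + new_inf - new_rem, r + new_rem
        history.append((s, i, r))
    return history
```

Lowering `beta` (e.g., through the countermeasures the paper studies) directly shrinks the final removed fraction, which is how such models quantify the effect of intervention on rumor spread.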
The need for cyber resilience is increasingly important in our technology-dependent society where computing
devices and data have been, and will continue to be, the target of cyber-attackers, particularly advanced
persistent threat (APT) and nation-state/sponsored actors. APT and nation-state/sponsored actors tend to be
more sophisticated, having access to significantly more resources and time to facilitate their attacks, which in
most cases are not financially driven (unlike typical cyber-criminals). For example, such threat actors often
utilize a broad range of attack vectors, cyber and/or physical, and constantly evolve their attack tactics. Thus,
having up-to-date and detailed information of APT’s tactics, techniques, and procedures (TTPs) facilitates the
design of effective defense strategies as the focus of this paper. Specifically, we posit the importance of
taxonomies in categorizing cyber-attacks. Note, however, that existing information about APT attack
campaigns is fragmented across practitioner, government (including intelligence/classified), and academic
publications, and existing taxonomies generally have a narrow scope (e.g., to a limited number of APT
campaigns). Therefore, in this paper, we leverage the Cyber Kill Chain (CKC) model to “decompose” any
complex attack and identify the relevant characteristics of such attacks. We then comprehensively analyze more
than 40 APT campaigns disclosed before 2018 to build our taxonomy. Such taxonomy can facilitate incident
response and cyber threat hunting by aiding in understanding of the potential attacks to organizations as well
as which attacks may surface. In addition, the taxonomy can allow national security and intelligence agencies
and businesses to share their analysis of ongoing, sensitive APT campaigns without the need to disclose detailed
information about the campaigns. It can also inform future security policies and mitigation strategy formulation.
The single carrier frequency domain equalization (SC-FDE) technology is an important part of the broadband
wireless access communication system, which can effectively combat the frequency selective fading in the
wireless channel. In SC-FDE communication system, the accuracy of timing synchronization directly affects
the performance of the SC-FDE system. In this paper, on the basis of the Schmidl timing synchronization algorithm, a timing synchronization algorithm suitable for FPGA (field programmable gate array) implementation is proposed. In the FPGA implementation of the timing synchronization algorithm, the sliding window
accumulation, quantization processing and amplitude reduction techniques are adopted to reduce the
complexity of the FPGA implementation. The simulation results show that the algorithm can effectively realize the timing synchronization function while reducing the computational complexity.
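A sketch of the Schmidl-style timing metric such algorithms build on, for a preamble whose two halves are identical: the metric peaks where the received window correlates with itself half a preamble later. The half-length `L` is an illustrative assumption, and the paper's sliding-window accumulation, quantization, and amplitude-reduction optimizations are omitted.

```python
import numpy as np

def schmidl_timing_metric(r, L=64):
    """Schmidl & Cox timing metric M(d) = |P(d)|^2 / R(d)^2 for a
    preamble with two identical L-sample halves (r: complex samples)."""
    N = len(r) - 2 * L
    M = np.zeros(N)
    for d in range(N):
        a, b = r[d:d + L], r[d + L:d + 2 * L]
        P = np.sum(np.conj(a) * b)       # correlation of the two halves
        R = np.sum(np.abs(b) ** 2)       # received energy normalizer
        M[d] = np.abs(P) ** 2 / (R ** 2 + 1e-12)
    return M
```

In an FPGA, P(d) and R(d) would be updated recursively as the window slides (add the newest product, drop the oldest), which is presumably the kind of sliding-window accumulation the abstract refers to.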
Many security systems rely solely on solutions based on artificial intelligence, which are inherently weak. These security solutions can be easily manipulated by malicious users, who can then gain unlawful access. Some security systems suggest using fingerprint-based solutions, but these can be easily deceived by copying fingerprints with
clay. Image-based security is undoubtedly easy to manipulate, but it is also a solution that does not require any
special training on the part of the user. In this paper, we propose a multi-factor security framework that operates
in a three-step process to authenticate the user. The motivation of the research lies in utilizing commonly
available and inexpensive devices such as onsite CCTV cameras and smartphone cameras, and providing fully
secure user authentication. We have used technologies such as Argon2 for hashing image features and physically
unclonable identification for secure device-server communication. We also discuss the methodological workflow
of the proposed multi-factor authentication framework. In addition, we present the service scenario of the proposed model. Finally, we qualitatively analyze the proposed model and compare it with state-of-the-art methods to evaluate its usability in real-world applications.
To determine the impact of debt financing on the profits of industrial enterprises, this study starts by calculating the first differences of the logarithms of the cost-profit ratios and the debt-asset ratios of Chinese industrial enterprises over 179 months from 2002 to 2016; it then runs a cointegration test, followed by a regression test, to analyze the obtained first differences, and finally uses the Simulink software to derive the regularity of those changes. It finds that there is not only a long-term stable relationship between the
enterprises’ profits and debts, but also a steady time series trend within a short term. The profit rate positively
correlates to the debt asset ratio, and profit for the current term positively correlates to the profit for the
previous term. It indicates that properly raised debts can help increase the profit rate of the industrial
enterprises, and a higher previous profit level can help improve the current profit level.
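The first step of the analysis, taking first differences of the log ratios, together with a plain OLS slope (a stand-in for the full cointegration and regression tests, which are not reproduced here) can be sketched as follows; the helper names are illustrative.

```python
import numpy as np

def log_first_diff(series):
    """First differences of the natural logarithms of a positive series
    (the transformation applied to the profit and debt-asset ratios)."""
    return np.diff(np.log(np.asarray(series, float)))

def ols_slope(x, y):
    """Slope of the OLS regression of y on x (intercept included)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / (xc @ xc))
```

A positive slope of the differenced profit series on the differenced debt-asset series would correspond to the paper's finding that properly raised debt is associated with higher profit rates.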
Documents contain information that can be used for various applications, such as question answering (QA)
system, information retrieval (IR) system, and recommendation system. To use the information, it is necessary
to develop a method of extracting such information from the documents written in a form of natural language.
There are several kinds of information (e.g., temporal information, spatial information, semantic role information), and different kinds of information are extracted with different methods. In this paper, the existing studies on methods of extracting temporal information are reported and several related issues are discussed: the task boundary of temporal information extraction, the history of the annotation languages and shared tasks, the research issues, the applications using temporal information, and evaluation metrics. Although the history of temporal information extraction tasks is not long, many studies have tried various methods. This paper indicates which approaches are known to work better for extracting particular parts of the temporal information, and also suggests directions for future research.
This paper studies a novel approach to gait recognition based on natural gait cycles via kernel Fisher discriminant analysis (KFDA), which can effectively calculate features from gait sequences and accelerate the recognition process. The proposed approach first extracts the gait silhouettes through moving object detection and segmentation from each gait video. Second, gait energy images (GEIs) are calculated for each gait video and used as gait features. Third, the KFDA method is used to refine the extracted gait features, yielding a low-dimensional feature vector for each gait video. Finally, a nearest-neighbor classifier is applied for classification. The proposed method is evaluated on the CASIA and USF gait databases, and the results show that the proposed algorithm achieves better recognition performance than other existing algorithms.
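A minimal sketch of the GEI feature and the nearest-neighbor classification step, assuming the silhouettes are already aligned and binarized; the KFDA refinement in between is omitted, and the function names are illustrative.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Gait energy image: pixel-wise average of aligned binary
    silhouettes over a gait cycle (silhouettes: (T, H, W) of 0/1)."""
    return np.asarray(silhouettes, float).mean(axis=0)

def nearest_neighbor(query, gallery, labels):
    """Label of the gallery feature vector closest (Euclidean) to query."""
    d = np.linalg.norm(np.asarray(gallery, float) - np.asarray(query, float), axis=1)
    return labels[int(np.argmin(d))]
```

In the full pipeline, the flattened GEI would first be projected through the learned KFDA transform before the nearest-neighbor comparison, which is where the dimensionality reduction and discriminability come from.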
Bug report processing is a key element of bug fixing in modern software maintenance. Bug reports are not
processed immediately after submission and involve several processes such as bug report deduplication and
bug report triage before bug fixing is initiated; however, this method of bug fixing is very inefficient because all
these processes are performed manually. Software engineers have persistently highlighted the need to automate
these processes, and as a result, many automation techniques have been proposed for bug report processing;
however, the accuracy of the existing methods is not satisfactory. Therefore, this study surveys existing techniques for bug report processing with the aim of improving their accuracy. The review of each method consists of a description, the techniques used, experiments, and comparison results. The results of this study indicate that research in the field of bug deduplication is still lacking and therefore requires numerous studies
that integrate clustering and natural language processing. This study further indicates that although all studies
in the field of triage are based on machine learning, results of studies on deep learning are still insufficient.
Mobile user interface pattern (MUIP) is a kind of structured representation of interaction design knowledge.
Several studies have suggested that MUIPs are a proven solution for recurring mobile interface design problems.
To facilitate MUIP selection, an effective clustering method is required to discover the hidden knowledge in the pattern data set. In this paper, we employ the semi-supervised kernel fuzzy c-means clustering (SSKFCM) method to
cluster MUIP data. In order to improve the performance of clustering, clustering parameters are optimized by
utilizing the global optimization capability of particle swarm optimization (PSO) algorithm. Since the PSO
algorithm is easily trapped in local optima, a novel PSO algorithm is presented in this paper. It combines an
improved intuitionistic fuzzy entropy measure and a new population search strategy to enhance the population
search capability and accelerate the convergence speed. Experimental results show the effectiveness and
superiority of the proposed clustering method.
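As background for SSKFCM, here is a minimal sketch of plain (unsupervised, non-kernel) fuzzy c-means, the base algorithm that the paper extends with semi-supervision, a kernel, and PSO-optimized parameters. The deterministic initialization is an illustrative choice, not part of the proposed method.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100):
    """Plain fuzzy c-means. Returns (centers, membership matrix U),
    where U[i, j] is the degree to which sample i belongs to cluster j."""
    X = np.asarray(X, float)
    # deterministic init: pick c samples spread across the data set
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)            # rows sum to 1
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted means
    return centers, U
```

The kernel variant replaces the Euclidean distances with kernel-induced distances, and the semi-supervised variant pins the membership rows of labeled samples; the PSO component then searches over the clustering parameters.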
Pedestrian tracking is a particular object tracking problem and an important component in various vision-based applications, such as autonomous cars and surveillance systems. Following several years of development, pedestrian tracking in videos remains challenging, owing to the diversity of object appearances and surrounding environments. In this research, we propose a tracking-by-detection system for pedestrian tracking, which
incorporates a convolutional neural network (CNN) and color information. Pedestrians in video frames are
localized using a CNN-based algorithm, and then detected pedestrians are assigned to their corresponding
tracklets based on similarities between color distributions. The experimental results show that our system is
able to overcome various difficulties to produce highly accurate tracking results.
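The color-distribution matching used to assign detections to tracklets can be sketched with a joint RGB histogram and the Bhattacharyya coefficient. The abstract does not specify the similarity measure, so Bhattacharyya here is an assumption (it is a common choice for comparing color histograms), and the bin count is illustrative.

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Joint RGB histogram of an image patch ((H, W, 3), values 0-255),
    normalized to sum to 1."""
    p = np.asarray(patch, int).reshape(-1, 3) // (256 // bins)
    idx = p[:, 0] * bins * bins + p[:, 1] * bins + p[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient of two normalized histograms
    (1 = identical color distributions, 0 = disjoint)."""
    return float(np.sum(np.sqrt(h1 * h2)))
```

In a tracking-by-detection loop, each new CNN detection would be cropped, histogrammed, and assigned to the tracklet whose stored histogram gives the highest coefficient above some threshold.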
The 2nd Journal of Information Processing Systems Awards
"Block-VN: A Distributed Blockchain Based Vehicular Network Architecture in Smart City"
Pradip Kumar Sharma, Seo Yeon Moon and Jong Hyuk Park (Seoul National University of Science and Technology, Korea)
Publication (Corresponding Author)
Chengyou Wang (Shandong University, China)
Quorum-based algorithms are widely used for solving several problems in mobile ad hoc networks (MANETs) and wireless sensor networks (WSNs). Several quorum-based protocols have been proposed for multi-hop ad hoc networks, each of which has its pros and cons. The quorum-based protocol (QEC or QPS) was the first study among the asynchronous sleep scheduling protocols. At the time, most of the proposed protocols were non-adaptive. Nowadays, however, adaptive quorum-based protocols have gained increasing attention, because protocols are needed that can change their quorum size adaptively with network conditions. In this paper, we first introduce the most popular quorum systems and explain quorum system properties and performance criteria. Then, we present a comparative and comprehensive survey of non-adaptive and adaptive quorum-based protocols, which are subsequently discussed in depth. We also compare different quorum systems in terms of the expected quorum overlap size (EQOS) and active ratio. Finally, we summarize the pros and cons of current adaptive and non-adaptive quorum-based protocols.
The significant advances in information and communication technologies are changing the process of how information is accessed. The internet is a very important source of information and it influences the development of other media. Furthermore, the growth of digital content is a big problem for academic digital libraries, so similar tools can be applied in this scope to provide users with access to the information. Given the importance of this, we have reviewed and analyzed several proposals that improve the processes of disseminating information in these university digital libraries and that promote access to information of interest. These proposals manage to adapt a user’s access to information according to his or her needs and preferences. As seen in the literature, one of the techniques with the best results is the application of recommender systems. These are tools whose objective is to evaluate and filter the vast amount of digital information that is accessible online in order to help users in their processes of accessing information. In particular, we focus on the analysis of fuzzy linguistic recommender systems (i.e., recommender systems that use fuzzy linguistic modeling tools to manage the user’s preferences and the uncertainty of the system in a qualitative way). Thus, in this work, we analyze some proposals based on fuzzy linguistic recommender systems to help researchers, students, and teachers access resources of interest and thus improve and complement the services provided by academic digital libraries.
Associative and bidirectional associative memories are examples of associative structures studied intensively in the literature. The underlying idea is to realize associative mapping so that the recall processes (one-directional and bidirectional ones) are realized with minimal recall errors. Associative and fuzzy associative memories have been studied in numerous areas yielding efficient applications for image recall and enhancements and fuzzy controllers, which can be regarded as one-directional associative memories. In this study, we revisit and augment the concept of associative memories by offering some new design insights where the corresponding mappings are realized on the basis of a related collection of landmarks (prototypes) over which an associative mapping becomes spanned. In light of the bidirectional character of mappings, we have developed an augmentation of the existing fuzzy clustering (fuzzy c-means, FCM) in the form of a so-called collaborative fuzzy clustering. Here, an interaction in the formation of prototypes is optimized so that the bidirectional recall errors can be minimized. Furthermore, we generalized the mapping into its granular version in which numeric prototypes that are formed through the clustering process are made granular so that the quality of the recall can be quantified. We propose several scenarios in which the allocation of information granularity is aimed at the optimization of the characteristics of recalled results (information granules) that are quantified in terms of coverage and specificity. We also introduce various architectural augmentations of the associative structures.
Artificial intelligence, especially deep learning technology, is penetrating the majority of research areas, including the field of bioinformatics. However, deep learning has some limitations, such as the complexity of parameter tuning, architecture design, and so forth. In this study, we analyze these issues and challenges with regard to its applications in bioinformatics, particularly genomic analysis and medical image analytics, and give the corresponding approaches and solutions. Although these solutions are mostly rules of thumb, they can effectively handle the issues connected to training learning machines. As such, we explore the tendency of deep learning technology by examining several directions, such as automation, scalability, individuality, mobility, integration, and intelligence warehousing.
This survey paper explores the application of multimodal feedback in automated systems for motor learning. In this paper, we review the findings shown in recent studies in this field using rehabilitation and various motor training scenarios as context. We discuss popular feedback delivery and sensing mechanisms for motion capture and processing in terms of requirements, benefits, and limitations. The selection of modalities is presented via our having reviewed the best-practice approaches for each modality relative to motor task complexity with example implementations in recent work. We summarize the advantages and disadvantages of several approaches for integrating modalities in terms of fusion and frequency of feedback during motor tasks. Finally, we review the limitations of perceptual bandwidth and provide an evaluation of the information transfer for each modality.
The recent advent of increasingly affordable and powerful 3D scanning devices capable of capturing high resolution range data about real-world objects and environments has fueled research into effective 3D surface reconstruction techniques for rendering the raw point cloud data produced by many of these devices into a form that would make it usable in a variety of application domains. This paper, therefore, provides an overview of the existing literature on surface reconstruction from 3D point clouds. It explains some of the basic surface reconstruction concepts, describes the various factors used to evaluate surface reconstruction methods, highlights some commonly encountered issues in dealing with the raw 3D point cloud data and delineates the tradeoffs between data resolution/accuracy and processing speed. It also categorizes the various techniques for this task and briefly analyzes their empirical evaluation results demarcating their advantages and disadvantages. The paper concludes with a cross-comparison of methods which have been evaluated on the same benchmark data sets along with a discussion of the overall trends reported in the literature. The objective is to provide an overview of the state of the art on surface reconstruction from point cloud data in order to facilitate and inspire further research in this area.
Gene identification is at the center of genomic studies. Although the first phase of the Encyclopedia of DNA Elements (ENCODE) project has been claimed to be complete, the annotation of the functional elements is far from being so. Computational methods in gene identification continue to play important roles in this area and other relevant issues. So far, a lot of work has been performed on this area, and a plethora of computational methods and avenues have been developed. Many review papers have summarized these methods and other related work. However, most of them focus on the methodologies from a particular aspect or perspective. Different from these existing bodies of research, this paper aims to comprehensively summarize the mainstream computational methods in gene identification and tries to provide a short but concise technical reference for future studies. Moreover, this review sheds light on the emerging trends and cutting-edge techniques that are believed to be capable of leading the research on this field in the future.
In this paper we present some research results on computing intensive applications using modern high performance architectures and from the perspective of high computational needs. Computing intensive applications are an important family of applications in the distributed computing domain. They have been the object of study using different distributed computing paradigms and infrastructures. Such applications are distinguished by their demanding need for CPU computing, independently of the amount of data associated with the problem instance. Among computing intensive applications, there are applications based on simulations, which aim to maximize system resources for processing large computations. In this research work, we consider an application that simulates scheduling and resource allocation in a Grid computing system using genetic algorithms. In such an application, a rather large number of simulations is needed to extract meaningful statistical results about the behavior of the simulated system. We study the performance of Oracle Grid Engine for this application running in a cluster of high computing capacities. Several scenarios were generated to measure the response time and queuing time under different workloads and numbers of nodes in the cluster.
The accuracy of training-based activity recognition depends on the training procedure and the extent to which the training dataset comprehensively represents the activity and its varieties. Additionally, training incurs substantial cost and effort in the process of collecting training data. To address these limitations, we have developed a training-free activity recognition approach based on a fuzzy logic algorithm that utilizes a generic activity model and associated activity semantic knowledge. The approach is validated through experimentation with real activity datasets. Results show that the fuzzy-logic-based algorithms exhibit comparable or better accuracy than other training-based approaches.
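The abstract does not give the paper's rule base, but the core of any fuzzy-logic recognizer is membership functions combined by min/max inference. The sketch below is a minimal, hypothetical illustration of that mechanism: the activity name, features, and thresholds are all assumptions for demonstration, not the paper's model.

```python
def tri(x, a, b, c):
    """Triangular membership: rises linearly from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def activity_score(duration_min, motion_level):
    # Hypothetical rule: activity fires when duration is MEDIUM
    # AND motion is HIGH; min() implements fuzzy AND (Mamdani-style).
    medium_duration = tri(duration_min, 5, 20, 45)
    high_motion = tri(motion_level, 0.4, 0.8, 1.2)
    return min(medium_duration, high_motion)
```

A classifier would evaluate one such rule per modeled activity and report the activity with the highest firing strength.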
Recent technological advances provide the opportunity to use large amounts of multimedia data from a multitude of sensors with different modalities (e.g., video, text) for the detection and characterization of criminal activity. Their integration can compensate for sensor and modality deficiencies by using data from other available sensors and modalities. However, building such an integrated system at the scale of neighborhoods and cities is challenging due to the large amount of data to be considered and the need to ensure a short response time to potential criminal activity. In this paper, we present a system that enables multi-modal data collection at scale and automates the detection of events of interest for the surveillance and reconnaissance of criminal activity. The proposed system showcases novel analytical tools that fuse multimedia data streams to automatically detect and identify specific criminal events and activities. More specifically, the system detects and analyzes series of incidents (an incident is an occurrence or artifact relevant to a criminal activity, extracted from a single media stream) in the spatiotemporal domain to extract events (actual instances of criminal events), while cross-referencing multimodal media streams and incidents in time and space to provide a comprehensive view to a human operator without information overload. We present several case studies that demonstrate how the proposed system can provide law enforcement personnel with forensic and real-time tools to identify and track potential criminal activity.
The confinement problem was first noted four decades ago. Since then, a great deal of effort has been spent on defining and mitigating the problem. The evolution of technologies from traditional operating systems to mobile and cloud computing brings about new security challenges, so it is perhaps timely to review the work that has been done. We discuss the foundational principles from classical works, as well as the efforts towards solving the confinement problem in three domains: operating systems, mobile computing, and cloud computing. While common issues exist across all three domains, unique challenges arise for each of them, which we discuss.
Since a social network is by definition diverse, the problem of estimating the preferences of its users is becoming increasingly essential for personalized applications, which range from service recommender systems to the targeted advertising of services. However, unlike traditional estimation problems where the underlying target distribution is stationary, estimating a user's interests typically involves non-stationary distributions. The consequent time-varying nature of the distribution to be tracked imposes stringent constraints on the "unlearning" capabilities of the estimator used. Therefore, resorting to strong estimators that converge with probability 1 is inefficient, since they rely on the assumption that the distribution of the user's preferences is stationary. In this vein, we propose to use a family of stochastic-learning-based weak estimators for learning and tracking a user's time-varying interests. Experimental results demonstrate that our proposed paradigm outperforms some of the traditional legacy approaches that represent the state of the art.
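To make the weak-estimator idea concrete, here is a minimal sketch of the update rule for a Bernoulli parameter in the spirit of stochastic-learning weak estimators: instead of averaging over all history (which converges with probability 1 but cannot unlearn), the estimate is a convex combination of the old estimate and the new observation, with a forgetting factor `lam`. The exact rule and the choice of `lam = 0.9` are illustrative assumptions, not taken from the paper.

```python
def slwe_update(p, x, lam=0.9):
    """One weak-estimator step for a Bernoulli parameter.

    p   -- current estimate of P(x = 1)
    x   -- new binary observation (truthy = 1, falsy = 0)
    lam -- forgetting factor in (0, 1); smaller values track
           a changing distribution faster but with more variance.
    """
    return lam * p + (1.0 - lam) * (1.0 if x else 0.0)
```

Because the influence of old observations decays geometrically, the estimator never "locks in", which is exactly the property needed when a user's interests drift over time.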
The most important criterion for achieving maximum performance in a wireless mesh network (WMN) is to limit the interference within the network. For this purpose, especially in a multi-radio network, the best option is to use non-overlapping channels among different radios within the same interference range. Previous works that have considered non-overlapping channels in IEEE 802.11a as the basis for performance optimization have assumed the link quality across all channels to be uniform. In this paper, we present a measurement-based study of link quality across all channels in an IEEE 802.11a-based indoor WMN testbed. Our results show that the generalized assumption of uniform performance across all channels does not hold in practice for an indoor environment, and that signal quality depends on the geometry around the mesh routers.
This paper describes different aspects of a typical RFID implementation. Section 1 provides a brief overview of the concept of Automatic Identification and compares the use of different technologies, while Section 2 describes the basic components of a typical RFID system. Sections 3 and 4 deal with the detailed specifications of RFID transponders and RFID interrogators, respectively. Section 5 highlights different RFID standards and protocols, and Section 6 enumerates the wide variety of applications where RFID systems are known to have had a positive impact. Section 7 deals with privacy issues concerning the use of RFIDs, and Section 8 describes common RFID system vulnerabilities. Section 9 covers a variety of RFID security issues, followed by a detailed listing of countermeasures and precautions in Section 10.
Granular Computing has emerged as a unified and coherent framework for the design, processing, and interpretation of information granules. Information granules are formalized within various frameworks such as sets (interval mathematics), fuzzy sets, rough sets, shadowed sets, and probabilities (probability density functions), to name several of the most visible approaches. In spite of the apparent diversity of the existing formalisms, there are some underlying commonalities articulated in terms of the fundamentals, algorithmic developments, and ensuing application domains. In this study, we introduce two pivotal concepts: a principle of justifiable granularity and a method of optimal information allocation, where information granularity is regarded as an important design asset. We show that these two concepts are relevant to various formal setups of information granularity and offer constructs supporting the design of information granules and their processing. A suite of applied studies is focused on knowledge management, in which case we identify several key categories of schemes present there.
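The principle of justifiable granularity is usually realized as a trade-off between two conflicting criteria: coverage (the granule should embrace enough data) and specificity (the granule should stay narrow). A minimal one-dimensional sketch, with an exponential specificity function and an `alpha` trade-off parameter chosen purely for illustration, might look like this:

```python
import math

def justifiable_upper_bound(data, med, candidates, alpha=0.3):
    """Choose the upper bound b of the interval [med, b] that maximizes
    coverage * specificity, per the principle of justifiable granularity.

    coverage(b)    = number of data points falling in [med, b]
    specificity(b) = exp(-alpha * (b - med)), decreasing with width
    """
    def score(b):
        cov = sum(1 for x in data if med <= x <= b)
        return cov * math.exp(-alpha * (b - med))
    return max(candidates, key=score)
```

A wide bound covers more points but is penalized exponentially; the maximizer balances the two, yielding a granule that is "justified" by the data without becoming vacuously broad.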
In earlier days, most of the data carried on communication networks was textual data requiring limited bandwidth. With the rise of multimedia and network technologies, the bandwidth requirements of data have increased considerably. If a network link at any time cannot meet the minimum bandwidth requirement of the data, transmission along that path becomes difficult, which leads to network congestion. This causes delays in data transmission and may also lead to packet drops in the network. The retransmission of these lost packets aggravates the situation and jams the network. In this paper, we aim to provide a solution to the problem of network congestion in mobile ad hoc networks [1, 2] by designing a protocol that performs routing intelligently and minimizes the delay in data transmission. Our objective is to move traffic away from the shortest path, obtained by a suitable shortest-path algorithm, to a less congested path, so as to minimize the number of packet drops during data transmission and to avoid unnecessary delay. For this purpose, we propose a protocol named Congestion Aware Selection Of Path With Efficient Routing (CASPER). Here, a router runs the shortest-path algorithm after pruning those links that violate a given set of constraints. The proposed protocol has been compared with two link-state protocols, namely OSPF [3, 4] and OLSR [5-8]. The results show that our protocol performs better in terms of network throughput and transmission delay in the case of bulky data transmission.
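The core mechanism the abstract describes — prune constraint-violating links, then run a shortest-path computation on what remains — can be sketched as follows. The congestion-load threshold and the graph representation are illustrative assumptions, not CASPER's actual metrics or message formats.

```python
import heapq

def congestion_aware_route(links, src, dst, max_load=0.8):
    """Cost of the cheapest path after pruning congested links.

    links: {(u, v): (delay, load)} -- undirected edges, where `load`
    in [0, 1] is a hypothetical congestion measure. Links whose load
    exceeds max_load are pruned before Dijkstra's algorithm runs.
    Returns total delay, or None if no compliant path exists.
    """
    adj = {}
    for (u, v), (delay, load) in links.items():
        if load <= max_load:  # constraint check: drop congested links
            adj.setdefault(u, []).append((v, delay))
            adj.setdefault(v, []).append((u, delay))
    dist, heap = {src: 0}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return None
```

Note that the route chosen this way may be longer in hop count or delay than the unconstrained shortest path; the point is that it avoids links likely to drop packets.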
Vehicular networks are a promising application of mobile ad hoc networks. In this paper, we introduce an efficient broadcast technique, called CB-S (Cell Broadcast for Streets), for vehicular networks with occlusions such as skyscrapers. In this environment, the road network is fragmented into cells such that nodes in a cell can communicate with any node within a two-cell distance. Each mobile node is equipped with a GPS (Global Positioning System) unit and a map of the cells. The cell map has information about the cells, including their identifiers and the coordinates of the upper-right and lower-left corners of each cell. CB-S has the following desirable property: broadcast of a message is performed by rebroadcasting the message from every other cell in the terrain. This characteristic allows CB-S to achieve efficient performance. Our simulation results indicate that messages always reach all nodes in the wireless network. This perfect coverage is achieved with minimal overhead; that is, CB-S uses a low number of nodes to disseminate the data packets as quickly as probabilistically possible. This efficiency gives it the advantage of low delay. To show these benefits, we present simulation results comparing CB-S with four other broadcast techniques. In practice, CB-S can be used for information dissemination, or to reduce the high cost of destination discovery in routing protocols. By also specifying the radius of the affected zone, CB-S is more efficient when broadcasting to only a subset of the nodes is desirable.
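Since every node carries a cell map keyed by corner coordinates, the basic operation a node performs is mapping its GPS fix to a cell identifier. A minimal sketch of that lookup, with an invented map format matching the corners described in the abstract, is shown below (a real implementation would index the cells spatially rather than scan them).

```python
def locate_cell(cell_map, x, y):
    """Return the id of the cell whose bounding box contains (x, y).

    cell_map: {cell_id: ((x_lo, y_lo), (x_hi, y_hi))}, i.e. the
    lower-left and upper-right corners of each cell, as carried
    by every node alongside its GPS unit.
    """
    for cid, ((x_lo, y_lo), (x_hi, y_hi)) in cell_map.items():
        if x_lo <= x <= x_hi and y_lo <= y <= y_hi:
            return cid
    return None  # position falls outside the mapped terrain
```

With this lookup, a node can decide locally whether it sits in a cell designated to rebroadcast, which is what lets CB-S suppress redundant retransmissions.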
Cryptographic hash functions reduce inputs of arbitrary or very large length to a short string of fixed length. All hash function designs start from a compression function with fixed-length inputs. The compression function itself is designed from scratch, or derived from a block cipher or a permutation. The most common procedure to extend the domain of a compression function in order to obtain a hash function is a simple linear iteration; however, some variants use multiple iterations or a tree structure that allows for parallelism. This paper presents a survey of 17 extenders in the literature. It considers the natural question of whether these preserve the security properties of the compression function, in particular collision resistance, second preimage resistance, preimage resistance, and the pseudo-random oracle property.
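The "simple linear iteration" referred to here is the classic (strengthened) Merkle-Damgård construction: pad the message, append its length, and fold the blocks through the compression function one at a time. The sketch below illustrates the extender itself; the `toy_compress` function is a deliberately insecure stand-in included only so the example runs, and the padding constants are illustrative rather than any standard's exact values.

```python
def toy_compress(state: int, block: bytes) -> int:
    # Toy compression function for illustration only -- NOT secure.
    for b in block:
        state = ((state * 31) ^ b) & 0xFFFFFFFF
    return state

def md_hash(compress, message: bytes, block_size: int = 8, iv: int = 0x5A5A) -> int:
    """Strengthened Merkle-Damgard iteration of a compression function.

    Pads with 0x80, zero bytes, and the 64-bit message bit length
    (the 'strengthening' that makes length-extension of the padding
    unambiguous), then chains the state through each block.
    """
    bit_len = len(message) * 8
    padded = message + b"\x80"
    while (len(padded) + 8) % block_size:
        padded += b"\x00"
    padded += bit_len.to_bytes(8, "big")
    state = iv
    for i in range(0, len(padded), block_size):
        state = compress(state, padded[i:i + block_size])
    return state
```

The survey's question is whether wrappers like this one inherit, say, collision resistance from `compress`; for plain Merkle-Damgård the answer is yes for collision resistance but not for the pseudo-random oracle property, which motivates the other 16 extenders.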
This paper proposes a novel reversible data hiding scheme based on a Vector Quantization (VQ) codebook. The proposed scheme uses the principal component analysis (PCA) algorithm to sort the codebook and to find two similar codewords for an image block. According to the secret to be embedded and the difference between those two similar codewords, the original image block is transformed into a difference number table. Finally, this table is compressed by entropy coding and sent to the receiver. The experimental results demonstrate that the proposed scheme can achieve greater hiding capacity, about five bits per index, with an acceptable bit rate. At the receiver end, after the compressed code has been decoded, the image can be recovered to a VQ-compressed image.
The interconnection of mobile devices in urban environments can open up many vistas for collaboration and content-based services. This requires setting up a network in an urban environment that not only provides the necessary services to the user but also ensures that the network is secure and energy efficient. In this paper, we propose a secure, energy-efficient dynamic routing protocol for heterogeneous wireless sensor networks in urban environments. A decision is made by every node, based on various parameters such as longevity, distance, and battery power, which measure the node and link quality, to decide the next hop in the route. This ensures that the total load is distributed evenly while conserving the energy of battery-constrained nodes. The protocol also maintains a trusted population for each node through a Dynamic Trust Factor (DTF), which ensures secure communication in the environment by gradually isolating the malicious nodes. The results obtained show that the proposed protocol, when compared with another energy-efficient protocol (MMBCR) and a widely accepted protocol (DSR), gives far better results in terms of energy efficiency. Similarly, it also outdoes a secure protocol (QDV) when it comes to detecting malicious nodes in the network.
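A per-node next-hop decision over parameters like longevity, distance, and battery power is typically a weighted scoring of candidate neighbors. The following is a hypothetical sketch of that idea; the weights, normalization convention, and function name are assumptions for illustration, not the protocol's actual formula.

```python
def next_hop(candidates, w_longevity=0.4, w_distance=0.3, w_battery=0.3):
    """Pick the neighbor with the best weighted node/link-quality score.

    candidates: {node: (longevity, distance, battery)} with every value
    normalized to [0, 1] and oriented so that higher is better (i.e.
    the distance term is pre-inverted: closer neighbors score higher).
    """
    def score(n):
        lon, dist, bat = candidates[n]
        return w_longevity * lon + w_distance * dist + w_battery * bat
    return max(candidates, key=score)
```

Because battery level is a term in the score, drained nodes are naturally routed around, which is how such schemes spread load and conserve battery-constrained nodes.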
The trend of Next Generation Networks' (NGN) evolution is towards providing multiple multimedia services to users through ubiquitous networks. The aim of the IP Multimedia Subsystem (IMS) is to integrate mobile communication networks and computer networks. The IMS plays an important role in NGN services, which can be delivered over heterogeneous networks and different access technologies. IMS can be used to manage all service-related issues such as Quality of Service (QoS), charging, access control, and user and service management. Internet technology is changing rapidly, and new technologies have new impacts on IMS. In this paper, we survey IMS and discuss the impacts of new technologies on it, such as P2P, SCIM, and Web Services, along with its security issues.
Due to the convergence of voice, data, and video, today's telecom operators face complexity in service and network management when offering differentiated value-added services that meet customer expectations. Without the operations support of a well-developed Business Support System/Operations Support System (BSS/OSS), it is difficult to provide competitive services upon customer request in a timely and effective manner. In this paper, a suite of NGOSS-based Telecom OSS (TOSS) is developed to support the fulfillment and assurance operations of telecom services and IT services. Four OSS groups, TOSS-P (intelligent service provisioning), TOSS-N (integrated large-scale network management), TOSS-T (trouble handling and resolution), and TOSS-Q (end-to-end service quality management), are organized and integrated following the standard telecom operation processes (i.e., eTOM). We use IPTV and IP-VPN operation scenarios to show how these OSS groups work together to support daily business operations, with the benefits of cost reduction and revenue acceleration.
By providing ubiquitous Internet connectivity, wireless networks offer users more convenient ways to surf the Internet. However, wireless networks encounter more technological challenges than wired networks, such as limited bandwidth, security problems, and handoff latency. Thus, this paper proposes new technologies to solve these problems. First, a Security Access Gateway (SAG) is proposed to solve the security issue. Mobile terminals cannot perform heavy security calculations because of their low computing power. The SAG not only offers the computing power to handle the encryption demands of its domain, but also helps mobile terminals establish multiple security tunnels to maintain a secure domain. Second, Robust Header Compression (RoHC) technology is adopted to increase bandwidth utilization. Instead of the Access Point (AP), an Access Gateway (AG) is used to deal with packet header compression and decompression on the wireless side. The AG's high computing power reduces the load on the AP, which in the original architecture had to handle a large number of header compression/decompression demands from mobile terminals. Finally, wireless networks must offer users "mobility" and "roaming", which can be achieved with Mobile IPv6 (MIPv6) technology. Nevertheless, such technology might introduce latency. Furthermore, how the security tunnel and header compression context established before a handoff can be reused by mobile terminals after the handoff is another great challenge. Thus, this paper proposes to solve these problems by using Early Binding Updates (EBU) together with the Security Access Gateway (SAG) to offer a complete mechanism with low latency, low handoff computation, and high security.
Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery. In this paper, an overview of some of the well-known methods in each of these categories is provided and some of the benefits and drawbacks of the schemes mentioned therein are examined. Furthermore, a discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has also been provided. This paper also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology.
With regard to ethical standards, the JIPS takes plagiarism very seriously and thoroughly checks all articles.
The JIPS defines research ethics as securing objectivity and accuracy in the execution of research and the conclusion of results without any unintentional errors resulting from negligence or incorrect knowledge, etc.
and without any intentional misconduct such as falsification, plagiarism, etc. When an author submits a paper to the JIPS online submission and peer-review system,
he/she should also upload the separate file "author check list" which contains a statement that all his/her research has been performed in accordance with ethical standards.
Among the JIPS editorial board members, there are four associate manuscript editors who support the JIPS by dealing with any ethical problems associated with the publication process
and give advice on how to handle cases of suspected research and publication misconduct. When the JIPS managing editor looks over submitted papers and checks that they are suitable for further processing,
the managing editor also routes them to the CrossCheck service provided by iThenticate. Based on the results provided by the CrossCheck service, the JIPS associate manuscript editors inform the JIPS editor-in-chief of any plagiarism that is detected in a paper.
Then, the JIPS editor-in-chief communicates such detection to the author(s) while rejecting the paper.
Since 2005, all papers published in the JIPS have been subject to peer review and, upon acceptance, are immediately made
permanently available free of charge for everyone worldwide to read and download from the journal’s homepage (http://www.jips-k.org)
without any subscription fee or personal registration. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. The KIPS waives paper processing charges for submissions from international authors as well as society members. This waiver policy supports and encourages the publication of quality papers, making the journal an international forum for the exchange of different ideas and experiences.
The 2nd Journal of Information Processing Systems Awards
"Block-VN: A Distributed Blockchain Based Vehicular Network Architecture in Smart City"
Pradip Kumar Sharma, Seo Yeon Moon and Jong Hyuk Park (Seoul National University of Science and Technology, Korea)
Publication (Corresponding Author)
Chengyou Wang (Shandong University, China)