The Journal of Information Processing Systems
(JIPS) is the official international journal of the Korea Information Processing Society.
As information processing systems progress at a rapid pace, the Korea Information Processing Society is committed to providing researchers and other professionals
with the academic information and resources they need to keep abreast of ongoing developments. The JIPS aims to be a premier source that enables researchers and professionals
all over the world to promote, share, and discuss major research issues and developments in the field of information processing systems and related fields.
ISSN: 1976-913X (Print), ISSN: 2092-805X (Online)
[Nov. 16, 2018] The JIPS committee has decided on an article processing charge (APC); the new
policy applies to all papers published on or after January 1, 2019. Please see the APC announcement for more information.
[Nov. 06, 2018] The call for papers for the 2018 JIPS Award has been posted. Please see the announcement for details.
[Jan. 01, 2018] As of January 1, 2018, the JIPS manages three manuscript tracks: 1) Regular Track, 2) Fast Track, and 3) Future Topic Track. Please refer to the details on the author information page.
Journal of Information Processing Systems, Vol. 14, No. 6, 2018
Artificial intelligence is one of the key technologies of the Fourth Industrial Revolution. This paper introduces the diverse approaches that this issue's papers take across research fields, such as a model-based mean-shift (MS) approach, a deep neural network model, an image edge detection approach, a cross-layer optimization model, an LSSVM approach, a screen design approach, and a CPU-GPU hybrid approach. Research on superintelligence and superconnection for IoT and big data is also described, covering 'superintelligence-based systems and infrastructures', 'superconnection-based IoT and big data systems', 'analysis of IoT-based data and big data', 'infrastructure design for IoT and big data', 'artificial intelligence applications', and 'superconnection-based IoT devices'.
To deal with the problems of occlusion, pose variation, and illumination change in object tracking,
a regression-model-weighted multi-templates mean-shift (MS) algorithm is proposed in this paper.
Target templates and occlusion templates are extracted to compose a multi-templates set. Then, the MS
algorithm is applied to the multi-templates set to obtain the candidate areas. Moreover, a regression
model is trained to estimate the Bhattacharyya coefficients between the templates and candidate areas. Finally,
the geometric center of the tracked areas is taken as the object's position. The proposed algorithm is
evaluated on several classical videos. The experimental results show that the regression-model-weighted multi-templates
MS algorithm can track an object accurately under occlusion, illumination changes, and pose variations.
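As a brief illustration of the similarity at the core of this method, the following Python sketch computes the Bhattacharyya coefficient between two normalized histograms, the quantity the paper's regression model estimates; the histogram extraction and the regression model itself are omitted.

```python
import numpy as np

def bhattacharyya_coefficient(p: np.ndarray, q: np.ndarray) -> float:
    """Similarity in [0, 1] between two histograms."""
    p = p / p.sum()          # make sure both inputs are probability
    q = q / q.sum()          # distributions before comparing them
    return float(np.sum(np.sqrt(p * q)))

# Example: a template histogram vs. a candidate-area histogram.
template = np.array([0.2, 0.5, 0.3])
candidate = np.array([0.25, 0.45, 0.30])
print(bhattacharyya_coefficient(template, candidate))  # close to 1.0
```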
Video captioning refers to the process of extracting features from a video and generating video captions using
the extracted features. This paper introduces a deep neural network model and its learning method for
effective video captioning. In this study, semantic features that effectively express the video are used
alongside visual features. The visual features of the video are extracted using convolutional neural networks,
such as C3D and ResNet, while the semantic features are extracted using a semantic feature extraction
network proposed in this paper. Further, an attention-based caption generation network is proposed for
effective generation of video captions using the extracted features. The performance and effectiveness of the
proposed model are verified through various experiments on two large-scale video benchmarks, the
Microsoft Video Description (MSVD) and the Microsoft Research Video-to-Text (MSR-VTT) datasets.
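The abstract does not give the attention mechanism's exact form, so the sketch below assumes standard additive attention over per-frame visual features; all array names and dimensions are illustrative.

```python
import numpy as np

def attention_context(frame_feats, decoder_state, W, U, w):
    """Additive attention: score every frame feature against the current
    decoder state, softmax the scores, and return the weighted sum."""
    scores = np.tanh(frame_feats @ W + decoder_state @ U) @ w   # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                                    # softmax
    return weights @ frame_feats                                # (d_v,)

rng = np.random.default_rng(0)
T, d_v, d_h, d_a = 20, 2048, 512, 256   # frames, feature/state/attention dims
ctx = attention_context(rng.normal(size=(T, d_v)), rng.normal(size=d_h),
                        rng.normal(size=(d_v, d_a)),
                        rng.normal(size=(d_h, d_a)), rng.normal(size=d_a))
```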
Gradient-based edge detection operators are sensitive to noise and produce pseudo edges. To address this,
a triqubit-state measurement-based edge detection algorithm is presented in this paper.
Combining local and global image structure information, triqubit superposition states are used to
represent pixel features and thereby locate image edges. Our algorithm consists of three steps. First, an
improved partial differential method is used to smooth the defect image. Second, the triqubit state is
characterized by three elements (pixel saliency, edge statistical characteristics, and gray-scale contrast) to
map the defect image from gray space to quantum space. Third, the edge image is
output according to quantum measurement, local gradient maximization, and neighborhood chain-code
searching. The simulation experiments indicate that, compared with other methods, our algorithm produces
fewer pseudo edges and achieves higher edge detection accuracy.
Owing to the limited energy of wireless devices, power saving is critical to prolonging the lifetime of the
network. In this regard, we designed a cross-layer optimization mechanism based on power control in which the
source node broadcasts a Route Request Packet (RREQ) containing information such as node ID, image size,
end-to-end bit error rate (BER), and residual battery energy to its neighbor nodes to initiate a multimedia
session. Each intermediate node appends its remaining battery energy, link gain, node ID, and average noise
power to the RREQ packet. Upon receiving the RREQ packets, the sink node finds node-disjoint paths and
calculates the optimal power vectors for each disjoint path using a cross-layer optimization algorithm. The sink-based
cross-layer maximal minimal residual energy (MMRE) algorithm determines the number of image packets
that can be sent on each path and sends a Route Reply Packet (RREP) to the source on each disjoint path,
containing the optimal power vector, the remaining battery energy vector, and the number of
packets that the source can send on that path. Simulation results indicate that considerable energy saving
can be accomplished with the proposed cross-layer power control algorithm.
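As a rough sketch of the signaling described above, the following Python fragment models an RREQ to which each intermediate node appends its state before forwarding; the field names and types are assumptions based on the abstract.

```python
from dataclasses import dataclass, field

@dataclass
class RREQ:
    # Fields named in the abstract; concrete types are assumptions.
    source_id: int
    image_size: int             # bytes
    target_ber: float           # end-to-end bit error rate
    node_ids: list = field(default_factory=list)
    residual_energy: list = field(default_factory=list)   # per hop, joules
    link_gain: list = field(default_factory=list)
    avg_noise_power: list = field(default_factory=list)

def forward(rreq: RREQ, node_id: int, energy_j: float,
            gain: float, noise: float) -> RREQ:
    """Each intermediate node appends its state before rebroadcasting."""
    rreq.node_ids.append(node_id)
    rreq.residual_energy.append(energy_j)
    rreq.link_gain.append(gain)
    rreq.avg_noise_power.append(noise)
    return rreq
```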
The cyber-physical system (CPS) is one of the core technologies for realizing the Internet of Things (IoT).
The CPS is a new paradigm that seeks to converge the physical and cyber worlds in which we live. However,
the CPS suffers from security issues that could directly threaten our lives, and the CPS environment,
including its various layers, is exposed to on-the-spot threats, making it necessary to study CPS security.
Therefore, a survey-based in-depth understanding of the vulnerabilities, threats, and attacks is required for
CPS security and privacy in the IoT. In this paper, we analyze security issues, threats, and solutions for the IoT-CPS
and evaluate the existing research. The CPS raises a number of challenges concerning current security markets
and security issues. The study also addresses CPS vulnerabilities and attacks and derives the resulting challenges.
Finally, we recommend solutions for each class of CPS security threat and discuss ways of resolving
potential future issues.
Many factors affect wind speed, and its randomness also leads to low prediction accuracy. To address this, this paper constructs a short-term forecasting model based on least squares support vector machines (LSSVM) to forecast wind speed. The basis of the model is support vector regression (SVR), which is used to capture the regression relationship between historical and forecast wind-speed data. To improve forecast precision, the historical data is clustered by cluster analysis so that records whose changing trend is similar to that of the forecast period can be filtered out. The filtered historical data is used as the training samples for SVR, and the parameters are optimized by particle swarm optimization (PSO). The forecasting model is tested on actual data, and its forecast precision exceeds industry standards. The results prove the feasibility and reliability of the model.
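A minimal sketch of this pipeline is given below, under two stated assumptions: KMeans stands in for the cluster analysis, and the SVR hyperparameters are fixed rather than tuned by PSO as in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

def forecast_wind(history: np.ndarray, recent: np.ndarray, window: int = 6):
    """Predict the next wind speed from a window of recent observations."""
    # Build (window -> next value) training pairs from the history.
    X = np.array([history[i:i + window] for i in range(len(history) - window)])
    y = history[window:]

    # Keep only the training windows whose trend resembles the recent data.
    km = KMeans(n_clusters=3, n_init=10).fit(X)
    keep = km.predict(recent.reshape(1, -1))[0] == km.labels_

    # Fixed hyperparameters stand in for the paper's PSO tuning.
    model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X[keep], y[keep])
    return model.predict(recent.reshape(1, -1))[0]
```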
High-performance computing (HPC) gives researchers powerful means to solve problems involving
intensive computation, such as those in the mathematical and medical fields. When an HPC platform is provided as a
service, users may encounter unexpected obstacles in developing and running applications due to restricted
development environments and dependencies. In this context, operating-system-level virtualization can be a
solution for HPC services, ensuring lightweight virtualization and consistency in Dev-Ops environments.
Therefore, this paper proposes three typical HPC structures for container environments built with
HPC containers and Docker. The three structures focus on smooth integration with the existing HPC job
framework, the message passing interface (MPI). Lastly, the performance of the structures is analyzed with the High
Performance Linpack benchmark from the standpoint of performance degradation in network communications.
An image fusion method is proposed on the basis of depth-model segmentation to overcome the
noise interference and artifacts caused by infrared and visible image fusion. First, a deep
Boltzmann machine is used to perform prior learning of the infrared and visible target and background
contours, and a depth segmentation model of the contours is constructed. The Split Bregman iterative
algorithm is employed to obtain the optimal energy segmentation of the infrared and visible image contours. Then,
the nonsubsampled contourlet transform (NSCT) is applied to decompose the source images, and the
corresponding rules are used to integrate the coefficients in light of the segmented background contour.
Finally, the inverse NSCT is used to reconstruct the fused image. MATLAB simulation results
indicate that the proposed algorithm fuses both target and background
contours effectively, with high contrast and noise suppression in subjective evaluation as well as great merits
in objective quantitative indicators.
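The sketch below illustrates only the coefficient-fusion step; since NSCT has no widely available Python implementation, a standard wavelet transform (pywt) stands in for it, and the segmentation-driven fusion rules are replaced by simple average/max-abs rules.

```python
import numpy as np
import pywt

def fuse(infrared: np.ndarray, visible: np.ndarray, wavelet="db2", level=2):
    """Fuse two same-sized grayscale images at the coefficient level."""
    ca = pywt.wavedec2(infrared, wavelet, level=level)
    cb = pywt.wavedec2(visible, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                  # average approximations
    for (ah, av, ad), (bh, bv, bd) in zip(ca[1:], cb[1:]):
        # keep the stronger detail coefficient from either source
        pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
        fused.append((pick(ah, bh), pick(av, bv), pick(ad, bd)))
    return pywt.waverec2(fused, wavelet)
```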
To build a successful information system, design and development should be carried out from the enterprise
perspective. A complicated business is represented in various ways as technology advances, and many
development methodologies have been studied from the viewpoints of technology and development. Each
domain is designed and developed independently from the enterprise perspective, but overlapping
parts arise because the definition, design, and development of the business are
carried out as one integrated process, and the design depends on the designer's experience. This study
addresses a technique for designing screens based on the business processes of the applications derived from the business. It
designs the screens that appear when the actual application is completed, including how the data transfer
process in the derived business process is represented and operated on the relevant screens. In addition, it designs the DFD
representing the overall flow of data for each business so as to represent the movement between screens
in general. Through the design method proposed in this study, the client's requirements can be confirmed so as to
reduce the cost of redevelopment, communication problems between designers and developers with
varying experience can be reduced, and an efficient design procedure can be provided to those who
lack design experience.
Securing objects in the Internet of Things (IoT) is essential. An authentication model is one candidate for securing
an object, but it can handle only a specific type of attack, such as the Sybil attack; it
cannot handle other types, such as trust-based attacks. This paper proposes two-phase
security protection for objects in the IoT. The proposed method combines authentication and statistical models.
The results showed that the proposed method could handle not only Sybil attacks but also
bad-mouthing, good-mouthing, and ballot-stuffing attacks.
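The abstract does not specify the statistical model, so the following sketch assumes a simple z-score test that flags raters whose scores deviate sharply from the consensus, which is how bad-mouthing and ballot-stuffing attacks typically appear.

```python
import numpy as np

def suspicious_raters(ratings: dict[str, float], z_cut: float = 2.0) -> list[str]:
    """Flag raters whose trust score for a node is a statistical outlier."""
    values = np.array(list(ratings.values()))
    mu, sigma = values.mean(), values.std()
    if sigma == 0:
        return []
    return [r for r, v in ratings.items() if abs(v - mu) / sigma > z_cut]

# One rater bad-mouthing an otherwise well-rated node is flagged.
print(suspicious_raters({"a": 0.80, "b": 0.82, "c": 0.79,
                         "d": 0.81, "e": 0.78, "f": 0.10}))  # ['f']
```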
Recently, with the development of Internet technologies and the spread of smart devices, the use of microblogs
such as Facebook, Twitter, and Instagram has been increasing rapidly. Many users check microblogs for new information
because the content on their timelines updates continually. Therefore, clustering algorithms
are necessary to arrange the content of microblogs into groups for users who want the newest
information. However, microblog posts have word limits, so there is often not enough information to analyze for
content clustering. In this paper, we propose a semantic-based K-means clustering algorithm that not only
measures the similarity between data represented in a vector space model, but also measures the semantic
similarity between data by exploiting the TagCluster for clustering. Through experimental results on
the RepLab2013 Twitter dataset, we show the effectiveness of the semantic-based K-means clustering algorithm.
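A minimal sketch of clustering with such a blended similarity is shown below; k-medoids stands in for K-means so that an arbitrary distance matrix can be used, and the TagCluster-based semantic similarity is assumed to arrive as a precomputed matrix.

```python
import numpy as np

def blended_kmedoids(tfidf, semantic_sim, k=3, alpha=0.5, iters=20, seed=0):
    """Cluster documents with a mix of cosine and semantic distance."""
    X = tfidf / np.linalg.norm(tfidf, axis=1, keepdims=True)
    d_cos = 1.0 - X @ X.T                        # vector-space distance
    d_sem = 1.0 - semantic_sim                   # semantic distance
    D = (1 - alpha) * d_cos + alpha * d_sem      # blended distance matrix

    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        for c in range(k):                       # move each medoid to the
            members = np.flatnonzero(labels == c)  # most central member
            if len(members):
                within = D[np.ix_(members, members)].sum(axis=1)
                medoids[c] = members[np.argmin(within)]
    return labels, medoids
```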
Environment perception and three-dimensional (3D) reconstruction tasks are used to provide unmanned
ground vehicle (UGV) with driving awareness interfaces. The speed of obstacle segmentation and surrounding
terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of
environment information analysis, we develop a CPU-GPU hybrid system of automatic environment
perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of
three functional modules, namely, multi-sensor data collection and pre-processing, environment perception,
and 3D reconstruction. To integrate individual datasets collected from different sensors, the pre-processing
function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion
information into a global terrain model after filtering redundant and noisy data according to the redundancy
removal principle. In the environment perception module, the registered discrete points are clustered into
ground surface and individual objects by using a ground segmentation method and a connected component
labeling algorithm. The estimated ground surface and non-ground objects indicate the terrain to be traversed
and obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates
the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the
captured video images. Texture meshes and color particle models are used to reconstruct the ground surface
and objects of the 3D terrain model, respectively. To accelerate the proposed system, we apply the GPU parallel
computation method to implement the applied computer graphics and image processing algorithms in parallel.
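As an illustration of the perception step, the sketch below grids a point cloud, treats low cells as ground, and groups the remaining occupied cells into objects via connected-component labeling; the cell size and height threshold are assumptions, and z is assumed to be measured relative to the ground plane.

```python
import numpy as np
from scipy import ndimage

def segment(points: np.ndarray, cell: float = 0.2, ground_h: float = 0.3):
    """points: (N, 3) array of x, y, z LiDAR returns (z above ground)."""
    # Map each point to a grid cell and keep the maximum height per cell.
    ij = ((points[:, :2] - points[:, :2].min(axis=0)) / cell).astype(int)
    height = np.zeros(ij.max(axis=0) + 1)
    np.maximum.at(height, (ij[:, 0], ij[:, 1]), points[:, 2])

    obstacles = height > ground_h                 # non-ground cells
    labels, n_objects = ndimage.label(obstacles)  # connected components
    return obstacles, labels, n_objects
```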
In recent times, Natural User Interface/Natural User Experience (NUI/NUX) technology has found
widespread application across a diverse range of fields and is also utilized for controlling unmanned aerial
vehicles (UAVs). Even with NUI/NUX technology, however, it is difficult for the user to control a UAV
directly. Easy control requires an autopilot, the autopilot requires a flight path, and the flight path is set
from waypoints. UAVs normally fly straight from one waypoint to another; however, a straight-line flight
between two waypoints may collide with obstacles. To solve such collision problems, flight records can be utilized to adjust the
generated path taking the locations of the obstacles into consideration. This paper proposes a natural path
generation method between waypoints based on flight records collected through UAVs flown by users.
Bayesian probability is utilized to select paths most similar to the flight records to connect two waypoints.
These paths are generated by selection of the center path corresponding to the highest Bayesian probability.
While the K-means algorithm-based straight-line method generated paths that led to UAV collisions, the
proposed method generates paths that allow UAVs to avoid obstacles.
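The sketch below approximates the center-path selection: under a uniform prior, the record with the highest posterior is taken to be the one closest on average to all other records connecting the same waypoints; records are assumed resampled to a common number of points.

```python
import numpy as np

def center_path(records: np.ndarray) -> np.ndarray:
    """records: (n_records, n_points, 2) resampled 2D flight paths
    between the same pair of waypoints."""
    # Mean point-wise distance between every pair of recorded paths.
    diffs = records[:, None, :, :] - records[None, :, :, :]
    pair_dist = np.linalg.norm(diffs, axis=-1).mean(axis=-1)   # (n, n)
    # The "center path": smallest total distance to all other records.
    return records[np.argmin(pair_dist.sum(axis=1))]
```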
In face recognition, the number of available training samples per category is sometimes insufficient, so the performance of models trained by convolutional neural networks is not ideal. A small-sample face recognition algorithm based on a novel Siamese network, which does not need abundant samples for training, is proposed in this paper. The algorithm designs and realizes a new Siamese network model, SiameseFace1, which takes pairs of face images as inputs and maps them to a target space so that the L2-norm distance in the target space represents the semantic distance in the input space. The mapping is represented by a neural network trained with supervised learning. Moreover, a more lightweight Siamese network model, SiameseFace2, is designed to reduce the network parameters without losing accuracy. We also present a new method to generate training data and expand the number of training samples per category in the AR and Labeled Faces in the Wild (LFW) datasets, which improves the recognition accuracy of the models. Four loss functions are adopted in experiments on the AR and LFW datasets. The results show that the contrastive loss function combined with the new Siamese network model proposed in this paper can effectively improve the accuracy of face recognition.
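For reference, a minimal PyTorch sketch of the contrastive loss the paper reports as most effective is given below; it applies to embeddings of shape (batch, dim) produced by any twin-branch network.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb1, emb2, same, margin: float = 1.0):
    """same = 1 for pairs of the same person, 0 otherwise."""
    d = F.pairwise_distance(emb1, emb2)    # L2 distance in the target space
    # Pull genuine pairs together, push impostor pairs beyond the margin.
    loss = same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)
    return loss.mean()
```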
Recently, computational intelligence has received a lot of attention from researchers due to its potential
applications in artificial intelligence. In computer science, computational intelligence refers to a machine's
ability to learn how to complete various tasks, such as making observations or carrying out experiments. We
adopted a computational intelligence solution for monitoring residual resources in cloud computing environments.
The proposed residual resource monitoring scheme periodically monitors the cloud-based host machines, so
that the post-migration performance of a virtual machine is as consistent with the pre-migration performance
as possible. To this end, we use a novel similarity measure to find the best target host to which to migrate a virtual
machine. The design of the proposed residual resource monitoring scheme helps maintain the quality of
service and the service level agreement during migration. We carried out a number of experimental evaluations
to demonstrate the effectiveness of the proposed scheme. Our results show that
the proposed scheme intelligently measures the similarities between virtual machines in cloud computing
environments without causing performance degradation, whilst preserving the quality of service and the service level agreement.
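The paper's similarity measure is described only as novel, so the sketch below substitutes plain cosine similarity between residual-resource vectors to show how a migration target could be selected.

```python
import numpy as np

def best_target(vm_demand: np.ndarray, hosts: dict[str, np.ndarray]) -> str:
    """Pick the host whose residual resources (e.g., CPU, memory, disk
    headroom) best match the migrating VM's demand vector."""
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(hosts, key=lambda h: cosine(vm_demand, hosts[h]))

hosts = {"h1": np.array([0.6, 0.5, 0.7]), "h2": np.array([0.1, 0.9, 0.2])}
print(best_target(np.array([0.5, 0.4, 0.6]), hosts))  # "h1"
```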
Intelligent human identification using face information has become a research hotspot, with applications ranging from the Internet
of Things (IoT), intelligent self-service banking, and intelligent surveillance to public safety and intelligent
access control. Because 2D face images are usually captured from a long distance in unconstrained environments,
fully exploiting this advantage and making human recognition suitable for wider intelligent applications
with higher security and convenience requires overcoming several key difficulties: gray-scale change caused by
illumination variance, occlusion caused by glasses, hair, or scarves, self-occlusion, and deformation caused by
pose or expression variation. Many solutions have been proposed to overcome these difficulties. However, most of them
improve recognition performance under only one influencing factor, which still cannot meet the demands of real face
recognition scenarios. In this paper, we propose a multi-scale parallel convolutional neural network architecture
to extract deep, robust facial features with high discriminative ability. Extensive experiments are conducted
on the CMU-PIE, extended FERET, and AR databases, and the experimental results show that the proposed
algorithm exhibits excellent discriminative ability compared with other existing algorithms.
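A minimal sketch of the multi-scale parallel idea: the same input passes through convolution branches with different receptive fields, and the branch outputs are concatenated; the channel counts and kernel sizes are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch: int = 1, branch_ch: int = 16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2)
            for k in (3, 5, 7)                  # three spatial scales
        ])

    def forward(self, x):
        # Run the parallel branches and stack their feature maps.
        return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)

feats = MultiScaleBlock()(torch.randn(1, 1, 64, 64))   # (1, 48, 64, 64)
```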
In this paper, we propose an improved model to provide users with better long-term prediction of
waterworks operation data. Existing prediction models, such as multiple linear regression models
considering time, day, and seasonal characteristics, have been studied in various forms, but they predict
demand fluctuations poorly, and their long-term accuracy is insufficient.
In the deep learning domain in particular, the long short-term memory (LSTM) model has been applied to
predict water purification plant data because its time series predictions are highly reliable. However, it is
necessary to reflect the correlations among various related factors, and a supplementary model is needed to
improve long-term predictability. In this paper, a convolutional neural network (CNN) model is introduced
to select input variables that have the necessary correlations and to improve the long-term prediction rate,
thereby increasing accuracy through a structure that combines the CNN with the LSTM's predicted values. In
addition, a multiple linear regression model is applied to compile the predictions of the CNN and LSTM,
which is then taken as the final predicted outcome.
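The final combination step can be sketched as follows: a multiple linear regression is fit over the CNN and LSTM predictions to produce the compiled forecast; the two base models are stubbed out with small arrays here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Per-period forecasts from the two base models (stand-in values).
cnn_pred = np.array([10.2, 11.0, 9.8, 10.5])
lstm_pred = np.array([10.0, 11.3, 9.9, 10.8])
actual = np.array([10.1, 11.2, 9.7, 10.6])

# Multiple linear regression over the base predictions.
features = np.column_stack([cnn_pred, lstm_pred])
stacker = LinearRegression().fit(features, actual)
final = stacker.predict(features)            # compiled forecast
```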
Quorum-based algorithms are widely used for solving several problems in mobile ad hoc networks (MANETs) and wireless sensor networks (WSNs). Several quorum-based protocols have been proposed for multi-hop ad hoc networks, each with its pros and cons. The quorum-based protocol QEC (or QPS) was the first study among the asynchronous sleep scheduling protocols. At the time, most of the proposed protocols were non-adaptive, but adaptive quorum-based protocols have since gained increasing attention, because protocols are needed that can change their quorum size adaptively with network conditions. In this paper, we first introduce the most popular quorum systems and explain quorum system properties and performance criteria. Then, we present a comparative and comprehensive survey of the non-adaptive and adaptive quorum-based protocols, which are discussed in depth. We also compare different quorum systems in terms of the expected quorum overlap size (EQOS) and active ratio. Finally, we summarize the pros and cons of current adaptive and non-adaptive quorum-based protocols.
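As a concrete example of one classical design covered by such surveys, the sketch below builds a grid quorum (a node's row plus its column of an n-by-n array) and checks the intersection property that guarantees two nodes' wake-up schedules overlap.

```python
from itertools import product

def grid_quorum(i: int, n: int) -> set[int]:
    """Quorum of slot i in an n-by-n grid: its whole row and column."""
    r, c = divmod(i, n)
    return {r * n + k for k in range(n)} | {k * n + c for k in range(n)}

n = 4
# Any two quorums intersect: node a's row always meets node b's column.
assert all(grid_quorum(a, n) & grid_quorum(b, n)
           for a, b in product(range(n * n), repeat=2))
```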
The significant advances in information and communication technologies are changing how information is accessed. The Internet is a very important source of information, and it influences the development of other media. Furthermore, the growth of digital content is a major problem for academic digital libraries, so similar tools can be applied in this scope to provide users with access to information. Given the importance of this, we have reviewed and analyzed several proposals that improve the dissemination of information in university digital libraries and promote access to information of interest. These proposals adapt a user's access to information according to his or her needs and preferences. As seen in the literature, one of the techniques with the best results is the application of recommender systems: tools whose objective is to evaluate and filter the vast amount of digital information accessible online in order to help users in their processes of accessing information. In particular, we focus on the analysis of fuzzy linguistic recommender systems (i.e., recommender systems that use fuzzy linguistic modeling tools to manage the user's preferences and the uncertainty of the system in a qualitative way). Thus, in this work, we analyze some proposals based on fuzzy linguistic recommender systems that help researchers, students, and teachers access resources of interest and thereby improve and complement the services provided by academic digital libraries.
Associative and bidirectional associative memories are examples of associative structures studied intensively in the literature. The underlying idea is to realize associative mappings so that the recall processes (one-directional and bidirectional) are realized with minimal recall errors. Associative and fuzzy associative memories have been studied in numerous areas, yielding efficient applications in image recall and enhancement and in fuzzy controllers, which can be regarded as one-directional associative memories. In this study, we revisit and augment the concept of associative memories by offering some new design insights in which the corresponding mappings are realized on the basis of a related collection of landmarks (prototypes) over which an associative mapping becomes spanned. In light of the bidirectional character of the mappings, we have developed an augmentation of the existing fuzzy clustering (fuzzy c-means, FCM) in the form of so-called collaborative fuzzy clustering, in which the interaction in the formation of prototypes is optimized so that the bidirectional recall errors can be minimized. Furthermore, we generalize the mapping into its granular version, in which the numeric prototypes formed through the clustering process are made granular so that the quality of the recall can be quantified. We propose several scenarios in which the allocation of information granularity is aimed at optimizing the characteristics of the recalled results (information granules), quantified in terms of coverage and specificity. We also introduce various architectural augmentations of the associative structures.
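For orientation, the sketch below implements standard FCM, the algorithm this study augments; the collaboration term that couples the two feature spaces in the bidirectional variant is omitted.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: returns memberships U (c, N) and
    prototypes V (c, dim)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                     # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)         # prototypes
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        e = 2.0 / (m - 1.0)
        U = 1.0 / (d ** e * (1.0 / d ** e).sum(axis=0))
    return U, V
```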
Artificial intelligence, especially deep learning technology, is penetrating the majority of research areas, including the field of bioinformatics. However, deep learning has some limitations, such as the complexity of parameter tuning, architecture design, and so forth. In this study, we analyze these issues and challenges with regard to its applications in bioinformatics, particularly genomic analysis and medical image analytics, and give the corresponding approaches and solutions. Although these solutions are mostly rules of thumb, they can effectively handle the issues involved in training learning machines. As such, we explore where deep learning technology is heading by examining several directions, such as automation, scalability, individuality, mobility, integration, and intelligence warehousing.
This survey paper explores the application of multimodal feedback in automated systems for motor learning. We review the findings of recent studies in this field, using rehabilitation and various motor training scenarios as context. We discuss popular feedback delivery and sensing mechanisms for motion capture and processing in terms of requirements, benefits, and limitations. Modality selection is presented by reviewing best-practice approaches for each modality relative to motor task complexity, with example implementations from recent work. We summarize the advantages and disadvantages of several approaches for integrating modalities in terms of fusion and frequency of feedback during motor tasks. Finally, we review the limitations of perceptual bandwidth and provide an evaluation of the information transfer for each modality.
The recent advent of increasingly affordable and powerful 3D scanning devices capable of capturing high resolution range data about real-world objects and environments has fueled research into effective 3D surface reconstruction techniques for rendering the raw point cloud data produced by many of these devices into a form that would make it usable in a variety of application domains. This paper, therefore, provides an overview of the existing literature on surface reconstruction from 3D point clouds. It explains some of the basic surface reconstruction concepts, describes the various factors used to evaluate surface reconstruction methods, highlights some commonly encountered issues in dealing with the raw 3D point cloud data and delineates the tradeoffs between data resolution/accuracy and processing speed. It also categorizes the various techniques for this task and briefly analyzes their empirical evaluation results demarcating their advantages and disadvantages. The paper concludes with a cross-comparison of methods which have been evaluated on the same benchmark data sets along with a discussion of the overall trends reported in the literature. The objective is to provide an overview of the state of the art on surface reconstruction from point cloud data in order to facilitate and inspire further research in this area.
Gene identification is at the center of genomic studies. Although the first phase of the Encyclopedia of DNA Elements (ENCODE) project has been claimed to be complete, the annotation of the functional elements is far from being so. Computational methods continue to play important roles in gene identification and other relevant issues. So far, a great deal of work has been performed in this area, and a plethora of computational methods and avenues have been developed. Many review papers have summarized these methods and other related work. However, most of them focus on the methodologies from a particular aspect or perspective. Different from these existing bodies of research, this paper aims to comprehensively summarize the mainstream computational methods in gene identification and tries to provide a short but concise technical reference for future studies. Moreover, this review sheds light on the emerging trends and cutting-edge techniques that are believed to be capable of leading research in this field in the future.
In this paper, we present research results on computing-intensive applications on modern high-performance architectures, from the perspective of high computational needs. Computing-intensive applications are an important family of applications in the distributed computing domain, and they have been the object of study under different distributed computing paradigms and infrastructures. Such applications are distinguished by their demanding need for CPU computing, independently of the amount of data associated with the problem instance. Among computing-intensive applications are simulation-based applications, which aim to maximize system resources for processing large computations. In this research work, we consider an application that simulates scheduling and resource allocation in a Grid computing system using genetic algorithms; a rather large number of simulations is needed to extract statistically meaningful results about the behavior of the simulated system. We study the performance of Oracle Grid Engine for this application running on a cluster of high computing capacity. Several scenarios were generated to measure response time and queuing time under different workloads and numbers of nodes in the cluster.
The accuracy of training-based activity recognition depends on the training procedure and on how comprehensively the training dataset represents the activity and its varieties. Additionally, training incurs substantial cost and effort in collecting training data. To address these limitations, we have developed a training-free activity recognition approach based on a fuzzy logic algorithm that utilizes a generic activity model and associated activity semantic knowledge. The approach is validated through experiments with real activity datasets. The results show that the fuzzy-logic-based algorithms exhibit comparable or better accuracy than other training-based approaches.
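As a generic illustration of fuzzy evaluation (not the authors' model), the sketch below grades sensed features with a trapezoidal membership function and takes the minimum as an activity's degree of match; the feature ranges are invented.

```python
def trapezoid(x, a, b, c, d):
    """1.0 on [b, c], sloping down to 0.0 outside [a, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical "cooking" model: duration 10-40 min (typically 15-30),
# and location within about 1.5 m of the stove.
duration_fit = trapezoid(22.0, 10, 15, 30, 40)      # minutes observed
location_fit = trapezoid(0.8, 0.0, 0.5, 1.5, 2.0)   # metres from stove
print(min(duration_fit, location_fit))              # degree of "cooking"
```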
Recent technological advances provide the opportunity to use large amounts of multimedia data from a multitude of sensors with different modalities (e.g., video, text) for the detection and characterization of criminal activity. Their integration can compensate for sensor and modality deficiencies by using data from other available sensors and modalities. However, building such an integrated system at the scale of neighborhoods and cities is challenging due to the large amount of data to be considered and the need to ensure a short response time to potential criminal activity. In this paper, we present a system that enables multi-modal data collection at scale and automates the detection of events of interest for the surveillance and reconnaissance of criminal activity. The proposed system showcases novel analytical tools that fuse multimedia data streams to automatically detect and identify specific criminal events and activities. More specifically, the system detects and analyzes series of incidents (an incident is an occurrence or artifact relevant to a criminal activity extracted from a single media stream) in the spatiotemporal domain to extract events (actual instances of criminal events), while cross-referencing multimodal media streams and incidents in time and space to provide a comprehensive view to a human operator without information overload. We present several case studies that demonstrate how the proposed system can provide law enforcement personnel with forensic and real-time tools to identify and track potential criminal activity.
The confinement problem was first noted four decades ago. Since then, a huge amount of effort has been spent on defining and mitigating the problem. The evolution of technologies from traditional operating systems to mobile and cloud computing brings about new security challenges. It is perhaps timely that we review the work that has been done. We discuss the foundational principles from classical works, as well as the efforts towards solving the confinement problem in three domains: operating systems, mobile computing, and cloud computing. While common issues exist across all three domains, unique challenges arise for each of them, which we discuss.
Since a social network is by definition diverse, the problem of estimating the preferences of its users is becoming increasingly essential for personalized applications, which range from service recommender systems to the targeted advertising of services. However, unlike traditional estimation problems where the underlying target distribution is stationary, estimating a user's interests typically involves non-stationary distributions. The consequent time-varying nature of the distribution to be tracked imposes stringent constraints on the "unlearning" capabilities of the estimator used. Therefore, resorting to strong estimators that converge with probability 1 is inefficient, since they rely on the assumption that the distribution of the user's preferences is stationary. In this vein, we propose to use a family of stochastic-learning-based weak estimators for learning and tracking a user's time-varying interests. Experimental results demonstrate that our proposed paradigm outperforms some of the traditional legacy approaches that represent the state of the art.
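A minimal sketch of a weak-estimator-style update is shown below: instead of averaging the whole history, as a strong (MLE) estimator would, the estimate is a fixed-rate blend, so it keeps tracking after the distribution shifts; the data stream is synthetic.

```python
import random

def track(stream, lam=0.9):
    """Weak-estimator update for a Bernoulli parameter."""
    p = 0.5
    for x in stream:
        p = lam * p + (1 - lam) * x    # fixed-rate blend, never converges
        yield p

random.seed(1)
# A user whose interest rate drops from 0.8 to 0.2 halfway through.
stream = [int(random.random() < 0.8) for _ in range(200)] + \
         [int(random.random() < 0.2) for _ in range(200)]
print(list(track(stream))[-1])         # near 0.2 after the shift
```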
The most important criterion for achieving maximum performance in a wireless mesh network (WMN) is to limit the interference within the network. For this purpose, especially in a multi-radio network, the best option is to use non-overlapping channels among different radios within the same interference range. Previous works that considered non-overlapping channels in IEEE 802.11a as the basis for performance optimization assumed the link quality across all channels to be uniform. In this paper, we present a measurement-based study of link quality across all channels in an IEEE 802.11a-based indoor WMN testbed. Our results show that the generalized assumption of uniform performance across all channels does not hold in practice for an indoor environment, and that signal quality depends on the geometry around the mesh routers.
This paper describes different aspects of a typical RFID implementation. Section 1 provides a brief overview of the concept of automatic identification and compares the use of different technologies, while Section 2 describes the basic components of a typical RFID system. Section 3 and Section 4 deal with the detailed specifications of RFID transponders and RFID interrogators, respectively. Section 5 highlights different RFID standards and protocols, and Section 6 enumerates the wide variety of applications where RFID systems are known to have made a positive impact. Section 7 deals with privacy issues concerning the use of RFIDs, and Section 8 describes common RFID system vulnerabilities. Section 9 covers a variety of RFID security issues, followed by a detailed listing of countermeasures and precautions in Section 10.
Granular Computing has emerged as a unified and coherent framework for the design, processing, and interpretation of information granules. Information granules are formalized within various frameworks such as sets (interval mathematics), fuzzy sets, rough sets, shadowed sets, and probabilities (probability density functions), to name several of the most visible approaches. In spite of the apparent diversity of the existing formalisms, there are some underlying commonalities articulated in terms of the fundamentals, algorithmic developments, and ensuing application domains. In this study, we introduce two pivotal concepts: a principle of justifiable granularity and a method of optimal information allocation in which information granularity is regarded as an important design asset. We show that these two concepts are relevant to various formal setups of information granularity and offer constructs supporting the design of information granules and their processing. A suite of applied studies is focused on knowledge management, for which we identify several key categories of schemes.
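A small sketch of the principle of justifiable granularity for interval granules: choose the interval that maximizes the product of coverage (the fraction of data covered) and specificity (the interval's narrowness); the grid search below is a simple illustrative optimizer, not the study's method.

```python
import numpy as np

def justifiable_interval(data: np.ndarray, grid: int = 100):
    """Interval granule maximizing coverage * specificity."""
    med = np.median(data)
    span = data.ptp() or 1.0
    best, best_score = (med, med), -1.0
    for b in np.linspace(med, data.max(), grid):      # upper bound
        for a in np.linspace(data.min(), med, grid):  # lower bound
            cov = np.mean((data >= a) & (data <= b))  # coverage
            spec = 1.0 - (b - a) / span               # specificity
            if cov * spec > best_score:
                best_score, best = cov * spec, (a, b)
    return best

print(justifiable_interval(np.random.default_rng(0).normal(size=300)))
```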
In earlier days, most of the data carried on communication networks was textual, requiring limited bandwidth. With the rise of multimedia and network technologies, the bandwidth requirements of data have increased considerably. If a network link cannot meet the minimum bandwidth requirement of the data at any time, transmission along that path becomes difficult, which leads to network congestion. This causes delays in data transmission and might also lead to packet drops in the network; the retransmission of these lost packets would aggravate the situation and jam the network. In this paper, we aim to address the problem of network congestion in mobile ad hoc networks [1, 2] by designing a protocol that routes intelligently and minimizes the delay in data transmission. Our objective is to move traffic away from the shortest path obtained by a suitable shortest-path algorithm to a less congested path, so as to minimize packet drops during data transmission and avoid unnecessary delay. For this we propose a protocol named Congestion-Aware Selection of Path with Efficient Routing (CASPER). Here, a router runs the shortest path algorithm after pruning those links that violate a given set of constraints. The proposed protocol has been compared with two link-state protocols, namely OSPF [3, 4] and OLSR [5, 6, 7, 8]. The results achieved show that our protocol performs better in terms of network throughput and transmission delay in the case of bulky data transmission.
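A minimal sketch of the core idea as described, pruning links that violate a constraint and then running a shortest-path computation on what remains; the load attribute and threshold are assumptions.

```python
import heapq

def casper_path(graph, src, dst, max_load=0.8):
    """graph: {node: [(neighbor, cost, load), ...]}. Dijkstra over the
    subgraph of links whose load satisfies the congestion constraint."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                     # stale queue entry
        for v, cost, load in graph[u]:
            if load > max_load:          # prune congested links
                continue
            if d + cost < dist.get(v, float("inf")):
                dist[v], prev[v] = d + cost, u
                heapq.heappush(heap, (d + cost, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]                # KeyError if dst is unreachable
    return [src] + path[::-1]
```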
Vehicular networks are a promising application of mobile ad hoc networks. In this paper, we introduce an efficient broadcast technique, called CB-S (Cell Broadcast for Streets), for vehicular networks with occlusions such as skyscrapers. In this environment, the road network is fragmented into cells such that nodes in a cell can communicate with any node within a two-cell distance. Each mobile node is equipped with a GPS (Global Positioning System) unit and a map of the cells, which contains information about the cells including their identifiers and the coordinates of the upper-right and lower-left corner of each cell. CB-S has the following desirable property: a message is broadcast by rebroadcasting it from every other cell in the terrain, which allows CB-S to perform efficiently. Our simulation results indicate that messages always reach all nodes in the wireless network. This perfect coverage is achieved with minimal overhead; that is, CB-S uses a small number of nodes to disseminate the data packets as quickly as probabilistically possible, which gives it the advantage of low delay. To show these benefits, we give simulation results comparing CB-S with four other broadcast techniques. In practice, CB-S can be used for information dissemination or to reduce the high cost of destination discovery in routing protocols. By also specifying the radius of the affected zone, CB-S is more efficient when broadcasting to a subset of the nodes is desirable.
Cryptographic hash functions reduce inputs of arbitrary or very large length to a short string of fixed length. All hash function designs start from a compression function with fixed-length inputs. The compression function itself is designed from scratch or derived from a block cipher or a permutation. The most common procedure to extend the domain of a compression function in order to obtain a hash function is simple linear iteration; however, some variants use multiple iterations or a tree structure that allows for parallelism. This paper presents a survey of 17 extenders in the literature. It considers the natural question of whether these preserve the security properties of the compression function, in particular collision resistance, second preimage resistance, preimage resistance, and the pseudo-random oracle property.
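For reference, the most common extender, the plain Merkle-Damgard iteration with length-strengthening padding, can be sketched as follows; the toy compression function is NOT secure and only shows the iteration structure.

```python
BLOCK, STATE = 8, 4                       # bytes; tiny for illustration

def f(h: bytes, block: bytes) -> bytes:   # stand-in compression function
    return bytes((h[i % STATE] ^ block[i] ^ i) & 0xFF
                 for i in range(BLOCK))[:STATE]

def md_hash(msg: bytes, iv: bytes = b"\x00" * STATE) -> bytes:
    # Length-strengthening pad: 0x80, zeros, then the 8-byte bit length.
    zeros = (BLOCK - (len(msg) + 9) % BLOCK) % BLOCK
    data = msg + b"\x80" + b"\x00" * zeros + (8 * len(msg)).to_bytes(8, "big")
    h = iv
    for i in range(0, len(data), BLOCK):
        h = f(h, data[i:i + BLOCK])       # linear iteration over blocks
    return h

print(md_hash(b"abc").hex())
```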
This paper proposes a novel reversible data hiding scheme based on a vector quantization (VQ) codebook. The proposed scheme uses the principal component analysis (PCA) algorithm to sort the codebook and to find two similar codewords for an image block. According to the secret to be embedded and the difference between those two similar codewords, the original image block is transformed into a difference number table. Finally, this table is compressed by entropy coding and sent to the receiver. The experimental results demonstrate that the proposed scheme can achieve greater hiding capacity, about five bits per index, with an acceptable bit rate. At the receiver end, after the compressed code has been decoded, the image can be recovered to a VQ compressed image.
The interconnection of mobile devices in urban environments can open up many vistas for collaboration and content-based services. This requires setting up a network in an urban environment that not only provides the necessary services to the user but is also secure and energy efficient. In this paper, we propose a secure, energy-efficient dynamic routing protocol for heterogeneous wireless sensor networks in urban environments. Every node makes a decision based on various parameters, such as longevity, distance, and battery power, which measure node and link quality, to decide the next hop in the route. This ensures that the total load is distributed evenly while conserving the energy of battery-constrained nodes. The protocol also maintains a trusted population for each node through a Dynamic Trust Factor (DTF), which ensures secure communication in the environment by gradually isolating malicious nodes. The results obtained show that the proposed protocol, when compared with another energy-efficient protocol (MMBCR) and a widely accepted protocol (DSR), gives far better results in terms of energy efficiency. Similarly, it also outdoes a secure protocol (QDV) when it comes to detecting malicious nodes in the network.
The trend of Next Generation Network (NGN) evolution is towards providing multiple multimedia services to users through ubiquitous networks. The aim of the IP Multimedia Subsystem (IMS) is to integrate mobile communication networks and computer networks. The IMS plays an important role in NGN services, which can be delivered over heterogeneous networks and different access technologies. IMS can be used to manage all service-related issues, such as quality of service (QoS), charging, access control, and user and service management. Nowadays, Internet technology changes with each passing day, and new technologies have new impacts on IMS. In this paper, we present a survey of IMS and discuss the impacts of new technologies on IMS, such as P2P, SCIM, and Web Services, along with its security issues.
Due to the convergence of voice, data, and video, today's telecom operators face complex service and network management challenges in offering differentiated value-added services that meet customer expectations. Without the operations support of a well-developed Business Support System/Operations Support System (BSS/OSS), it is difficult to provide competitive services in a timely and effective manner upon customer request. In this paper, a suite of NGOSS-based Telecom OSS (TOSS) is developed to support the fulfillment and assurance operations of telecom services and IT services. Four OSS groups, TOSS-P (intelligent service provisioning), TOSS-N (integrated large-scale network management), TOSS-T (trouble handling and resolution), and TOSS-Q (end-to-end service quality management), are organized and integrated following the standard telecom operation processes (i.e., eTOM). We use IPTV and IP-VPN operation scenarios to show how these OSS groups work together to support daily business operations with the benefits of cost reduction and revenue acceleration.
By providing ubiquitous Internet connectivity, wireless networks offer users more convenient ways to surf the Internet. However, wireless networks encounter more technological challenges than wired networks, such as bandwidth, security problems, and handoff latency. Thus, this paper proposes new technologies to solve these problems. First, a Security Access Gateway (SAG) is proposed to address the security issue. Mobile terminals cannot normally perform heavy security calculations because of their low computing power. The SAG not only offers high computing power to handle the encryption demands of its domain, but also helps mobile terminals establish multiple security tunnels to maintain a secure domain. Second, Robust Header Compression (RoHC) technology is adopted to increase the utilization of bandwidth. Instead of the Access Point (AP), an Access Gateway (AG) is used to deal with packet header compression and decompression from the wireless end; the AG's high computing power reduces the load on the AP, which in the original architecture had to deal with a large number of header compression/decompression demands from mobile terminals. Finally, wireless networks must offer users "mobility" and "roaming," which can be achieved with Mobile IPv6 (MIPv6) technology. Nevertheless, such technology might cause latency. Furthermore, how the security tunnel and header compression established before a handoff can continue to be used by mobile terminals after the handoff is another great challenge. Thus, this paper proposes to solve these problems by using Early Binding Updates (EBU) and the Security Access Gateway (SAG) to offer a complete mechanism with low latency, low handoff calculation overhead, and high security.
Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery. In this paper, an overview of some of the well-known methods in each of these categories is provided and some of the benefits and drawbacks of the schemes mentioned therein are examined. Furthermore, a discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has also been provided. This paper also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology.
With regard to ethical standards, the JIPS takes plagiarism very seriously and thoroughly checks all articles.
The JIPS defines research ethics as securing objectivity and accuracy in the execution of research and the reporting of results, without unintentional errors resulting from negligence or incorrect knowledge,
and without intentional misconduct such as falsification or plagiarism. When an author submits a paper to the JIPS online submission and peer-review system,
he/she should also upload the separate file "author check list" which contains a statement that all his/her research has been performed in accordance with ethical standards.
Among the JIPS editorial board members, there are four associate manuscript editors who support the JIPS by dealing with any ethical problems associated with the publication process
and give advice on how to handle cases of suspected research and publication misconduct. When the JIPS managing editor looks over submitted papers and checks that they are suitable for further processing,
the managing editor also routes them to the CrossCheck service provided by iThenticate. Based on the results provided by the CrossCheck service, the JIPS associate manuscript editors inform the JIPS editor-in-chief of any plagiarism that is detected in a paper.
Then, the JIPS editor-in-chief communicates such detection to the author(s) while rejecting the paper.
Since 2005, all papers published in the JIPS have been subjected to peer review and, upon acceptance, are immediately and permanently made
available free of charge for everyone worldwide to read and download from the journal's homepage (http://www.jips-k.org)
without any subscription fee or personal registration. All articles are Open Access articles distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. The KIPS waives paper processing charges for submissions from international authors as well as society members. This waiver policy supports and encourages the publication of quality papers, making the journal an international forum for the exchange of different ideas and experiences.