The Journal of Information Processing Systems
(JIPS) is the official international journal of the Korea Information Processing Society.
As information processing systems are progressing at a rapid pace, the Korea Information Processing Society is committed to providing researchers and other professionals
with the academic information and resources they need to keep abreast of ongoing developments. The JIPS aims to be a premier source that enables researchers and professionals
all over the world to promote, share, and discuss all major research issues and developments in the field of information processing systems and other related fields.
ISSN: 1976-913X (Print), ISSN: 2092-805X (Online)
[Jan. 01, 2018] As of January 1, 2018, the JIPS manages three manuscript tracks: 1) Regular Track, 2) Fast Track, and 3) Future Topic Track. Please refer to the details on the author information page.
[Dec. 29, 2017] We have selected the winners of the 2017 JIPS survey paper awards. Please refer here for details.
[Dec. 12, 2016] Calls for papers for the special sections scheduled in 2017 have been posted. Please refer here for details.
[Aug. 1, 2016] Since August 2016, the JIPS has been indexed in the Emerging Sources Citation Index (ESCI), a new Web of Science index managed by Thomson Reuters and launched in late 2015 for journals that have passed an initial evaluation for inclusion in the SCI/SCIE/AHCI/SSCI indexes. Indexing in the ESCI will improve the visibility of the JIPS and provide a mark of quality. This achievement benefits all JIPS authors. For more information about the ESCI, please see the ESCI fact sheet file.
Journal of Information Processing Systems, Vol. 14, No. 4, 2018
Speaker verification system performance depends on the utterance of each speaker. To verify a speaker, important information has to be captured from the utterance. Under the constraint of limited data, speaker verification has become a challenging task: the testing and training data amount to only a few seconds of speech. The feature vectors extracted by single frame size and rate (SFSR) analysis are not sufficient for training and testing speakers in speaker verification. This leads to poor speaker modeling during training and may not yield good decisions during testing. The problem can be resolved by increasing the number of feature vectors extracted from training and testing data of the same duration. To this end, we use multiple frame size (MFS), multiple frame rate (MFR), and multiple frame size and rate (MFSR) analysis techniques for speaker verification under the limited data condition. These analysis techniques extract relatively more feature vectors during training and testing, yielding improved modeling and testing for limited data. To demonstrate this, we use mel-frequency cepstral coefficients (MFCC) and linear prediction cepstral coefficients (LPCC) as features. Gaussian mixture models (GMM) and the GMM-universal background model (GMM-UBM) are used for modeling the speakers. The database used is NIST-2003. The experimental results indicate that MFS, MFR, and MFSR analysis perform radically better than SFSR analysis, and that LPCC-based MFSR analysis outperforms the other analysis and feature extraction techniques.
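As an illustration of the idea behind MFS/MFR/MFSR analysis, the following minimal sketch extracts MFCC vectors at several frame sizes and frame rates and pools them, so that a short utterance yields more feature vectors than a single fixed framing would. It assumes the librosa library; the specific frame sizes, hop lengths, and sampling rate are illustrative choices, not taken from the paper.

import numpy as np
import librosa

def mfsr_mfcc(path, frame_sizes=(256, 512, 1024), hop_lengths=(80, 160)):
    # Pool MFCC vectors from several framings so a short utterance
    # yields more training/testing vectors than one fixed framing.
    y, sr = librosa.load(path, sr=8000)  # narrowband assumption (illustrative)
    vectors = []
    for n_fft in frame_sizes:            # multiple frame sizes (MFS)
        for hop in hop_lengths:          # multiple frame rates (MFR)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                        n_fft=n_fft, hop_length=hop)
            vectors.append(mfcc.T)       # one 13-dim vector per frame
    return np.vstack(vectors)            # pooled MFSR feature matrix

A GMM or GMM-UBM (e.g., sklearn.mixture.GaussianMixture) would then be trained on the pooled vectors.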
The automatic extraction of temporal information from written texts is a key component of question answering and summarization systems, and the efficacy of those systems depends heavily on whether temporal expressions (TEs) are successfully extracted. In this paper, three different approaches for TE extraction in Uyghur are developed and analyzed. A novel approach that uses lexical semantics as additional information is also presented to extend classical approaches, which are mainly based on morphology and syntax. We used a manually annotated news dataset labeled with TIMEX3 tags and generated three models with different feature combinations. The experimental results show that the best run achieved 0.87 for precision, 0.89 for recall, and 0.88 for F1-measure in Uyghur TE extraction. From the analysis of the results, we concluded that the application of semantic knowledge resolves ambiguity problems at shallower levels of language analysis and significantly aids the development of a more efficient Uyghur TE extraction system.
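For reference, the reported scores follow the standard span-matching evaluation; the minimal sketch below, with hypothetical gold and predicted TE spans, shows how such precision, recall, and F1 figures are computed.

def prf1(gold, pred):
    # gold, pred: sets of (start, end) character spans labeled as TEs
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Consistency check on the reported figures:
# 2 * 0.87 * 0.89 / (0.87 + 0.89) = 0.88, matching the stated F1-measure.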
Bitcoin is a decentralized crypto-currency, based on a peer-to-peer network, that was introduced by Satoshi Nakamoto in 2008. Bitcoin transactions are written using a scripting language. The hash value of a transaction's script is used to identify the transaction over the network. In February 2014, a Bitcoin exchange company, Mt. Gox, claimed that it had lost hundreds of millions of US dollars' worth of bitcoins in an attack known as transaction malleability. Although the weakness had been known since 2011, this was the first known attack that resulted in a company losing millions of US dollars in bitcoins. Our reason for writing this paper is to understand Bitcoin transaction malleability and to propose an efficient solution. Our solution is a softfork (i.e., it can be gradually implemented). Towards the end of the paper we present a detailed analysis of our scheme with respect to various transaction malleability-based attack scenarios to show that our simple solution can prevent future incidents involving transaction malleability from occurring. We compare our scheme with existing approaches and present an analysis regarding the computational cost and storage requirements of our proposed solution, which shows the feasibility of our proposed scheme.
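To illustrate the underlying issue, the sketch below shows how a transaction identifier is derived as the double SHA-256 hash of the serialized transaction, so that mutating a signature encoding inside the transaction (without invalidating it) changes the txid. The byte strings are hypothetical placeholders, not a real transaction serialization format.

import hashlib

def txid(serialized_tx: bytes) -> str:
    # Bitcoin identifies a transaction by the double SHA-256 of its serialization.
    return hashlib.sha256(hashlib.sha256(serialized_tx).digest()).hexdigest()

tx = b"version|inputs|scriptSig=<sig-encoding-A>|outputs|locktime"  # placeholder
mutated = tx.replace(b"<sig-encoding-A>", b"<sig-encoding-B>")      # same meaning
print(txid(tx) != txid(mutated))  # True: same payment, different identifier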
Dynamic time warping (DTW) is the main algorithm for time series alignment. However, it is unsuitable for quasi-periodic time series. At present, apart from the recently published shape exchange algorithm (SEA) method and its derivatives, no other technique is able to handle the alignment of this type of very complex time series. In this work, we propose a novel algorithm that combines the advantages of the SEA and DTW methods. Our main contribution consists in elevating the alignment power of DTW from the lowest level (Class A, non-periodic time series) to the highest level (Class C, multiple-period time series each containing a different number of periods), according to the recent classification of time series alignment methods proposed by Boucheham (Int J Mach Learn Cybern, vol. 4, no. 5, pp. 537-550, 2013). The new method (quasi-periodic dynamic time warping [QP-DTW]) was compared to both the SEA and DTW methods on electrocardiogram (ECG) time series selected from the Massachusetts Institute of Technology - Beth Israel Hospital (MIT-BIH) public database and from the PTB Diagnostic ECG Database. Results show that the proposed algorithm is more effective than DTW and SEA in terms of alignment accuracy on both the qualitative and quantitative levels. Therefore, QP-DTW would potentially be more suitable for many applications related to time series (e.g., data mining, pattern recognition, search/retrieval, and motif discovery).
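For context, classical DTW computes an optimal monotone alignment by dynamic programming over a cumulative cost matrix. The following minimal NumPy sketch shows the standard DTW recurrence (the baseline being improved upon, not the proposed QP-DTW).

import numpy as np

def dtw(x, y):
    # Classical DTW: D[i, j] = |x[i]-y[j]| + min(D[i-1,j], D[i,j-1], D[i-1,j-1])
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]  # total alignment cost; the path is recovered by backtracking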
In this paper, we propose a novel algorithm for rendering motion-blurred shadows utilizing a depth-time
ranges shadow map. First, we render a scene from a light source to generate a shadow map. For each pixel in
the shadow map, we store a list of depth-time ranges. Each range has two points defining a period where a
particular geometry was visible to the light source and two distances from the light. Next, we render the scene
from the camera to perform shadow tests. With the depths and times of each range, we can easily sample the
shadow map at a particular receiver and time. Our algorithm runs entirely on GPUs and solves various
problems encountered by previous approaches.
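As a data-structure illustration, each shadow-map texel can be viewed as a list of (time interval, depth interval) records; the sketch below shows one plausible shadow test under that reading of the abstract. The linear interpolation within a range is an assumption for illustration, not the paper's exact GPU implementation.

# One texel of the depth-time ranges shadow map: records (t0, t1, z0, z1)
# meaning "from time t0 to t1 the occluder depth moved from z0 to z1"
# (linear motion assumed for this sketch).
def in_shadow(ranges, t, receiver_depth, bias=1e-3):
    for t0, t1, z0, z1 in ranges:
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
            occluder_depth = (1 - a) * z0 + a * z1   # interpolate within the range
            if occluder_depth + bias < receiver_depth:
                return True                          # occluder nearer the light
    return False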
The paper proposes a novel gait recognition algorithm based on the feature fusion of the gait energy image (GEI) dynamic region and Gabor features, which consists of four steps. First, the gait contour images are extracted through object detection, binarization, and morphological processing. Second, features of the GEI at different angles and Gabor features with multiple orientations are extracted from the dynamic part of the GEI. Then, an averaging method is adopted to fuse the features of the GEI dynamic region with the Gabor wavelet features on the feature layer, and the feature space dimension is reduced by an improved kernel principal component analysis (KPCA). Finally, the fused feature vectors are input into a multi-class support vector machine (SVM) to classify and recognize the gait. The primary contributions of the paper are: a novel gait recognition algorithm based on the feature fusion of GEI and Gabor features is proposed; an improved KPCA method is used to reduce the feature matrix dimension; and an SVM is employed to identify the gait sequences. The experimental results suggest that the proposed algorithm yields a correct classification rate of over 90%, which shows that the method distinguishes different human gaits better and achieves better recognition than other existing algorithms.
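A minimal sketch of the fusion-reduction-classification stages using scikit-learn; the standard KernelPCA and SVC stand in for the paper's improved KPCA and its specific SVM, and the feature arrays (assumed to share dimensionality for the averaging fusion) are assumed inputs from the upstream extraction steps.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

# gei_feats, gabor_feats: (n_samples, d) arrays extracted upstream (assumed).
def fuse_reduce_classify(gei_feats, gabor_feats, labels):
    fused = (gei_feats + gabor_feats) / 2.0          # feature-layer averaging fusion
    kpca = KernelPCA(n_components=64, kernel="rbf")  # stand-in for the improved KPCA
    reduced = kpca.fit_transform(fused)
    clf = SVC(decision_function_shape="ovr")         # multi-class SVM
    clf.fit(reduced, labels)
    return kpca, clf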
We demonstrate how social media content can be used to predict the unemployment rate, a real-world
indicator. We present a novel method for predicting the unemployment rate using social media analysis based
on natural language processing and statistical modeling. The system collects social media contents including
news articles, blogs, and tweets written in Korean, and then extracts data for modeling using part-of-speech
tagging and sentiment analysis techniques. The autoregressive integrated moving average with exogenous
variables (ARIMAX) and autoregressive with exogenous variables (ARX) models for unemployment rate
prediction are fit using the analyzed data. The proposed method quantifies the social moods expressed in social media content, whereas existing methods simply present social tendencies. Our model achieved a 27.9% error reduction relative to a Google Index-based model in terms of the mean absolute percentage error.
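A minimal sketch of fitting an ARIMAX-style model with statsmodels follows; the (1,1,1) order, the made-up unemployment values, and the sentiment-index regressor are illustrative assumptions, since the paper's exact specification is not given in the abstract.

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# unemployment: monthly rate series; sentiment: social-media mood index (placeholders).
unemployment = np.array([3.4, 3.5, 3.6, 3.5, 3.7, 3.8, 3.7, 3.6, 3.5, 3.6, 3.7, 3.8])
sentiment = np.random.default_rng(0).normal(size=(12, 1))

model = SARIMAX(unemployment, exog=sentiment, order=(1, 1, 1))  # ARIMAX(1,1,1)
fit = model.fit(disp=False)
# Forecast one step ahead, supplying next month's sentiment (placeholder: last value).
forecast = fit.forecast(steps=1, exog=sentiment[-1:])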
Academic research performance is often measured quantitatively by means of citation frequency. Citation frequency-based indicators, such as the h-index and the impact factor, are commonly used and reflect citation quality to some extent. However, these frequency-based indicators usually rest on the assumption that all citations are equal. This may lead to biased evaluations, because the attributes of the citing and cited objects are significant. A higher-accuracy evaluation method is needed. In this paper, we review various citation quality-based evaluation indicators and categorize them according to the algorithms being applied. We discuss the pros and cons of these indicators and compare them along four dimensions. The outcomes will be useful for our further research on distinguishing citation quality.
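For reference, the h-index mentioned above is the largest h such that the author has h papers with at least h citations each; a minimal computation:

def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each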
In clustering-based approaches, cluster heads closer to the sink are usually burdened with much more relay traffic and thus tend to die early. To address this problem, distance-aware clustering approaches, such as
energy-efficient unequal clustering (EEUC), that adjust the cluster size according to the distance between the
sink and each cluster head have been proposed. However, the network lifetime of such approaches is highly
dependent on the distribution of the sensor nodes, because, in randomly distributed sensor networks, the
approaches do not guarantee that the cluster energy consumption will be proportional to the cluster size. To
address this problem, we propose a novel approach called CACD (Clustering Algorithm Considering node
Distribution), which is not only distance-aware but also node-density-aware. In CACD, clusters are allowed to have only a limited number of member nodes, determined by the distance between the sink and the cluster head. Simulation results show that CACD is 20%–50% more energy-efficient than previous work in terms of network lifetime under various operational conditions.
Since the amplitudes of the spin echo train in nuclear magnetic resonance logging (NMRL) are small and the signal-to-noise ratio (SNR) is very low, this paper puts forward an improved de-noising algorithm based on the wavelet transform. The steps of this improved algorithm are designed and realized based on the characteristics of the spin echo train in NMRL. To test the improved de-noising algorithm, a 32-point forward model of large porosity is built, and spin echo sequence signals with adjustable SNR are generated by this forward model in an experiment. Then median filtering, wavelet hard-threshold de-noising, wavelet soft-threshold de-noising, and the improved de-noising algorithm are compared on these signals; the filtering effects of the four algorithms are analyzed, and the SNR and the root mean square error (RMSE) are calculated. The results of this experiment show that the improved de-noising algorithm can raise the SNR from 10 to 27.57, which is very useful for enhancing the signal and removing noise from spin echo trains in NMRL.
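A minimal sketch of the baseline wavelet soft-threshold de-noising being compared against, using PyWavelets; the db4 wavelet, decomposition level, and universal threshold are common defaults assumed here rather than taken from the paper.

import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

Hard thresholding is the same procedure with mode="hard", which zeroes small coefficients without shrinking the large ones.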
Many mobile sensing frameworks have been developed to help researchers conduct their mobile sensing research. However, energy consumption is still an issue in mobile sensing research, and the existing frameworks do not adequately solve it. We surveyed several mobile sensing frameworks and carefully chose one framework to improve. We designed an adaptive sampling module for a mobile sensing framework to help address the energy consumption issue. In this study, however, we limit our design to an adaptive sampling module for the location and motion sensors. In our adaptive sampling module, we utilize the significant motion sensor to guide the adaptive sampling. We experimented with two sampling strategies that utilize the significant motion sensor to achieve low power consumption during continuous sampling. The first strategy uses the sensor naively, while the second adds a duty cycle to the naive approach. We show that both strategies achieve low energy consumption, but the one combined with the duty cycle achieves the better result.
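One plausible reading of the two strategies is sketched below: location is sampled only after a significant-motion event, either immediately on every event (naive) or with an enforced sleep period (duty-cycled). The sensor interface and timing constant are hypothetical, not the framework's actual API.

import time

def sample_on_motion(sensor, sample_location, duty_cycle_s=None):
    # duty_cycle_s=None -> naive strategy; a number -> duty-cycled strategy.
    while True:
        sensor.wait_for_significant_motion()   # hypothetical blocking call
        sample_location()                      # take one sample per event
        if duty_cycle_s is not None:
            time.sleep(duty_cycle_s)           # sleep through the duty cycle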
For many years, matching in a bipartite graph has been widely used in various assignment problems, such as the stable marriage problem (SMP). As an application of bipartite matching, the stable marriage problem is defined over equally sized sets of men and women to identify a stable matching in which each person is assigned a partner of the opposite gender according to their preferences. The classical SMP proposed by Gale and Shapley uses preference lists for each individual (man and woman), which are infeasible in real-world applications with a large population of men and women, such as matrimonial websites. In this paper, we have
proposed an enhancement to the SMP by computing a weighted score for the users registered at matrimonial
websites. The proposed enhancement has been formulated into profit maximization of matrimonial websites
in terms of their ability to provide a suitable match for the users. The proposed formulation to maximize the
profits of matrimonial websites leads to a combinatorial optimization problem. We have proposed greedy and genetic algorithm-based approaches to solve the proposed optimization problem. We have shown that the proposed genetic algorithm-based approaches outperform the existing Gale-Shapley algorithm on the dataset
crawled from matrimonial websites.
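For reference, the classical Gale-Shapley algorithm against which the proposed approaches are compared can be sketched as follows (men propose in preference order; each woman tentatively accepts the best proposal she has received so far):

def gale_shapley(men_prefs, women_prefs):
    # men_prefs/women_prefs: dicts mapping each person to an ordered preference list.
    free = list(men_prefs)                      # all men start unmatched
    next_choice = {m: 0 for m in men_prefs}     # index of next woman to propose to
    engaged = {}                                # woman -> man
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # w prefers m to her current partner
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                      # w rejects m; he proposes again later
    return engaged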
Motion estimation is a key Natural User Interface/Natural User Experience (NUI/NUX) technology to utilize motions as commands. HTC VIVE is an excellent device for estimating motions but only considers the positions of hands, not the orientations of arms. Even if the positions of the hands are the same, the meaning of motions can differ according to the orientations of the arms. Therefore, when the positions of arms are measured and utilized, their orientations should be estimated as well. This paper proposes a method for estimating the arm orientations based on the Bayesian probability of the hand positions measured in advance. In experiments, the proposed method was used to measure the hand positions with HTC VIVE. The results showed that the proposed method estimated orientations with an error rate of about 19%, but the possibility of estimating the orientation of any body part without additional devices was demonstrated.
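One plausible reading of the Bayesian estimation step is a discrete posterior over arm-orientation bins given a measured hand-position bin, as sketched below; the binning and the likelihood table learned "in advance" are hypothetical stand-ins for the paper's trained model.

import numpy as np

# likelihood[o, p]: P(hand position bin p | arm orientation bin o), learned offline.
# prior[o]: P(orientation bin o). Both are hypothetical stand-ins.
def estimate_orientation(likelihood, prior, position_bin):
    posterior = likelihood[:, position_bin] * prior   # Bayes' rule (unnormalized)
    posterior /= posterior.sum()
    return int(np.argmax(posterior)), posterior       # MAP orientation bin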
Rapid advances in science and technology, with the exponential development of smart mobile devices, workstations, supercomputers, smart gadgets, and network servers, have been witnessed over the past few years. The sudden increase in the Internet population and the manifold growth in Internet speeds have occasioned the generation of an enormous amount of data, now termed 'big data'. Given this scenario, the storage of data on
local servers or a personal computer is an issue, which can be resolved by utilizing cloud computing. At
present, there are several cloud computing service providers available to resolve the big data issues. This paper
establishes a framework that builds Hadoop clusters on the new single-board computer (SBC) Mobile
Raspberry Pi. Moreover, these clusters offer facilities for storage as well as computing. Besides the fact that the
regular data centers require large amounts of energy for operation, they also need cooling equipment and
occupy prime real estate. However, this energy consumption scenario and the physical space constraints can
be solved by employing a Mobile Raspberry Pi with Hadoop clusters that provides a cost-effective, low-power,
high-speed solution along with micro-data center support for big data. Hadoop provides the required
modules for the distributed processing of big data by deploying map-reduce programming approaches. In this
work, the performance of SBC clusters and that of a single computer were compared. The experimental data show that the SBC clusters outperform a single computer by around 20%.
Furthermore, the cluster processing speed for large volumes of data can be enhanced by escalating the
number of SBC nodes. Data storage is accomplished by using a Hadoop Distributed File System (HDFS),
which offers more flexibility and greater scalability than a single computer system.
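As a concrete instance of the map-reduce approach such clusters run, a classic Hadoop Streaming word count can be written as two small Python scripts; this is a standard example, not code from the paper.

# mapper.py: emit "word<TAB>1" for every word on stdin.
import sys
for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")

# reducer.py: sum the counts for each word (Hadoop sorts mapper output by key).
import sys
counts = {}
for line in sys.stdin:
    word, n = line.rsplit("\t", 1)
    counts[word] = counts.get(word, 0) + int(n)
for word, total in counts.items():
    print(f"{word}\t{total}")

The pair is typically submitted to the cluster with the hadoop-streaming jar's -mapper and -reducer options, with HDFS paths for -input and -output.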
Recent advances in medical science have made people live longer, which has affected many aspects of life, such as caregiver burden, the increasing cost of healthcare, the increasing number of persons with disabilities and depressive disorders, and so on. Researchers are now focused on elderly living assistance services in smart home environments. In recent years, assisted living technologies have grown rapidly due to a rapidly aging society. Many smart devices are now interconnected within the home network environment, and such a home setup supports collaborations between those devices based on the Internet of Things (IoT). One of the major challenges in providing elderly living assistance services is to consider each individual's different needs. To solve this, the virtualization of physical things, as well as the collaboration and composition of the services provided by these physical things, should be considered. To meet these challenges, the Web of Objects (WoO) focuses on the implementation aspects of the IoT to bring assorted real-world objects into web applications. We propose a semantic modelling technique for manual and semi-automated service composition. The aim of this work is to propose a framework that enables RESTful web services composition using semantic ontology for the creation of elderly living assistance services in a WoO-based smart home environment.
This paper proposes an integrated lighting enabler system (ILES) based on standard machine-to-machine (M2M) platforms. The system provides common end-to-end M2M communication services for a smart lighting system. It is divided into two sub-systems, namely the end-device system and the server system. On the server side, the M2M platform OpenMTC is used to receive data from the sensors and to send responses that activate the actuators. On the end-device side, a programmable smart lighting device is connected to the actuators and sensors to communicate their data to the server. Some experiments have been performed to prove the system concept. The experimental results show that the proposed integrated lighting enabler system effectively reduces power consumption by 25.22% on average. The significance of this reduction in power consumption was verified using the Wilcoxon test.
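For reference, a significance check of this kind can be run with SciPy's Wilcoxon signed-rank test on paired before/after power measurements; the numbers below are made-up placeholders, not the paper's data.

from scipy.stats import wilcoxon

# Paired power measurements (W) without and with the ILES, per trial (placeholders).
baseline = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7]
with_iles = [9.0, 8.8, 9.4, 9.1, 8.7, 9.2, 9.3, 8.9]

stat, p_value = wilcoxon(baseline, with_iles)
print(p_value < 0.05)  # True would indicate a statistically significant reduction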
As mobile devices such as smartphones and tablet PCs become more popular, users are becoming accustomed
to consuming a massive amount of multimedia content every day without time or space limitations. From the
industry, the need for user satisfaction investigation has consequently emerged. Conventional methods to
investigate user satisfaction usually employ user feedback surveys or interviews, which are considered manual,
subjective, and inefficient. Therefore, the authors focus on a more objective method of investigating users’
brainwaves to measure how much they enjoy their content. Particularly for multimedia content, it is natural
that users will be immersed in the played content if they are satisfied with it. In this paper, the authors
propose a method of using a portable and dry electroencephalogram (EEG) sensor device to overcome the
limitations of the existing conventional methods and to further advance existing EEG-based studies. The
proposed method uses a portable EEG sensor device that has a small, dry (i.e., not wet or adhesive), and
simple sensor using a single channel, because the authors assume mobile device environments where users
consider the features of portability and usability to be important. This paper presents how to measure attention and compute a score for the user's content immersion level, after addressing some technical details
related to adopting the portable EEG sensor device. Lastly, via an experiment, the authors verified a
meaningful correlation between the computed scores and the actual user satisfaction scores.
The recent advent of increasingly affordable and powerful 3D scanning devices capable of capturing high resolution range data about real-world objects and environments has fueled research into effective 3D surface reconstruction techniques for rendering the raw point cloud data produced by many of these devices into a form that would make it usable in a variety of application domains. This paper, therefore, provides an overview of the existing literature on surface reconstruction from 3D point clouds. It explains some of the basic surface reconstruction concepts, describes the various factors used to evaluate surface reconstruction methods, highlights some commonly encountered issues in dealing with the raw 3D point cloud data and delineates the tradeoffs between data resolution/accuracy and processing speed. It also categorizes the various techniques for this task and briefly analyzes their empirical evaluation results demarcating their advantages and disadvantages. The paper concludes with a cross-comparison of methods which have been evaluated on the same benchmark data sets along with a discussion of the overall trends reported in the literature. The objective is to provide an overview of the state of the art on surface reconstruction from point cloud data in order to facilitate and inspire further research in this area.
Gene identification is at the center of genomic studies. Although the first phase of the Encyclopedia of DNA Elements (ENCODE) project has been claimed to be complete, the annotation of the functional elements is far from being so. Computational methods for gene identification continue to play important roles in this area and other relevant issues. So far, a lot of work has been performed in this area, and a plethora of computational methods and avenues have been developed. Many review papers have summarized these methods and other related work. However, most of them focus on the methodologies from a particular aspect or perspective. Different from these existing bodies of research, this paper aims to comprehensively summarize the mainstream computational methods in gene identification and tries to provide a short but concise technical reference for future studies. Moreover, this review sheds light on the emerging trends and cutting-edge techniques that are believed to be capable of leading the research in this field in the future.
In this paper we present some research results on computing-intensive applications using modern high-performance architectures, from the perspective of high computational needs. Computing-intensive applications are an important family of applications in the distributed computing domain. They have been the object of study under different distributed computing paradigms and infrastructures. Such applications are distinguished by their demanding need for CPU computing, independently of the amount of data associated with the problem instance. Among computing-intensive applications there are applications based on simulations, which aim to maximize system resources for processing large computations. In this research work, we consider an application that simulates scheduling and resource allocation in a Grid computing system using genetic algorithms. In such an application, a rather large number of simulations is needed to extract meaningful statistics about the behavior of the simulation results. We study the performance of Oracle Grid Engine for this application running on a cluster of high computing capacity. Several scenarios were generated to measure the response time and queuing time under different workloads and numbers of nodes in the cluster.
The accuracy of training-based activity recognition depends on the training procedure and the extent to which the training dataset comprehensively represents the activity and its varieties. Additionally, training incurs substantial cost and effort in collecting training data. To address these limitations, we have developed a training-free activity recognition approach based on a fuzzy logic algorithm that utilizes a generic activity model and associated activity semantic knowledge. The approach is validated through experimentation with real activity datasets. Results show that the fuzzy logic-based algorithms exhibit comparable or better accuracy than other training-based approaches.
Recent technological advances provide the opportunity to use large amounts of multimedia data from a multitude of sensors with different modalities (e.g., video, text) for the detection and characterization of criminal activity. Their integration can compensate for sensor and modality deficiencies by using data from other available sensors and modalities. However, building such an integrated system at the scale of neighborhoods and cities is challenging due to the large amount of data to be considered and the need to ensure a short response time to potential criminal activity. In this paper, we present a system that enables multi-modal data collection at scale and automates the detection of events of interest for the surveillance and reconnaissance of criminal activity. The proposed system showcases novel analytical tools that fuse multimedia data streams to automatically detect and identify specific criminal events and activities. More specifically, the system detects and analyzes series of incidents (an incident is an occurrence or artifact relevant to a criminal activity extracted from a single media stream) in the spatiotemporal domain to extract events (actual instances of criminal events), while cross-referencing multimodal media streams and incidents in time and space to provide a comprehensive view to a human operator while avoiding information overload. We present several case studies that demonstrate how the proposed system can provide law enforcement personnel with forensic and real-time tools to identify and track potential criminal activity.
The confinement problem was first noted four decades ago. Since then, a huge amount of effort has been spent on defining and mitigating the problem. The evolution of technologies from traditional operating systems to mobile and cloud computing brings about new security challenges. It is perhaps timely that we review the work that has been done. We discuss the foundational principles from classical works, as well as the efforts towards solving the confinement problem in three domains: operating systems, mobile computing, and cloud computing. While common issues exist across all three domains, unique challenges arise for each of them, which we discuss.
Since a social network by definition is so diverse, the problem of estimating the preferences of its users is becoming increasingly essential for personalized applications, which range from service recommender systems to the targeted advertising of services. However, unlike traditional estimation problems where the underlying target distribution is stationary, estimating a user's interests typically involves non-stationary distributions. The consequent time-varying nature of the distribution to be tracked imposes stringent constraints on the "unlearning" capabilities of the estimator used. Therefore, resorting to strong estimators that converge with a probability of 1 is inefficient, since they rely on the assumption that the distribution of the user's preferences is stationary. In this vein, we propose to use a family of stochastic-learning-based weak estimators for learning and tracking a user's time-varying interests. Experimental results demonstrate that our proposed paradigm outperforms some of the traditional legacy approaches that represent the state-of-the-art technology.
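One common form of such a weak estimator, for the binomial case, keeps the estimate moving with a fixed multiplicative weight so that it never fully converges and can therefore track change; the sketch below is illustrative, with an assumed weight and a made-up drift scenario rather than the paper's exact scheme.

import random

def weak_update(p_hat, x, lam=0.9):
    # Weak (non-convergent) estimate of P(x == 1): shrink by lam and add
    # (1 - lam) only when symbol 1 is observed, so old evidence decays.
    return lam * p_hat + (1.0 - lam) * (1 if x == 1 else 0)

# Tracking a preference that switches from 0.8 to 0.2 midway (illustrative).
random.seed(1)
p_hat = 0.5
for n in range(200):
    true_p = 0.8 if n < 100 else 0.2
    x = 1 if random.random() < true_p else 0
    p_hat = weak_update(p_hat, x)
print(round(p_hat, 2))  # tracks the switch: the estimate is now near 0.2, not 0.8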
The most important criterion for achieving maximum performance in a wireless mesh network (WMN) is to limit the interference within the network. For this purpose, especially in a multi-radio network, the best option is to use non-overlapping channels among different radios within the same interference range. Previous works that have considered non-overlapping channels in IEEE 802.11a as the basis for performance optimization have considered the link quality across all channels to be uniform. In this paper, we present a measurement-based study of link quality across all channels in an IEEE 802.11a-based indoor WMN test bed. Our results show that the generalized assumption of uniform performance across all channels does not hold in practice for an indoor environment, and that signal quality depends on the geometry around the mesh routers.
This paper describes different aspects of a typical RFID implementation. Section 1 provides a brief overview of the concept of Automatic Identification and compares the use of different technologies, while Section 2 describes the basic components of a typical RFID system. Section 3 and Section 4 deal with the detailed specifications of RFID transponders and RFID interrogators, respectively. Section 5 highlights different RFID standards and protocols, and Section 6 enumerates the wide variety of applications where RFID systems are known to have made a positive impact. Section 7 deals with privacy issues concerning the use of RFIDs, and Section 8 describes common RFID system vulnerabilities. Section 9 covers a variety of RFID security issues, followed by a detailed listing of countermeasures and precautions in Section 10.
Granular computing has emerged as a unified and coherent framework for the design, processing, and interpretation of information granules. Information granules are formalized within various frameworks, such as sets (interval mathematics), fuzzy sets, rough sets, shadowed sets, and probabilities (probability density functions), to name several of the most visible approaches. In spite of the apparent diversity of the existing formalisms, there are some underlying commonalities articulated in terms of the fundamentals, algorithmic developments, and ensuing application domains. In this study, we introduce two pivotal concepts: a principle of justifiable granularity and a method of optimal information allocation, where information granularity is regarded as an important design asset. We show that these two concepts are relevant to various formal setups of information granularity and offer constructs supporting the design of information granules and their processing. A suite of applied studies focuses on knowledge management, in which case we identify several key categories of schemes present there.
In earlier days, most of the data carried on communication networks was textual data requiring limited bandwidth. With the rise of multimedia and network technologies, the bandwidth requirements of data have increased considerably. If a network link at any time is not able to meet the minimum bandwidth requirement of the data, data transmission on that path becomes difficult, which leads to network congestion. This causes delays in data transmission and might also lead to packet drops in the network. The retransmission of these lost packets would aggravate the situation and jam the network. In this paper, we aim at providing a solution to the problem of network congestion in mobile ad hoc networks [1, 2] by designing a protocol that performs routing intelligently and minimizes the delay in data transmission. Our objective is to move the traffic away from the shortest path, obtained by a suitable shortest-path calculation algorithm, to a less congested path so as to minimize the number of packet drops during data transmission and to avoid unnecessary delay. For this we have proposed a protocol named Congestion Aware Selection of Path with Efficient Routing (CASPER). Here, a router runs the shortest-path algorithm after pruning those links that violate a given set of constraints. The proposed protocol has been compared with two link-state protocols, namely OSPF [3, 4] and OLSR [5, 6, 7, 8]. The results achieved show that our protocol performs better in terms of network throughput and transmission delay in the case of bulky data transmission.
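A minimal sketch of the routing idea described above: run a standard shortest-path computation (Dijkstra here) after pruning links that violate a constraint such as a congestion threshold. The graph encoding, the load metric, and the threshold are illustrative assumptions, not CASPER's exact constraint set.

import heapq

def constrained_shortest_path(graph, src, dst, congestion, max_load=0.8):
    # graph: {node: {neighbor: delay}}; congestion: {(u, v): load in [0, 1]}.
    # Prune links whose load violates the constraint, then run Dijkstra.
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if congestion.get((u, v), 0.0) > max_load:
                continue                        # pruned: link too congested
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")                         # no feasible path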
Vehicular networks are a promising application of mobile ad hoc networks. In this paper, we introduce an efficient broadcast technique, called CB-S (Cell Broadcast for Streets), for vehicular networks with occlusions such as skyscrapers. In this environment, the road network is fragmented into cells such that nodes in a cell can communicate with any node within a two-cell distance. Each mobile node is equipped with a GPS (Global Positioning System) unit and a map of the cells. The cell map has information about the cells, including their identifiers and the coordinates of the upper-right and lower-left corners of each cell. CB-S has the following desirable property: broadcast of a message is performed by rebroadcasting the message from every other cell in the terrain. This characteristic allows CB-S to achieve efficient performance. Our simulation results indicate that messages always reach all nodes in the wireless network. This perfect coverage is achieved with minimal overhead; that is, CB-S uses a low number of nodes to disseminate the data packets as quickly as probabilistically possible. This efficiency gives it the advantage of low delay. To show these benefits, we give simulation results comparing CB-S with four other broadcast techniques. In practice, CB-S can be used for information dissemination, or to reduce the high cost of destination discovery in routing protocols. By also specifying the radius of the affected zone, CB-S is more efficient when broadcasting to a subset of the nodes is desirable.
Cryptographic hash functions reduce inputs of arbitrary or very large length to a short string of fixed length. All hash function designs start from a compression function with fixed length inputs. The compression function itself is designed from scratch, or derived from a block cipher or a permutation. The most common procedure to extend the domain of a compression function in order to obtain a hash function is a simple linear iteration; however, some variants use multiple iterations or a tree structure that allows for parallelism. This paper presents a survey of 17 extenders in the literature. It considers the natural question whether these preserve the security properties of the compression function, and more in particular collision resistance, second preimage resistance, preimage resistance and the pseudo-random oracle property.
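The "simple linear iteration" mentioned above is the Merkle-Damgård construction; a minimal sketch follows, with a toy compression function standing in for a real fixed-input-length design (the placeholder compression function is for illustration only and is not a secure construction in itself).

import hashlib
import struct

def toy_compress(state: bytes, block: bytes) -> bytes:
    # Placeholder fixed-length compression function (NOT a real design).
    return hashlib.sha256(state + block).digest()[:16]

def md_hash(message: bytes, block_size=16, iv=b"\x00" * 16) -> bytes:
    # Merkle-Damgard: pad with the message length (MD strengthening),
    # then iterate the compression function linearly over the blocks.
    padded = message + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % block_size)
    padded += struct.pack(">Q", len(message) * 8)
    state = iv
    for i in range(0, len(padded), block_size):
        state = toy_compress(state, padded[i:i + block_size])
    return state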
This paper proposes a novel reversible data hiding scheme based on a Vector Quantization (VQ) codebook. The proposed scheme uses the principal component analysis (PCA) algorithm to sort the codebook and to find two similar codewords of an image block. According to the secret to be embedded and the difference between those two similar codewords, the original image block is transformed into a difference number table. Finally, this table is compressed by entropy coding and sent to the receiver. The experimental results demonstrate that the proposed scheme can achieve greater hiding capacity, about five bits per index, with an acceptable bit rate. At the receiver end, after the compressed code has been decoded, the image can be recovered to a VQ compressed image.
The interconnection of mobile devices in urban environments can open up many vistas for collaboration and content-based services. This requires setting up a network in an urban environment that not only provides the necessary services to the user but also ensures that the network is secure and energy-efficient. In this paper, we propose a secure, energy-efficient dynamic routing protocol for heterogeneous wireless sensor networks in urban environments. A decision is made by every node based on various parameters, such as longevity, distance, and battery power, which measure the node and link quality, to decide the next hop in the route. This ensures that the total load is distributed evenly while conserving the energy of battery-constrained nodes. The protocol also maintains a trusted population for each node through a Dynamic Trust Factor (DTF), which ensures secure communication in the environment by gradually isolating malicious nodes. The results obtained show that the proposed protocol, when compared with another energy-efficient protocol (MMBCR) and a widely accepted protocol (DSR), gives far better results in terms of energy efficiency. Similarly, it also outdoes a secure protocol (QDV) when it comes to detecting malicious nodes in the network.
The trend of Next Generation Networks' (NGN) evolution is towards providing multiple multimedia services to users through ubiquitous networks. The aim of the IP Multimedia Subsystem (IMS) is to integrate mobile communication networks and computer networks. The IMS plays an important role in NGN services, which can be delivered over heterogeneous networks and different access technologies. IMS can be used to manage all service-related issues such as Quality of Service (QoS), charging, access control, and user and service management. Nowadays, Internet technology is changing with each passing day, and new technologies bring new impacts to the IMS. In this paper, we survey the IMS and discuss the impacts of new technologies on it, such as P2P, SCIM, and Web Services, as well as its security issues.
Due to the convergence of voice, data, and video, today's telecom operators are facing the complexity of service and network management in offering differentiated value-added services that meet customer expectations. Without the operations support of a well-developed Business Support System/Operations Support System (BSS/OSS), it is difficult to provide competitive services in a timely and effective manner upon customer request. In this paper, a suite of NGOSS-based Telecom OSS (TOSS) is developed for the support of fulfillment and assurance operations of telecom services and IT services. Four OSS groups, TOSS-P (intelligent service provisioning), TOSS-N (integrated large-scale network management), TOSS-T (trouble handling and resolution), and TOSS-Q (end-to-end service quality management), are organized and integrated following the standard telecom operation processes (i.e., eTOM). We use IPTV and IP-VPN operation scenarios to show how these OSS groups work together to support daily business operations with the benefits of cost reduction and revenue acceleration.
By providing ubiquitous Internet connectivity, wireless networks offer more convenient ways for users to surf the Internet. However, wireless networks encounter more technological challenges than wired networks, such as bandwidth, security problems, and handoff latency. Thus, this paper proposes new technologies to solve these problems. First, a Security Access Gateway (SAG) is proposed to solve the security issue. Originally, mobile terminals were unable to process high-security calculations because of their low calculating power. SAG not only offers the high calculating power needed to meet the encryption demands of SAG's domain, but also helps mobile terminals establish multiple security tunnels to maintain a secure domain. Second, Robust Header Compression (RoHC) technology is adopted to increase the utilization of bandwidth. Instead of the Access Point (AP), an Access Gateway (AG) is used to deal with packet header compression and decompression on the wireless end. The AG's high calculating power is able to reduce the load on the AP. In the original architecture, the AP has to deal with a large number of header compression/decompression demands from mobile terminals. Finally, wireless networks must offer users "mobility" and "roaming". To achieve these, we can use Mobile IPv6 (MIPv6) technology. Nevertheless, such technology might cause latency. Furthermore, how the security tunnel and header compression established before a handoff can continue to be used by the mobile terminal after the handoff is another great challenge. Thus, this paper proposes to solve these problems by using Early Binding Updates (EBU) and the Security Access Gateway (SAG) to offer a complete mechanism with low latency, low handoff calculation overhead, and high security.
Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery. In this paper, an overview of some of the well-known methods in each of these categories is provided and some of the benefits and drawbacks of the schemes mentioned therein are examined. Furthermore, a discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has also been provided. This paper also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology.
With regard to ethical standards, the JIPS takes plagiarism very seriously and thoroughly checks all articles.
The JIPS defines research ethics as securing objectivity and accuracy in the execution of research and the conclusion of results without any unintentional errors resulting from negligence or incorrect knowledge, etc.
and without any intentional misconduct such as falsification, plagiarism, etc. When an author submits a paper to the JIPS online submission and peer-review system,
he/she should also upload the separate file "author check list" which contains a statement that all his/her research has been performed in accordance with ethical standards.
Among the JIPS editorial board members, there are four associate manuscript editors who support the JIPS by dealing with any ethical problems associated with the publication process
and give advice on how to handle cases of suspected research and publication misconduct. When the JIPS managing editor looks over submitted papers and checks that they are suitable for further processing,
the managing editor also routes them to the CrossCheck service provided by iThenticate. Based on the results provided by the CrossCheck service, the JIPS associate manuscript editors inform the JIPS editor-in-chief of any plagiarism that is detected in a paper.
Then, the JIPS editor-in-chief communicates such detection to the author(s) while rejecting the paper.
Since 2005, all papers published in the JIPS have been subject to peer review and, upon acceptance, are immediately and permanently made available free of charge for everyone worldwide to read and download from the journal's homepage (http://www.jips-k.org)
without any subscription fee or personal registration. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. The KIPS waives paper processing charges for submissions from international authors as well as society members. This waiver policy supports and encourages the publication of quality papers, making the journal an international forum for the exchange of different ideas and experiences.