The Journal of Information Processing Systems
(JIPS) is the official international journal of the Korea Information Processing Society.
As information processing systems are progressing at a rapid pace, the Korea Information Processing Society is committed to providing researchers and other professionals
with the academic information and resources they need to keep abreast of ongoing developments. The JIPS aims to be a premier source that enables researchers and professionals
all over the world to promote, share, and discuss all major research issues and developments in the field of information processing systems and other related fields.
ISSN: 1976-913X (Print), ISSN: 2092-805X (Online)
[Jan. 01, 2018] As of January 1, 2018, the JIPS manages three manuscript tracks: 1) Regular Track, 2) Fast Track, and 3) Future Topic Track. Please refer to the author information page for details.
[Dec. 29, 2017] We have selected the recipients of the 2017 JIPS survey paper awards. Please refer here for details.
[Dec. 12, 2016] Calls for papers for the special sections scheduled in 2017 have been posted. Please refer here for details.
[Aug. 1, 2016] Since August 2016, the JIPS has been indexed in the Emerging Sources Citation Index (ESCI), a new Web of Science index managed by Thomson Reuters, launched in late 2015 for journals that have passed an initial evaluation for inclusion in the SCI/SCIE/AHCI/SSCI indexes. Indexing in the ESCI will improve the visibility of the JIPS and provide a mark of quality. This achievement benefits all JIPS authors. For more information about the ESCI, please see the ESCI fact sheet.
Journal of Information Processing Systems, Vol. 14, No. 2, 2018
The Journal of Information Processing Systems (JIPS) is indexed in ESCI, SCOPUS, EI COMPENDEX, DOI, DBLP, EBSCO, Google Scholar, and CrossRef, and has four divisions: Computer systems and theory, Multimedia systems and graphics, Communication systems and security, and Information systems and applications. Published by the Korea Information Processing Society (KIPS), JIPS places special emphasis on hot research topics such as artificial intelligence, networks, databases, and security.
The availability of powerful, sensor-enabled mobile and Internet-connected devices has enabled the
advent of the ubiquitous sensor network (USN) paradigm. USN provides various types of solutions to the
general public in multiple sectors, including environmental monitoring, entertainment, transportation,
security, and healthcare. Here, we explore and compare the features of wireless sensor networks and USN.
Based on our extensive study, we classify the security- and privacy-related challenges of USNs. We identify
and discuss solutions available to address these challenges. Finally, we briefly discuss open challenges for
designing more secure and privacy-preserving approaches in next-generation USNs.
In the conventional computing environment, users relied intensively on only a small number of software systems. It was therefore sufficient to check and guarantee the
functional correctness and safety of a small number of giant systems in order to protect user systems and the information inside them from outside attacks.
However, checking the correctness and safety of giant systems is no longer enough, since users now rely on a variety of software systems and web services, some of them
provided by unskilled developers. To prove or guarantee the safety of software systems, a great deal of research has been conducted in diverse areas of computer science.
In this paper, we discuss ongoing approaches to guaranteeing or verifying the safety of software systems. We also discuss future research challenges that must be
addressed with better solutions in the near future.
Climate change has become a major challenge for the sustainable development of human society. This study
analyzes the existing literature to identify economic indicators that affect the process of global
warming. The paper includes case studies from various countries that examine the environmental nexus
and its relationship with foreign direct investment, transportation, economic growth, and energy
consumption. Furthermore, the observations are analyzed from the perspective of the China-Pakistan Economic
Corridor (CPEC) and its probable impact on Pakistan's carbon emissions. A major portion of CPEC investment is
allocated to transportation; however, the transportation sector is a substantial emitter of carbon
dioxide (CO2). Unfortunately, there is no empirical work on the subject of CPEC and carbon emissions from
vehicular transportation. This paper infers that empirical results from various other countries are ambiguous
and inconclusive. Moreover, the evidence for the pollution haven hypothesis and the halo effect hypothesis is
limited in general and inapplicable to CPEC in particular. The major contribution of this study is the proposal
of an energy-efficient transportation model for reducing CO2 emissions. Finally, the paper suggests
strategies for climate researchers and policymakers for the adaptation to and mitigation of greenhouse gas (GHG) emissions.
Digital forensics is a vital part of almost every criminal investigation, given the amount of information
available and the opportunities offered by electronic data to investigate and evidence a crime. However, in
criminal justice proceedings, these electronic pieces of evidence are often regarded with the utmost
suspicion and uncertainty, although on occasion justifiably so. Presently, the use of scientifically unproven
forensic techniques is highly criticized in legal proceedings. Moreover, the exceedingly distinct and
dynamic characteristics of electronic data, in addition to current legislation and privacy laws, remain
challenging aspects for systematically attesting evidence in a court of law. This article presents a
comprehensive study of the issues that must be discussed and resolved for the proper
acceptance of evidence on scientific grounds. The article also examines the state of forensics in
emerging sub-fields of digital technology, such as cloud computing, social media, and the Internet of
Things (IoT), and reviews the challenges that may complicate the systematic validation of
electronic evidence. The study further explores various solutions previously proposed by researchers and
academics, assessing their appropriateness based on their experimental evaluation. Additionally, this
article identifies open research areas, highlighting many of the issues and problems associated with the
empirical evaluation of these solutions for immediate attention by researchers and practitioners. Notably,
academics must react to these challenges with appropriate emphasis on methodical verification. For this
purpose, the issues in the experimental validation of currently available practices are reviewed in this
study. The review also discusses the difficulty of demonstrating the reliability and validity of these
approaches with contemporary evaluation methods. Furthermore, the development of best practices,
reliable tools, and formal testing methods for digital forensic techniques is highlighted,
which could be extremely useful and of immense value in improving the trustworthiness of electronic
evidence in legal proceedings.
Aiming at the problem of service reliability in resource reservation in cloud computing environments, a model of dynamic cloud resource reservation based on trust is proposed. A domain-specific cloud management architecture is designed in which resources are divided into different management domains according to the types of service for easier management. A dynamic resource reservation mechanism (DRRM) is used to test users’ reservation requests and reserve resources for users. According to user preference, several resources are chosen to be candidate resources by fuzzy cluster analysis. The fuzzy evaluation method and a two-way trust evaluation mechanism are adopted to improve the availability and credibility of the model. An analysis and simulation experiments show that this model can increase the flexibility of resource reservation and improve user satisfaction.
GR-tree and query aggregation techniques have been proposed for spatial query processing in wireless sensor networks. Although these spatial query processing techniques consider spatial query optimization, they do not take temporal query optimization into consideration. The index reorganization cost and the communication cost for parent sensor nodes increase the energy consumption of wireless sensor nodes, which must operate as efficiently as possible. This paper proposes an itinerary-based R-tree (IR-tree) for more efficient spatial-temporal query processing in wireless sensor networks. The paper compares the performance of the proposed IR-tree with that of previous studies, which represent the conventional spatial query processing techniques, with regard to the accuracy, energy consumption, and query processing time of the query results, using wireless sensor data with uniform, Gaussian, and skewed distributions. The results demonstrate the superiority of the proposed IR-tree-based space-time indexing.
With the increasing use of the Internet and electronic documents, automatic text categorization has become imperative. Several machine learning algorithms have been proposed for text categorization. The k-nearest neighbor algorithm (kNN) is known to be one of the best state-of-the-art classifiers for text categorization. However, kNN suffers from limitations such as high computational cost when classifying new instances. Instance selection techniques have emerged as highly competitive methods for improving kNN through data reduction. However, previous works have evaluated those approaches only on structured datasets, and their performance has not been examined in the text categorization domain, where the dimensionality and size of the dataset are very high. Motivated by these observations, this paper investigates and analyzes the impact of instance selection on kNN-based text categorization in terms of various aspects such as classification accuracy, classification efficiency, and data reduction.
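As a rough illustration of the data-reduction idea discussed above, the following Python sketch applies a condensed nearest-neighbor style selection before kNN classification. It is a generic textbook technique, not necessarily one of the methods evaluated in the paper, and the random data is a hypothetical stand-in for a bag-of-words text representation.

```python
import numpy as np

def condense(X, y):
    """Condensed nearest-neighbor style instance selection (one pass):
    keep only instances that the current prototype set misclassifies."""
    keep = [0]  # seed with the first instance
    for i in range(1, len(X)):
        d = np.linalg.norm(X[keep] - X[i], axis=1)
        if y[keep[int(np.argmin(d))]] != y[i]:  # misclassified -> keep it
            keep.append(i)
    return X[keep], y[keep]

def knn_predict(Xtr, ytr, x, k=3):
    """Plain kNN majority vote over the k nearest training instances."""
    d = np.linalg.norm(Xtr - x, axis=1)
    nn = np.argsort(d)[:k]
    vals, counts = np.unique(ytr[nn], return_counts=True)
    return vals[np.argmax(counts)]

# Toy two-class data (hypothetical stand-in for high-dimensional text vectors).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(3, 1, (50, 20))])
y = np.array([0] * 50 + [1] * 50)
Xs, ys = condense(X, y)
print(len(Xs), "of", len(X), "instances kept;",
      "prediction:", knn_predict(Xs, ys, X[0]))
```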
Fingerprint-based biometric identification is one of the most interesting automatic systems for identifying individuals. Owing to poor sensing environments and poor skin quality, it remains a challenging problem. The main contribution of this paper is a new approach to recognizing a person's fingerprint using the fingerprint's local characteristics. The proposed approach introduces the notion of the barycenter, applied to the triangles formed by the Delaunay triangulation once the extraction of minutiae is achieved. This ensures the exact location of similar triangles generated by the Delaunay triangulation in the recognition process. The results of an experiment conducted on a challenging public database (i.e., FVC2004) show significant improvement in fingerprint identification compared to simple Delaunay triangulation, and the obtained results are very encouraging.
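A minimal sketch of the triangulation step, assuming minutiae coordinates are already extracted (the points below are hypothetical), might look as follows; the sorted-side-length descriptor is a generic invariant added for illustration, not the paper's exact matching rule.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical minutiae coordinates extracted from a fingerprint image.
minutiae = np.array([[10, 12], [40, 15], [25, 40], [60, 35], [50, 60]], float)

tri = Delaunay(minutiae)            # Delaunay triangulation of the minutiae
corners = minutiae[tri.simplices]   # (n_triangles, 3, 2) triangle vertices
barycenters = corners.mean(axis=1)  # barycenter = mean of the three vertices

# A simple rotation/translation-invariant descriptor per triangle:
# the sorted side lengths, which could be matched across two prints.
sides = np.sort(np.linalg.norm(corners - np.roll(corners, 1, axis=1), axis=2), axis=1)
for b, s in zip(barycenters, sides):
    print("barycenter", np.round(b, 1), "side lengths", np.round(s, 1))
```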
The artificial bee colony (ABC) algorithm has recently attracted significant interest for solving multivariate optimization problems. However, it still suffers from slow convergence speed and poor local search ability. Therefore, in this paper, a modified ABC algorithm with bee-number reallocation and a new search equation is proposed to tackle these drawbacks. In particular, to enhance solution accuracy, more bees in the population are assigned to execute local searches around food sources. Moreover, elite vectors are adopted to guide the bees, with which the algorithm can converge to the potential global optimal position rapidly. A series of classical benchmark functions and the frequency-modulated sound wave parameter estimation problem are adopted to validate the performance of the modified ABC algorithm. Experimental results are provided to show the significant performance improvement of the proposed algorithm over the traditional version.
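The elite-guided search equation can be sketched as follows; the mixing coefficients, population sizes, and the sphere objective are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                               # simple benchmark objective
    return float(np.sum(x ** 2))

dim, n_food, iters = 5, 10, 200
X = rng.uniform(-5, 5, (n_food, dim))        # food sources (candidate solutions)
f = np.array([sphere(x) for x in X])

for _ in range(iters):
    elite = X[np.argmin(f)]                  # best-so-far vector guides the search
    for i in range(n_food):
        k = rng.integers(n_food - 1)
        k += k >= i                          # random neighbor index != i
        j = rng.integers(dim)
        v = X[i].copy()
        # Elite-guided search equation (a generic variant, not the paper's exact one):
        v[j] = X[i][j] + rng.uniform(-1, 1) * (X[i][j] - X[k][j]) \
                       + rng.uniform(0, 1.5) * (elite[j] - X[i][j])
        fv = sphere(v)
        if fv < f[i]:                        # greedy selection keeps the better source
            X[i], f[i] = v, fv

print("best value:", f.min())
```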
The concepts of graph theory are applied to model and analyze the dynamics of computer networks, biochemical networks, and the semantics of social networks. The analysis of the dynamics of complex networks is important for determining the stability and performance of networked systems. The analysis of non-stationary and nonlinear complex networks requires the application of ordinary differential equations (ODEs). However, resolving input excitation to dynamic non-stationary networks is difficult without involving external functions. This paper proposes an analytical formulation for generating solutions of nonlinear network ODE systems with functional decomposition. Furthermore, the input excitations are analytically resolved in linearized dynamic networks, and the stability condition of dynamic networks is determined. The proposed analytical framework is general in nature and does not require any domain or range constraints.
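For the linearized case, the stability condition reduces to the familiar eigenvalue test on the coupling matrix, sketched below for a hypothetical three-node network (the matrix A is an assumed example, not from the paper).

```python
import numpy as np

# Hypothetical linearized network dynamics dx/dt = A @ x, where A couples nodes.
A = np.array([[-2.0, 1.0, 0.0],
              [0.5, -1.5, 0.5],
              [0.0, 1.0, -1.0]])

eig = np.linalg.eigvals(A)
print("eigenvalues:", np.round(eig, 3))
# The equilibrium is asymptotically stable iff all eigenvalues have negative real part.
print("asymptotically stable:", bool(np.all(eig.real < 0)))
```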
We propose an enhanced version of the local binary pattern (LBP) operator for texture extraction from images in the context of image retrieval. The novelty of our proposal is based on the observation that the LBP exploits only the lowest level of local information through the global histogram, which reflects only the statistical distribution of the various LBP codes in the image. The block-based LBP, which uses local histograms of the LBP, was one of the few attempts to capture higher-level textural information. We believe that important and useful local information in between these two levels is simply ignored by both schemes. The newly developed method, gradual locality integration of binary patterns (GLIBP), is a novel attempt to capture as much local information as possible in a gradual fashion. Indeed, GLIBP aggregates the texture features extracted by the LBP from grayscale images through a complex structure. The framework comprises a multitude of ellipse-shaped regions arranged in circular-concentric forms of increasing size and is derived from a simple parameterized generator. The elliptic forms allow targeting texture directionality, which is a very useful property in texture characterization, and the general framework of ellipses also takes spatial information (specifically rotation) into account. The effectiveness of GLIBP was investigated on the Corel-1K (Wang) dataset and compared to published works, including the very effective DLEP. Results show significantly higher or comparable performance of GLIBP with regard to the other methods, which qualifies it as a good tool for scene image retrieval.
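For reference, the basic LBP code that underlies GLIBP can be computed as in the following sketch: a textbook 8-neighbor LBP and global histogram on a toy image, not the GLIBP elliptic framework itself.

```python
import numpy as np

def lbp_code(img, r, c):
    """8-neighbor local binary pattern code for pixel (r, c)."""
    center = img[r, c]
    # Clockwise neighbor offsets starting at the top-left pixel.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = [(1 if img[r + dr, c + dc] >= center else 0) for dr, dc in offs]
    return sum(b << i for i, b in enumerate(bits))

# Toy grayscale image (hypothetical); real inputs would be full images.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, (8, 8))

codes = [lbp_code(img, r, c) for r in range(1, 7) for c in range(1, 7)]
hist, _ = np.histogram(codes, bins=256, range=(0, 256))  # global LBP histogram
print("non-zero histogram bins:", np.count_nonzero(hist))
```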
The purpose of high-speed railway construction is to better satisfy passenger travel demands. Accordingly, the design of the train working plan must take full account of the interests of passengers. Aiming at problems such as complex transport organization and the coexistence of trains with different speeds, and building on existing research on train working plan optimization models, a multiobjective bi-level programming model of the high-speed railway passenger train working plan was established. This model places the interests of passengers at the center while also taking into account the interests of railway transport enterprises. Specifically, the minimization of passenger travel cost and travel time are both considered as objectives of the upper-level program, whereas the maximization of railway enterprise profit is regarded as the objective of the lower-level program. A solution algorithm based on a genetic algorithm was proposed. Through an example analysis, the feasibility and rationality of the model and algorithm were demonstrated.
Recently, hybrid transactional memory (HyTM) has gained much interest from researchers because it combines the advantages of hardware transactional memory (HTM) and software transactional memory (STM). To provide concurrency control for transactions, existing HyTM-based studies use a bloom filter; however, they fail to overcome its typical false positive errors. In addition, although the existing studies use a global lock, the efficiency of global lock-based memory allocation is significantly low in multi-core environments. In this paper, we propose an efficient hybrid transactional memory scheme using near-optimal retry computation and sophisticated memory management in order to efficiently process transactions in multi-core environments. First, we propose a near-optimal retry computation algorithm that provides an efficient HTM configuration using machine learning algorithms, according to the characteristics of a given workload. Second, we provide efficient concurrency control for transactions in different environments by using a sophisticated bloom filter. Third, we propose a memory management scheme optimized for the CPU cache line in order to provide fast transaction processing. Finally, our performance evaluation shows that our HyTM scheme achieves up to 2.5 times better performance on the Stanford transactional applications for multi-processing (STAMP) benchmarks than state-of-the-art algorithms.
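The read/write-set tracking mentioned above rests on a standard bloom filter. The following minimal sketch (sizes and hash choices are illustrative assumptions) shows the membership test whose false positives such schemes must contend with.

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter of the kind used to track transactional
    read/write sets; sizes and hash count are illustrative only."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # May yield false positives, never false negatives.
        return all(self.bits >> p & 1 for p in self._positions(item))

ws = BloomFilter()
ws.add(0xdeadbeef)                      # address written by one transaction
print(ws.might_contain(0xdeadbeef))     # True
print(ws.might_contain(0xcafebabe))     # very likely False (false positives possible)
```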
As a major source of information, digital images play an indispensable role in our lives. However, with the development of image processing techniques, people can retouch or even forge an image at will using image processing software. Therefore, the authenticity and integrity of digital images face severe challenges. To resolve this issue, fragile watermarking schemes for image authentication have been proposed. According to their purposes, fragile watermarking can be divided into two categories: fragile watermarking for tamper localization and fragile watermarking with recovery ability. Fragile watermarking for image tamper localization can only identify and locate the tampered regions; it cannot restore the modified regions, yet in some cases the recovery of tampered regions is essential. Generally, fragile watermarking for image authentication and recovery includes three procedures: watermark generation and embedding, tamper localization, and image self-recovery. In this article, we review self-embedding fragile watermarking methods. The basic model and the evaluation indexes of this watermarking scheme are presented, and related works proposed in recent years, together with their advantages and disadvantages, are described in detail to help future research in this field. Based on the analysis, we offer future research prospects and suggestions at the end.
Keystroke dynamics user authentication is a behavior-based authentication method that analyzes patterns in how a user enters passwords and PINs. Even if a password or PIN is revealed to another user, the input pattern is analyzed to authenticate the user; hence, it can compensate for the drawbacks of knowledge-based (what-you-know) authentication. However, users' input patterns are not always fixed, and each user's touch method is different. Therefore, there are limitations to extracting the same features for all users to create a user's pattern and perform authentication. In this study, we perform experiments to examine the changes in user authentication performance when using feature vectors customized for each user versus using all features. User-customized features show a mean improvement of over 6% in equal error rate compared to when all features are used.
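A hedged sketch of per-user feature customization follows; the stability-based selection heuristic and the timing data are hypothetical illustrations, not the study's actual procedure.

```python
import numpy as np

def select_user_features(samples, max_features=4):
    """Pick the per-user keystroke features with the most stable timing
    (lowest coefficient of variation). Purely illustrative heuristic."""
    mean = samples.mean(axis=0)
    cv = samples.std(axis=0) / np.maximum(mean, 1e-9)
    return np.argsort(cv)[:max_features]

# Hypothetical data: rows = repeated PIN entries by one user,
# columns = timing features (hold times, inter-key latencies) in ms.
rng = np.random.default_rng(3)
user_samples = rng.normal([120, 95, 210, 80, 150, 60],
                          [5, 40, 8, 30, 6, 25], (30, 6))

chosen = select_user_features(user_samples)
print("feature indices customized for this user:", chosen)
```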
Images are unavoidably contaminated with different types of noise during image acquisition and transmission. The main forms of noise are impulse noise (also called salt-and-pepper noise) and Gaussian noise. In this paper, an effective method of removing mixed noise from images is proposed. In general, different denoising methods are designed for different types of noise; for example, the median filter performs well in removing impulse noise, and the wavelet denoising algorithm performs well in removing Gaussian noise. However, in many cases images are affected by more than one type of noise. To reduce both impulse noise and Gaussian noise, this paper proposes a denoising method that combines adaptive median filtering (AMF) based on impulse noise detection with wavelet threshold denoising based on a Gaussian mixture model (GMM). The simulation results show that the proposed method achieves much better denoising performance than the median filter or the wavelet denoising method alone for images contaminated with mixed noise.
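The two-stage idea can be illustrated with the following simplified Python sketch. Note that it substitutes plain Gaussian smoothing for the paper's GMM-based wavelet thresholding and uses a crude extreme-value impulse detector, so it is a stand-in for the pipeline shape rather than the proposed method.

```python
import numpy as np
from scipy import ndimage

def denoise_mixed(img):
    """Toy two-stage mixed-noise removal: median filtering restricted to
    detected impulse pixels, then light smoothing for the Gaussian part."""
    out = img.astype(float)
    impulses = (img <= 5) | (img >= 250)            # crude salt-and-pepper detection
    med = ndimage.median_filter(out, size=3)
    out[impulses] = med[impulses]                   # repair only impulse pixels
    return ndimage.gaussian_filter(out, sigma=0.8)  # suppress residual Gaussian noise

# Hypothetical noisy image: clean gradient + Gaussian noise + impulses.
rng = np.random.default_rng(4)
clean = np.tile(np.linspace(50, 200, 64), (64, 1))
noisy = clean + rng.normal(0, 10, clean.shape)
mask = rng.random(clean.shape) < 0.05
noisy[mask] = rng.choice([0, 255], size=mask.sum())

restored = denoise_mixed(np.clip(noisy, 0, 255))
print("RMSE noisy:", round(float(np.sqrt(((noisy - clean) ** 2).mean())), 1),
      "-> restored:", round(float(np.sqrt(((restored - clean) ** 2).mean())), 1))
```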
With the advent of the information society, image restoration technology has aroused considerable interest. Guided image filtering is more effective in suppressing noise in homogeneous regions, but its edge-preserving property is poor. As such, the critical part of guided filtering lies in the selection of the guided image. The result of the Expected Patch Log Likelihood (EPLL) method maintains a good structure, but it is easy to produce the ladder effect in homogeneous areas. According to the complementarity of EPLL with guided filtering, we propose a method of coupling EPLL and guided filtering for image de-noising. The EPLL model is adopted to construct the guided image for the guided filtering, which can provide better structural information for the guided filtering. Meanwhile, with the secondary smoothing of guided image filtering in image homogenization areas, we can improve the noise suppression effect in those areas while reducing the ladder effect brought about by the EPLL. The experimental results show that it not only retains the excellent performance of EPLL, but also produces better visual effects and a higher peak signal-to-noise ratio by adopting the proposed method.
The recent advent of increasingly affordable and powerful 3D scanning devices capable of capturing high resolution range data about real-world objects and environments has fueled research into effective 3D surface reconstruction techniques for rendering the raw point cloud data produced by many of these devices into a form that would make it usable in a variety of application domains. This paper, therefore, provides an overview of the existing literature on surface reconstruction from 3D point clouds. It explains some of the basic surface reconstruction concepts, describes the various factors used to evaluate surface reconstruction methods, highlights some commonly encountered issues in dealing with the raw 3D point cloud data and delineates the tradeoffs between data resolution/accuracy and processing speed. It also categorizes the various techniques for this task and briefly analyzes their empirical evaluation results demarcating their advantages and disadvantages. The paper concludes with a cross-comparison of methods which have been evaluated on the same benchmark data sets along with a discussion of the overall trends reported in the literature. The objective is to provide an overview of the state of the art on surface reconstruction from point cloud data in order to facilitate and inspire further research in this area.
Gene identification is at the center of genomic studies. Although the first phase of the Encyclopedia of DNA Elements (ENCODE) project has been claimed to be complete, the annotation of functional elements is far from being so. Computational methods continue to play important roles in gene identification and other relevant issues. So far, a great deal of work has been performed in this area, and a plethora of computational methods and avenues have been developed. Many review papers have summarized these methods and other related work, but most of them focus on methodologies from a particular aspect or perspective. Different from these existing bodies of research, this paper aims to comprehensively summarize the mainstream computational methods in gene identification and to provide a short but concise technical reference for future studies. Moreover, this review sheds light on the emerging trends and cutting-edge techniques that are believed to be capable of leading research in this field in the future.
In this paper, we present research results on computing-intensive applications using modern high-performance architectures and from the perspective of high computational needs. Computing-intensive applications are an important family of applications in the distributed computing domain. They have been the object of study under different distributed computing paradigms and infrastructures. Such applications are distinguished by their demanding need for CPU computing, independently of the amount of data associated with the problem instance. Among computing-intensive applications are simulation-based applications, which aim to maximize system resources for processing large computations. In this work, we consider an application that simulates scheduling and resource allocation in a Grid computing system using genetic algorithms. In such an application, a rather large number of simulations is needed to extract statistically meaningful results about the behavior of the simulated system. We study the performance of Oracle Grid Engine for this application running on a cluster of high computing capacity. Several scenarios were generated to measure the response time and queuing time under different workloads and numbers of nodes in the cluster.
The accuracy of training-based activity recognition depends on the training procedure and the extent to which the training dataset comprehensively represents the activity and its varieties. Additionally, training incurs substantial cost and effort in collecting training data. To address these limitations, we have developed a training-free activity recognition approach based on a fuzzy logic algorithm that utilizes a generic activity model and associated activity semantic knowledge. The approach is validated through experimentation with real activity datasets. Results show that the fuzzy logic based algorithms exhibit comparable or better accuracy than other training-based approaches.
Recent technological advances provide the opportunity to use large amounts of multimedia data from a multitude of sensors with different modalities (e.g., video, text) for the detection and characterization of criminal activity. Their integration can compensate for sensor and modality deficiencies by using data from other available sensors and modalities. However, building such an integrated system at the scale of neighborhoods and cities is challenging due to the large amount of data to be considered and the need to ensure a short response time to potential criminal activity. In this paper, we present a system that enables multi-modal data collection at scale and automates the detection of events of interest for the surveillance and reconnaissance of criminal activity. The proposed system showcases novel analytical tools that fuse multimedia data streams to automatically detect and identify specific criminal events and activities. More specifically, the system detects and analyzes series of incidents (an incident is an occurrence or artifact relevant to a criminal activity extracted from a single media stream) in the spatiotemporal domain to extract events (actual instances of criminal events), while cross-referencing multimodal media streams and incidents in time and space to provide a comprehensive view to a human operator without information overload. We present several case studies that demonstrate how the proposed system can provide law enforcement personnel with forensic and real-time tools to identify and track potential criminal activity.
The confinement problem was first noted four decades ago. Since then, a huge amount of effort has been spent on defining and mitigating the problem. The evolution of technologies from traditional operating systems to mobile and cloud computing brings about new security challenges, so it is perhaps timely to review the work that has been done. We discuss the foundational principles from classical works, as well as the efforts towards solving the confinement problem in three domains: operating systems, mobile computing, and cloud computing. While common issues exist across all three domains, unique challenges arise for each of them, which we discuss.
Since a social network is by definition diverse, the problem of estimating the preferences of its users is becoming increasingly essential for personalized applications, which range from service recommender systems to the targeted advertising of services. However, unlike traditional estimation problems where the underlying target distribution is stationary, estimating a user's interests typically involves non-stationary distributions. The consequent time-varying nature of the distribution to be tracked imposes stringent constraints on the “unlearning” capabilities of the estimator used. Therefore, resorting to strong estimators that converge with a probability of 1 is inefficient, since they rely on the assumption that the distribution of the user's preferences is stationary. In this vein, we propose to use a family of stochastic-learning based weak estimators for learning and tracking a user's time-varying interests. Experimental results demonstrate that our proposed paradigm outperforms some of the traditional legacy approaches that represent the state of the art.
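For the binomial case, a weak-estimator style update can be sketched as follows; the drifting click-probability scenario is hypothetical, and lam plays the role of the learning parameter that keeps the estimator "weak" enough to track change.

```python
import numpy as np

def slwe_update(p, x, lam=0.9):
    """Stochastic-learning weak estimator style update for a Bernoulli
    parameter: shrink the estimate toward the latest observation x.
    lam < 1 prevents strong convergence, so the estimate can keep drifting."""
    return lam * p + (1.0 - lam) * x

# Hypothetical drifting user interest: probability of clicking a topic
# switches from 0.8 to 0.2 halfway through the stream.
rng = np.random.default_rng(5)
p_hat = 0.5
for t in range(400):
    true_p = 0.8 if t < 200 else 0.2
    x = float(rng.random() < true_p)
    p_hat = slwe_update(p_hat, x)
    if t in (199, 399):
        print(f"t={t + 1}: true={true_p}, estimate={p_hat:.2f}")
```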
The most important criterion for achieving maximum performance in a wireless mesh network (WMN) is to limit the interference within the network. For this purpose, especially in a multi-radio network, the best option is to use non-overlapping channels among different radios within the same interference range. Previous works that considered non-overlapping channels in IEEE 802.11a as the basis for performance optimization assumed the link quality across all channels to be uniform. In this paper, we present a measurement-based study of link quality across all channels in an IEEE 802.11a-based indoor WMN testbed. Our results show that the generalized assumption of uniform performance across all channels does not hold in practice for an indoor environment, and that signal quality depends on the geometry around the mesh routers.
This paper describes different aspects of a typical RFID implementation. Section 1 provides a brief overview of the concept of automatic identification and compares the use of different technologies, while Section 2 describes the basic components of a typical RFID system. Sections 3 and 4 deal with the detailed specifications of RFID transponders and RFID interrogators, respectively. Section 5 highlights different RFID standards and protocols, and Section 6 enumerates the wide variety of applications where RFID systems are known to have made a positive impact. Section 7 deals with privacy issues concerning the use of RFID, and Section 8 describes common RFID system vulnerabilities. Section 9 covers a variety of RFID security issues, followed by a detailed listing of countermeasures and precautions in Section 10.
Granular computing has emerged as a unified and coherent framework for the design, processing, and interpretation of information granules. Information granules are formalized within various frameworks such as sets (interval mathematics), fuzzy sets, rough sets, shadowed sets, and probabilities (probability density functions), to name several of the most visible approaches. In spite of the apparent diversity of the existing formalisms, there are underlying commonalities articulated in terms of the fundamentals, algorithmic developments, and ensuing application domains. In this study, we introduce two pivotal concepts: the principle of justifiable granularity and a method of optimal information allocation in which information granularity is regarded as an important design asset. We show that these two concepts are relevant to various formal setups of information granularity and offer constructs supporting the design of information granules and their processing. A suite of applied studies focuses on knowledge management, for which we identify several key categories of schemes.
In earlier days, most of the data carried on communication networks was textual data requiring limited bandwidth. With the rise of multimedia and network technologies, the bandwidth requirements of data have increased considerably. If a network link cannot meet the minimum bandwidth requirement of the data at any time, data transmission along that path becomes difficult, which leads to network congestion. This causes delays in data transmission and may also lead to packet drops in the network, and the retransmission of these lost packets aggravates the situation and jams the network. In this paper, we aim to provide a solution to the problem of network congestion in mobile ad hoc networks [1, 2] by designing a protocol that performs routing intelligently and minimizes the delay in data transmission. Our objective is to move the traffic away from the shortest path, obtained by a suitable shortest-path algorithm, to a less congested path, so as to minimize the number of packet drops during data transmission and to avoid unnecessary delay. For this, we propose a protocol named Congestion Aware Selection of Path with Efficient Routing (CASPER). Here, a router runs the shortest path algorithm after pruning those links that violate a given set of constraints. The proposed protocol has been compared with two link-state protocols, namely OSPF [3, 4] and OLSR [5, 6, 7, 8]. The results show that our protocol performs better in terms of network throughput and transmission delay in the case of bulky data transmission.
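The prune-then-route idea can be sketched as follows. This is plain Dijkstra with a single utilization constraint, a simplified illustration rather than CASPER's full constraint set; the threshold and topology are hypothetical.

```python
import heapq

def shortest_path(adj, src, dst, max_util=0.8):
    """Dijkstra over links, skipping any link whose utilization exceeds
    max_util -- a toy version of 'prune congested links, then route'."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost, util in adj.get(u, []):
            if util > max_util:              # congested link: pruned from the graph
                continue
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:                        # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return path[::-1]

# Hypothetical topology: (neighbor, link cost, current utilization).
adj = {
    "A": [("B", 1, 0.95), ("C", 2, 0.30)],    # A-B is shortest but congested
    "B": [("D", 1, 0.20)],
    "C": [("D", 2, 0.40)],
}
print(shortest_path(adj, "A", "D"))  # ['A', 'C', 'D'] avoids the congested A-B link
```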
Vehicular networks are a promising application of mobile ad hoc networks. In this paper, we introduce an efficient broadcast technique, called CB-S (Cell Broadcast for Streets), for vehicular networks with occlusions such as skyscrapers. In this environment, the road network is fragmented into cells such that nodes in a cell can communicate with any node within a two-cell distance. Each mobile node is equipped with a GPS (Global Positioning System) unit and a map of the cells, which contains information about the cells, including their identifiers and the coordinates of the upper-right and lower-left corners of each cell. CB-S has the following desirable property: broadcast of a message is performed by rebroadcasting the message from every other cell in the terrain, which allows CB-S to achieve efficient performance. Our simulation results indicate that messages always reach all nodes in the wireless network. This perfect coverage is achieved with minimal overhead; that is, CB-S uses a low number of nodes to disseminate the data packets as quickly as probabilistically possible, which gives it the advantage of low delay. To show these benefits, we present simulation results comparing CB-S with four other broadcast techniques. In practice, CB-S can be used for information dissemination or to reduce the high cost of destination discovery in routing protocols. By also specifying the radius of the affected zone, CB-S is more efficient when broadcasting to a subset of the nodes is desirable.
Cryptographic hash functions reduce inputs of arbitrary or very large length to a short string of fixed length. All hash function designs start from a compression function with fixed-length inputs. The compression function itself is designed from scratch or derived from a block cipher or a permutation. The most common procedure to extend the domain of a compression function in order to obtain a hash function is a simple linear iteration; however, some variants use multiple iterations or a tree structure that allows for parallelism. This paper presents a survey of 17 extenders in the literature. It considers the natural question of whether these extenders preserve the security properties of the compression function, in particular collision resistance, second preimage resistance, preimage resistance, and the pseudo-random oracle property.
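A toy sketch of the plain linear (Merkle-Damgard style) iteration described above follows, with a deliberately insecure stand-in compression function; the block size, IV, and mixing constants are illustrative only.

```python
import struct

def toy_compress(state, block):
    """Toy 8-byte compression function (NOT cryptographically secure)."""
    x = int.from_bytes(state, "big")
    for b in block:
        x = ((x ^ b) * 0x100000001B3) & 0xFFFFFFFFFFFFFFFF  # FNV-like mixing
    return x.to_bytes(8, "big")

def md_hash(msg, block_size=16):
    """Plain linear iteration: pad with the message length, then chain the
    compression function over fixed-size blocks."""
    padded = msg + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % block_size)
    padded += struct.pack(">Q", len(msg))            # length strengthening
    state = b"\x12\x34\x56\x78\x9a\xbc\xde\xf0"      # fixed IV
    for i in range(0, len(padded), block_size):
        state = toy_compress(state, padded[i:i + block_size])
    return state

print(md_hash(b"hello world").hex())
```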
This paper proposes a novel reversible data hiding scheme based on a vector quantization (VQ) codebook. The proposed scheme uses the principal component analysis (PCA) algorithm to sort the codebook and to find two similar codewords for an image block. According to the secret to be embedded and the difference between those two similar codewords, the original image block is transformed into a difference number table. Finally, this table is compressed by entropy coding and sent to the receiver. The experimental results demonstrate that the proposed scheme can achieve greater hiding capacity, about five bits per index, with an acceptable bit rate. At the receiver end, after the compressed code has been decoded, the image can be recovered to a VQ-compressed image.
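The codebook-sorting step can be illustrated as follows; the random codebook is hypothetical, and the projection-based ordering is a generic PCA sort rather than necessarily the paper's exact procedure.

```python
import numpy as np

# Hypothetical VQ codebook: 16 codewords of 4x4 image blocks (16-dim vectors).
rng = np.random.default_rng(6)
codebook = rng.integers(0, 256, (16, 16)).astype(float)

# Sort the codebook along its first principal component so that adjacent
# indices correspond to similar codewords.
centered = codebook - codebook.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
order = np.argsort(centered @ vt[0])      # projection onto 1st principal axis
sorted_cb = codebook[order]

# Neighboring entries in the sorted codebook serve as the 'two similar
# codewords' whose difference can carry the embedded secret.
i = 7
print("distance between adjacent codewords:",
      round(float(np.linalg.norm(sorted_cb[i] - sorted_cb[i + 1])), 1))
```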
The interconnection of mobile devices in urban environments can open up many vistas for collaboration and content-based services. This requires setting up a network in an urban environment that not only provides the necessary services to the user but also ensures that the network is secure and energy efficient. In this paper, we propose a secure, energy-efficient dynamic routing protocol for heterogeneous wireless sensor networks in urban environments. A decision is made by every node, based on various parameters such as longevity, distance, and battery power, which measure node and link quality, to decide the next hop in the route. This ensures that the total load is distributed evenly while conserving the energy of battery-constrained nodes. The protocol also maintains a trusted population for each node through a Dynamic Trust Factor (DTF), which ensures secure communication in the environment by gradually isolating malicious nodes. The results obtained show that the proposed protocol, when compared with another energy-efficient protocol (MMBCR) and a widely accepted protocol (DSR), gives far better results in terms of energy efficiency. Similarly, it also outdoes a secure protocol (QDV) when it comes to detecting malicious nodes in the network.
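A minimal sketch of such a parameter-weighted next-hop decision follows; the weights, fields, and trust cutoff are illustrative assumptions, not the protocol's actual metric.

```python
def next_hop(neighbors, w_battery=0.4, w_distance=0.3, w_trust=0.3):
    """Pick the neighbor maximizing a weighted node/link-quality score.
    Weights and fields are illustrative, not the paper's exact formula."""
    def score(n):
        return (w_battery * n["battery"]            # residual energy in [0, 1]
                + w_distance * (1 - n["distance"])  # normalized distance (closer is better)
                + w_trust * n["trust"])             # dynamic trust factor in [0, 1]
    trusted = [n for n in neighbors if n["trust"] >= 0.3]  # isolate suspected malicious nodes
    return max(trusted, key=score)["id"] if trusted else None

neighbors = [
    {"id": "n1", "battery": 0.9, "distance": 0.7, "trust": 0.8},
    {"id": "n2", "battery": 0.5, "distance": 0.2, "trust": 0.9},
    {"id": "n3", "battery": 0.95, "distance": 0.1, "trust": 0.1},  # suspected malicious
]
print(next_hop(neighbors))  # 'n2': good balance of energy, proximity, and trust
```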
The trend of Next Generation Network (NGN) evolution is towards providing multiple multimedia services to users through ubiquitous networks. The aim of the IP Multimedia Subsystem (IMS) is to integrate mobile communication networks and computer networks. The IMS plays an important role in NGN services, which can be delivered over heterogeneous networks and different access technologies. IMS can be used to manage all service-related issues such as quality of service (QoS), charging, access control, and user and service management. Nowadays, Internet technology is changing with each passing day, and new technologies bring new impacts to IMS. In this paper, we survey IMS and discuss the impacts of new technologies on IMS, such as P2P, SCIM, and Web Services, as well as its security issues.
Due to the convergence of voice, data, and video, today’s telecom operators face the complexity of service and network management in offering differentiated value-added services that meet customer expectations. Without the operations support of a well-developed Business Support System/Operations Support System (BSS/OSS), it is difficult to provide competitive services in a timely and effective manner upon customer request. In this paper, a suite of NGOSS-based Telecom OSS (TOSS) is developed to support the fulfillment and assurance operations of telecom services and IT services. Four OSS groups, TOSS-P (intelligent service provisioning), TOSS-N (integrated large-scale network management), TOSS-T (trouble handling and resolution), and TOSS-Q (end-to-end service quality management), are organized and integrated following the standard telecom operation processes (i.e., eTOM). We use IPTV and IP-VPN operation scenarios to show how these OSS groups work together to support daily business operations, with the benefits of cost reduction and revenue acceleration.
By providing ubiquitous Internet connectivity, wireless networks offer more convenient ways for users to surf the Internet. However, wireless networks encounter more technological challenges than wired networks, such as bandwidth, security problems, and handoff latency. Thus, this paper proposes new technologies to solve these problems. First, a Security Access Gateway (SAG) is proposed to solve the security issue. Originally, mobile terminals were unable to process high-security calculations because of their low calculating power. SAG not only offers high calculating power to handle the encryption demands of SAG’s domain, but also helps mobile terminals to establish multiple safety tunnels to maintain a secure domain. Second, Robust Header Compression (RoHC) technology is adopted to increase the utilization of bandwidth. Instead of the Access Point (AP), an Access Gateway (AG) is used to deal with packet header compression and decompression from the wireless end. AG’s high calculating power is able to reduce the load on the AP, which in the original architecture has to deal with a large number of header compression/decompression demands from mobile terminals. Finally, wireless networks must offer users “mobility” and “roaming,” which can be achieved with Mobile IPv6 (MIPv6) technology. Nevertheless, such technology might cause latency. Furthermore, how the security tunnel and header compression established before the handoff can be reused by mobile terminals after the handoff is another great challenge. Thus, this paper proposes to solve these problems by using Early Binding Updates (EBU) and the Security Access Gateway (SAG) to offer a complete mechanism with low latency, low handoff-mechanism calculation, and high security.
Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery. In this paper, an overview of some of the well-known methods in each of these categories is provided and some of the benefits and drawbacks of the schemes mentioned therein are examined. Furthermore, a discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has also been provided. This paper also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology.
With regard to ethical standards, the JIPS takes plagiarism very seriously and thoroughly checks all articles.
The JIPS defines research ethics as securing objectivity and accuracy in the execution of research and the conclusion of results, without any unintentional errors resulting from negligence or incorrect knowledge
and without any intentional misconduct such as falsification or plagiarism. When an author submits a paper to the JIPS online submission and peer-review system,
he/she should also upload the separate file "author check list" which contains a statement that all his/her research has been performed in accordance with ethical standards.
Among the JIPS editorial board members, there are four associate manuscript editors who support the JIPS by dealing with any ethical problems associated with the publication process
and by giving advice on how to handle cases of suspected research and publication misconduct. When the JIPS managing editor looks over submitted papers and checks that they are suitable for further processing,
the managing editor also routes them to the CrossCheck service provided by iThenticate. Based on the results provided by the CrossCheck service, the JIPS associate manuscript editors inform the JIPS editor-in-chief of any plagiarism that is detected in a paper.
Then, the JIPS editor-in-chief communicates such detection to the author(s) while rejecting the paper.
Since 2005, all papers published in the JIPS have been subjected to peer review and, upon acceptance, are immediately made
permanently available free of charge for everyone worldwide to read and download from the journal’s homepage (http://www.jips-k.org)
without any subscription fee or personal registration. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. The KIPS waives paper processing charges for submissions from international authors as well as society members. This waiver policy supports and encourages the publication of quality papers, making the journal an international forum for the exchange of different ideas and experiences.