The Journal of Information Processing Systems
(JIPS) is the official international journal of the Korea Information Processing Society.
As information processing systems are progressing at a rapid pace, the Korea Information Processing Society is committed to providing researchers and other professionals
with the academic information and resources they need to keep abreast of ongoing developments. The JIPS aims to be a premier source that enables researchers and professionals
all over the world to promote, share, and discuss all major research issues and developments in the field of information processing systems and other related fields.
ISSN: 1976-913X (Print), ISSN: 2092-805X (Online)
[Jan. 01, 2018] Since January 1, 2018, the JIPS has managed three manuscript tracks: 1) Regular Track, 2) Fast Track, and 3) Future Topic Track. Please refer to the details on the author information page.
[Dec. 29, 2017] We have selected the winners of the 2017 JIPS survey paper awards. Please refer here for details.
[Dec. 12, 2016] Calls for papers for the special sections scheduled in 2017 have been registered. Please refer here for details.
[Aug. 1, 2016] Since August 2016, the JIPS has been indexed in "Emerging Sources Citation Index (ESCI)", a new Web of Science index managed by Thomson Reuters, launched in late 2015 for journals that have passed an initial evaluation for inclusion in SCI/SCIE/AHCI/SSCI indexes. Indexing in the ESCI will improve the visibility of the JIPS and provide a mark of quality. This achievement is good for all authors of the JIPS. For more information about ESCI, please see the ESCI fact sheet file.
Journal of Information Processing Systems, Vol. 14, No. 3, 2018
Cloud computing, also known as “pay as you go”, is used to turn any computer into a dematerialized architecture in which users can access different services. In addition to the daily growth in the number of stakeholders and beneficiaries, the load imbalance between the virtual machines of data centers in a cloud environment impacts performance, as it decreases the hardware resources and the software's profitability. Our line of research is load balancing between a data center's virtual machines. It is used to reduce the degree of load imbalance between those machines in order to solve the problems caused by this technological evolution and to ensure a greater quality of service. Our article focuses on two main phases: the pre-classification of tasks according to the requested resources, and the classification of tasks into levels (‘odd levels’ or ‘even levels’) in ascending order based on the meta-heuristic “bat algorithm”. Task allocation is based on the levels provided by the bat algorithm and on our mathematical functions, and we divide our system into a number of virtual machines with nearly equal performance. Otherwise, we suggest different classes of virtual machines, with the condition that each class should contain machines with similar characteristics, compared to the existing binary search scheme.
The multivariate finite mixture model is becoming more and more popular in image processing. Performing image denoising from image patches to the whole image has been widely studied and applied. However, there remains a problem: structure information is often ignored when transforming a patch into vector form. In this paper, we study the operator that extracts patches from an image and then transforms them into vector form. We find that some pixels that should be continuous in the image patches are discontinuous in the vector. Because of this poor noise resistance and the loss of structure information, we propose a new operator that may keep more information when extracting image patches. We compare the new operator with the old one by performing image denoising with the Expected Patch Log Likelihood (EPLL) method, and we obtain better results in both visual quality and PSNR.
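To make the described discontinuity concrete, the following sketch (not the paper's operator) extracts overlapping patches with NumPy and flattens each one in row-major order; vertically adjacent pixels end up patch_size positions apart in the resulting vector, which is the kind of lost adjacency the abstract refers to. The function name and the patch/stride values are illustrative.

```python
import numpy as np

def extract_patches(image, patch_size=8, stride=4):
    """Extract overlapping patches and flatten each one to a vector.

    Row-major flattening places pixels that are vertical neighbours in a
    patch `patch_size` positions apart in the vector, losing adjacency."""
    h, w = image.shape
    patches = []
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            patch = image[i:i + patch_size, j:j + patch_size]
            patches.append(patch.ravel())          # row-major vectorization
    return np.stack(patches)

# Toy usage: a 32x32 gradient image yields a matrix of flattened 8x8 patches.
img = np.tile(np.arange(32, dtype=float), (32, 1))
P = extract_patches(img)
print(P.shape)   # (49, 64)
```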
The smart city is currently the main direction of development, and the automatic management of instrumentation is one of its tasks. Because a lot of old instrumentation in the city cannot be replaced promptly, how to achieve a low-cost transformation with the Internet of Things (IoT) becomes a problem. This article presents a low-cost method that can identify code wheel instrument information; it effectively converts the information in an image into digital information. Because the method does not require much memory or complicated calculation, it can be deployed on a cheap microcontroller unit (MCU) with a small read-only memory (ROM). Test results are given at the end of this article. Using this method to retrofit old instrumentation can achieve the automatic management of instrumentation and can help build a smart city.
The problem surrounding methods of implementing the software testing process has come under the spotlight in recent times. However, as compliance with the software testing process does not necessarily bring immediate economic benefits, IT companies need to pursue more aggressive efforts to improve the process, and the software industry needs to make every effort to improve the software testing process by evaluating the Test Maturity Model integration (TMMi). Furthermore, as the software test process is only at the initial level, high-quality software cannot be guaranteed. This paper applies the TMMi model to the automobile control software testing process, including the test policy and strategy, test planning, test monitoring and control, test design and execution, and test environment goals. The results suggest improvements to the automobile control software testing process based on the test maturity model. As a result, this study suggests a test process improvement method for IT organizations.
Selection of a suitable task from the extensively available large set of tasks is an intricate job for developers in crowdsourcing software development (CSD). Besides, it is also a tiring and time-consuming job for the platform to evaluate thousands of tasks submitted by developers. Previous studies stated that managerial and technical aspects have prime importance in bringing success to software development projects; however, these two aspects can be more effective and conducive if combined with human aspects. The main purpose of this paper is to present a conceptual framework for a task assignment model, as a basis for future research, built on personality types. It will provide a basic structure for CSD workers to find suitable tasks and a way for the platform to assign tasks directly, matching each worker's personality to the task, because personality is an internal force that shapes the behavior of developers. Consequently, this research presents a Task Assignment Model (TAM) from a developer's point of view; moreover, it will also provide an opportunity for the platform to assign tasks to CSD workers directly according to their personality types.
Many flash memory-based buffer replacement algorithms that consider the characteristics of flash memory have recently been developed. Conventional flash memory-based buffer replacement algorithms have the disadvantage that operation speed slows down because only the reference is checked when selecting a replacement target page: either the reference count is not considered, or, when the reference time is considered, only the elapsed time is considered. Therefore, this paper seeks to solve the problems of conventional flash memory-based buffer replacement algorithms by dividing pages into groups and considering both the reference frequency and the reference time when selecting the replacement target page. In addition, because flash memory has a limited lifespan, candidates for replacement pages are selected based on the number of deletions.
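As a rough illustration of the kind of victim selection the abstract describes, the sketch below scores buffered pages by reference frequency, time since last reference, and the erase count of the underlying block; the weighting scheme and field names are assumptions for illustration, not the algorithm proposed in the paper.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Page:
    page_no: int
    ref_count: int = 0                                   # reference frequency
    last_ref: float = field(default_factory=time.time)   # last reference time
    erase_count: int = 0                                  # erases of the page's block
    dirty: bool = False

def pick_victim(pages, now=None, w_freq=1.0, w_recency=1.0, w_wear=0.5):
    """Choose a replacement victim: prefer cold pages (rarely and long-ago
    referenced) whose blocks have been erased the least, steering rewrites
    toward less-worn blocks. Weights are illustrative, not the paper's."""
    now = now if now is not None else time.time()
    def score(p):
        coldness = w_recency * (now - p.last_ref) - w_freq * p.ref_count
        wear = -w_wear * p.erase_count        # spare heavily erased blocks
        return coldness + wear
    return max(pages, key=score)

# Toy usage: three buffered pages with different reference histories.
pages = [Page(1, ref_count=5), Page(2, ref_count=1), Page(3, ref_count=1, erase_count=9)]
print(pick_victim(pages).page_no)   # page 2: cold and on a lightly erased block
```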
Information and communication technology (ICT) is increasingly recognized as an important driver of economic growth, innovation, employment, and productivity, and is widely accepted as a main feature of development. During the last couple of decades, the ICT sector became the most innovative service sector, affecting the living standards of human beings all over the world. At the beginning of the 21st century, some Asian countries reformed their ICT sectors and spent enormous amounts on the progress of this sector. On the other hand, developed countries in the European Union (EU) faced various crises that badly affected the spread of this sector. Consequently, EU countries lost their hegemony in the field of information technology, and some emerging Asian countries such as China, India, and South Korea gained supremacy over the EU in this field. Currently, these countries have a strong IT infrastructure, an R&D sector, and IT research centers working on the development of ICT. This paper investigates the reasons for this shift of the balance of digital power from Europe to Asia.
In efforts to increase its agricultural productivity, the Indonesian Center for Agricultural Biotechnology and Genetic Resources Research and Development has conducted a variety of genomic studies using high-throughput DNA genotyping and sequencing. The large quantity of data (big data) produced by these biotechnologies requires a high-performance data management system to store, back up, and secure data. Additionally, these genetic studies are computationally demanding, requiring high-performance processors and memory for data processing and analysis. Reliable network connectivity with large bandwidth to transfer data is essential, as are database applications and statistical tools for cleaning, quality control, querying based on specific criteria, and exporting to various formats, all of which are important for generating high-yield varieties of crops and improving future agricultural strategies. This manuscript presents a reliable, secure, and scalable information technology infrastructure tailored to Indonesian agricultural genotyping studies.
The discrete wavelet transform (DWT) has good multi-resolution decomposition characteristics, and its low frequency component contains the basic information of an image. Based on this, a fragile watermarking using the local binary pattern (LBP) and DWT is proposed for image authentication. In this method, the LBP pattern of the low frequency wavelet coefficients is adopted as a feature watermark, and it is inserted into the least significant bit (LSB) of the maximum pixel value in each block of the host image. To guarantee the safety of the proposed algorithm, the logistic map is applied to encrypt the watermark. In addition, the locations of the maximum pixel values are stored in advance, which will be used to extract the watermark on the receiving side. Due to the use of DWT, the watermarked image generated by the proposed scheme has high visual quality. Compared with other state-of-the-art watermarking methods, experimental results show that the proposed algorithm not only has lower watermark payloads, but also achieves good performance in tamper identification and localization for various attacks.
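The feature-watermark step can be pictured with the short sketch below, which takes the low-frequency (LL) band of a one-level Haar DWT (via PyWavelets) and computes an 8-neighbour LBP code over it; the logistic-map encryption and the per-block LSB embedding described in the abstract are not reproduced, and the helper name is ours.

```python
import numpy as np
import pywt   # PyWavelets

def lbp_feature_from_ll(image):
    """Compute an LBP-style bit pattern over the low-frequency (LL) DWT band.

    This only sketches the feature-watermark step from the abstract; the
    encryption and LSB embedding stages are intentionally left out."""
    ll, _ = pywt.dwt2(image.astype(float), 'haar')     # one-level Haar DWT
    h, w = ll.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # 8-neighbour offsets, clockwise from the top-left neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = ll[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = ll[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neigh >= center).astype(np.uint8) << bit)
    return codes

# Toy usage on a random 64x64 "image".
print(lbp_feature_from_ll(np.random.rand(64, 64)).shape)   # (30, 30)
```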
Nowadays, third-party applications form an important part of the mobile environment, and social networking
applications in particular can leave a variety of user footprints compared to other applications. Digital
forensics of mobile third-party applications can provide important evidence to forensics investigators.
However, most mobile operating systems are now updated on a frequent basis, and developers are constantly
releasing new versions of them. For these reasons, forensic investigators experience difficulties in finding the
locations and meanings of data during digital investigations. Therefore, this paper presents scenario-based
methods of forensic analysis for a specific third-party social networking service application on a specific
mobile device. When applied to certain third-party applications, digital forensics can provide forensic
investigators with useful data for the investigation process. The main purpose of the forensic analysis
proposed in the present paper is to determine whether the general use of third-party applications leaves data
in the internal storage of mobile devices and whether such data are meaningful for forensic purposes.
China possesses a large-scale passenger dedicated line system with unevenly distributed passenger flow and complicated relations among passenger nodes. Consequently, the significance of passenger nodes must be considered and the dissimilarity of passenger nodes must be analyzed when compiling passenger train operation plans and allocating transportation capacity. For this purpose, the passenger nodes need to be divided hierarchically. To address problems in current research, such as a hierarchical dividing process that is vulnerable to subjective factors and to local optima, we propose a clustering approach based on the self-organizing map (SOM) and k-means, and then use the new approach to hierarchically divide the passenger nodes of passenger dedicated lines. Specifically, objective passenger node parameters are first selected and SOM is used to produce a preliminary clustering of the passenger nodes; second, the Davies–Bouldin index is used to determine the number of clusters of passenger nodes; and third, k-means is used to conduct accurate clustering, thus obtaining the hierarchical division of the passenger nodes. Through example analysis, the feasibility and rationality of the algorithm were demonstrated.
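A rough scikit-learn sketch of the second and third steps of this pipeline (choosing the number of clusters with the Davies–Bouldin index, then running k-means) is given below; the SOM pre-clustering stage is omitted, and the node features and the range of k are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def cluster_nodes(node_features, k_range=range(2, 8)):
    """Choose k with the Davies-Bouldin index (lower is better), then return
    the k-means labels for that k. The SOM stage is not reproduced here."""
    X = np.asarray(node_features, dtype=float)
    best_k, best_db, best_labels = None, np.inf, None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        db = davies_bouldin_score(X, labels)
        if db < best_db:
            best_k, best_db, best_labels = k, db, labels
    return best_k, best_labels

# Hypothetical node parameters, e.g., [passenger volume, connections, city GDP index].
nodes = np.random.rand(40, 3)
k, levels = cluster_nodes(nodes)
print(k, np.bincount(levels))
```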
Database classification is an important preprocessing step for multi-database mining (MDM). In fact, when a multi-branch company needs to explore its distributed data for decision making, it is imperative to classify these multiple databases into similar clusters before analyzing the data. To search for the best classification of a set of n databases, existing algorithms generate from 1 to (n² − n)/2 candidate classifications. Although each candidate classification is included in the next one (i.e., clusters in the current classification are subsets of clusters in the next classification), existing algorithms generate each classification independently, that is, without reusing the clusters from the previous classification. Consequently, existing algorithms are time consuming, especially when the number of candidate classifications increases. To overcome this problem, we propose in this paper an efficient approach that represents the problem of classifying the multiple databases as one of identifying the connected components of an undirected weighted graph. Theoretical analysis and experiments on public databases confirm the efficiency of our algorithm compared with existing works and show that it overcomes the problem of increasing execution time.
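A minimal sketch of the graph view of the problem: treat each database as a vertex, connect two databases whose similarity exceeds a threshold, and read the clusters off the connected components. The union-find implementation and the toy similarity values below are illustrative, not the paper's exact procedure.

```python
def classify_databases(n, similarities, threshold):
    """Group n databases: add an edge between two databases whose similarity
    is at least `threshold`, then return the connected components (one
    component = one cluster). Union-find keeps this near-linear, so clusters
    are not recomputed from scratch for every candidate classification.
    `similarities` maps (i, j) pairs to a similarity in [0, 1]."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for (i, j), sim in similarities.items():
        if sim >= threshold:                # keep only sufficiently similar pairs
            union(i, j)

    clusters = {}
    for db in range(n):
        clusters.setdefault(find(db), []).append(db)
    return list(clusters.values())

# Toy usage with 4 databases and pairwise similarities.
print(classify_databases(4, {(0, 1): 0.9, (1, 2): 0.2, (2, 3): 0.8}, threshold=0.5))
# -> [[0, 1], [2, 3]]
```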
This study evaluates the viewpoints of user focus incidents using microblog sentiment analysis, which has
been actively researched in academia. Most existing works have adopted traditional supervised machine
learning methods to analyze emotions in microblogs; however, these approaches may not be suitable for
Chinese due to linguistic differences. This paper proposes a new microblog sentiment analysis method that
mines associated microblog emotions based on a popular microblog through user-building combined with
spectral clustering to analyze microblog content. Experimental results for a public microblog benchmark
corpus show that the proposed method can improve identification accuracy and reduce manual labeling time
compared to existing methods.
There are a great number of Internet-connected devices and their information can be acquired through an
Internet-wide scanning tool. By associating device information with publicly known security vulnerabilities,
security experts are able to determine whether a particular device is vulnerable. Currently, the identification
of the device information and its related vulnerabilities is manually carried out. It is necessary to automate the
process to identify a huge number of Internet-connected devices in order to analyze more than one hundred
thousand security vulnerabilities. In this paper, we propose a method of automatically generating device
information in the Common Platform Enumeration (CPE) format from banner text to discover potentially
weak devices having known Common Vulnerabilities and Exposures (CVE) vulnerabilities. We demonstrated that our proposed method can extract as much adequate CPE information as possible from the service banner.
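A simplified sketch of banner-to-CPE generation is shown below: a few hypothetical regular-expression rules map a service banner to a CPE 2.3 name that could then be matched against CVE entries. The rule set, function name, and example banner are assumptions; a real system would need a far larger rule base.

```python
import re

# Hypothetical banner-to-CPE rules; real systems need a much larger rule base.
BANNER_RULES = [
    (re.compile(r"Apache/(\d+(?:\.\d+)*)", re.I),      "apache",  "http_server"),
    (re.compile(r"OpenSSH[_/](\d+(?:\.\d+)*)", re.I),  "openbsd", "openssh"),
    (re.compile(r"nginx/(\d+(?:\.\d+)*)", re.I),       "nginx",   "nginx"),
]

def banner_to_cpe(banner):
    """Map a service banner to a CPE 2.3 application name, if a rule matches."""
    for pattern, vendor, product in BANNER_RULES:
        m = pattern.search(banner)
        if m:
            version = m.group(1)
            return f"cpe:2.3:a:{vendor}:{product}:{version}:*:*:*:*:*:*:*"
    return None

print(banner_to_cpe("Server: Apache/2.4.29 (Ubuntu)"))
# cpe:2.3:a:apache:http_server:2.4.29:*:*:*:*:*:*:*
```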
Web applications are indispensable in the software industry and continuously evolve, either to meet new criteria and/or to include new functionalities. However, despite quality assurance via testing, the presence of defects hinders straightforward development. Several factors contribute to defects, and they are often minimized at high expense in terms of man-hours. Thus, detecting fault proneness in the early phases of software development is important. Therefore, a fault prediction model for identifying fault-prone classes in a web application is highly desired. In this work, we compare 14 machine learning techniques to analyse the relationship between object-oriented metrics and fault prediction in web applications. The study is carried out using various releases of the Apache Click and Apache Rave datasets. En route to the predictive analysis, the input basis set for each release is first optimized using a filter-based correlation feature selection (CFS) method.
It is found that the LCOM3, WMC, NPM and DAM metrics are the most significant predictors. The statistical
analysis of these metrics also finds good conformity with the CFS evaluation and affirms the role of these
metrics in the defect prediction of web applications. The overall predictive ability of different fault prediction
models is first ranked using Friedman technique and then statistically compared using Nemenyi post-hoc
analysis. The results not only uphold the predictive capability of machine learning models for faulty classes in web applications, but also show that ensemble algorithms are the most appropriate for defect prediction in the Apache datasets. Further, we also derive a consensus between the metrics selected by the CFS technique and
the statistical analysis of the datasets.
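The overall experimental pipeline can be approximated with scikit-learn as in the sketch below: a simple correlation-based filter stands in for CFS, and two classifiers (one of them an ensemble) are ranked by cross-validated AUC. The metric names, model choices, and synthetic data are illustrative, not the paper's setup.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def select_features(X, y, k=4):
    """Simple correlation-based filter used here as a stand-in for CFS:
    keep the k metrics most correlated with the fault label."""
    corr = X.apply(lambda col: abs(np.corrcoef(col, y)[0, 1]))
    return corr.sort_values(ascending=False).head(k).index.tolist()

def compare_models(X, y):
    """Rank a few classifiers by cross-validated AUC on the selected metrics."""
    selected = select_features(X, y)
    models = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    return {name: cross_val_score(m, X[selected], y, cv=5, scoring="roc_auc").mean()
            for name, m in models.items()}

# Hypothetical release data: columns such as LCOM3, WMC, NPM, DAM and a fault label.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((120, 6)),
                 columns=["LCOM3", "WMC", "NPM", "DAM", "DIT", "CBO"])
y = (X["WMC"] + rng.normal(0, 0.2, 120) > 0.6).astype(int)
print(compare_models(X, y))
```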
In heterogeneous wireless networks supporting multi-access services, selecting the best network from among the possible heterogeneous connections and providing seamless service during handover for a higher Quality of Service (QoS) is a big challenge. Thus, we need an intelligent vertical handover (VHO) decision using suitable network parameters. In conventional VHOs, various network parameters (i.e., signal strength, bandwidth, dropping probability, monetary cost of service, and power consumption) have been used to measure network status and select the preferred network. Because of the different parameter features defined in each wireless/mobile network, parameter conversion between different networks is required for a handover decision. Therefore, the handover process is highly complex and the selection of parameters is always an issue. In this paper, we present how to maximize network utilization when more than one target network exists during VHO. We also show how network parameters can be embedded into IEEE 802.21-based signaling procedures to provide seamless connectivity during a handover. The network simulation showed that QoS-effective target network selection could be achieved by choosing suitable parameters from Layers 1 and 2 in each candidate network.
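For intuition only, the following sketch scores candidate networks with a simple weighted sum of normalized parameters (benefit attributes rewarded, cost attributes penalized). This is a generic illustration of parameter-based target selection, not the IEEE 802.21-based procedure evaluated in the paper; the parameter values and weights are made up.

```python
def select_network(candidates, weights):
    """Score candidate networks with a weighted sum of min-max normalized
    parameters: benefit attributes count upward, cost attributes downward."""
    benefit = {"signal_strength", "bandwidth"}                # higher is better
    # dropping_probability, monetary_cost, power_consumption: lower is better

    def normalize(name, value):
        values = [c[name] for c in candidates.values()]
        lo, hi = min(values), max(values)
        if hi == lo:
            return 1.0
        x = (value - lo) / (hi - lo)
        return x if name in benefit else 1.0 - x

    scores = {net: sum(weights[p] * normalize(p, params[p]) for p in weights)
              for net, params in candidates.items()}
    return max(scores, key=scores.get), scores

# Hypothetical candidate networks and attribute weights.
nets = {
    "WLAN": {"signal_strength": -60, "bandwidth": 54, "dropping_probability": 0.02,
             "monetary_cost": 1, "power_consumption": 3},
    "LTE":  {"signal_strength": -85, "bandwidth": 30, "dropping_probability": 0.01,
             "monetary_cost": 5, "power_consumption": 2},
}
w = {"signal_strength": 0.3, "bandwidth": 0.3, "dropping_probability": 0.2,
     "monetary_cost": 0.1, "power_consumption": 0.1}
print(select_network(nets, w))
```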
The simplified neutrosophic set (SNS) is a generalization of the fuzzy set designed for practical situations in which each element has a truth membership function, an indeterminacy membership function, and a falsity membership function. In this paper, we propose a new method to construct similarity measures of single valued neutrosophic sets (SVNSs) and interval valued neutrosophic sets (IVNSs), respectively. We then prove that the proposed formulas satisfy the axiomatic definition of a similarity measure. Finally, we apply them to pattern recognition under the single valued neutrosophic environment and to multi-criteria decision-making problems under the interval valued neutrosophic environment. The results show that our methods are effective and reasonable.
The recent advent of increasingly affordable and powerful 3D scanning devices capable of capturing high resolution range data about real-world objects and environments has fueled research into effective 3D surface reconstruction techniques for rendering the raw point cloud data produced by many of these devices into a form that would make it usable in a variety of application domains. This paper, therefore, provides an overview of the existing literature on surface reconstruction from 3D point clouds. It explains some of the basic surface reconstruction concepts, describes the various factors used to evaluate surface reconstruction methods, highlights some commonly encountered issues in dealing with the raw 3D point cloud data and delineates the tradeoffs between data resolution/accuracy and processing speed. It also categorizes the various techniques for this task and briefly analyzes their empirical evaluation results demarcating their advantages and disadvantages. The paper concludes with a cross-comparison of methods which have been evaluated on the same benchmark data sets along with a discussion of the overall trends reported in the literature. The objective is to provide an overview of the state of the art on surface reconstruction from point cloud data in order to facilitate and inspire further research in this area.
Gene identification is at the center of genomic studies. Although the first phase of the Encyclopedia of DNA Elements (ENCODE) project has been claimed to be complete, the annotation of the functional elements is far from being so. Computational methods in gene identification continue to play important roles in this area and other relevant issues. So far, a lot of work has been performed in this area, and a plethora of computational methods and avenues have been developed. Many review papers have summarized these methods and other related work. However, most of them focus on the methodologies from a particular aspect or perspective. Different from these existing bodies of research, this paper aims to comprehensively summarize the mainstream computational methods in gene identification and tries to provide a short but concise technical reference for future studies. Moreover, this review sheds light on the emerging trends and cutting-edge techniques that are believed to be capable of leading the research in this field in the future.
In this paper we present some research results on computing-intensive applications using modern high performance architectures, from the perspective of high computational needs. Computing-intensive applications are an important family of applications in the distributed computing domain. They have been the object of study using different distributed computing paradigms and infrastructures. Such applications are distinguished by their demanding needs for CPU computing, independently of the amount of data associated with the problem instance. Among computing-intensive applications there are simulation-based applications, which aim to maximize system resources for processing large computations. In this research work, we consider an application that simulates scheduling and resource allocation in a Grid computing system using Genetic Algorithms. In such an application, a rather large number of simulations is needed to extract meaningful statistical results about the behavior of the simulation results. We study the performance of Oracle Grid Engine for this application running on a cluster of high computing capacity. Several scenarios were generated to measure the response time and queuing time under different workloads and numbers of nodes in the cluster.
The accuracy of training-based activity recognition depends on the training procedure and the extent to which the training dataset comprehensively represents the activity and its varieties. Additionally, training incurs substantial cost and effort in the process of collecting training data. To address these limitations, we have developed a training-free activity recognition approach based on a fuzzy logic algorithm that utilizes a generic activity model and associated activity semantic knowledge. The approach is validated through experimentation with real activity datasets. Results show that the fuzzy logic based algorithms exhibit comparable or better accuracy than other training-based approaches.
Recent technological advances provide the opportunity to use large amounts of multimedia data from a multitude of sensors with different modalities (e.g., video, text) for the detection and characterization of criminal activity. Their integration can compensate for sensor and modality deficiencies by using data from other available sensors and modalities. However, building such an integrated system at the scale of neighborhood and cities is challenging due to the large amount of data to be considered and the need to ensure a short response time to potential criminal activity. In this paper, we present a system that enables multi-modal data collection at scale and automates the detection of events of interest for the surveillance and reconnaissance of criminal activity. The proposed system showcases novel analytical tools that fuse multimedia data streams to automatically detect and identify specific criminal events and activities. More specifically, the system detects and analyzes series of incidents (an incident is an occurrence or artifact relevant to a criminal activity extracted from a single media stream) in the spatiotemporal domain to extract events (actual instances of criminal events) while cross-referencing multimodal media streams and incidents in time and space to provide a comprehensive view to a human operator while avoiding information overload. We present several case studies that demonstrate how the proposed system can provide law enforcement personnel with forensic and real time tools to identify and track potential criminal activity.
The confinement problem was first noted four decades ago. Since then, a huge amount of effort has been spent on defining and mitigating the problem. The evolution of technologies from traditional operating systems to mobile and cloud computing brings about new security challenges. It is perhaps timely that we review the work that has been done. We discuss the foundational principles from classical works, as well as the efforts towards solving the confinement problem in three domains: operating systems, mobile computing, and cloud computing. While common issues exist across all three domains, unique challenges arise for each of them, which we discuss.
Since a social network is by definition so diverse, the problem of estimating the preferences of its users is becoming increasingly essential for personalized applications, which range from service recommender systems to the targeted advertising of services. However, unlike traditional estimation problems, where the underlying target distribution is stationary, estimating a user's interests typically involves non-stationary distributions. The consequent time-varying nature of the distribution to be tracked imposes stringent constraints on the "unlearning" capabilities of the estimator used. Therefore, resorting to strong estimators that converge with probability 1 is inefficient, since they rely on the assumption that the distribution of the user's preferences is stationary. In this vein, we propose to use a family of stochastic-learning based weak estimators for learning and tracking a user's time-varying interests. Experimental results demonstrate that our proposed paradigm outperforms some of the traditional legacy approaches that represent the state-of-the-art technology.
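A minimal sketch of a stochastic weak estimator of this kind (assuming the usual multinomial update in which the estimate moves a fixed fraction toward each observed symbol) is given below; the class name, the forgetting parameter, and the synthetic interest-switching stream are illustrative.

```python
import numpy as np

class WeakEstimator:
    """Weak estimator of a (possibly drifting) multinomial distribution.
    On each observed symbol the estimate moves a fixed fraction (1 - lam)
    toward that symbol, so old observations are forgotten geometrically and
    the estimate can track a non-stationary distribution."""

    def __init__(self, n_symbols, lam=0.95):
        self.p = np.full(n_symbols, 1.0 / n_symbols)
        self.lam = lam

    def update(self, symbol):
        self.p *= self.lam                 # shrink every component
        self.p[symbol] += 1.0 - self.lam   # reward the observed symbol
        return self.p                      # components still sum to 1

# Track interests that switch halfway through the stream.
rng = np.random.default_rng(1)
est = WeakEstimator(3)
for t in range(2000):
    true_p = [0.7, 0.2, 0.1] if t < 1000 else [0.1, 0.2, 0.7]
    est.update(rng.choice(3, p=true_p))
print(np.round(est.p, 2))   # close to [0.1, 0.2, 0.7] after the switch
```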
The most important criterion for achieving the maximum performance in a wireless mesh network (WMN) is to limit the interference within the network. For this purpose, especially in a multi-radio network, the best option is to use non-overlapping channels among different radios within the same interference range. Previous works that have considered non-overlapping channels in IEEE 802.11a as the basis for performance optimization have assumed the link quality across all channels to be uniform. In this paper, we present a measurement-based study of link quality across all channels in an IEEE 802.11a-based indoor WMN test bed. Our results show that the generalized assumption of uniform performance across all channels does not hold in practice for an indoor environment, and that signal quality depends on the geometry around the mesh routers.
This paper describes different aspects of a typical RFID implementation. Section 1 provides a brief overview of the concept of Automatic Identification and compares the use of different technologies while Section 2 describes the basic components of a typical RFID system. Section 3 and Section 4 deal with the detailed specifications of RFID transponders and RFID interrogators respectively. Section 5 highlights different RFID standards and protocols and Section 6 enumerates the wide variety of applications where RFID systems are known to have made a positive improvement. Section 7 deals with privacy issues concerning the use of RFIDs and Section 8 describes common RFID system vulnerabilities. Section 9 covers a variety of RFID security issues, followed by a detailed listing of countermeasures and precautions in Section 10.
Granular Computing has emerged as a unified and coherent framework for the design, processing, and interpretation of information granules. Information granules are formalized within various frameworks such as sets (interval mathematics), fuzzy sets, rough sets, shadowed sets, and probabilities (probability density functions), to name several of the most visible approaches. In spite of the apparent diversity of the existing formalisms, there are some underlying commonalities articulated in terms of the fundamentals, algorithmic developments, and ensuing application domains. In this study, we introduce two pivotal concepts: a principle of justifiable granularity and a method of optimal information allocation in which information granularity is regarded as an important design asset. We show that these two concepts are relevant to various formal setups of information granularity and offer constructs supporting the design of information granules and their processing. A suite of applied studies is focused on knowledge management, in which case we identify several key categories of schemes present there.
In earlier days, most of the data carried on communication networks was textual data requiring limited bandwidth. With the rise of multimedia and network technologies, the bandwidth requirements of data have increased considerably. If a network link at any time is not able to meet the minimum bandwidth requirement of the data, data transmission along that path becomes difficult, which leads to network congestion. This causes delays in data transmission and might also lead to packet drops in the network. The retransmission of these lost packets would aggravate the situation and jam the network. In this paper, we aim to provide a solution to the problem of network congestion in mobile ad hoc networks [1, 2] by designing a protocol that performs routing intelligently and minimizes the delay in data transmission. Our objective is to move the traffic away from the shortest path obtained by a suitable shortest path calculation algorithm to a less congested path, so as to minimize the number of packet drops during data transmission and to avoid unnecessary delay. For this we have proposed a protocol named Congestion Aware Selection of Path with Efficient Routing (CASPER). Here, a router runs the shortest path algorithm after pruning those links that violate a given set of constraints. The proposed protocol has been compared with two link state protocols, namely OSPF [3, 4] and OLSR [5, 6, 7, 8]. The results achieved show that our protocol performs better in terms of network throughput and transmission delay in the case of bulky data transmission.
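The core routing idea stated in the abstract, running a shortest path computation after pruning the links that violate a given set of constraints, can be sketched as follows; the link attributes (free bandwidth, queue length, delay) and thresholds are illustrative assumptions, not CASPER's actual metrics.

```python
import heapq

def constrained_shortest_path(links, src, dst, min_bandwidth, max_queue):
    """Dijkstra over the graph obtained after pruning links that violate the
    constraints (too little free bandwidth or an overly long queue).
    `links` maps (u, v) to a dict with 'delay', 'free_bw' and 'queue' keys."""
    graph = {}
    for (u, v), a in links.items():
        if a["free_bw"] >= min_bandwidth and a["queue"] <= max_queue:   # prune
            graph.setdefault(u, []).append((v, a["delay"]))
            graph.setdefault(v, []).append((u, a["delay"]))

    dist, heap = {src: 0.0}, [(0.0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return path, d
        if d > dist.get(node, float("inf")):
            continue
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt, path + [nxt]))
    return None, float("inf")     # no path satisfies the constraints

# Toy usage: the shorter A-B-C path is pruned because B-C lacks bandwidth.
links = {("A", "B"): {"delay": 2, "free_bw": 10, "queue": 3},
         ("B", "C"): {"delay": 2, "free_bw": 1,  "queue": 3},
         ("A", "C"): {"delay": 7, "free_bw": 10, "queue": 2}}
print(constrained_shortest_path(links, "A", "C", min_bandwidth=5, max_queue=10))
```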
Vehicular networks are a promising application of mobile ad hoc networks. In this paper, we introduce an efficient broadcast technique, called CB-S (Cell Broadcast for Streets), for vehicular networks with occlusions such as skyscrapers. In this environment, the road network is fragmented into cells such that nodes in a cell can communicate with any node within a two-cell distance. Each mobile node is equipped with a GPS (Global Positioning System) unit and a map of the cells. The cell map has information about the cells, including their identifiers and the coordinates of the upper-right and lower-left corners of each cell. CB-S has the following desirable property: broadcast of a message is performed by rebroadcasting the message from every other cell in the terrain. This characteristic allows CB-S to achieve efficient performance. Our simulation results indicate that messages always reach all nodes in the wireless network. This perfect coverage is achieved with minimal overhead; that is, CB-S uses a low number of nodes to disseminate the data packets as quickly as probabilistically possible. This efficiency gives it the advantage of low delay. To show these benefits, we give simulation results comparing CB-S with four other broadcast techniques. In practice, CB-S can be used for information dissemination or to reduce the high cost of destination discovery in routing protocols. By also specifying the radius of the affected zone, CB-S is more efficient when broadcasting to a subset of the nodes is desirable.
Cryptographic hash functions reduce inputs of arbitrary or very large length to a short string of fixed length. All hash function designs start from a compression function with fixed-length inputs. The compression function itself is designed from scratch, or derived from a block cipher or a permutation. The most common procedure to extend the domain of a compression function in order to obtain a hash function is a simple linear iteration; however, some variants use multiple iterations or a tree structure that allows for parallelism. This paper presents a survey of 17 extenders in the literature. It considers the natural question of whether these preserve the security properties of the compression function, in particular collision resistance, second preimage resistance, preimage resistance, and the pseudo-random oracle property.
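The "simple linear iteration" mentioned above is the classic Merkle-Damgård extender; the sketch below pads the message with a 1-bit, zeros, and the message length, then iterates a compression function over fixed-size blocks. The compression function here is a toy stand-in and is not cryptographically secure.

```python
import struct

BLOCK = 64          # bytes per message block
STATE = 8           # bytes of chaining value

def toy_compress(h, block):
    """Stand-in compression function (NOT secure); a real design would use a
    dedicated function or one derived from a block cipher or permutation."""
    x = int.from_bytes(h, "big")
    for i in range(0, BLOCK, STATE):
        x = (x * 0x100000001B3 ^ int.from_bytes(block[i:i + STATE], "big")) % 2**64
    return x.to_bytes(STATE, "big")

def md_hash(msg, iv=b"\x01" * STATE):
    """Plain Merkle-Damgard extender: pad with a 1-bit, zeros and the message
    length (MD strengthening), then iterate the compression function over the
    fixed-size blocks, returning the final chaining value as the digest."""
    padded = msg + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % BLOCK)
    padded += struct.pack(">Q", 8 * len(msg))          # length in bits
    h = iv
    for i in range(0, len(padded), BLOCK):
        h = toy_compress(h, padded[i:i + BLOCK])
    return h

print(md_hash(b"abc").hex())
```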
This paper proposes a novel reversible data hiding scheme based on a Vector Quantization (VQ) codebook. The proposed scheme uses the principal component analysis (PCA) algorithm to sort the codebook and to find two similar codewords of an image block. According to the secret to be embedded and the difference between those two similar codewords, the original image block is transformed into a difference number table. Finally, this table is compressed by entropy coding and sent to the receiver. The experimental results demonstrate that the proposed scheme can achieve greater hiding capacity, about five bits per index, with an acceptable bit rate. At the receiver end, after the compressed code has been decoded, the image can be recovered to a VQ compressed image.
The interconnection of mobile devices in urban environments can open up many vistas for collaboration and content-based services. This requires setting up a network in an urban environment that not only provides the necessary services to the user but also ensures that the network is secure and energy efficient. In this paper, we propose a secure, energy-efficient dynamic routing protocol for heterogeneous wireless sensor networks in urban environments. A decision is made by every node based on various parameters, such as longevity, distance, and battery power, which measure node and link quality, to decide the next hop in the route. This ensures that the total load is distributed evenly while conserving the energy of battery-constrained nodes. The protocol also maintains a trusted population for each node through a Dynamic Trust Factor (DTF), which ensures secure communication in the environment by gradually isolating malicious nodes. The results obtained show that the proposed protocol, when compared with another energy-efficient protocol (MMBCR) and a widely accepted protocol (DSR), gives far better results in terms of energy efficiency. Similarly, it also outdoes a secure protocol (QDV) when it comes to detecting malicious nodes in the network.
The trend of Next Generation Networks' (NGN) evolution is towards providing multiple and multimedia services to users through ubiquitous networks. The aim of the IP Multimedia Subsystem (IMS) is to integrate mobile communication networks and computer networks. The IMS plays an important role in NGN services, which can be delivered over heterogeneous networks and different access technologies. IMS can be used to manage all service-related issues such as Quality of Service (QoS), charging, access control, and user and service management. Nowadays, Internet technology is changing with each passing day, and new technologies have new impacts on IMS. In this paper, we survey IMS and discuss the impacts of new technologies on IMS, such as P2P, SCIM, and Web Services, as well as its security issues.
Due to the convergence of voice, data, and video, today's telecom operators face the complexity of service and network management in offering differentiated value-added services that meet customer expectations. Without the operations support of a well-developed Business Support System/Operations Support System (BSS/OSS), it is difficult to provide competitive services upon customer request in a timely and effective manner. In this paper, a suite of NGOSS-based Telecom OSS (TOSS) is developed to support the fulfillment and assurance operations of telecom services and IT services. Four OSS groups, TOSS-P (intelligent service provisioning), TOSS-N (integrated large-scale network management), TOSS-T (trouble handling and resolution), and TOSS-Q (end-to-end service quality management), are organized and integrated following the standard telecom operation processes (i.e., eTOM). We use IPTV and IP-VPN operation scenarios to show how these OSS groups work together to support daily business operations with the benefits of cost reduction and revenue acceleration.
By providing ubiquitous Internet connectivity, wireless networks offer more convenient ways for users to surf the Internet. However, wireless networks encounter more technological challenges than wired networks, such as bandwidth, security problems, and handoff latency. Thus, this paper proposes new technologies to solve these problems. First, a Security Access Gateway (SAG) is proposed to solve the security issue. Originally, mobile terminals were unable to process high-security calculations because of their low calculating power. The SAG not only offers high calculating power to meet the encryption demand of the SAG's domain, but also helps mobile terminals to establish multiple safety tunnels to maintain a secure domain. Second, Robust Header Compression (RoHC) technology is adopted to increase the utilization of bandwidth. Instead of the Access Point (AP), an Access Gateway (AG) is used to handle packet header compression and decompression from the wireless end. The AG's high calculating power is able to reduce the load on the AP; in the original architecture, the AP has to deal with a large number of header compression/decompression demands from mobile terminals. Finally, wireless networks must offer users mobility and roaming, which can be achieved with Mobile IPv6 (MIPv6) technology. Nevertheless, such technology might cause latency. Furthermore, how mobile terminals can keep using the security tunnel and header compression established before the handoff will be another great challenge. Thus, this paper proposes to solve these problems by using Early Binding Updates (EBU) and the Security Access Gateway (SAG) to offer a complete mechanism with low latency, low handoff calculation overhead, and high security.
Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery. In this paper, an overview of some of the well-known methods in each of these categories is provided and some of the benefits and drawbacks of the schemes mentioned therein are examined. Furthermore, a discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has also been provided. This paper also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology.
With regard to ethical standards, the JIPS takes plagiarism very seriously and thoroughly checks all articles.
The JIPS defines research ethics as securing objectivity and accuracy in the execution of research and the conclusion of results without any unintentional errors resulting from negligence or incorrect knowledge, etc.
and without any intentional misconduct such as falsification, plagiarism, etc. When an author submits a paper to the JIPS online submission and peer-review system,
he/she should also upload the separate file "author check list" which contains a statement that all his/her research has been performed in accordance with ethical standards.
Among the JIPS editorial board members, there are four associate manuscript editors who support the JIPS by dealing with any ethical problems associated with the publication process
and give advice on how to handle cases of suspected research and publication misconduct. When the JIPS managing editor looks over submitted papers and checks that they are suitable for further processing,
the managing editor also routes them to the CrossCheck service provided by iThenticate. Based on the results provided by the CrossCheck service, the JIPS associate manuscript editors inform the JIPS editor-in-chief of any plagiarism that is detected in a paper.
Then, the JIPS editor-in-chief communicates such detection to the author(s) while rejecting the paper.
Since 2005, all papers published in the JIPS have been subject to peer review and, upon acceptance, are immediately made
permanently available free of charge for everyone worldwide to read and download from the journal’s homepage (http://www.jips-k.org)
without any subscription fee or personal registration. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. The KIPS waives paper processing charges for submissions from international authors as well as society members. This waiver policy supports and encourages the publication of quality papers, making the journal an international forum for the exchange of different ideas and experiences.