It is widely accepted that coverage at high user densities can only be achieved with small cells such as micro- and picocells. The smaller cell size causes frequent handovers between cells and reduces the permissible handover processing delay, which may result in handover failures in addition to the loss of some packets during handover. In these cases, retransmission is needed to compensate for the errors, which triggers a rapid degradation of throughput. In this paper, we propose a new handover scheme for next-generation mobile communication systems, in which the handover setup process is completed in advance of the handover request by predicting the target cell from the mobile terminal's current position and moving direction. The simulation focuses on the handover failure rate and packet loss rate, and the results show that the proposed method outperforms the conventional method.
This paper describes a technique that uses a fuzzy neural network for sketch feature extraction in digital images. We configure an artificial neural network and train it on fuzzy membership functions to decide a local threshold for sketch feature extraction. To do this, we feed the network with learning data consisting of membership functions generated from the optimal feature maps of a few standard images. The proposed technique extracts sketch features very effectively and rapidly, because the input fuzzy variables have characteristics desirable for feature extraction, such as dependence on local intensity, and because the fuzzy neural network is trained on their membership functions. We show that the fuzzy neural network extracts sketch features well without human intervention.
This study applies cognitive apprenticeship theory, a representative learning theory of constructivism, to the design and creation of web courseware for the data device operator license. The courseware is intended to enable learning that begins with peripheral participation in problem solving and ends with full participation and initiative, to act as a medium for assisting students in learning, to enable adaptation to real situations through simulation studies, to allow active interaction, and to reinforce data-processing skills. Students' learning was evaluated in real time so that feedback could be given on insufficient areas, enabling effective learning. The study compared a web courseware built without cognitive apprenticeship theory against one built with it, followed by an evaluation of achievement level and learning behavior and then a survey. The results were, first, that the group learning with the courseware based on cognitive apprenticeship theory showed greater improvement in learning achievement than the group using the courseware without it, and second, that learning with the theory-based courseware was also more effective for improving learning behavior.
This paper presents a novel approach to real-time recognition of 3D environments and objects for applications such as intelligent robots, intelligent vehicles, and intelligent buildings. First, we establish the three fundamental principles that humans use for recognizing and interacting with the environment. These principles have led to the development of an integrated approach to real-time 3D recognition and modeling, as follows: 1) It starts with a rapid but approximate characterization of the geometric configuration of the workspace by identifying global plane features. 2) It quickly recognizes known objects in the environment and replaces them with their models in the database based on 3D registration. 3) It models geometric details on the fly, adaptively to the needs of the given task, based on a multi-resolution octree representation. SIFT features with their 3D position data, referred to here as stereo-sis SIFT, are used extensively, together with point clouds, for fast extraction of global plane features, fast recognition of objects, fast registration of scenes, and for overcoming the incomplete and noisy nature of point clouds.
Range-based methods readily capture detailed 3D data, whereas image-based methods do not. In this paper, a new approach that employs a magnifying lens to obtain detailed 3D data is suggested. The magnifying lens amplifies the disparity in a stereo vision system, and amplifying the disparity increases the resolution of the depth. We verify mathematically and experimentally that the lens amplifies the disparity, and we suggest a method to refine the original 3D data with the detailed 3D data.
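The link between amplified disparity and depth resolution can be illustrated with the standard rectified pinhole stereo model Z = fB/d. The following sketch uses invented focal length, baseline, disparity, and magnification values (they are not from the paper); it only shows why an m-times larger disparity shrinks the depth step caused by one-pixel disparity quantization.

```python
# Sketch of how amplifying disparity improves depth resolution in a
# standard pinhole stereo model (Z = f*B/d). All numeric values below are
# illustrative assumptions, not values from the paper.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth for a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_resolution(focal_px, baseline_m, disparity_px, step_px=1.0):
    """Depth change caused by a one-pixel disparity change at depth Z(d)."""
    return (depth_from_disparity(focal_px, baseline_m, disparity_px)
            - depth_from_disparity(focal_px, baseline_m, disparity_px + step_px))

f, B, d = 800.0, 0.1, 20.0   # focal length [px], baseline [m], disparity [px]
m = 4.0                      # assumed magnification factor of the lens

plain = depth_resolution(f, B, d)
# With an m-times magnified image, the same physical offset yields m*d pixels,
# so a one-pixel quantization step corresponds to a smaller depth change.
magnified = depth_resolution(m * f, B, m * d)

print(plain, magnified)   # the magnified setup resolves finer depth steps
```

Note that the magnified configuration leaves the reconstructed depth itself unchanged (f and d scale together); only the quantization step shrinks.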
A distinctive feature of P2P distributed systems is that peers' online status is not guaranteed: when we try to download a file from a peer, the download sometimes fails because the peer has gone offline. Many studies address this problem with retransmission methods, which lower performance, so a better solution is needed. In this study, we analyze the average usage time of P2P application users and improve the guarantee of resource transmission by applying this analysis to the selection criteria for resource suppliers. Moreover, combining this with distributed object replication techniques, which increase the transmission opportunities for highly popular resources, further improves the search algorithm.
In this paper, we propose a Multi_Kerberos certification mechanism that improves the certification service based on PKINIT, published by the IETF CAT Working Group. To certify a principal in another realm, the mechanism locates the outside realm through DNS and applies the X.509 directory certification system, obtaining public keys from the DNS server along a certification path (CertPath) between realms, using the certification and key exchange methods of X.509 and DS/DNS based on PKINIT, in order to provide inter-realm services. The proposed mechanism supports efficient cross-realm certification, including key management, generation and construction of certificate paths using a Validation Server, and recovery of session keys. The resulting Multi_Kerberos design simplifies certification formalities and reduces communication procedures.
As computer systems have grown larger and more complex with the rapid growth of the Internet and the surge in the number of users, and as dependence on computer systems has risen, the damage caused by data loss from mistakes or disasters has become unimaginably large. To publish information continuously through the Internet and update data stably, precious data must be protected against sudden failures through periodic backups, and a plan to protect data safely should be prepared. This paper proposes a system that provides an effective backup environment, considering both the environment and the implementation method of such data management. That is, we implement a system that performs data backup and restore selectively over the Internet. Because the proposed system is reachable from anywhere on the Internet, data stored through a browser can be backed up and restored even without a direct connection to the server. The system can also manage the data stored on the server more efficiently by using the backup and restore methods selectively.
Conventional caption extraction methods use inter-frame differences or color segmentation over the whole image. Because these methods depend heavily on heuristics, they require a priori knowledge of the captions to be extracted, and they are difficult to implement. In this paper, we propose a method that uses few heuristics and a simplified algorithm. We use the topographical features of characters to extract character points and use a minimum spanning tree (MST) to extract candidate caption regions. Character regions are then determined by testing several conditions and verifying the candidate regions. Experimental results show a candidate region extraction rate of 100% and a character region extraction rate of 98.2%, confirming that caption areas in complex images are extracted well.
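The MST-based grouping step can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: character points that lie close together are joined into candidate caption regions by building a minimum spanning tree over the points and cutting edges longer than a threshold. The point data and threshold below are invented.

```python
# Hypothetical sketch: group character points into candidate caption regions
# by building an MST and cutting long edges. Data and threshold are invented.
import math

def mst_edges(points):
    """Prim's algorithm over the complete graph of 2D points; returns (i, j, dist)."""
    n = len(points)
    in_tree = [False] * n
    best = [(math.inf, -1)] * n        # (distance to tree, parent)
    in_tree[0] = True
    for j in range(1, n):
        best[j] = (math.dist(points[0], points[j]), 0)
    edges = []
    for _ in range(n - 1):
        # pick the closest point not yet in the tree
        j = min((k for k in range(n) if not in_tree[k]), key=lambda k: best[k][0])
        d, parent = best[j]
        edges.append((parent, j, d))
        in_tree[j] = True
        for k in range(n):
            if not in_tree[k]:
                dk = math.dist(points[j], points[k])
                if dk < best[k][0]:
                    best[k] = (dk, j)
    return edges

def candidate_regions(points, max_edge):
    """Cut MST edges longer than max_edge; connected pieces are candidate regions."""
    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j, d in mst_edges(points):
        if d <= max_edge:
            parent[find(i)] = find(j)
    groups = {}
    for idx in range(len(points)):
        groups.setdefault(find(idx), []).append(idx)
    return list(groups.values())

# Two well-separated clusters of "character points"
pts = [(0, 0), (1, 0), (2, 0), (10, 10), (11, 10)]
print(candidate_regions(pts, max_edge=3.0))  # two candidate regions
```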
In this paper, we implement an ad balloon control system in which a built-in camera monitors the surroundings over a Bluetooth wireless link in the ISM (Industrial, Scientific, and Medical) band. In the proposed system, the operating time of the ad balloon is increased by adopting a mercury battery and a lightweight design. The ad balloon with its camera is easily controlled through a graphical user interface on a Linux-based embedded system.
Most conventional database systems support only specific queries, returning data that match the query qualification precisely. Cooperative query answering supports query analysis and query relaxation, and provides approximate answers as well as exact ones. The key problem in cooperative answering is how to provide approximate matching for alphanumeric as well as categorical queries. In this paper, we propose a metricized knowledge abstraction hierarchy that supports a multi-level data abstraction hierarchy and a distance metric among data values. To facilitate query relaxation, a knowledge representation framework is adopted that accommodates semantic relationships and distance metrics to represent similarities among data values. Numeric domains are also incorporated into the knowledge abstraction hierarchy by calculating the distance between the target record and neighboring records.
Small and medium-sized manufacturers need a master data management solution to adjust effectively to the changing industry environment. In this paper, we develop a master data management solution that has a convenient user interface and a standard data architecture for efficient connection among various systems. The solution comprises an automated connection module, which generates an intermediate language based on the standard data architecture, and an extensible production data management component to improve extensibility. The solution can provide efficient work-progress information that has not previously been managed, and supports stable system building when the system is to be extended.
This paper presents a new class of activation functions for the Cascade Correlation learning algorithm, herein called the CosGauss function. This function is a cosine-modulated Gaussian. In contrast to the sigmoid, hyperbolic tangent, and Gaussian functions, the CosGauss function produces more ridges. Because of these ridges, it converges quickly and improves pattern recognition speed, and consequently it can improve learning capability. The function was tested with a Cascade Correlation network on the two-spirals problem, and the results are compared with those obtained with other activation functions.
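A cosine-modulated Gaussian of the kind described can be sketched as below. The exact parameterization (frequency and width constants) is an assumption for illustration; the paper does not specify it here. The sample scan shows the oscillating, multi-ridge response that distinguishes it from a monotone sigmoid.

```python
# Minimal sketch of a cosine-modulated Gaussian activation of the kind the
# paper calls CosGauss; the frequency/width constants are assumed values.
import math

def cos_gauss(x, freq=2.0, width=1.0):
    """Cosine-modulated Gaussian: cos(freq*x) * exp(-(x/width)**2)."""
    return math.cos(freq * x) * math.exp(-(x / width) ** 2)

def sigmoid(x):
    """Monotone reference activation, for contrast."""
    return 1.0 / (1.0 + math.exp(-x))

# Unlike the monotone sigmoid, CosGauss oscillates, producing several
# "ridges" in a unit's response as x varies.
samples = [cos_gauss(0.5 * k) for k in range(-8, 9)]
sign_changes = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
print(sign_changes)  # multiple sign changes -> multiple ridges
```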
In this paper, we propose an optimal migration path searching method, including path adjustment and reassignment techniques, for the efficient migration of a mobile agent that processes tasks autonomously. In traditional agent systems, if users request a large variety and quantity of task processing, excessive network traffic and overload may result as the size of the agent grows. Moreover, if an agent migrates from node to node according to a routing schedule specified by the user, it cannot actively cope with situations such as communication loss and node failures, and node traversal becomes costly where network traffic is heaviest. Therefore, this paper presents a migration method in which the agent automatically adjusts and reassigns its path to the destination through optimal path search based on perceived network traffic, instead of following the user's passive routing schedule. The optimal migration path searching and adjustment methods ensure migration reliability across distributed nodes and reduce the agent's traversal and task processing time by avoiding congested paths and failed nodes.
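The path-reassignment idea can be sketched as a shortest-path search over nodes with edge costs that grow with observed traffic. This is an illustrative sketch, not the paper's algorithm: the node names, graph, and costs are invented, and Dijkstra's algorithm stands in for whatever optimal search the system uses.

```python
# Hypothetical sketch: an agent picks its migration route by shortest-path
# search over traffic-weighted edges. Graph and costs are invented.
import heapq

def best_route(graph, src, dst):
    """Dijkstra over traffic-weighted edges; returns (cost, node list)."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, traffic_cost in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + traffic_cost, nxt, path + [nxt]))
    return float("inf"), []

# Edge weights = latency inflated by measured traffic; the congested direct
# link A->D is avoided in favor of the lighter A->B->C->D route.
net = {
    "A": {"B": 1.0, "D": 10.0},
    "B": {"C": 1.0},
    "C": {"D": 1.0},
}
print(best_route(net, "A", "D"))  # (3.0, ['A', 'B', 'C', 'D'])
```

Reassignment after a node failure would amount to deleting the failed node's edges and re-running the same search from the agent's current node.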
The construction of a diagnosis system for an emergency generator, in preparation for emergencies in a nuclear plant, is vital. To construct the knowledge base of the diagnosis system, the classes and a causality model should be designed. To design these elements, the object of the diagnosis system must first be defined. After investigating normal and abnormal states, external knowledge such as entities and activities, which describe the operational principle of the system, is extracted. To convert the extracted external knowledge into internal knowledge, the entities are defined as classes and the activities are converted into causality. Through recursive configuration of the causality and proper examination, diagnosis knowledge applicable to the knowledge base is completed. Because the independence of the design model is maintained through a decision table, the resulting knowledge base is highly portable.
Multimedia data is increasing rapidly with the development of computer and information technology. In particular, quick and accurate processing of image data is required in the image retrieval field, but it is difficult to guarantee both speed and accuracy. This article suggests an algorithm that extracts representative features of an image using a genetic algorithm to solve this problem. The algorithm guarantees fast and accurate retrieval by extracting representative image features; we use color and texture as the features. Experiments show that the proposed feature extraction method is more accurate than existing approaches, which establishes the validity of the proposed method.
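A genetic algorithm for selecting representative features can be sketched as below. This is a toy illustration, not the paper's method: the fitness function simply rewards matching an assumed ideal feature mask, whereas the real system would score retrieval accuracy over color and texture features.

```python
# Illustrative sketch of GA-based feature selection. The "fitness" is a toy
# stand-in (it rewards matching a known-good mask); the paper's fitness
# would instead score retrieval accuracy.
import random

random.seed(0)
N_FEATURES = 12
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]  # assumed ideal mask (toy)

def fitness(mask):
    """Number of positions matching the ideal mask (max N_FEATURES)."""
    return sum(1 for a, b in zip(mask, TARGET) if a == b)

def crossover(a, b):
    """Single-point crossover of two bit masks."""
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    """Flip each bit with a small probability."""
    return [(1 - g) if random.random() < rate else g for g in mask]

pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(20)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))   # close to the maximum of 12
```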
In this paper, we introduce an improved algorithm for computing the matrix triple product that commonly arises in primal-dual optimization methods. In computing P = AHAt, we devise a single-pass algorithm that exploits the block diagonal structure of the matrix H. This one-phase scheme requires fewer floating-point operations and roughly half the memory of the generic two-phase algorithm, in which the product is computed in two steps, first Q = HAt and then P = AQ. The one-phase scheme achieved a speed-up of 2.04 over the two-phase scheme on an Intel Itanium II platform. The performance improvement was evaluated through performance modeling based on memory latency and modeled cache miss rates. While most previous work focused on performance tuning of basic operations, our research bears on performance tuning of complex sparse matrix operations.
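The one-phase idea rests on the identity that, for block diagonal H, P = AHAt decomposes as a sum of per-block products over the matching column slices of A. The dense sketch below (block sizes and data are invented; the paper's implementation is sparse) checks that accumulating block by block reproduces the two-phase result.

```python
# Sketch: when H is block diagonal, P = A H A^T can be accumulated block by
# block in a single pass, instead of first forming Q = H A^T and then P = A Q.
# Block sizes and data are illustrative; the real code operates on sparse data.
import numpy as np

rng = np.random.default_rng(42)
block_sizes = [3, 2, 4]                      # assumed diagonal block structure
n = sum(block_sizes)
m = 5

A = rng.standard_normal((m, n))
blocks = [rng.standard_normal((s, s)) for s in block_sizes]

# Generic two-phase reference: build dense H, then Q = H A^T, P = A Q.
H = np.zeros((n, n))
off = 0
for s, Hk in zip(block_sizes, blocks):
    H[off:off + s, off:off + s] = Hk
    off += s
P_two_phase = A @ (H @ A.T)

# One-phase: visit each block once, touching only the matching columns of A,
# so the intermediate Q never needs to be stored.
P_one_phase = np.zeros((m, m))
off = 0
for s, Hk in zip(block_sizes, blocks):
    Ak = A[:, off:off + s]
    P_one_phase += Ak @ Hk @ Ak.T
    off += s

print(np.allclose(P_one_phase, P_two_phase))  # True
```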
In this paper, we present a method for estimating face pose from two camera images. First, the method finds corresponding facial feature points of the eyebrows, eyes, and lips in the two images. It then computes the three-dimensional locations of the feature points using the triangulation method of stereo vision. Next, it forms a triangle from the extracted feature points and computes the triangle's surface normal vector, which represents the direction of the face. We applied the computed face pose to drive a 3D face model. The experimental results show that the proposed method extracts the face pose correctly.
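The pose step reduces to a cross product: given three triangulated feature points, the unit normal of their triangle approximates the facing direction. The sketch below uses invented coordinates for a frontal face (the paper's feature choice and data are not reproduced here).

```python
# Minimal sketch of the pose step: the unit normal of the triangle through
# three triangulated facial feature points approximates the facing direction.
# The coordinates are made up for illustration.
import math

def triangle_normal(p0, p1, p2):
    """Unit normal of the triangle (p0, p1, p2) via the cross product."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

# A frontal face: eye corners and lip center lie in the z = 0 plane, so the
# face direction is along the z axis.
left_eye, right_eye, lip = (-3.0, 1.0, 0.0), (3.0, 1.0, 0.0), (0.0, -2.0, 0.0)
print(triangle_normal(left_eye, right_eye, lip))
```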
In this paper, a CPLD low-power technology mapping method using reuse-module selection under a time constraint is proposed. Traditional high-level synthesis does not allow reuse of complex, realistic datapath components during scheduling. The proposed algorithm, by contrast, improves design productivity by reusing components from a library of user-defined datapath components, and reduces power by accounting for the switching activity in shared resources. In addition, chaining and multi-cycling in the scheduling step yield optimal scheduling results in our experiments. A low-power circuit is thus obtained by the CPLD technology mapping algorithm through schedule-driven reuse-module selection.
An image retrieval system retrieves and offers the same or similar images based on various image features. This paper presents a brand image retrieval system based on the color and shape of images. For color information, we divide the image into areas and extract the color distribution histogram of each area. For shape information, we preprocess the image with boundary extraction, centroid extraction, and angular sampling, and then calculate the sum of the distances from the centroid to the boundary, their standard deviation, and the ratio of the long axis to the short axis. Retrieval is accomplished through a similarity measurement using the color and shape information extracted in this way.
In this paper, we suggest a process that transforms non-component Java programs into EJB component programs. To increase the reusability of existing Java-based programs, we extract the factors suitable for a component model from the existing non-component Java programs and suggest a transformation technique using the extracted factors, which are then transformed into EJB components. Considering the reusability of existing programs and the characteristics of EJB, we suggest a process that combines class clustering with method-oriented class restructuring.
We review collision events in PC-based shooting games and existing collision detection algorithms. We then propose a new collision check technique using small quadrilateral units, which compensates for the weaknesses of existing quadrilateral collision detection techniques. For demonstration, we implemented a simple shooting game with screen output. The experimental results and screen output from the implemented games show that the proposed technique can be applied to real computer games.
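The benefit of small quadrilateral units can be sketched as follows. This is a hypothetical illustration, not the paper's code: each sprite carries several small axis-aligned rectangles instead of one coarse bounding box, so a bullet in a concave notch of the sprite is correctly reported as a miss. The sprite shapes are invented.

```python
# Hypothetical sketch of the small-quadrilateral idea: a sprite is covered by
# several small axis-aligned rects, and a collision is reported only if some
# pair of rects overlaps. Sprite shapes below are invented for illustration.

def rects_overlap(a, b):
    """Axis-aligned overlap test; rects are (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def sprites_collide(rects_a, rects_b):
    """True if any small rect of one sprite overlaps any rect of the other."""
    return any(rects_overlap(ra, rb) for ra in rects_a for rb in rects_b)

# An L-shaped ship built from two small rects, and two bullets.
ship = [(0, 0, 10, 3), (0, 3, 3, 7)]
bullet_hit = [(8, 1, 1, 1)]     # inside the ship's horizontal arm
bullet_miss = [(8, 6, 1, 1)]    # inside the coarse box, but in the notch

coarse_box = [(0, 0, 10, 10)]   # a one-box test would count both as hits
print(sprites_collide(ship, bullet_hit),        # True
      sprites_collide(ship, bullet_miss),       # False
      sprites_collide(coarse_box, bullet_miss)) # True (false positive)
```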
This paper presents a method for detecting both rapid and gradual scene changes. The method combines the current color histogram with a local χ2-test. For this work, the χ2-test scheme, which outperforms existing histogram-based algorithms, was transformed, and a local χ2-test in which weights are applied according to the degree of brightness was used to increase detection efficiency in the segmentation of color values. This method allows complex time-varying images to be analyzed and segmented in a general and standardized manner. Experiments compare the proposed local χ2-test method with the conventional χ2-test method.
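A brightness-weighted χ2 comparison between consecutive frame histograms can be sketched as below. The weighting scheme, bin counts, and cut threshold are illustrative assumptions, not the paper's exact values; the sketch only shows how weighting by brightness changes the per-bin contributions.

```python
# Sketch of a brightness-weighted chi-square comparison between the color
# histograms of consecutive frames; weights, data, and threshold are assumed.

def weighted_chi_square(hist_a, hist_b, weights):
    """Per-bin chi-square distance, scaled by a brightness weight per bin."""
    total = 0.0
    for a, b, w in zip(hist_a, hist_b, weights):
        if a + b > 0:
            total += w * (a - b) ** 2 / (a + b)
    return total

# 8-bin gray histograms of three frames (normalized counts); darker bins get
# lower weight so noise in shadows contributes less.
weights = [0.4, 0.6, 0.8, 1.0, 1.0, 1.0, 1.0, 1.0]
frame1 = [0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05]
frame2 = [0.06, 0.09, 0.16, 0.19, 0.21, 0.14, 0.10, 0.05]  # same shot
frame3 = [0.30, 0.25, 0.20, 0.10, 0.05, 0.05, 0.03, 0.02]  # after a cut

CUT_THRESHOLD = 0.05   # assumed value
print(weighted_chi_square(frame1, frame2, weights) > CUT_THRESHOLD,  # False
      weighted_chi_square(frame1, frame3, weights) > CUT_THRESHOLD)  # True
```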
Today the Internet spans the world, and most of the devices in our lives, from enterprise servers to household electric appliances, are linked to networks. The importance of a manageable framework that can grasp the network state in real time is therefore increasing day by day. Our objective in this paper is to describe a network weather report framework that monitors network traffic and performance and reports the network situation, including traffic status, in real time. We also describe a mobile agent architecture that collects state information in each network segment. The framework can inform a network manager of the network situation; through it, the manager accumulates network data and increases network operating efficiency.
In a mobile communication system, interference arises from signals received from directions other than those of the intended users. Various techniques, including diversity and equalizers, have been studied to reduce interference. In this study, the weight vector of an array antenna was obtained to improve the signal-to-noise ratio. The weights were obtained as the eigenvalues and eigenvectors of the signal correlation matrix. The resulting weights were then applied to a CDMA system to increase system performance and capacity. Both QPSK and OQPSK modulation were applied to analyze performance.
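The eigen-based weighting can be sketched as taking the eigenvector of the largest eigenvalue of the correlation matrix, which maximizes output power toward the dominant signal. The element count, spacing, arrival angle, and noise level below are illustrative assumptions, not the paper's configuration.

```python
# Sketch: array weights from the eigenvector of the largest eigenvalue of the
# signal correlation matrix. Array geometry and signal model are assumed.
import numpy as np

n_elements = 4
theta = np.deg2rad(20.0)                  # assumed arrival angle
d = 0.5                                   # element spacing in wavelengths
steering = np.exp(-2j * np.pi * d * np.arange(n_elements) * np.sin(theta))

# Correlation matrix of one plane-wave signal plus white noise.
R = np.outer(steering, steering.conj()) + 0.1 * np.eye(n_elements)

eigvals, eigvecs = np.linalg.eigh(R)      # eigh returns ascending eigenvalues
w = eigvecs[:, -1]                        # eigenvector of the largest one

# The weight vector aligns (up to a phase factor) with the steering vector,
# i.e. the array is steered toward the dominant signal.
alignment = abs(np.vdot(w, steering)) / (np.linalg.norm(w) * np.linalg.norm(steering))
print(round(alignment, 6))  # ~1.0
```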
Information communication technology ultimately pursues a ubiquitous environment in which people and devices are connected in networks and can exchange information. However, despite high enthusiasm for education and the remarkable educational potential of ubiquitous technology, there have not been sufficient studies on it. To utilize ubiquitous technology effectively, this thesis explores field trips through a constructivist approach and examines how to use and integrate ubiquitous technology in developing an educational model and system.
A pervasive computing environment is similar in meaning to ubiquitous computing; the environment considered here is a commercial product developed through collaboration between NIST and IBM. On the basis of this environment, research on mobile agents for intrusion detection is in progress. In this paper, we survey research on mobile agents for intrusion detection and then suggest scenarios that use moving mobile agents based on multiple mobile agents in intrusion detection. Subsequently, we identify the problems that arise from agent movement, such as maintaining integrity, in the context of intrusion detection.
Current context-aware applications in ubiquitous computing environments assume that the context they deal with is correct. In reality, however, both sensed and interpreted context information is often uncertain or imperfect. In this paper, we propose a probabilistic extension of an ontology-based model for representing uncertain contexts and use Bayesian networks to reason about the uncertainty of context information. The proposed model can support the development and operation of the various context-aware services required in ubiquitous computing environments.
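How Bayesian reasoning resolves an uncertain context can be shown with a toy two-node network: a hidden context ("user is in a meeting") and a noisy sensor ("sound level is low"). All probabilities below are invented for illustration; a real ontology-backed model would have many more nodes.

```python
# Toy sketch of resolving uncertain context with a two-node Bayesian network.
# All probabilities are invented for illustration.

P_MEETING = 0.3                          # prior for the hidden context
P_QUIET_GIVEN = {True: 0.9, False: 0.2}  # sensor model P(quiet | meeting)

def posterior_meeting(quiet_observed):
    """P(meeting | sensor reading) via Bayes' rule."""
    like = {m: (P_QUIET_GIVEN[m] if quiet_observed else 1 - P_QUIET_GIVEN[m])
            for m in (True, False)}
    num = like[True] * P_MEETING
    den = num + like[False] * (1 - P_MEETING)
    return num / den

print(round(posterior_meeting(True), 4),   # belief rises above the prior
      round(posterior_meeting(False), 4))  # belief falls below it
```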
This study was conducted to identify a strategy for using high-quality human resources for single PPM quality renovation by Korean companies advancing into China. According to the results, it is very important for Korean managers to adapt themselves to Chinese circumstances. The best strategy for securing high-quality human resources for single PPM quality renovation in China is local, on-the-spot employment.
This study is the first research on the service quality of lawyers' offices in Korea. Several t-tests show that the perceived performance of lawyers' office services was much lower than customers' expectations. A causal analysis was conducted to identify which service quality factors influence customer satisfaction with lawyers' offices in Korea. The results show that the tangibles and empathy factors are the most important for lawyers' customers' satisfaction. Thus, lawyers' offices should invest in the selection of location and building, including interiors, to increase customer satisfaction in Korea, and lawyers should improve the empathy factor, which makes customers feel that the law office is comfortable and helpful.
This study proposes a new diversity algorithm to improve the signal-to-noise ratio. In a wireless channel, fading due to multipath propagation clearly reduces system performance. One method to reduce such fading is diversity, and this study aims to improve system performance by proposing a new diversity algorithm. The study applied a rake receiver, QPSK and OQPSK modulation, convolutional codes with a code rate of 1/3 and a constraint length of 9, and a turbo code with a constraint length of 4. Under these conditions, the study compared and analyzed the average error probability of a multicarrier CDMA system.
Even though the software development environment has been shifting to the Web very fast, there are only a few studies of quality metrics or estimation models for Web software. In this study, after analyzing the correlation between risk level and object properties using linear regression, six medium-sized industrial systems were used to build correlation models of size versus number of classes (NOC), size versus number of methods (NOM), complexity versus NOC, and complexity versus NOM. Of the six systems, five (all except S06) show a high correlation between size (LOC) and NOM, and four (all except S04 and S06) show a high correlation between complexity and NOC/NOM. Viewing Web software architecture as having three sides, server, client, and HTML, the complexity of each side was compared: two systems (S04, S06) show large differences among the sides' complexity values, and one (S06) has a much higher HTML complexity. Thus, when a system shows no large differences among the sides' complexities, the risk level can be estimated through NOM to improve maintenance.
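The correlation step underlying these models can be sketched with Pearson's r between size (LOC) and number of methods (NOM) over a set of classes. The sample values below are invented; the study's actual data from systems S01 to S06 is not reproduced here.

```python
# Sketch of the correlation analysis: Pearson's r between size (LOC) and
# number of methods (NOM) per class. The sample values are invented.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

loc = [120, 340, 560, 800, 1500, 90]    # lines of code per class (toy data)
nom = [5, 12, 18, 25, 48, 4]            # methods per class (toy data)
print(round(pearson_r(loc, nom), 4))    # strong positive correlation
```

A high r between LOC and NOM is what justifies estimating size-driven risk through NOM when per-side complexities are balanced.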