High-performance data processing on the grid requires effective resource selection, since grid resources comprise heterogeneous networks and operating systems. In this paper, we classify grid resources by data properties and user requirements using a decision-tree method, thereby providing grid users with a suitable resource selection methodology. We evaluate our grid system's performance in terms of throughput, utilization, job loss, and average turnaround time, and compare the experimental results of our resource selection model with those of existing models such as Condor-G and Nimrod-G. The results show that our model offers an efficient approach to grid resource selection.
Nearest-neighbor search is an essential operation in applications such as multimedia and GIS systems. Although many nearest-neighbor search techniques have been proposed, their performance is limited because they process queries on the fly using indexes built over the data. This paper proposes a new nearest-neighbor search algorithm based on a grid data structure that precomputes and stores the results of nearest-neighbor queries using Voronoi diagrams over static data. While traditional techniques index the data itself, the proposed technique indexes the query results, and therefore answers nearest-neighbor queries more efficiently.
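The idea of indexing query results rather than data can be sketched as follows: for each cell of a grid, precompute the site nearest to the cell centre (a rasterised stand-in for the Voronoi diagram over the static sites), so a query reduces to a single cell lookup. This is an illustrative brute-force sketch under assumed names, not the paper's algorithm.

```python
def build_nn_grid(sites, cell, extent):
    """Precompute, for every grid cell, the site nearest to the cell
    centre -- a rasterised Voronoi diagram over static 2-D sites.
    Brute-force precomputation for clarity (illustrative sketch)."""
    nx, ny = int(extent[0] / cell), int(extent[1] / cell)
    grid = {}
    for i in range(nx):
        for j in range(ny):
            cx, cy = (i + 0.5) * cell, (j + 0.5) * cell
            # Nearest site to the cell centre, by squared distance.
            grid[(i, j)] = min(
                sites, key=lambda s: (s[0] - cx) ** 2 + (s[1] - cy) ** 2)
    return grid

def nn_query(grid, cell, q):
    """Answer a nearest-neighbour query with one cell lookup
    (approximate near cell borders, exact deep inside a cell)."""
    return grid[(int(q[0] / cell), int(q[1] / cell))]

sites = [(1.0, 1.0), (8.0, 8.0)]
grid = build_nn_grid(sites, cell=1.0, extent=(10.0, 10.0))
```

The query cost is O(1) regardless of the number of sites, which is the performance advantage the abstract claims over on-the-fly index traversal.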
Creating advanced user-created content generally requires chroma keying in a controlled environment; typically, a studio painted in a single color is used for the purpose. This paper introduces a robust, automatic, adaptive chroma-key method that requires no such monochrome studio. The proposed method separates high-frequency and low-frequency components using the wavelet transform so that noise can be eliminated. Experimental results show that the method achieves good chroma-keying performance.
Efficient message passing in parallel programs requires that the two communicating processes, running on different nodes, be scheduled at the same time; this is called co-scheduling. However, each node of a cluster system runs a general-purpose multitasking OS that manages its local processes autonomously, so co-scheduling two (or more) processes in such an environment is not easy. Our work proposes a co-scheduling scheme for MPI-based parallel programs that exploits the message-exchange information between the two parties. We implemented the scheme on a Linux cluster with slight kernel modifications and changes to the MPI library. Experiments with the NPB parallel benchmark suite show that our scheme reduces execution time by 33-56% compared with ordinary scheduling, with the greatest gains in communication-bound applications.
Data streams tend to change their patterns over time. This problem, known as concept drift, can degrade the predictive performance of a classification model. CVFDT and IOLIN address concept drift through incremental updates of the classification model, but they cannot handle local concept drift, in which localized pattern changes affect the overall classification results. In this paper, we propose an adapted IOLIN system that improves predictive performance by detecting local concept drift. Experimental results show that the proposed adaptive IOLIN is about 2.8% more accurate than IOLIN and about 11.2% more accurate than CVFDT.
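A minimal sketch of the underlying detection idea: signal drift when accuracy over a recent window falls well below the long-run accuracy observed so far. This is a generic window-based drift detector for illustration only, not the adapted-IOLIN algorithm; the window size and threshold are invented parameters.

```python
from collections import deque

def drift_detector(window=30, threshold=0.15):
    """Window-based concept-drift detection sketch: returns an
    observe(correct) callable that reports True when accuracy over
    the last `window` examples drops more than `threshold` below
    the overall accuracy seen so far."""
    recent = deque(maxlen=window)
    total_correct = 0
    total_seen = 0

    def observe(correct):
        nonlocal total_correct, total_seen
        recent.append(1 if correct else 0)
        total_correct += 1 if correct else 0
        total_seen += 1
        if len(recent) < window:
            return False            # not enough evidence yet
        overall = total_correct / total_seen
        current = sum(recent) / window
        return (overall - current) > threshold

    return observe

detect = drift_detector()
stable = [detect(True) for _ in range(100)]    # model predicts well
drifting = [detect(False) for _ in range(30)]  # pattern has changed
```

In an incremental learner such as IOLIN, a positive signal would trigger a (local) model rebuild rather than a full retraining.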
As mobile Web technology becomes more widely applicable, the mobile contents market, especially music downloads for mobile phones, has grown remarkably. Despite this rapid growth, customers experience high levels of frustration when searching for the music they want, which lowers their re-purchase rate and reduces the revenue of mobile music content providers. From a customer relationship management (CRM) perspective, therefore, a new way to increase revenue by offering mobile customers a convenient shopping environment is needed. As a solution, we propose a new music recommender system that improves customers' search efficiency by combining collaborative filtering with mobile Web mining and ordinal-scale-based customer preferences. Experiments verify that the proposed system is more effective than current recommender systems on the mobile Web.
Virtualization has recently become one of the most popular research topics, and many virtualization software products have been released accordingly. Server virtualization, which partitions physical servers into many virtual servers, provides a very efficient way to build network-based services. In this paper, we design and implement a virtual desktop service based on server virtualization, and propose a load-balancing scheme for it. Together, the proposed virtual desktop service and its load-balancing scheme provide a cost-effective way to build a high-performance remote desktop service.
Telematics data management deals with queries over data streams coming from moving vehicles, so the stream DBMS must process large volumes of streaming data in real time. In this article, previous research projects are analyzed from the perspective of query processing, and a hybrid model is introduced in which a query preprocessor handles all types of queries in one single system. Falling hardware costs and rapidly increasing device performance make the high degree of parallelism required by the hybrid system feasible. As a result, the various types of stream DBMS queries can be processed in a uniform and efficient way within a single system.
This paper proposes a scene-change detection method that uses a global decision tree to extract cut boundaries, i.e., the abrupt changes caused by camera breaks, from inter-frame difference values. First, frame difference values are computed with a regional χ²-histogram and normalized; next, the distances between the normalized difference values are calculated. Shot-boundary detection then compares, for each pair of adjacent frames, the distance value against a global threshold derived from the distances between the computed difference values. The proposed global decision tree can reliably detect sudden scene changes caused by object or camera motion and by flashlights.
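The χ²-histogram differencing step can be sketched as below. The global decision-tree thresholding of the paper is replaced here, for illustration, by a simple global threshold of k times the mean inter-frame difference; the function names and the value of k are assumptions.

```python
def chi2_diff(h1, h2):
    """Chi-square distance between two grey-level histograms;
    a large value suggests a shot boundary between the frames."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

def detect_cuts(histograms, k=3.0):
    """Flag indices of frames whose chi-square difference from the
    previous frame exceeds k times the mean difference -- a simplified
    stand-in for the paper's global decision-tree threshold."""
    diffs = [chi2_diff(h1, h2)
             for h1, h2 in zip(histograms, histograms[1:])]
    mean = sum(diffs) / len(diffs)
    return [i + 1 for i, d in enumerate(diffs) if d > k * mean]

# Ten frames of one shot, then five frames of a different shot.
hists = [[10, 0, 0]] * 10 + [[0, 0, 10]] * 5
cuts = detect_cuts(hists)
```

A regional variant, as in the paper, would compute `chi2_diff` per image block and combine the block distances before thresholding, making the detector less sensitive to localized object motion.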
In game balancing, it is difficult to choose suitable arms from among the various available actions and to calculate precisely the level to which the balance should be adjusted. Fuzzy methods can be used effectively in environments that cannot be handled exactly by mathematics, and they lessen the time consumed by exact numerical computation. Because the variety of actions, relations with opponents, previous battle experience, and so on cannot easily be reflected in every situation, the fuzzy method is useful in these cases. When balancing is needed, the play data accumulated up to that point are processed by the fuzzy function to compute an adapted intensity for each action, and the characters' abilities are adjusted in this process. To demonstrate the efficiency of this method, we show the advantage of the fuzzy approach through five experiments: a case with fixed ability adjustment, a case adjusted by a randomly chosen action, a case that always selects the strongest weapon, a case that always selects the weakest weapon, and a case using the fuzzy method.
In this paper, we present a method that improves marker detection accuracy and marker recognition speed by using artificial neural networks. Object contours are extracted from the input image and approximated as lists of line segments. Quadrangles are found from the geometric features of the approximated segments and normalized into exact squares using warping and scale transformation. Feature vectors are extracted from each square image by principal component analysis. An artificial neural network then checks whether the square image is a marker or a non-marker, after which the marker type is recognized by another artificial neural network. Experimental results show that the proposed method improves the accuracy of marker detection and recognition.
This study surveyed users with seven variables organized into two branches: a contents-wise aspect and a systematic aspect. Entertainment, user-oriented contents supply, data updating, and useful information supply form the contents-wise aspect, while systematic stability, convenience of access, and promptness of response form the systematic aspect. The results are as follows. First, entertainment is the most important interactive-content variable in users' choice of IPTV, and useful information supply is second; overall, the contents-wise aspect matters more than the systematic aspect. Second, high quality in entertainment, user-oriented contents supply, useful information supply, systematic stability, convenience of access, and promptness of response are all required to increase IPTV user satisfaction; among them, convenience of access is the most valuable factor in users' choice of IPTV.
Although many studies of parallel rendering on PC clusters have been done, most do not cope with non-uniform scenes in which the locations of 3D models are biased. In this work, we built a PC cluster system using POV-Ray, a freely available rendering package, and developed an adaptive load-balancing scheme to optimize parallel efficiency. In particular, we observed that the frames of a 3D animation are closely coherent with adjacent frames, so the distribution of the computational load can be estimated from the computation time of the previous frame. Experimental results with two real animation data sets show that the proposed scheme reduces execution time by 40% compared with a simple static partitioning scheme.
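Exploiting frame-to-frame coherence can be sketched as follows: give each node a share of the next frame's scanlines inversely proportional to its measured time on the previous frame. This is a minimal illustration of the adaptive partitioning idea; the function name and row-based partitioning granularity are assumptions.

```python
def rebalance(prev_times, total_rows):
    """Assign each render node a number of scanlines for the next
    frame, inversely proportional to its time on the previous frame
    (frames are assumed coherent, so past cost predicts future cost)."""
    speeds = [1.0 / t for t in prev_times]       # effective speed per node
    total = sum(speeds)
    rows = [int(total_rows * s / total) for s in speeds]
    rows[-1] += total_rows - sum(rows)           # remainder to last node
    return rows

# A node that took 3x as long last frame gets ~1/3 the rows of a fast one.
shares = rebalance([1.0, 3.0], 100)
```

A static partitioner would split 50/50 regardless of where the scene's geometry is concentrated, which is exactly the case this scheme is designed to avoid.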
We propose a CBIR (Content-Based Image Retrieval) method that uses color and shape information. Using only one feature may yield inaccurate results compared with using two or more, so many image retrieval systems employ several features such as color and shape. We use two features: HSI color information, specifically the hue value, and CSS (Curvature Scale Space) as shape information. Candidate images are retrieved from a database that stores the feature information of many images. Using the two features together produces better retrieval results.
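The color half of such a system can be sketched with hue histograms compared by histogram intersection; the CSS shape comparison is omitted here. The bin count, similarity measure, and function names are illustrative assumptions, not the paper's specification.

```python
def hue_histogram(hues, bins=36):
    """Quantise hue values (0-360 degrees) into a normalised histogram."""
    h = [0] * bins
    for v in hues:
        h[int(v) * bins // 360 % bins] += 1
    n = len(hues)
    return [c / n for c in h]

def rank_by_hue(query_hist, database):
    """Rank (name, histogram) candidates by histogram-intersection
    similarity to the query's hue histogram -- the colour feature of
    a two-feature CBIR pipeline (shape matching omitted)."""
    def sim(h1, h2):
        return sum(min(a, b) for a, b in zip(h1, h2))
    return sorted(database, key=lambda item: -sim(query_hist, item[1]))

query = hue_histogram([10, 12, 11, 9, 10])          # reddish query image
db = [("red_img", hue_histogram([12, 11, 10, 12, 9])),
      ("blue_img", hue_histogram([240, 242, 238, 241, 239]))]
ranked = rank_by_hue(query, db)
```

In the full method, the color ranking would select candidates from the database and the CSS shape distance would refine the final order.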
Motion blur is a blurring effect caused by relative motion between the camera and objects in the scene while an image is captured. When different objects move at different speeds, the blur characteristics of each object differ, so restoring such a spatially variant blurred image requires identifying each blur extent separately. In this paper, we propose a new method that locally identifies blur extents using RATS in images degraded by spatially variant motion blur. Experiments show that the proposed algorithm successfully segments objects with different blurs and identifies the blur extents accurately.
Techniques that exploit multiple transmit and receive antennas or high-capacity modulation schemes are essential to meet the rapidly increasing demand for diverse, high-rate wireless communication services. However, mounting multiple receive antennas on mobile units is impractical because of size and power limitations, so transmit diversity techniques have been investigated extensively to improve downlink performance. To overcome these problems, we construct a simulation model that combines space-time coding (STC) with polarization diversity, a scheme that is cheaper to realize. Multi-level quadrature amplitude modulation (MQAM) is attractive for wireless communication because of its high spectral efficiency; accordingly, we present the performance of our scheme with 16QAM and compare it with previous schemes.
This paper carries out IP traceback targeting intruders who use bypass attacks to avoid exposing their real IP addresses. We design an IP traceback server and agent modules and install them in an Internet network system for real-IP traceback. The system sets detection and tracing ranges for arbitrary loop-around connections; when actual attacks are launched, it blocks fatal attacks after intrusion detection and stores the attack data, together with the general IP addresses used for access, in a database. It then generates forensic data in which the attacker's real IP is confirmed through the Whois service, ensuring the integrity and reliability needed for the data to serve as early legal evidence against the intruder. Through this study, we present an effective real-IP traceback system with a deterrent effect on cyber crime, a dysfunction of the ubiquitous information society, and establish a basis for generating forensic data usable in judicial proceedings.
This paper derives the maximum data flow using a weighted bipartite graph matching system. The system models data transmissions as edges and guides the maximum data flow between the designated servers and clients. Using the proposed weighted bipartite graph matching, we implement a multi-user video conferencing system. By sending the maximum sustainable data to the server and having each client receive it, discontinuities in the motion-image frames, bottlenecks, and broken images are prevented. Experiments show roughly a two-fold improvement over the previous flow control.
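The core optimization can be sketched as maximum-weight bipartite matching, where the weight of edge (i, j) is the attainable transfer rate between server i and client j. An exhaustive search is used below for clarity; a practical system would use the Hungarian algorithm. The matrix layout and function name are assumptions.

```python
from itertools import permutations

def max_weight_matching(weights):
    """Maximum-weight bipartite matching by exhaustive search:
    weights[i][j] is the transfer rate from server i to client j.
    Returns (best total rate, assignment) where assignment[i] is the
    client matched to server i. O(n!) -- illustration only."""
    n = len(weights)
    best, best_perm = -1, None
    for perm in permutations(range(n)):
        w = sum(weights[i][perm[i]] for i in range(n))
        if w > best:
            best, best_perm = w, perm
    return best, list(best_perm)

# Pairing server 0 with client 0 and server 1 with client 1
# yields total rate 5, beating the crossed assignment's 2.
result = max_weight_matching([[3, 1], [1, 2]])
```

Choosing the matching that maximizes total edge weight is what lets every conference participant stream at the highest aggregate rate the links allow.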
As hash-chain-based authentication and integrity methods have become popular, several certificate status validation methods based on hash chains have been proposed. In NOVOMODO, the CA generates and releases a hash value for each user. In Jianying Zhou's framework and Jong-Phil Yang's framework, each user generates and releases the hash value to the verifier, so the CA's load is distributed among the users. However, these frameworks assume that the CA's secret key is never lost or compromised and that the certificates issued by the CA are error-free; they are therefore not suitable for real PKI environments. In this paper, one additional hash value generated by the CA is included in each user's certificate, and certificate revocation published by the CA can be managed using that value. Since the hash value included in the certificates is the same for all users, the CA's computation cost, storage requirement, and release cost are all small. We also modify the signature generation and validation procedure of Jong-Phil Yang's framework. Our solution is better suited to real PKI environments than the existing frameworks.
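The hash-chain mechanism these frameworks share can be sketched in a few lines: the CA builds a chain X₀ … Xₙ by repeated hashing, embeds the anchor Xₙ in the certificate, and proves validity in period i by releasing X₍ₙ₋ᵢ₎, which the verifier re-hashes i times. This is the generic NOVOMODO-style construction, not this paper's extended scheme.

```python
import hashlib

def h(x):
    """One application of the chain's hash function."""
    return hashlib.sha256(x).digest()

def make_chain(seed, n):
    """CA side: hash chain [X0, X1, ..., Xn] with X_{k+1} = h(X_k).
    X_n is the anchor placed in the certificate; X_{n-i} is released
    as the validity token for period i."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

def verify(anchor, token, period):
    """Verifier side: hashing the released token `period` times must
    reproduce the anchor stored in the certificate."""
    x = token
    for _ in range(period):
        x = h(x)
    return x == anchor

chain = make_chain(b"ca-secret-seed", 365)   # one token per day, say
```

Verification costs only `period` hash evaluations and no signature check, which is why these schemes scale so well for status validation.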
A navigation system for virtual environments that uses a low-quality HMD (head-mounted display) must quantize images when presenting true-color images with a restricted number of colors. Conventional systems quantize each image with a fixed palette. If instead the system uses a variable palette built from the region around the viewpoint, the user perceives the virtual environment more vividly, because the human visual system is sensitive to color variation in that region. In this paper, we propose a color quantization algorithm for virtual environment navigation systems that, at each change of viewpoint, quantizes the region around the viewpoint more finely than the other regions.
QoS support in networks relies on measuring QoS metrics that reflect the degree of stability and performance. Among these metrics, one-way delay measurement in particular requires clock synchronization between the end hosts. However, hosts in a network exhibit relative or absolute differences in clock time because of clock offsets, clock skews, and clock adjustments. In this paper, we present a theorem, estimation methods, and simulation results for one-way delay and clock offset between end hosts. The theorem relates one-way delay, one-way delay variation, and round-trip time, and we show that the estimation error is mathematically smaller than a quarter of the round-trip time.
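The setting can be illustrated with the standard NTP-style estimator from one request/response exchange, which recovers clock offset and one-way delay exactly when the path is symmetric; path asymmetry is what produces the bounded error the abstract refers to. This is a well-known estimator used for illustration, not the paper's own theorem.

```python
def estimate_offset_and_delay(t1, t2, t3, t4):
    """NTP-style estimation from one exchange between unsynchronised
    hosts: t1 = client send, t2 = server receive, t3 = server send,
    t4 = client receive (t2, t3 read from the server's offset clock).
    Assumes a roughly symmetric path. Returns (offset, owd, rtt)."""
    rtt = (t4 - t1) - (t3 - t2)          # total network time, both ways
    offset = ((t2 - t1) + (t3 - t4)) / 2  # server clock minus client clock
    owd = rtt / 2                         # exact only if path is symmetric
    return offset, owd, rtt

# True offset 5, true one-way delay 2, server processing time 1:
# client sends at 0, server receives at 0+2+5=7, replies at 8,
# client receives at (8-5)+2=5 on its own clock.
est = estimate_offset_and_delay(0, 7, 8, 5)
```

With asymmetric delays the error of `owd` is bounded by half the delay asymmetry, which is at most half the RTT; the paper's theorem tightens this kind of bound to a quarter of the RTT using one-way delay variation.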
Crosstalk is the most serious problem when playing audio signals through more than two speakers, and an inverse filter is usually employed to remove it. The LNS method, one of the most effective inverse-filter design techniques, offers advantages such as easy implementation and fast computation. However, an inverse filter designed by the LNS method cannot easily adapt to changes in the delivery system, because the filter is designed from a pre-measured impulse response. In this work, we present an adaptive algorithm for inverse-filter design: the filter is initially designed by the LNS method and then continuously adjusted to follow changes in the delivery system. Simulations verify the proposed method and confirm that crosstalk cancellation performance is improved over the entire frequency range.
In this study, we composite a source image into a target image whose environment includes a water surface, such as a lake or the sea. A water surface differs from ordinary backgrounds: an object on it must be reflected or refracted, and is sometimes deformed by waves. To composite an object from the source image onto the water image, we analyze the water surface of the target image and synthesize the object realistically according to the waves. Our compositing process consists of three steps. First, we use a shape-from-shading technique to extract the normal vectors of the water surface in the target image. Next, the source image is deformed according to the normal-vector map. Finally, we composite the deformed object onto the target image.
The National Police Agency (NPA) informatization systems currently in operation have been criticized for failing to provide convenience to internal users and to the public. In this paper, to address these problems, we analyze lessons from the IT informatization organizations of foreign police agencies, survey internal users of the Korean NPA, analyze points for improvement, and derive an efficient strategic plan for integrating NPA informatization. Based on the NPA's informatization vision and goals, we design an integrated informatization model for an advanced policing support system oriented toward public service that maintains linkage with other organizations, and we present an improvement plan expected to raise the quality and efficiency of NPA information services.
With the development of computers and multimedia, many theories of instructional method have been presented, and many lectures associated with e-learning are now given. In mathematics teaching, lectures using computer algebra systems such as Mathematica, Maple, and MATLAB have been introduced, but there is much controversy among educators over the problems raised by computer-based teaching. In this paper, we investigate the problems caused by computer-based mathematics education and, after administering sample tests, statistically compare the comprehension rates and application abilities of learners taught with computers against those taught with conventional blackboard-based instruction, in order to present a desirable direction for computer-based education.
Signals from the human body are extremely varied and abundant, and because they do not occur identically in everyone, analyzing an individual's signals makes it possible to diagnose that person's state of health. In this system, following the Oriental medical view that the fingertips are connected with the five vital organs and the six viscera, we measure pulse, temperature, and resistance at the fingertips with a reflective photosensor and interpret whether the readings are normal or abnormal.
A general web cache temporarily stores documents. When a requested document exists in the cache, the cache returns it to the user; otherwise, the cache fetches the document from the origin server, copies it into the cache, and then returns it to the user. When the cache's capacity is exceeded, a replacement policy decides which existing documents to replace with new ones. Typical replacement policies include document-based LRU and LFU, and various other policies are used to replace cached documents effectively. However, these policies consider only the recency and frequency of document requests, not the popularity of each web site. Building on replacement policies based on request frequency, this paper presents document replacement policies that also take the popularity of each web site into account; they suit modern network environments, raise the cache hit ratio, and manage cache contents efficiently by replacing infrequently requested documents with new ones.
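The idea of weighting frequency by site popularity can be sketched as follows: on overflow, evict the document with the lowest (site popularity × request count) score. This is an illustrative stand-in for the paper's policy; the class name, score formula, and popularity table are invented.

```python
class PopularityCache:
    """Replacement-policy sketch: evict the cached document whose
    score, site_popularity * request_count, is lowest, so documents
    from popular sites survive longer than raw LFU would allow."""

    def __init__(self, capacity, site_popularity):
        self.capacity = capacity
        self.pop = site_popularity      # site -> popularity weight
        self.docs = {}                  # url -> (site, request count)

    def request(self, url, site):
        if url in self.docs:
            s, n = self.docs[url]
            self.docs[url] = (s, n + 1)
            return "hit"
        if len(self.docs) >= self.capacity:
            # Evict the lowest-scoring document to make room.
            victim = min(self.docs,
                         key=lambda u: self.pop.get(self.docs[u][0], 1.0)
                                       * self.docs[u][1])
            del self.docs[victim]
        self.docs[url] = (site, 1)
        return "miss"

cache = PopularityCache(2, {"popular.com": 10.0, "rare.com": 1.0})
```

Under pure LFU the two single-request documents below would tie; the popularity weight breaks the tie in favour of the popular site's document.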
To maximize the learning effect in minimal time, lecturers must tailor their materials to the needs and achievement levels of each individual. In lecture environments such as junior colleges, however, many students enroll under one professor, and many sections are open under the same course title in each department, with each enrolled student having different academic needs and achievement levels. This paper therefore proposes a learning system based on the hypothesis that if a lecturer shares course material with students in other sections of the same subject, and opens other professors' grading and marks for the same subject to his or her students, achievement levels will improve as students draw on their peers' achievements and needs. To further improve learning performance, we employ an e-catalog that gives students access to grading, corrections, and coaching, ultimately saving time and cost.
Cyber classes in e-learning systems have come to be regarded as an important form of education; in particular, both non-major (liberal arts and science) and major subjects are now offered as cyber classes. However, there has been little study of their effectiveness and function from the students' standpoint. In this study, we analyzed the log files of an e-learning system and classified the login and study-hour patterns of students enrolled in cyber classes into hourly patterns within a day, daily patterns within a week, and weekly patterns within a semester. Based on the analysis, we propose general ideas for improving the effectiveness and function of current e-learning. Over 50% of logins were for less than 30 minutes of study, indicating wasteful use of e-learning system resources.
This paper is based on telematics technology over a ubiquitous network, in which a black box with a unique IPv6 address is installed in a car. The black box authenticates the driver at startup and during operation, and records the car's driving history by processing video signals and sensor signals in real time for analysis. The recorded data are encrypted for transmission, and the base stations and roadside sensors of the ubiquitous network provide seamless mobility and generate location-tracking data. These data are stored, under the black box's unique IPv6 address, in the database of a transportation traffic operations center. When a car equipped with a black box is involved in a criminal case on the road, the data recovered from the black box and its IPv6 address are compared with the traffic records stored in the database, and their integrity is verified through secure authentication. Such material can be recognized in the courtroom as highly secure forensic evidence, and will contribute to convenient human life in the knowledge-information society.
The primary objectives of this study are to evaluate the importance of supplier selection factors as an index according to the level of product standardization and to present an evaluation model for supplier selection. For this purpose, the study adopts the AHP method to calculate the importance of the supplier selection factors. Sixteen factors affecting supplier selection decisions are classified into three groups: product-supply-related factors, product-related factors, and management-ability-related factors. The results also indicate that standardized products are traded online far more than customized products, which implies that buyers' opportunities to re-evaluate current suppliers increase and that new suppliers may be considered more often.
In a knowledge-and-information-based society, effective management across IT industries, the value of information, and interoperation matter more than ever before. ITA/EA, which helps organizations plan and manage IT systematically and effectively, was therefore enacted and announced in 2005 and has been regarded as an innovative IT management approach. Unfortunately, however, standardization of ITA/EA has not yet been fully discussed, and how to apply it successfully to IT is the most urgent challenge; this study aims to find the most effective method. To this end, the study emphasizes that training systems for specialized experts must be operated systematically through standardization of terms and tasks and standardization of manpower training, and that a new specialty certification must be established. It also argues that the dual structure caused by the mixed concepts of ITA and EA must be eliminated. At the same time, a new model for domestic standardization is suggested so that ITA/EA can be recognized as IT engineering and professional engineers can accept it as a scientific field. In summary, this study proposes how to improve the efficiency of information management through ITA/EA.