When Web-based specialized retrieval systems for scientific fields severely restrict how users can express their information requests, the analysis of information content and the acquisition of information become inconsistent. Accordingly, this study proposes a re-ranking retrieval model that reflects the content-based similarity between a user's query terms and the index terms by capturing the knowledge structure of the documents. To this end, we construct a thesaurus and a similarity-relation matrix to provide a subject-analysis mechanism, and we propose an algorithm that builds a search model, such as query expansion, to analyze the user's needs. The proposed algorithm, which exploits the information structure of the retrieval system, realizes a content-based retrieval mechanism with a two-step search model that preserves recall while improving precision, the main weakness of previous fuzzy retrieval models.
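The query-expansion step described above can be sketched as follows. This is a minimal illustration, not the paper's actual mechanism: the thesaurus terms, similarity scores, and the 0.5 threshold are all made-up assumptions.

```python
def expand_query(query_terms, similarity, threshold=0.5):
    """Expand a query with index terms whose thesaurus similarity
    to some query term meets the threshold (illustrative sketch)."""
    expanded = set(query_terms)
    for term in query_terms:
        for index_term, score in similarity.get(term, {}).items():
            if score >= threshold:
                expanded.add(index_term)
    return sorted(expanded)

# Hypothetical similarity-relation matrix built from a thesaurus.
sim = {"laser": {"optics": 0.7, "beam": 0.9, "sound": 0.1}}
print(expand_query(["laser"], sim))  # ['beam', 'laser', 'optics']
```

In the paper's two-step model, a first pass with the expanded query would preserve recall, after which re-ranking by content-based similarity would improve precision.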
This study proposes a neural network system for character recognition and restoration. The system consists of a recognition part and a restoration part. For recognition, we propose an effective pattern-recognition model that improves the performance of the ART neural network by suppressing unnecessary top-down frame generation and transitions; a location-feature extraction algorithm that exploits the structural features of Hangeul is also applied to recognition. For restoration, we build a model that restores the input image using a Hopfield neural network. Experiments on each part of the system show an improved recognition rate and demonstrate the feasibility of restoration.
As computing environments develop ever more rapidly, adaptive and intelligent services using post-PC devices such as PDAs, laptops, and tablet PCs are in high demand for ward rounds and patient examinations. The objective of this study is to design and implement a context-awareness support system based on voice services for medical environments. To achieve this, we propose a context middleware that recognizes a client carrying a PDA via Bluetooth wireless communication and executes an appropriate module, such as delivering a patient's diagnostic information, according to the staff member's context acquired from a context server. In addition, the context server acts as a manager that efficiently stores context information, such as the client's current status, physical environment, and device resources, in a database server. Finally, to verify the usefulness of the proposed system, we develop an application that provides voice-playing services for notifying other physicians through our context middleware.
With increasing interest in medical information services for healthy living, multimedia authoring tools for medical-information service content are in strong demand. In this paper, a new multimedia authoring tool with user-friendly interfaces is implemented, based on SMIL (Synchronized Multimedia Integration Language) for producing Web-based multimedia content. The implemented tool not only lets authors play, verify, and modify partial content immediately, but also makes it easy to insert multimedia objects into content. Multimedia content containing diverse healthcare and medical information can be designed easily with this tool, and its enhanced usability can contribute to the realization of a variety of medical information services.
With the development of wearable (ubiquitous) computers, traditional human-computer interfaces are gradually becoming inconvenient to use, which creates a direct need for new ones. In this paper, we study a new interface in which the computer recognizes human hand gestures through a digital camera. Because camera-based hand-gesture recognition is affected by the surrounding environment, such as lighting, the detector must be robust to such variation. Recently, Viola's detector has shown favorable results in face detection; it uses the AdaBoost learning algorithm with Haar features computed from the integral image. We apply this method to hand-region detection and carry out comparative experiments against the classic skin-color method. Experimental results show that Viola's detector is more robust than skin-color detection in environments degraded by surroundings such as lighting.
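The skin-color baseline that the paper compares against can be sketched as a fixed-range test in YCbCr space. This is an illustrative sketch, not the paper's exact classifier: the BT.601 conversion is standard, but the Cb/Cr bounds (77-127 and 133-173) are commonly cited values assumed here for illustration.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range RGB -> YCbCr conversion (ITU-R BT.601 coefficients)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Classic fixed-range skin test in the CbCr plane. The bounds are
    assumed typical values; real systems tune them to the environment."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173
```

A fixed threshold like this is exactly what breaks down under lighting changes, which motivates the learned Viola-style detector.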
Accurate system modeling for groundwater analysis requires many accurate on-site parameters, which form a huge volume of data, because poorly estimated values for permeability, underflow coefficients, boundary conditions, and so on can produce results less accurate than a mathematical analytical solution. Handling these parameters easily has recently become an active area of research. In this paper, we propose a new method that handles these parameters easily and accurately for system-model management using the well-known MODFLOW model, and we incorporate the method into ArcView functions. The results of the proposed system are displayed visually within ArcView.
Moving-objects databases should efficiently support queries that refer to the trajectories and positions of continuously moving objects. Improving the performance of such queries requires an efficient indexing scheme for continuously moving objects. To our knowledge, range queries on current positions cannot be handled by the 3D R-tree or the TB-tree, and most spatio-temporal index structures cannot efficiently process range queries over the past positions of moving objects either. To handle range queries on both current and past positions, we modify the original 3D R-tree to keep "now" tags and propose an access method, called the Tagged Adaptive 3DR-tree (TA3DR-tree), based on the original 3D R-tree. The results of our extensive experiments show that the TA3DR-tree typically outperforms the original 3D R-tree and the TB-tree by a large margin.
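The "now"-tag idea can be illustrated with the entry semantics alone. This sketch is an assumption about how such tags behave, not the actual TA3DR-tree (which organizes entries in a tree; the linear scan below only shows how an open-ended current position satisfies both current and past range queries).

```python
import math

NOW = math.inf  # "now" tag: the interval is still open

class TaggedEntry:
    """Index entry for a moving object's position over a time interval.
    t_end == NOW marks the object's current (still-open) position."""
    def __init__(self, oid, x, y, t_start, t_end=NOW):
        self.oid, self.x, self.y = oid, x, y
        self.t_start, self.t_end = t_start, t_end

def range_query(entries, x1, x2, y1, y2, t):
    """Return ids of objects inside the box at time t; entries tagged
    NOW match any query time at or after their start."""
    return sorted(e.oid for e in entries
                  if x1 <= e.x <= x2 and y1 <= e.y <= y2
                  and e.t_start <= t <= e.t_end)
```

A query at the current time thus picks up tagged entries without the index having to rewrite every entry's end time at each clock tick.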
Digital copyright protection is essential for safeguarding the intellectual property rights of authors and is very important for the development of the digital content industry. Because digital content is easy to reproduce and a copy has exactly the same characteristics as the original, protecting copyright and preventing large-scale illegal copying and distribution is difficult. A CDN therefore needs a copyright-protection scheme for digital content authored jointly by multiple content providers. This paper applies a non-repudiation multi-technique to the copyright protection of jointly authored digital content and demonstrates the efficiency of this technique.
This study demonstrates the feasibility of laser treatment with thermal feedback by designing digital I/O interfaces for an electronic shutter that controls the laser beam, together with a temperature-control algorithm. The electronic shutter is economical and, through the developed driving interfaces and control algorithm, is automatically controlled to stay within a specified temperature range. By keeping the laser beam within a controlled temperature range, the proposed approach offers the possibility of local therapy that improves on current treatment methods, such as radiotherapy, high-frequency treatment, and drug therapy, which can kill even normal cells.
Excessive traffic from P2P applications in limited communication environments is regarded as a network-bandwidth problem. Moreover, although P2P systems locate a resource in the search phase using weakly connected peers (peers whose connections to the P2P overlay network are very weak), downloading the resource from the selected peer is not guaranteed in the download phase. In our previous P2P search algorithm, we adopted a heuristic peer-selection method based on random walks to resolve these problems. In this paper, we propose an adaptive P2P search algorithm, based on the previous one, that reforms the node-distribution rate according to individual peer capability. We also adopt a discriminative replication method based on query ratio to further reduce traffic. Performance evaluation shows that the proposed system reaches an appropriate compromise between the direction of searching and the distribution of generated traffic.
Program reuse is classified into white-box reuse, which reuses code with modification, and black-box reuse, which reuses code without modification. A component in component-based development has the property of black-box reuse. To measure the reusability of classes and components, both procedural and object-oriented attributes must be considered. In this paper, we propose a new model and measurement criteria for class and component reusability. With the proposed model, the degree of reuse of a component can be determined, and highly reusable components can be selected.
Face detection in real-time video is one of the major trends in face recognition. In this paper, we propose a face-detection algorithm that uses skin color and Haar-like features in real-time video. The proposed algorithm follows three steps: first, moving objects are detected by the difference method in YCbCr coordinates; then, face candidate regions within the moving objects are selected using Haar-like features; finally, the most probable face candidates are extracted by comparing the pixel values of the candidates with the skin color. To prevent false detections caused by similar features or skin color, we select an adaptive ROI, which also improves the processing speed in real-time video. Computer simulation validates the proposed method: the processing speed is improved by 30% over previous works, and the detection success rate is 96.8%.
A version control system is used in rapidly changing environments or for programs developed in complicated environments, and the support and processing of configuration-thread information play an important part in version control. Configuration-thread tools such as the system model of DSEE, the view of ClearCase, the label of SourceSafe, and the package of CCC/Harvest apply user-formalized configuration rules to obtain the desired configuration information for a version. However, these approaches have a problem in supporting configuration-thread information: exact, well-defined configuration-rule information and predefined information cannot be known, and such information cannot support close connections with undefined versions and meta-information. In this paper, we model a system that solves these problems and supports configuration threads efficiently. We also propose a mixed retrieval model, combining a Boolean retrieval model and a vector retrieval model, to support configuration-thread information efficiently, and we design and apply the libraries using an extended facet method.
An edge is where the intensity of an image moves from a low value to a high value or vice versa. Edges tell where objects are, their shape and size, and something about their texture. Many traditional edge operators are derivative-based and perform reasonably well on simple, noise-free images. Recently, statistical edge detectors for complex, noisy images have been described. This paper compares and analyzes the performance of statistical edge detectors based on the T-test and the Wilcoxon test, mathematical edge detectors based on the Sobel operator, and the well-known Canny and wavelet-transform detectors, and provides a Web-based implementation of these edge detectors in Java.
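A statistical edge detector of the T-test kind can be sketched in one dimension: an edge is declared where the pixel populations on either side of a position differ significantly. This is a simplified sketch under assumed parameters (window half-width 3, threshold 4.0), not the paper's implementation.

```python
from statistics import mean, variance

def t_statistic(left, right):
    """Pooled two-sample t statistic between two pixel neighborhoods."""
    n1, n2 = len(left), len(right)
    m1, m2 = mean(left), mean(right)
    sp2 = ((n1 - 1) * variance(left) + (n2 - 1) * variance(right)) / (n1 + n2 - 2)
    if sp2 == 0:
        return float('inf') if m1 != m2 else 0.0
    return abs(m1 - m2) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

def edge_points(row, half=3, threshold=4.0):
    """Mark positions where the t statistic between the windows to the
    left and right of the position exceeds the threshold."""
    edges = []
    for i in range(half, len(row) - half):
        if t_statistic(row[i - half:i], row[i:i + half]) > threshold:
            edges.append(i)
    return edges
```

Because the decision depends on the noise variance inside each window rather than on a raw gradient, this kind of detector degrades more gracefully on noisy images than a derivative operator; the Wilcoxon variant replaces the t statistic with a rank-sum statistic.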
Video conferencing systems are widely used on the Internet for various purposes. Research in this area, such as audio synchronization, compression techniques for multimedia data, and IP-multicast Mbone support for video conferencing, has been active as diverse multimedia video services on the Internet have grown with increasing communication line speeds. In open, distributed Internet network environments, however, the security of video conference data, both image and voice, has emerged as a serious problem. This paper presents a security method for video conferencing that adapts to the quality of the multimedia data.
The computer center of a university or company manages many non-fault-tolerant servers and network devices to save expenses. Because service faults occasionally occur, caused by worm viruses, system bugs, and so on, a technique to detect them is needed to keep services running. This paper introduces the design and implementation of a system that monitors many heterogeneous services, along with a Web-based interface that improves convenience for system managers. With the introduced service-monitoring system, a system fault is reported to system managers via e-mail or SMS, rather than by a service user, so managers can recover from the fault upon notification and minimize the outage period.
The amount of educational information has been increasing, and with it the need to develop metadata standards for educational information. For this reason, the Korea Education & Research Information Service developed KEM (Korea Educational Metadata) 2.0, while MPEG-7 was developed to describe metadata for multimedia data. In this paper, we develop an image retrieval system for educational information that uses an XML schema to accommodate educational-image metadata. We integrate content-based retrieval and semantic-based retrieval to overcome the problem that content-based retrieval systems cannot support semantic-based retrieval, and vice versa. As a result, we expect metadata to be handled more efficiently.
High-speed computers, large-scale storage devices, and high-speed computer networks are computing infrastructure we can easily access these days. However, many computer simulations in the natural and applied sciences, such as molecular simulation, require even more computing power and larger-scale storage. Grid computing, a next generation of distributed computing, is one solution to these new requirements. Although much research on Grid computing is under way, it is oriented toward communication interfaces, protocols, and middleware such as the Globus toolkit [2,3]. Research on application-level platforms and on applications themselves is therefore still premature, making it difficult for real users to utilize Grid systems in their research. In this paper, we suggest a new user environment and an abstract job model for simulation experiments on MGrid (Molecular Simulation Grid), which will enable users to utilize Grid resources efficiently and reliably.
The continuous development of computing and network technology has brought explosive growth of the Internet, which now plays the role of basic infrastructure across society, industry, and culture. The rapid development of the information-technology environment has produced growth that is unexampled in history, but it also harbors latent vulnerabilities, and damage from threats such as worms and hacking that exploit them increases continually. To address this problem, this paper implements a harmful-traffic analysis system that defends against new types of attack, analyzes traffic, and takes real-time action against intrusions and harmful packets.
Due to grammatical similarities, even a one-to-one mapping between Korean and Japanese morphemes can usually yield high-quality Korean-to-Japanese machine translation, so most Korean-to-Japanese systems are based on a one-to-one mapping. Most Korean '-hada' verbs, which consist of a noun and '-hada', correspond to Japanese '-suru' verbs, which consist of a noun and '-suru', so a one-to-one mapping is generally used between them. However, applying only a one-to-one mapping can produce incorrect Japanese in cases where a Korean '-hada' verb does not correspond to a Japanese '-suru' verb; in such cases, the noun and '-hada' must be handled as a single translation unit. This paper therefore examines the characteristics of Korean '-hada' verbs and proposes transfer rules for them that handle various input conditions, such as discontinuity caused by words inserted between the noun and '-hada', passivization, and modification of the '-hada' verb. In an experimental evaluation, the proposed method was very effective for handling '-hada' verbs in Korean-to-Japanese machine translation, producing high-quality translations.
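The default mapping and its exception handling can be sketched as a simple transfer rule. The dictionaries below are romanized toy examples assumed for illustration; the paper's actual rules additionally handle inserted words, passivization, and modification.

```python
# Hypothetical noun dictionary: Korean noun -> Japanese noun (romanized).
NOUN_DICT = {"gongbu": "benkyou", "yeongu": "kenkyuu"}  # study, research

# '-hada' verbs that do NOT map to a '-suru' verb and must be
# translated as one unit (toy example: 'ilhada' = to work).
UNIT_EXCEPTIONS = {"ilhada": "hataraku"}

def transfer_hada(korean_verb):
    """Map 'X-hada' to 'X'-suru' by the default one-to-one rule,
    unless the verb is an exception handled as a single unit."""
    if korean_verb in UNIT_EXCEPTIONS:
        return UNIT_EXCEPTIONS[korean_verb]
    if korean_verb.endswith("hada"):
        noun = korean_verb[:-4]
        if noun in NOUN_DICT:
            return NOUN_DICT[noun] + "suru"
    return None  # fall back to other transfer rules
```

The key design point is that the exception list is consulted before the productive noun + '-suru' rule, so a one-to-one mapping remains the cheap default while mismatched verbs are caught first.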
In this paper, we propose a method to calculate camera motion parameters based on efficient invariant features that are independent of camera viewpoint. Because the feature information used in previous research varies with camera viewpoint, the information content increases and extracting accurate features is difficult. The LM (Levenberg-Marquardt) method converges exactly on the target value for camera extrinsic parameters, but it has the drawback of taking a long time because its minimization proceeds in small steps. We therefore propose a method for extracting viewpoint-invariant features, and a two-stage method for calculating camera motion parameters that improves accuracy and convergence by using the motion parameters obtained from a 2D homography as the initial value for the LM method. The proposed method consists of a feature-extraction stage, a matching stage, and a motion-parameter calculation stage. In experiments, we compare and analyze the proposed method with existing methods on various indoor images to demonstrate the superiority of the proposed algorithm.
In this paper, we propose a new automatic fingerprint identification system that identifies individuals in large databases. The algorithm consists of three steps: preprocessing, classification, and matching. For classification, we present a new technique based on a statistical approach to the directional image distribution. For matching, we describe an improved minutiae candidate-pair extraction algorithm that is faster and more accurate than existing algorithms. In the matching stage, we extract fingerprint minutiae from the thinned image for accuracy and introduce a matching process that uses minutiae linking information. Introducing linking information into minutiae matching is a simple but accurate approach that quickly solves the problem of selecting the reference minutiae pair in the comparison stage of two fingerprints, and the algorithm is invariant to translation and rotation of the fingerprint. The proposed system was tested on 1,000 fingerprint images from a semiconductor-chip-style scanner. Experimental results show a lower false acceptance rate and a higher genuine acceptance rate than existing methods.
Various algorithms, including A*, have been used to move NPCs to target locations on game maps; A* is the most frequently used because of its fast search speed. However, A* has two problems. First, on a randomly changing map, everything must be recalculated whenever the map changes, and when the calculation is wrong the target cannot be found. Second, it is difficult to move while avoiding dangerous locations, such as obstructions, that damage NPCs: although high-weight locations can be avoided by assigning weights to dangerous factors, it is hard to control an NPC's behavior when it moves near those factors. To solve these problems, this thesis applies dynamic programming to the path-finding algorithm. The results confirm that the approach adapts well to randomly changing maps and that NPCs give dangerous factors a wide berth; compared with A*, the results were favorable.
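The dynamic programming idea can be sketched as a cost-to-go table over a grid map, where each cell's value is the cost (danger weight) of entering it. This is an illustrative sketch, not the thesis's implementation; the grid values and goal are made up.

```python
def dp_costs(grid, goal):
    """Value-iteration style DP: cost-to-go to the goal on a
    4-connected grid, where grid[r][c] is the cost of entering (r, c).
    Recomputing the table after a map change serves every NPC at once."""
    rows, cols = len(grid), len(grid[0])
    INF = float('inf')
    cost = [[INF] * cols for _ in range(rows)]
    cost[goal[0]][goal[1]] = 0
    changed = True
    while changed:            # relax until no cell improves
        changed = False
        for r in range(rows):
            for c in range(cols):
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        new = cost[nr][nc] + grid[r][c]
                        if new < cost[r][c]:
                            cost[r][c] = new
                            changed = True
    return cost
```

An NPC then simply steps to the neighboring cell with the lowest cost-to-go; unlike a single A* path, the table remains valid from any position, and high danger weights naturally push the gradient descent around hazardous cells.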
Data storage used to sit inside or next to the server case, but advances in networking technology allow storage systems to be located far from the main computer. In the Internet era, with explosive growth in data, balanced development of storage and transmission systems is required; SAN (Storage Area Network) and NAS (Network Attached Storage) reflect these requirements. To obtain optimal performance from a complex storage network system, it is important to know its capacity and limits; this capacity data is used for performance tuning and for storage purchasing decisions. This paper suggests an analytic model of a storage network system as a queuing network and validates the model through a simulation model.
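The flavor of such an analytic model can be sketched with the simplest building block, the M/M/1 queue, chained into an open tandem network. This is a generic queuing-theory sketch under assumed rates, not the paper's actual model.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: T = 1 / (mu - lambda).
    A single storage-network component (HBA, switch port, disk array)
    can be approximated this way."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

def tandem_response_time(arrival_rate, service_rates):
    """Open tandem (Jackson) network: with Poisson arrivals, the total
    mean response time is the sum of per-node M/M/1 response times."""
    return sum(mm1_response_time(arrival_rate, mu) for mu in service_rates)
```

Sweeping the arrival rate toward the slowest node's service rate exposes the bottleneck where response time blows up, which is exactly the capacity-limit information used for tuning and purchasing decisions.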
One of the important issues in this research area is using energy effectively to increase the lifetime of the nodes that form a network. The existing LEEM protocol incurs unnecessary active time for small packets whose transfer time is shorter than a node's active interval, and for packets whose transfer time exceeds twice the active interval. In this paper, we propose the Energy-Efficient MAC by Reservation (EEMR) protocol, which increases energy efficiency in wireless sensor network environments by reducing unnecessary active time through a method that reserves the next hop depending on the packet size. We evaluated the effectiveness of the proposed method through experiments, which showed that the EEMR protocol is 15% more energy-efficient than the existing LEEM protocol.
Both reliability and efficiency must be considered in large-scale wireless sensor networks. Broadcast, rather than unicast, should be used to enhance network reliability. The recently proposed GRAB (GRAdient Broadcast) enhances network reliability by using broadcast, but its energy efficiency is low because it uses only one sink, which reduces the network lifetime. In this paper, we propose SMSGB (Selective Multi-Sink Gradient Broadcast), which uses a single sink selected from a multi-sink network. Being broadcast-based, SMSGB secures the reliability of large-scale wireless sensor networks, and it also spends the network's energy evenly via multi-sink distribution. Our experiments show that SMSGB is as reliable as GRAB while increasing the network lifetime by 18% compared with GRAB.
A Markov model is designed for the IEEE 802.11 standard, the most widely deployed wireless LAN protocol, and the channel throughput is evaluated. The DCF of 802.11, based on the CSMA/CA protocol, coordinates transmissions on the shared communication channel. In this paper, under a finite-load traffic condition and the assumption that packets are dropped after the final backoff stage, we present an algorithm to find the transmission probability and derive a formula for the channel throughput. The proposed model is validated through simulation and compared with the case without packet loss.
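Once the per-slot transmission probability tau has been obtained from such a Markov model, throughput follows from elementary slot accounting. The sketch below uses the standard Bianchi-style formulation as an assumed stand-in for the paper's derivation; the slot durations and payload are illustrative parameters.

```python
def slot_probabilities(n, tau):
    """Per-slot event probabilities when each of n stations transmits
    with probability tau in a slot."""
    p_idle = (1 - tau) ** n                      # nobody transmits
    p_success = n * tau * (1 - tau) ** (n - 1)   # exactly one transmits
    p_collision = 1 - p_idle - p_success         # two or more transmit
    return p_idle, p_success, p_collision

def normalized_throughput(n, tau, payload, t_idle, t_success, t_collision):
    """Fraction of channel time carrying payload:
    S = Ps * E[P] / (Pi * Ti + Ps * Ts + Pc * Tc)."""
    p_idle, p_success, p_collision = slot_probabilities(n, tau)
    denom = p_idle * t_idle + p_success * t_success + p_collision * t_collision
    return p_success * payload / denom
```

The Markov chain's role is precisely to supply tau as a function of the backoff parameters and, in the paper's finite-load case, the offered traffic; the throughput formula itself is then a ratio of expected useful time to expected slot time.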
In this paper, we propose a methodology to estimate the number of users an e-commerce system can accept. Much previous work has been done in closed-LAN environments, but studies of the acceptable number of users in real network environments are hard to find in the literature. Our research applies hybrid simulation using QoS results for end-to-end high-speed Internet services, with experiments covering LAN and WAN settings, network equipment, and various network bandwidths. The experiments show that the response time of high-speed Internet access media (wireless LAN, ADSL, cable, VDSL) depends heavily on the sequence and depth of transactions and on the ratio of transactional to non-transactional interactions; that is, as the network and application load grows, the number of acceptable users decreases. By adding a cache server and an L4 switch to the simulation model, we analyze the resulting changes in the number of users and in client response time.
In this paper, we propose an overlay transmission method between end hosts to counter the decrease in transmission rate caused by congestion in multicast applications. In the proposed method, an overlay end host (OEH) is selected for overcast transmission at each node, and the OEH can transmit duplicate packets. When the loss rate exceeds the overcast threshold, the receivers at congested nodes drop from their current layers, and the OEHs of lower nodes can request overcast transmission from the OEHs of non-congested nodes in order to receive packets. Simulation results show that the proposed method improves transmission rates over existing methods.
Whenever a mobile node moves to a new domain in a multicast environment, both a handoff and a multicast group join occur. These procedures take considerable time and lose packets in flight, and handoff delay is a significant factor in the QoS of mobile nodes. In this paper, we propose a new handoff scheme for IPv6 that supports multicast and guarantees low handoff delay. The scheme makes adjacent subnets members of the multicast tree, thereby eliminating packet loss and reducing handoff delay. Simulation shows that the proposed scheme achieves lower delay and a lower packet-loss rate than the remote-subscription scheme.
A Core Study on the Introduction of CRM Using e-Contents in Local Government. This study extracts the core principles for introducing CRM based on e-contents technology in local government, and presents the considerations and detailed issues to examine for an actual CRM introduction. It integrates the success factors of CRM with the sources of demonstrated success, drawing on published results, to build a concept model for CRM introduction. The concept model also shows what benefits can be gained according to the administrative organization and the function and role of the administration. The results of this study will serve as a basis for empirical surveys when local governments introduce CRM using e-contents technology, and will offer a foundation for setting directions for CRM application in local government and for providing standards and administrative guidelines to the officials in charge of CRM.
Spyware is any software that employs a user's Internet connection in the background without their knowledge or explicit permission. Spyware is generally installed in a sneaky, misleading, or unannounced manner. It not only compromises the security and privacy of affected users but also obstructs digital convergence and ubiquitous computing environments. This paper provides a summary of the definition, current status, risk analysis, and security controls of spyware, and suggests additional controls that should be considered from individual, organizational, and national perspectives.
The purpose of this study is to derive critical success factors for ERP system implementation by integrating the managerial, technical, human-resource, and organizational-culture factors that previous studies have proposed as influences on implementation performance. The main results of this study are as follows. First, 33 success factors are derived through a comprehensive review of the factors that may affect ERP implementation performance, and each is assigned to one of three stages: preparation, implementation, and settle-down and stabilization. Second, the study tests whether the success factors of each stage correlate differently with ERP implementation performance depending on the implementation strategy. The results show that some success factors have significantly different correlations with the performance variables according to the timing of BPR implementation.
A UDDI server is a Web services registry that enables users to register and search for Web services; however, existing UDDI servers provide no information about Web service quality. We designed and developed a UDDI broker system that actively monitors and analyzes the quality of Web services. The analysis results are presented to users as statistical figures and graphs, allowing a user to select a Web service that meets his or her needs. Availability, performance, and stability were the metrics used for measuring and analyzing service quality.
In this paper, the collaboration of Web-based distributed business systems is analyzed, and the need for timely collaboration is derived and described in terms of inter-organizational contracts. We propose a method of event-condition-action (ECA) rule-based timely collaboration to meet this need, together with an active functionality component (AFC) that provides the method. The proposed method supports high-level rule programming and event-based immediate processing, so that system administrators and programmers can maintain timely collaboration independently of the application logic. The proposed AFC uses the HTTP protocol so that it can operate through firewalls, and for practical purposes it is implemented using the basic trigger facilities of a commercial DBMS.
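The ECA rule abstraction can be sketched as follows. This is a minimal in-memory illustration of the rule model, assumed for exposition; the paper's AFC dispatches over HTTP and is backed by DBMS triggers.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class EcaRule:
    """Event-condition-action rule: when the named event arrives and
    the condition holds on its payload, the action fires immediately."""
    event: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

@dataclass
class ActiveComponent:
    """Sketch of an active functionality component that dispatches
    business events to registered ECA rules."""
    rules: list = field(default_factory=list)

    def register(self, rule):
        self.rules.append(rule)

    def notify(self, event, payload):
        fired = 0
        for rule in self.rules:
            if rule.event == event and rule.condition(payload):
                rule.action(payload)
                fired += 1
        return fired
```

The point of the abstraction is the separation the paper emphasizes: contract deadlines and reactions live in declarative rules, so they can be changed without touching application logic.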
An example of such a new device is the mobile phone, and demand for wireless data communication is growing rapidly. However, standardization of the markup language is not yet complete: with the evolution of mobile devices, vendors have provided different data formats for each mobile device platform, and content has traditionally been hand-tailored to suit the target devices. A key problem is that the characteristics and capabilities of mobile devices are too diverse to serve the most suitable mobile content to each, which increases the need for a reusable document-description language. In this paper, we define a Template file containing the data common to all mobile device services and propose a method for effective wireless Web service through the design and implementation of a Call Manager and an XSL Generator. In this methodology, when a wireless Internet service is requested, the Call Manager component determines the markup language and hardware specification of the mobile device, the XSL Generator component dynamically creates the XSL file best suited to the device, and finally the content is delivered to each device via XSLT.
Ubiquitous learning environments call for various new e-learning models beyond the Web-based education systems proposed so far. As learners' demand for customized courseware increases, the need for efficient, automated education agents in Web-based instruction is recognized. However, many recently studied education systems neither deliver the courses learners want nor provide a way for learners to address the weaknesses revealed through continuous feedback during a course. In this paper, we propose a learner-oriented multi-agent system for course scheduling that uses a weakness-analysis algorithm with personalized ubiquitous environment factors. The proposed system first analyzes a learner's evaluation results and calculates learning accomplishment; from this accomplishment, the multi-agent system schedules a course suited to the learner, who thereby achieves active and complete learning through repeated, suitable courses.
With the development of information and communication technology and the accompanying growth in security incidents, there is increasing demand for methodologies and tools that measure the information-security level of organizations for efficient security management. However, most tools from abroad are unrealistic in how they construct their checklists and are neither easy to use nor inexpensive, while most domestic work does not properly consider the characteristics of organizations when measuring the information-security level. This study suggests an efficient information-security leveling tool that applies multiple variable weights according to the characteristics of the organization, a fuzzy technique to reduce user subjectivity, and a genetic algorithm to establish security countermeasures.
In this paper, a new structural approach to on-line signature verification is presented. A primitive pattern is defined as a segment bounded by local minima of the pen speed, and the structural description of a signature is composed of subpatterns, such as rotation, cusp, and bell shapes, obtained by composing the primitives according to their directional changes. As the matching method for finding identical parts between two signatures, a modified DP (dynamic programming) matching algorithm is presented. The variation and complexity of local parts are computed from training samples, and a reference model and decision boundary are derived from these. Error rate, execution time, and memory usage are compared across the functional approach, the parametric approach, and the proposed structural approach. The average error rate is reduced from 14.2% to 4.05% when the local parts of a signature are weighted and complexity is used as a factor in the decision threshold. Although the error rate is similar to that of functional approaches, the time and memory consumption of the proposed structural approach are shown to be very efficient.
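The core of DP matching over subpattern sequences can be sketched as an edit-distance-style alignment. This is a generic sketch, not the paper's modified algorithm: the labels ('R' rotation, 'C' cusp, 'B' bell) and unit costs are assumptions, whereas the paper additionally weights local parts by their variation and complexity.

```python
def dp_match_cost(seq_a, seq_b, sub_cost=1, gap_cost=1):
    """Minimal alignment cost between two signatures described as
    sequences of subpattern labels, via dynamic programming."""
    n, m = len(seq_a), len(seq_b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):       # align seq_a against nothing
        d[i][0] = i * gap_cost
    for j in range(1, m + 1):       # align seq_b against nothing
        d[0][j] = j * gap_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 0 if seq_a[i - 1] == seq_b[j - 1] else sub_cost
            d[i][j] = min(d[i - 1][j - 1] + match,   # match/substitute
                          d[i - 1][j] + gap_cost,    # skip in seq_a
                          d[i][j - 1] + gap_cost)    # skip in seq_b
    return d[n][m]
```

A genuine signature pair yields a low alignment cost against the reference model, and the decision boundary learned from training samples separates it from forgeries; per-part weighting simply replaces the unit costs with learned ones.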