Updated: 2024/11/11

タンジョウ ユウイ
丹上 結乃純
TANJO YUI
Scopus publication data
Total papers: 0   Total citations: 0   h-index: 9

The citation count is the number of citations received by papers published in the corresponding year.

Affiliation
Graduate School of Engineering, Department of Mechanical and Control Engineering
Title
Professor
Email address
External links

Research keywords

  • Image processing

  • Autonomous mobile robots

  • Representation and recognition of human motion

  • In-vehicle vision

  • MY VISION / first-person vision

  • Ego Motion

  • 3-D image measurement and analysis

Research fields

  • Information and communication / Perceptual information processing  / Perceptual information processing, first-person vision, robot vision, video analysis, welfare engineering, human interfaces, applied health science

  • Information and communication / Human interface and interaction  / Perceptual information processing, first-person vision, robot vision, video analysis, welfare engineering, human interfaces

Degrees

  • Kyushu Institute of Technology  -  Doctor of Engineering   March 2000

Work history (within the university)

  • April 2023 - March 2024   Kyushu Institute of Technology, Faculty of Engineering, Department of Mechanical and Control Engineering     Career placement coordinator (Intelligent Control)

  • April 2022 - March 2024   Kyushu Institute of Technology, Graduate School of Engineering     Deputy program chair (Control)

  • April 2022 - March 2024   Kyushu Institute of Technology, Graduate School of Engineering     Campus harassment counselor

  • April 2022 - March 2024   Kyushu Institute of Technology, Faculty of Engineering, Department of Mechanical and Control Engineering     Department chair

  • April 2021 - March 2022   Kyushu Institute of Technology, Faculty of Engineering, Department of Mechanical and Control Engineering     Career placement coordinator (Intelligent Control)

  • April 2021 - March 2022   Kyushu Institute of Technology, Graduate School of Engineering     Campus harassment counselor

  • September 2019 - Present   Kyushu Institute of Technology, Graduate School of Engineering, Department of Mechanical and Control Engineering     Professor


Academic societies and committees

  • December 2018 - Present   The Institute of Electronics, Information and Communication Engineers (IEICE)   Japan

  • November 2016 - Present   Biomedical Fuzzy Systems Association   Japan

  • April 2002 - Present   Information Processing Society of Japan (IPSJ)   Japan

  • April 2002 - Present   The Institute of Electrical and Electronics Engineers (IEEE)   United States

Papers

  • Development of an Outdoor Autonomous Travel Robot Using Graph-based Segmentation and Deep Learning 査読有り 国際誌

    Kako Koyama, Yui Tanjo

    Proceedings of the 14th International Conference on Electronics, Communications and Networks (CECNet 2024)   1 - 6   2024年11月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Matsue, Japan   November 5 - 8, 2024

    This paper presents a novel approach to autonomous robot navigation using an RGB-D camera technology for complex urban environments. The paper proposes a comprehensive system integrating checkpoint-based path planning, image-based self-localization, and a hybrid method for drivable area estimation. The drivable area estimation technique combines Graph-based Segmentation (GBS), deep learning-based road detection and texture feature analysis. Using 315 diverse urban images, the system’s performance was evaluated by comparing the results of three methods: (i) GBS with texture features, (ii) deep learning alone, and (iii) the proposed hybrid approach. While deep learning achieved the highest overall performance (IoU: 0.832, Accuracy: 0.875), the hybrid approach showed competitive results (IoU: 0.823, Accuracy: 0.870). Importantly, qualitative analysis revealed that the proposed hybrid approach outperformed deep learning in challenging scenarios such as roads adjacent to vehicular traffic and narrow pathways. These findings demonstrate the potential of the proposed approach for robust autonomous navigation in complex urban environments, advancing applications in service robotics.
    Keywords: Robot vision, Region segmentation, GBS, Autonomous travel, Deep learning
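
    As a rough illustration of the hybrid drivable-area estimation described above, the sketch below (Python, using scikit-image's Felzenszwalb graph-based segmentation as a stand-in for GBS) keeps a segment as drivable when an assumed deep-learning road-probability map agrees with it; the fusion rule and parameter values are assumptions, not the paper's.

    # Minimal sketch of a GBS + deep-learning fusion for drivable-area estimation.
    # Assumptions (not from the paper): the DL road mask is a per-pixel probability
    # map, and a segment is kept when its mean probability exceeds a threshold.
    import numpy as np
    from skimage.segmentation import felzenszwalb

    def drivable_area(rgb, road_prob, scale=100, sigma=0.8, min_size=200, thr=0.5):
        """rgb: HxWx3 uint8 image; road_prob: HxW float map in [0, 1]."""
        segments = felzenszwalb(rgb, scale=scale, sigma=sigma, min_size=min_size)
        mask = np.zeros(road_prob.shape, dtype=bool)
        for label in np.unique(segments):
            region = segments == label
            if road_prob[region].mean() > thr:   # fuse DL evidence per segment
                mask[region] = True
        return mask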

  • Development of a Navigation System Using MY VISION for Visually Impaired People -A Method to Guide the Direction of Travel 査読有り 国際誌

    Yuki Koike, Yui Tanjo

    Proceedings of IEEE International Conference on Systems, Man, and Cybernetics ( IEEE )   1 - 6   2024年10月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Sarawak, Malaysia   October 6 - 10, 2024

    The number of visually impaired people in Japan as of 2016 was 312,000, and 70.4% of them have experienced accidents such as falls or collisions while going out. In addition, while a white walking cane is commonly used to move around when going out, there is a problem of not being able to recognize a wide range of areas. Many studies have been conducted to support the movement of visually impaired people using camera images, and many methods have been proposed to support walking on sidewalks and crossing intersections. However, when considering walking on a sidewalk, there are cases where the sidewalk is broken by a side street intersecting the roadway, and few studies have focused on such a break in the sidewalk. In many cases, there are no traffic signals or pedestrian crossings at such breaks in the sidewalk, making them dangerous places for people with vision difficulties. In this study, we propose a walking assistance method at a break in the sidewalk using images obtained by MY VISION (a Magic eYe of a Visually Impaired for Safety and Independent actiON) and deep learning. MY VISION is a system that analyzes videos obtained from a camera attached to the user's body and provides useful visual information, functioning as a virtual eye for visually impaired people. The proposed method provides a model for recognizing the sidewalk environment and a model for guiding a user to the center of the sidewalk in order to guide the visually impaired people safely. Experiments were conducted to verify the accuracy of each model, and the effectiveness of the proposed method was shown.
    Keywords— MY VISION, Visually Impaired People, Object Detection, Direction, Multi-Class Classification, EfficientNet-B4
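
    The guidance model described above is a multi-class classifier built on EfficientNet-B4. A minimal transfer-learning sketch follows; the three steering classes and the preprocessing values are illustrative assumptions, not the paper's configuration.

    # Sketch of an EfficientNet-B4 multi-class guidance classifier (class names assumed).
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    CLASSES = ["veer_left", "keep_center", "veer_right"]   # assumed labels

    model = models.efficientnet_b4(weights=models.EfficientNet_B4_Weights.DEFAULT)
    model.classifier[1] = nn.Linear(model.classifier[1].in_features, len(CLASSES))

    preprocess = transforms.Compose([
        transforms.Resize((380, 380)),          # EfficientNet-B4 input size
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    def guide(pil_image):
        """Return the predicted guidance class for one frame."""
        x = preprocess(pil_image).unsqueeze(0)
        with torch.no_grad():
            logits = model(x)
        return CLASSES[int(logits.argmax(dim=1))]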

  • High-Accuracy human motion recognition independent of motion direction using a single camera 査読有り 国際誌

    Jing Cao, Yui Tanjo

    International Journal of Innovative Computing, Information and Control ( ICIC International )   20 ( 4 )   1093 - 1103   2024年08月

    Role: Corresponding author   Language: English   Type: Research paper (scientific journal)

    DOI: 10.24507/ijicic.20.04.1093

    その他リンク: http://www.ijicic.net/

  • A METHOD OF RECOGNIZING HUMAN WALK MOTION FROM MULTIPLE DIRECTIONS 査読有り 国際誌

    Miki Ooba, Yui Tanjo

    International Journal of Innovative Computing, Information and Control ( ICIC International )   20 ( 4 )   1245 - 1256   2024年08月

    Role: Corresponding author   Language: English   Type: Research paper (scientific journal)

    It is an important issue for elderly people to maintain their walking ability in order to prevent falls and lead a healthy daily life. It may be of great help to develop a computer vision system which examines human daily walk posture and gives him/her advice on the posture. This paper proposes a method of recognizing walk motions of a human focusing on his/her posture. In order to describe a human posture, we choose 39 structural features defined from human joint coordinates obtained using OpenPose and 19 figural features from human domain images and their difference images. The feature vector containing these 58 features is used for recognizing a human walk motion by Random Forest. In the experiment, the method was applied to three walk motions of five persons, each with eight walk directions, and satisfactory results were obtained.
    Keywords: OpenPose, Image of human area, Human walk motion, Structural feature, Figural feature

    DOI: 10.24507/ijicic.20.04.1245

    その他リンク: http://www.ijicic.net/
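
    A minimal sketch of the posture-feature + Random Forest pipeline of this entry is shown below; the joint indices follow OpenPose's BODY_25 layout, but the particular structural features here are illustrative, not the paper's 58-dimensional vector.

    # Sketch: structural features from OpenPose joints fed to a Random Forest.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def structural_features(joints):
        """joints: (25, 2) array of OpenPose BODY_25 keypoints for one frame."""
        neck, hip = joints[1], joints[8]
        torso = hip - neck
        trunk_tilt = np.arctan2(torso[0], torso[1])          # lean of the trunk
        stride = np.linalg.norm(joints[11] - joints[14])     # ankle-to-ankle distance
        knee_gap = np.linalg.norm(joints[10] - joints[13])
        return np.array([trunk_tilt, stride, knee_gap])

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    # clf.fit(np.vstack([structural_features(j) for j in train_joints]), y_train)
    # pred = clf.predict(structural_features(test_joints).reshape(1, -1))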

  • Analysis of Human Walking Posture Using a Wearable Camera 査読有り 国際誌

    Ziyue Liu, Joo Kooi Tan

    International Journal of Innovative Computing, Information and Control ( ICIC International )   19 ( 3 )   805 - 819   2023年05月

    Role: Corresponding author   Language: English   Type: Research paper (scientific journal)

    In recent years, human posture has received widespread attention in the field of health promotion, and walking is one of the most popular exercises for keeping daily health. With respect to the walking posture of a person especially of the elderly, small changes in the posture have a great impact on their spine and lower limbs, resulting in changes in balance function. According to this fact, it is necessary to check human walking posture periodically. For this purpose, a simple, automatic way of examining a human walking posture is expected. This paper proposes a method of analyzing human walking motion: Two kinds of indicators about human walking posture are proposed which estimate posture changes of a camera worn by a person through changing scenery observed from the camera. In this method, AKAZE is applied to images to detect feature points and find their correspondences. A 5-point algorithm is used to estimate epipolar geometric constraints and provides an essential matrix for relative camera motion. Changes in the relative motion of the camera are used to analyze the changes of human walking posture. In the experiment, the characteristics of four types of walking postures (one is normal and the others are biased) were analyzed using the human posture indicators, and the reliability of the proposed method was verified.

    DOI: 10.24507/ijicic.19.03.805

    Scopus

    CiNii Research

    その他リンク: http://www.ijicic.net/
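
    The camera-motion step described above (AKAZE matching followed by the 5-point algorithm) can be sketched with OpenCV as follows; the camera matrix K is assumed to be known from calibration.

    # Sketch: AKAZE feature matching, essential matrix (5-point), pose recovery.
    import cv2
    import numpy as np

    def relative_motion(img1, img2, K):
        akaze = cv2.AKAZE_create()
        k1, d1 = akaze.detectAndCompute(img1, None)
        k2, d2 = akaze.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(d1, d2)
        p1 = np.float32([k1[m.queryIdx].pt for m in matches])
        p2 = np.float32([k2[m.trainIdx].pt for m in matches])
        E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
        return R, t   # relative rotation and unit-scale translation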

  • Detecting Approaching Human Hands in a Human-Robot Coexistent Food Preparation Work Area for Preventing Collision 査読有り 国際誌

    Joo Kooi Tan, Takaaki Yotsumoto

    Journal of Robotics, Networking and Artificial Life   9 ( 3 )   289 - 294   2022年12月

    Role: Lead author   Language: English   Type: Research paper (scientific journal)

    In Japan, the population of working age between 15 and 64 years old peaked in 1995 at about 87 million and is expected to continue to decline in the future. Therefore, to solve the labor shortage, the introduction of industrial robots that can perform the same level of work as humans is strongly requested especially in the food preparation industry. In order to prevent danger to workers there, industrial robots must recognize workers and avoid them when there is fear of collision. In this paper, we propose a method for extracting hand regions based on the color distribution of a hand and GrabCut in an experimental environment to recognize human hands and detect their directions of approach. The proposed method was examined experimentally and gave satisfactory results.

    Kyutacar
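
    A compact sketch of a color-seeded GrabCut hand extraction in the spirit of this entry is given below; the HSV skin range is an assumption and does not reproduce the paper's color-distribution model.

    # Sketch: seed GrabCut with a rough skin-color mask to obtain a hand region.
    import cv2
    import numpy as np

    def extract_hand(bgr):
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        skin = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))      # rough skin mask
        mask = np.where(skin > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
        bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
        cv2.grabCut(bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
        hand = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8) * 255
        return hand   # binary hand-region mask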

  • A Method of Navigating a Visually Impaired Person Using MY VISION 査読有り 国際誌

    Joo Kooi Tan, Kohei Kitagawa

    Journal of Robotics, Networking and Artificial Life ( ATLANTIS PRESS )   9 ( 1 )   25 - 30   2022年06月

    Role: Lead author   Language: English   Type: Research paper (scientific journal)

    This paper describes a method of navigating a visually impaired person who walks on a sidewalk and goes through a pedestrian crossing by the use of an ego-camera system named MY VISION. The proposed method finds a walkable region on the sidewalk. Thus, at a pedestrian crossing, it finds a traffic light and judges an appropriate timing to cross the road by analyzing the color and flashing status of the signal. The walkable region is found employing Graph- Based Segmentation (GBS), and once a pedestrian crossing is reached through the use of a GPS signal, a traffic light is detected by use of the HOG feature and Random Forest. The effectiveness of the proposed method was verified by experiments. This implies usefulness of a MY VISION system.

    Kyutacar
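
    The traffic-light detection stage described above combines HOG features with a Random Forest. A minimal sketch of such a classifier is shown below; the window size and the class labels are assumptions.

    # Sketch: HOG features of grayscale candidate patches classified by a Random Forest.
    import numpy as np
    from skimage.feature import hog
    from skimage.transform import resize
    from sklearn.ensemble import RandomForestClassifier

    WIN = (32, 32)   # assumed window size for a signal-lamp patch

    def hog_vector(gray_patch):
        return hog(resize(gray_patch, WIN), orientations=9,
                   pixels_per_cell=(8, 8), cells_per_block=(2, 2))

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    # clf.fit(np.array([hog_vector(p) for p in patches]), labels)  # 0: none, 1: red, 2: green
    # label = clf.predict(hog_vector(candidate).reshape(1, -1))[0]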

  • A Method of Identifying a Public Bus Route Number Employing MY VISION 査読有り 国際誌

    Joo Kooi Tan, Yosiki Hamasaki, Ye Zhou, Ishitobi Kazuma

    Journal of Robotics, Networking and Artificial Life ( ATLANTIS PRESS )   8 ( 3 )   224 - 228   2021年12月

    Role: Lead author   Language: English   Type: Research paper (scientific journal)

    In response to requests from visually impaired people for better assistance tools in their daily lives and due to the difficulty of using public transportation, it is proposed a bus boarding support system using MY VISION. For the support, a method of identifying a bus route number is proposed in this paper. A bus approaching a bus stop is detected from MY VISION images that utilizes the Lucas-Kanade tracker. A bus route number area is then acquired with the help of the Haar-like filters and the random forest. Finally, the number from the area is extracted and identified through pattern matching. The effectiveness of the proposed method was shown by experiments. The results indicate that it is possible to realize a practical boarding support system for the visually impaired.

    DOI: 10.2991/jrnal.k.210922.014

    Kyutacar

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85120943766&origin=inward
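
    The approaching bus is followed between frames with the pyramidal Lucas-Kanade tracker. The following OpenCV sketch shows that tracking step only; the corner-detection and window parameters are assumptions.

    # Sketch: track corner points from one grayscale frame to the next with pyramidal LK.
    import cv2
    import numpy as np

    def track_points(prev_gray, next_gray):
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, next_gray, pts, None,
            winSize=(21, 21), maxLevel=3,
            criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
        good = status.reshape(-1) == 1
        return pts.reshape(-1, 2)[good], nxt.reshape(-1, 2)[good]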

  • Detection of a Fallen Person and its head and lower body from aerial images 査読有り 国際誌

    Joo Kooi Tan, Haruka Egawa

    Journal of Robotics, Networking and Artificial Life ( ATLANTIS PRESS )   8 ( 2 )   134 - 138   2021年09月

    Role: Lead author   Language: English   Type: Research paper (scientific journal)

    This paper proposes a method of detecting a person fallen on the ground and its head and lower body from aerial images. The study intends to automate discovering victims of disasters such as earthquakes from aerial images taken by a UAV. Rotation-invariant HOG and rotation-invariant LBP are employed as features describing a fallen person so as to detect it regardless of its body orientation. The proposed method also detects the head and the lower body of a fallen person using the peak of the gradient histogram. Experimental results show satisfactory performance of the proposed method.

    DOI: 10.2991/jrnal.k.210713.013

    Kyutacar

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85120707754&origin=inward
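
    A rotation-invariant LBP descriptor of the kind referred to above can be sketched with scikit-image's 'ror' variant, used here as a stand-in for the paper's Ri-LBP.

    # Sketch: rotation-invariant LBP histogram of a grayscale patch.
    import numpy as np
    from skimage.feature import local_binary_pattern

    def ri_lbp_histogram(gray, P=8, R=1):
        """gray: 2-D image patch; returns a normalized rotation-invariant LBP histogram."""
        codes = local_binary_pattern(gray, P, R, method="ror")
        hist, _ = np.histogram(codes, bins=np.arange(2 ** P + 1), density=True)
        return hist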

  • Estimation of Self-Posture of a Pedestrian Using MY VISION Based on Depth and Motion Network 査読有り 国際誌

    Joo Kooi Tan, Tomoyuki Kurosaki

    Journal of Robotics, Networking and Artificial Life ( ATLANTIS PRESS )   7 ( 3 )   152 - 155   2020年12月

    Role: Lead author   Language: English   Type: Research paper (scientific journal)

    A system is proposed that performs gait analysis of a pedestrian to improve a walk posture and at the same time to prevent fall. In the system, a user walks with a chest-mounted camera. His/her walking posture is estimated using a pair of images obtained from the camera. Normally it is difficult to estimate the camera movement, when the parallax of the image pair is small. Therefore, the system uses a convolutional neural network. Optical flow and camera movement, and depth images are estimated alternately. Satisfactory results were obtained experimentally.

    Keywords: Posture, Posture analysis, Pedestrian, MY VISION, Depth and Motion Network.

    DOI: 10.2991/jrnal.k.200909.002

    Kyutacar

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85097097973&origin=inward

  • Human-robot Cooperation Based on Visual Communication 査読有り 国際誌

    Joo Kooi Tan, Akitoshi Sato

    International Journal of Innovative Computing, Information and Control ( ICIC International )   16 ( 2 )   543 - 554   April 2020

    Role: Lead author   Language: English   Type: Research paper (scientific journal)

    This paper proposes a human-robot cooperative system supported by visual communication. Human-robot cooperation is expected in various fields nowadays in order to raise efficiency, accuracy and safety of work, to name a few, production of goods, logistics, car driving, care of aged people, etc. In the human-robot cooperation, it is desirable that a human takes less part of work; whereas a robot takes a larger role with it. The idea of the present paper is that visual decision making is done by a human rather than a robot. The proposed system goes to a specified remote spot autonomously and performs objects acquisition there by communicating visually with a user. It aims at realizing ‘a shopping robot’ in near future and expected to be used by disadvantaged shoppers. The proposed system is presented and some experimental results are shown.

    DOI: 10.24507/ijicic.16.02.543

    Kyutacar

    Scopus

    CiNii Article

    CiNii Research

    その他リンク: https://www.scopus.com/record/display.uri?eid=2-s2.0-85082019101&origin=resultslist&sort=plf-f

  • A Method of Describing a Self-occlusive Motion - A Reverse Motion History Image 査読有り

    Joo Kooi Tan, Sayaka Okae, Youtaro Yamashita, Yuta Ono

    International Journal of Biomedical Soft Computing and Human Sciences   24 ( 1 )   1 - 7   2019年07月

    Role: Lead author   Language: English   Type: Research paper (scientific journal)

    Kyutacar

  • Walk Environment Analysis Using MY VISION: Toward a Navigation System Providing Visual Assistance 査読有り 国際誌

    Joo Kooi Tan, Tomoki Ishimine and Shohei Arimasu

    International Journal of Innovative Computing, Information and Control   15 ( 3 )   861 - 871   2019年06月

    Role: Corresponding author   Language: English   Type: Research paper (scientific journal)

    Kyutacar

  • Human motion recognition using TMRIs with extended HOOF 査読有り 国際誌

    Jing Cao, Youtaro Yamashita, Joo Kooi Tan

    Journal of Robotics, Networking and Artificial Life ( ATLANTIS PRESS )   7 ( 4 )   231 - 235   2021年03月

    Role: Corresponding author   Language: English   Type: Research paper (scientific journal)

    In recent years, research on computer vision has shown great advancement and has been applied to a wide range of fields. Among them, automatic recognition of human motion is an important technology especially in crime prevention and elderly watching systems. Considering this trend, the paper proposes a novel method of human motion description and recognition employing a MHI (Motion History Image)-based method called TMRIs (Triplet Motion Representation Images) and an extended feature descriptor called HOOF (Histograms of Oriented Optical Flow) which contains information on the direction and velocity of movement. One of the advantages of the proposed method over existent methods is that it solves a self-occlusive motion problem particularly in the depth direction which occurs when a single camera is used. The performance and effectiveness of the proposed method are verified by experiments.

    Keywords: Human motion; Description; Recognition; Elderly care; crime prevention; MHI; Triplet Motion Representation Images; HOOF.

    DOI: 10.2991/jrnal.k.201215.004

    Kyutacar

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85103065635&origin=inward
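
    TMRIs are built on the Motion History Image idea. The sketch below computes a plain MHI with NumPy; the decay duration and difference threshold are assumed values.

    # Sketch: per-pixel timestamps of the most recent motion (Motion History Image).
    import numpy as np

    def update_mhi(mhi, prev_gray, curr_gray, timestamp, duration=1.0, diff_thr=30):
        """mhi: float32 array of per-pixel timestamps of the latest motion."""
        motion = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16)) > diff_thr
        mhi = mhi.copy()
        mhi[motion] = timestamp                      # stamp freshly moving pixels
        mhi[mhi < timestamp - duration] = 0          # let old motion fade out
        return mhi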

  • Detecting Pedestrians and Their Walk Directions Using a MY VISION System 査読有り 国際誌

    Joo Kooi Tan, Hori Kenta, Seiji Ishikawa

    Journal of Robotics, Networking and Artificial Life ( ATLANTIS PRESS )   7 ( 2 )   137 - 141   2020年09月

    Role: Lead author   Language: English   Type: Research paper (scientific journal)

    This paper proposes a pedestrian detection method using a MY VISION system. The MY VISION system is an image processing system employing an ego-camera which a user of the system possesses, and it is expected to be the third eye of those who are aged, visually impaired or even those who are absorbed in a mobile-phone while walking. In the proposed method, a flow area different from camera movement is extracted first on the image provided from a user’s head-mounted camera. The area is a candidate area where pedestrians may exist. MSC-HOG features are then calculated to detect pedestrians in the area. HOF feature is further computed to recognize the moving direction of the detected pedestrians. The proposed method was examined its performance experimentally and satisfactory results were obtained.

    Keywords: MY VISION, Ego-cameras, Optical flow, MSC-HOG, HOF, Pedestrian detection, Walk direction.

    DOI: 10.2991/jrnal.k.200528.014

    Kyutacar

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85097079744&origin=inward

  • Development of an Autonomous Mobile Robot Doing Self-Position Estimation and Road Region Search 査読有り 国際誌

    Naokazu Iwata, JooKooi Tan

    Journal of Advances in Artificial Life Robotics   1 ( 2 )   99 - 103   2020年08月

    Role: Corresponding author   Language: English   Type: Research paper (scientific journal)

    This paper proposes an autonomous mobile robot expected to realize an intelligent robot that supports human life. The proposed mobile robot has three main functions: self-position estimation, road region estimation, and route planning. Self-position estimation is performed by comparing local features obtained from the frontal images captured by the robot to a knowledge base. In road region estimation, the frontal image is separated into several regions to find the region where the robot can move. Furthermore, the route to the destination is planned by graph search. Experimental results show satisfactory performance of the proposed mobile robot.

    Kyutacar
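
    The route-planning function mentioned above is a graph search. A generic Dijkstra sketch over a checkpoint graph is given below; the node names and edge costs are illustrative only.

    # Sketch: shortest route between checkpoints by Dijkstra's algorithm.
    import heapq

    def shortest_route(graph, start, goal):
        """graph: {node: [(neighbor, cost), ...]}; returns the list of nodes to visit."""
        queue, seen = [(0, start, [start])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            for nxt, w in graph.get(node, []):
                if nxt not in seen:
                    heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
        return None

    # route = shortest_route({"gate": [("lab", 3), ("cafe", 5)], "lab": [("cafe", 1)]}, "gate", "cafe")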

  • Human Motion Recognition from Multiple Directions and Its Gait Cycles Analysis 査読有り 国際誌

    Miki Ooba, Yui Tanjo

    The 2024 International Conference on Artificial Life and Robotics ( Robot Artificial Life Society )   959 - 963   2024年02月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Oita, Japan   February 22 - 25, 2024

    It is crucial for individuals to keep walking and stay healthy to prevent receiving nursing care. This paper proposes a method of recognizing walk motions and analyzing the gait cycle of a human focusing on his/her posture. We use 43 structural features defined from human joint coordinates obtained using OpenPose and 18 figural features from human domain images and their difference images. The feature vector containing these 61 features is used for the recognition of walk motion by Random Forest. In the experiment, we applied the method to recognizing six types of motions and analyzed the walk gait cycles of five persons, and obtained satisfactory results.

  • Human Behavior Segmentation and Recognition Using a Single-camera 査読有り 国際誌

    Jing Cao, Yui Tanjo

    The 2024 International Conference on Artificial Life and Robotics ( Robot Artificial Life Society )   969 - 973   2024年02月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Oita, Japan   February 22 - 25, 2024

    In recent years, elderly people living alone account for a large proportion of the elderly population, and the issue of safety has also been a matter of great concern for the public. Considering the importance of monitoring the behavior and activities of the elderly and detecting abnormal movements, this paper proposes a method that can segment human behavior into each action and identify the action from the videos taken by a single camera. It uses features that can represent the shape of the human area in the depth direction, as well as the features such as motion direction and speed. The performance and effectiveness of the method are verified by experiments. Keywords: Behavior segmentation, Motion recognition, Optical flows, TMRIs, Ex-HOOF, MHI

  • A Method of Recognizing Body Movements Based on a Self-viewpoint Video 査読有り 国際誌

    Iichirou Moribe, Yui Tanjo

    The 2024 International Conference on Artificial Life and Robotics ( Robot Artificial Life Society )   979 - 982   2024年02月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Oita, Japan   February 22 - 25, 2024

    The most critical human sensory function resides in vision. This paper focuses on utilizing visual information, specifically self-perspective footage, to identify individual movements. Existing researches require third-party filming to recognize human body movements and states. The proposed method, on the other hand, simply attaches a camera to the human head and enables the recognition of the subject's actions. Consequently, it becomes easier to monitor daily movements of a human and gather his/her data on body kinetics. This approach would be beneficial in scenarios involving individuals engaging in risky behavior or, during a certain emergency, providing valuable assistance. Keywords: My VISION, Posture estimation, Optical flow, HSV conversion

  • Supporting Safe Walk of a Visually Impaired Person at a Station Platform Based on MY VISION 査読有り 国際誌

    Shintaro Yamada, Yui Tanjo, Seiji Ishikawa

    The 2024 International Conference on Artificial Life and Robotics ( Robot Artificial Life Society )   983 - 986   2024年02月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Oita, Japan   February 22 - 25, 2024

    When individuals with visual impairment go out, public transportation such as trains and buses is commonly used. However, many of them experience accidents, such as falling from train platforms or tripping due to unexpected contact with other passengers. To solve this problem, we propose a method using the MY VISION system which detects obstacles that may pose risks to individuals with visual impairment. The proposed method detects obstacles such as passengers’ pillars and platform edges at train stations. We employ an RGB-D camera for capturing frontal views of a user, use depth images to detect the edge of obstacles and level differences, and give warning to the visually impaired user based on the distance between him/her and the detected obstacle. Experimental results show satisfactory performance of the method.

    Keywords: Visually impaired, Safety, Train station platform, Obstacles detection, MY VISION
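
    A minimal sketch of the depth-based obstacle check described above follows; the warning distance and the use of Canny on the normalized depth image are assumptions, not the paper's exact procedure.

    # Sketch: detect depth discontinuities and warn when the nearest one is too close.
    import cv2
    import numpy as np

    def obstacle_warning(depth_m, warn_dist=1.5):
        """depth_m: HxW depth image in meters from an RGB-D camera."""
        depth_u8 = cv2.normalize(depth_m, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        edges = cv2.Canny(depth_u8, 50, 150)          # depth discontinuities / level differences
        edge_depths = depth_m[(edges > 0) & (depth_m > 0)]
        if edge_depths.size and edge_depths.min() < warn_dist:
            return True, float(edge_depths.min())     # warn with the nearest edge distance
        return False, None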

  • 3D Point Cloud Registration and Segmentation of Reflective Metal Objects Using Go-ICP and Improved RANSAC 査読有り 国際誌

    Kairi Morita, Ziyue Liu, Jing Cao, Joo Kooi Tan

    The 2023 International Conference on Artificial Life and Robotics ( Robot Artificial Life Society )   880 - 885   2023年02月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Oita, Japan   February 9 - 12, 2023

    Registration and segmentation of 3D data are necessary in many fields, such as factory automation, automated driving, or even in the medical field. However, the technique is generally applied to non-metal objects. One of the problems of registration of a metal object is that the point clouds representing a metal object contain many outliers and missing points because of its reflective nature. This makes the accuracy of the registration and segmentation degrade. In this paper, we propose registration and segmentation techniques that are robust to outliers. For registration, we use the globally optimal Go-ICP (Global optimal - Iterative Closest Point) algorithm considering the goodness of a combination of point cloud sets to escape from convergence to a local solution. In segmentation, we address the problem of RANSAC generating false segments consisting of nearly identical multiple planar points and introduce an improved RANSAC. We use three kinds of the metal tray to show the effectiveness of the proposed technique.
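
    The two steps of this entry can be sketched with Open3D as below; note that Open3D's local point-to-point ICP and plain RANSAC plane fitting are used here only as stand-ins for the globally optimal Go-ICP and the paper's improved RANSAC.

    # Sketch: point-cloud registration (local ICP) and RANSAC plane segmentation with Open3D.
    import open3d as o3d

    def register_and_segment(source, target, voxel=0.005):
        src = source.voxel_down_sample(voxel)
        tgt = target.voxel_down_sample(voxel)
        reg = o3d.pipelines.registration.registration_icp(
            src, tgt, max_correspondence_distance=5 * voxel,
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
        plane, inliers = tgt.segment_plane(distance_threshold=0.002,
                                           ransac_n=3, num_iterations=1000)
        return reg.transformation, plane, inliers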

  • Detection of Fallen Persons and Person Shadows from Drone Images 査読有り 国際誌

    Taisei Ono, Haruka Egawa, Yuta Ono, Seiji Ishikawa, Joo Kooi Tan

    The 2023 International Conference on Artificial Life and Robotics ( Robot Artificial Life Society )   890 - 894   2023年02月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Oita, Japan   February 9 - 12, 2023

    In recent years, the development of automatic search methods based on aerial images taken by drones has been attracting attention in order to prevent secondary disasters and to perform rescue operations quickly in the search for victims of natural disasters. Although various methods exist for automatic person detection for search, they are based on the assumption that the background area of a person captured by a drone camera is a uniform ground in which only those persons who require rescue exist without any shadows or trees. In this paper, we propose a method of automatic detection of both fallen persons and person shadows, or trees on the ground. The method is combination of Ri-HOG and Ri-LBP features to search for fallen persons. These features are robust to rotation. We then employ GrabCut and brightness values to detect shadows. The effectiveness of the proposed method was verified by experiment.

    Kyutacar

  • Development of a Safe Walking Assistance System for Visually Impaired Persons Using MY VISION ― Estimation of a Safe Passage from Sidewalk Information Based on Transfer Learning of VGG-16 Network 査読有り 国際誌

    Takumi Yokote, Joo Kooi Tan

    The 2023 International Conference on Artificial Life and Robotics ( Robot Artificial Life Society )   886 - 889   2023年02月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Oita, Japan   February 9 - 12, 2023

    In recent years, the number of visually impaired persons has been increasing year by year, and outdoor accidents have also been increasing when they go out. It is difficult to detect hazards on sidewalks even with a currently popular technique, such as a semantic segmentation technique or YOLO, because sidewalk situations are complicated and change frequently. For this reason, we propose a method of recognizing sidewalk situations from a self-viewpoint video called MY VISION. Conventional methods detect objects surrounding the sidewalk by learning the objects’ features beforehand and guiding visually impaired persons according to the position/direction of the detected object. The proposed method neither learns objects nor detects objects. We focus on sidewalk situations and use a multi-class classification technique based on transfer learning of VGG-16 to guide visually impaired persons’ walk according to three kinds of sidewalk information to ensure more safety. The effectiveness of the proposed method was confirmed by experiments.

    Kyutacar
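
    The classifier described above is transfer learning of VGG-16 to a small number of sidewalk-information classes. A minimal sketch follows; the three class labels are placeholders for the paper's categories.

    # Sketch: freeze VGG-16 features and retrain only the final layer for 3 sidewalk classes.
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 3   # assumed: e.g. "clear", "obstacle ahead", "edge of sidewalk"

    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for p in model.features.parameters():
        p.requires_grad = False                      # freeze convolutional layers
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)
    # Fine-tune only the classifier head on labeled MY VISION frames.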

  • Recognition of Human Walking Motion Using a Wearable Camera 査読有り 国際誌

    Ziyue Liu, Tomoyuki Kurosaki, Joo Kooi Tan

    ACM Proceedings of the 2022 International Conference on Computing and Pattern Recognition ( Association for Computing Machinery(ACM) )   7 - 13   2022年10月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Beijing, China   November 17 - 19, 2022

  • Supporting Safe Walk of a Visually Impaired Person at a Railway Station Platform Based on MY VISION 査読有り 国際誌

    Yuki Kawaguchi, Seiji Ishikawa, Takashi Shinomiya, Joo Kooi Tan

    Proc. of The 2022 International Conference on Artificial Life and Robotics (ICAROB2022) ( Artificial Life and Robotics (ICAROB) )   1 - 4   2022年01月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Online (Japan)   January 21 - 23, 2022

    For a visually impaired person, platforms of railway station are places where there is a risk of falling. This paper proposes a system that prevents fall of a visually impaired person from the platform by use of self-viewpoint images provided from MY VISION, an ego camera system attached to a user. The edges of the platform are detected by region segmentation and line segment detection on the self-viewpoint images. The performance of the proposed system was experimentally examined and satisfactory results were obtained.

    Kyutacar

  • Detecting a Pedestrian’s Walk Direction Using MY VISION for Supporting Safe Walk of a Visually Impaired Person 査読有り 国際誌

    Shinya Iizumi, Joo Kooi Tan, others

    Proceedings of the 2022 International Conference on Artificial Life and Robotics   1 - 4   2022年01月

    Role: Corresponding author   Language: English   Type: Research paper (scientific journal)

    In this paper, we propose a method of recognizing multiple objects using MSC-HOG (Multiple-Scale-Cell Histograms of Oriented Gradients) features and intensity models of both pedestrians and bicyclists. We also propose a method of detecting approaching passersby using different discriminators without using time-series information such as Optical Flow. The effectiveness of the proposed method is verified by experiments.

  • A Method of Navigating a Visually Impaired Person Using MY VISION 査読有り 国際誌

    Tan Joo Kooi, Kitagawa Kohei

    Journal of Robotics, Networking and Artificial Life ( ALife Robotics Co., Ltd. )   9 ( 1 )   25 - 30   January 2022

    Role: Lead author, corresponding author   Language: English   Type: Research paper (scientific journal)

    This paper describes a method of navigating a visually impaired person who walks on a sidewalk and goes through a pedestrian crossing by the use of an ego-camera system named MY VISION. The proposed method finds a walkable region on the sidewalk. Thus, at a pedestrian crossing, it finds a traffic light and judges an appropriate timing to cross the road by analyzing the color and flashing status of the signal. The walkable region is found employing Graph- Based Segmentation (GBS), and once a pedestrian crossing is reached through the use of a GPS signal, a traffic light is detected by use of the HOG feature and Random Forest. The effectiveness of the proposed method was verified by experiments. This implies usefulness of a MY VISION system

    DOI: 10.57417/jrnal.9.1_25

    Scopus

    CiNii Research

    その他リンク: https://kyutech.repo.nii.ac.jp/records/7705

  • 3-D recovery of a non-rigid object from a single camera view by piecewise recovery and synthesis 査読有り

    Ishikawa S., Tan J. K., Kim H., Ishikawa S.

    Proc. of the 21st Int. Conf. on Pattern Recognition (ICPR2012)   1443 - 1446   2012年11月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Tsukuba, Japan   November 11 - 15, 2012

  • Obstacles extraction using a moving camera 査読有り

    Qian S., Tan J. K., Kim H., Ishikawa S.

    Proc. of Asian Conf. on Computer Vision (ACCV2012)   USB proceedings (12 pages)   November 2012

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Daejeon, Korea   November 5 - 9, 2012

    DOI: 10.1007/978-3-642-37484-5_36

    Scopus

  • Motion history image: Its variants and applications 査読有り

    Ahad M. A. R., Tan J. K., Kim H., Ishikawa S.

    Machine Vision and Applications ( Springer )   23 ( 2 )   255 - 281   March 2012

    Role: Corresponding author   Language: English   Type: Research paper (scientific journal)

    DOI: 10.1007/s00138-010-0298-4

    Scopus

  • Moving objects segmentation at a traffic junction from vehicular vision 査読有り 国際誌

    Joo Kooi Tan, Seiji Ishikawa, Shin-ichiro Sonoda, Makoto Miyoshi, Takashi Morie

    ECTI Transactions on Computer and Information Technology   5 ( 2 )   73 - 88   November 2011

    Role: Corresponding author   Language: English   Type: Research paper (scientific journal)

    Kyutacar

  • Development of a Safe Walking Assistance System for Visually Impaired Persons Using MY VISION ― Estimation of a Safe Passage from Sidewalk Information Based on Transfer Learning of VGG-16 Network 査読有り

    Yokote T., Tan J.K.

    Proceedings of International Conference on Artificial Life and Robotics   894 - 898   2023年01月

    Role: Last author, corresponding author   Language: English   Type: Research paper (international conference proceedings)

    In recent years, the number of visually impaired persons has been increasing year by year, and outdoor accidents have also been increasing when they go out. It is difficult to detect hazards on sidewalks even with a currently popular technique, such as a semantic segmentation technique or YOLO, because sidewalk situations are complicated and change frequently. For this reason, we propose a method of recognizing sidewalk situations from a self-viewpoint video called MY VISION. Conventional methods detect objects surrounding the sidewalk by learning the objects’ features beforehand and guiding visually impaired persons according to the position/direction of the detected object. The proposed method neither learns objects nor detects objects. We focus on sidewalk situations and use a multi-class classification technique based on transfer learning of VGG-16 to guide visually impaired persons’ walk according to three kinds of sidewalk information to ensure more safety. The effectiveness of the proposed method was confirmed by experiments.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85149473680&origin=inward

  • Detection of Fallen Persons and Person Shadows from Drone Images 査読有り

    Ono T., Egawa H., Ono Y., Ishikawa S., Tan J.K.

    Proceedings of International Conference on Artificial Life and Robotics   899 - 903   2023年01月

    Role: Last author, corresponding author   Language: English   Type: Research paper (international conference proceedings)

    In recent years, the development of automatic search methods based on aerial images taken by drones has been attracting attention in order to prevent secondary disasters and to perform rescue operations quickly in the search for victims of natural disasters. Although various methods exist for automatic person detection for search, they are based on the assumption that the background area of a person captured by a drone camera is a uniform ground in which only those persons who require rescue exist without any shadows or trees. In this paper, we propose a method of automatic detection of both fallen persons and person shadows, or trees on the ground. The method is combination of Ri-HOG and Ri-LBP features to search for fallen persons. These features are robust to rotation. We then employ GrabCut and brightness values to detect shadows. The effectiveness of the proposed method was verified by experiment.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85149470347&origin=inward

  • 3D Point Cloud Registration and Segmentation of Reflective Metal Objects Using Go-ICP and Improved RANSAC 査読有り 国際誌

    Morita K., Liu Z., Cao J., Ishikawa S., Nitta M., Tan J.K.

    Proceedings of International Conference on Artificial Life and Robotics   888 - 893   2023年01月

    Role: Last author, corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Registration and segmentation of 3D data are necessary in many fields, such as factory automation, automated driving, or even in the medical field. However, the technique is generally applied to non-metal objects. One of the problems of registration of a metal object is that the point clouds representing a metal object contain many outliers and missing points because of its reflective nature. This makes the accuracy of the registration and segmentation degrade. In this paper, we propose registration and segmentation techniques that are robust to outliers. For registration, we use the globally optimal Go-ICP (Global optimal-Iterative Closest Point) algorithm considering the goodness of a combination of point cloud sets to escape from convergence to a local solution. In segmentation, we address the problem of RANSAC generating false segments consisting of nearly identical multiple planar points and introduce an improved RANSAC. We use three kinds of the metal tray to show the effectiveness of the proposed technique.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85149454546&origin=inward

  • Recognition of Human Walking Motion Using a Wearable Camera 査読有り

    Liu Z., Kurosaki T., Tan J.K.

    ACM International Conference Proceeding Series   7 - 13   2022年11月

    Role: Last author, corresponding author   Language: English   Type: Research paper (international conference proceedings)

    In recent years, the computer vision technology has been attracting more attention than ever and being applied in a wide range of fields. Among them, the technology on automatic recognition of human motion is particularly important, since it leads to automatic detection of suspicious persons and automatic monitoring of elderly people. Therefore, the research on human motion recognition using computer vision techniques has been actively conducted in Japan and overseas. However, most of the conventional researches on human motion recognition employs a video of a human motion taken using an external fixed camera. There is no research on human motion recognition using a video of a surrounding scenery provided from a wearable camera. This paper proposes a method of recognizing a human motion by estimating the posture change of a wearable camera attached to a walking human from the motion of a scenery in the video provided from the wearable camera and by analyzing a human trunk change obtained from the posture change of the camera. In the method, AKAZE is applied to the images to detect feature points and to find their correspondence. The 5-point algorithm is used to estimate the Epipolar geometry constraint and an essential matrix which provides a camera relative motion. The change of the camera relative motion is then used to analyze the shape of a human trunk. The analyzed results, i.e., walking motion features, are finally fed into a SVM to identify the motion. In the experiment, five types of walking motions are captured by a wearable camera from five subjects. The accuracy on human motion recognition was 80%. More precise feature points extraction, more exact estimation of motions, and considering variety of human walking motions are needed to improve the proposed technique.

    DOI: 10.1145/3581807.3581809

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85160954407&origin=inward

  • Fruits and Vegetables Detection using the Improved YOLOv3 査読有り 国際誌

    Changhua Xu, Ziyue Liu, Joo Kooi Tan

    Proceedings of the 2022 International Conference on Artificial Life and Robotics   1 - 5   2022年01月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Japan

    As the global aging intensifies, it is more convenient for a robot to go and buy things like fruits and vegetables instead of the elderly, and it is more human-like to select items according to a user’s personal preferences such as maturity of fruits, sweetness, etc. However, fruits and vegetables are generally displayed in a disorderly manner. Therefore, detection and recognition of fruits and vegetables is a difficult task for a robot. This paper proposes an improved YOLOv3, together with pre-training of the networks, to detect fruits and vegetables; we then use a Bilinear-CNN to classify fruit maturity. The effectiveness of the proposed method is shown by experiments.

  • Collision Avoidance in a Human-Robot Coexistent Food Preparation Environment Using Hands Area Extraction 査読有り

    Takaaki Yotsumoto, Yuta Ono, Joo Kooi Tan

    Proc. of The 2022 International Conference on Artificial Life and Robotics (ICAROB2022) ( Artificial Life and Robotics (ICAROB) )   1 - 5   2022年01月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Online (Japan)   January 21 - 23, 2022

    In Japan, the population of working age between 15 and 64 years old peaked in 1995 at about 87 million and is expected to continue to decline in the future. Therefore, in order to solve the labor shortage, the introduction of industrial robots that can perform the same level of work as humans is strongly requested, especially in the food preparation industry. In order to prevent danger to workers there, it is necessary for industrial robots to recognize workers and avoid them when there is fear of collision. In this paper, we propose a method of extracting hand regions based on the color distributions of a hand and GrabCut in an experimental environment to recognize human hands and detect their directions of approach. The performance of the proposed method is shown experimentally.

    Kyutacar

  • An Object Acquisition Based on Human-Robot Cooperation 査読有り 国際誌

    Kota Ito, Masuhiro Nitta, Seiji Ishikawa, Joo Kooi Tan

    Proc. of The 2022 International Conference on Artificial Life and Robotics (ICAROB2022) ( Artificial Life and Robotics (ICAROB) )   432 - 436   2022年01月

    Role: Last author, corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Online (Japan)   January 21 - 23, 2022

    In this paper, we propose a human-robot cooperative system to support shopping refugees. In the system, a robot acquires an object specified by a person in a distant site. The normal vector is calculated from the depth image, and the region is segmented using GBS on an RGB image. The two obtained clues are used to accurately detect the position of the specified object. The effectiveness of the proposed method was verified by experiments.

    DOI: 10.5954/ICAROB.2022.OS21-1

    Kyutacar

  • Detecting Approaching Human Hands in a Human-Robot Coexistent Food Preparation Work Area for Preventing Collision 査読有り 国際誌

    Tan Joo Kooi, Yotsumoto Takaaki, Ono Yuta

    Journal of Robotics, Networking and Artificial Life ( ALife Robotics Co., Ltd. )   9 ( 3 )   289 - 294   January 2022

    Role: Lead author, corresponding author   Language: English   Type: Research paper (scientific journal)

    In Japan, the population of working age between 15 and 64 years old peaked in 1995 at about 87 million and is expected to continue to decline in the future. Therefore, to solve the labor shortage, the introduction of industrial robots that can perform the same level of work as humans is strongly requested especially in the food preparation industry. In order to prevent danger to workers there, industrial robots must recognize workers and avoid them when there is fear of collision. In this paper, we propose a method for extracting hand regions based on the color distribution of a hand and GrabCut in an experimental environment to recognize human hands and detect their directions of approach. The proposed method was examined experimentally and gave satisfactory results.

    DOI: 10.57417/jrnal.9.3_289

    Scopus

    CiNii Research

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85176549562&origin=inward

  • Supporting Safe Walk of a Visually Impaired Person at a Railway Station Platform Based on MY VISION 査読有り

    Kawaguchi Y., Ishikawa S., Shinomiya T., Tan J.K.

    Proceedings of International Conference on Artificial Life and Robotics   447 - 450   2022年01月

    Role: Last author, corresponding author   Language: English   Type: Research paper (international conference proceedings)

    For a visually impaired person, platforms of railway station are places where there is a risk of falling. This paper proposes a system that prevents fall of a visually impaired person from the platform by use of self-viewpoint images provided from MY VISION, an ego camera system attached to a user. The edges of the platform are detected by region segmentation and line segment detection on the self-viewpoint images. The performance of the proposed system was experimentally examined and satisfactory results were obtained.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85125147415&origin=inward

  • Fruits and Vegetables Detection using the Improved YOLOv3 査読有り

    Xu C., Liu Z., Tan J.K.

    Proceedings of International Conference on Artificial Life and Robotics   456 - 460   2022年01月

    Role: Last author, corresponding author   Language: English   Type: Research paper (international conference proceedings)

    As the global aging intensifies, it is more convenient for a robot to go for buying things like fruits and vegetables instead of elderly, and it is more human-like to select items according to a user’s personal preferences such as.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85125135095&origin=inward

  • Detecting a Pedestrian’s Walk Direction Using MY VISION for Supporting Safe Walk of a Visually Impaired Person 査読有り

    Iizumi S., Tan J.K., Ono Y., Ishikawa S., Nitta M.

    Proceedings of International Conference on Artificial Life and Robotics   451 - 455   2022年01月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    In this paper, we propose a method of recognizing multiple objects using MSC-HOG (Multiple-Scale-Cell Histograms of Oriented Gradients) features and intensity models of both pedestrians and bicyclists. We also propose a method of detecting approaching passersby using different discriminators without using time-series information such as Optical Flow. The effectiveness of the proposed method is verified by experiments.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85125137759&origin=inward

  • Collision Avoidance in a Human-Robot Coexistent Food Preparation Environment Using Hands Area Extraction 査読有り

    Yotsumoto T., Ono Y., Tan J.K.

    Proceedings of International Conference on Artificial Life and Robotics   442 - 446   2022年01月

    Role: Last author, corresponding author   Language: English   Type: Research paper (international conference proceedings)

    In Japan, the population of working age between 15 and 64 years old peaked in 1995 at about 87 million and is expected to continue to decline in the future. Therefore, in order to solve the labor shortage, the introduction of industrial robots that can perform the same level of work as humans is strongly requested, especially in the food preparation industry. In order to prevent danger to workers there, it is necessary for industrial robots to recognize workers and avoid them when there is fear of collision. In this paper, we propose a method of extracting hand regions based on the color distributions of a hand and GrabCut in an experimental environment to recognize human hands and detect their directions of approach. The performance of the proposed method is shown experimentally.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85125137027&origin=inward

  • Detection of a Fallen Person and Estimation of the Head Position from UAV Images 査読有り

    Egawa Haruka, Seiji Ishikawa, Joo Kooi Tan

    Proc. of The 2021 International Conference on Artificial Life and Robotics (ICAROB2021) ( Artificial Life and Robotics (ICAROB) )   279 - 282   2021年01月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Hiroshima, Japan (online)   January 22 - 23, 2021

    Kyutacar

  • Development of a Pedestrian Crossing Navigation System for a Visually Impaired Person Using MY VISION 招待有り 査読有り

    Kouhei Kitagawa, Seiji Ishikawa, Joo Kooi Tan

    Proc. of The 2021 International Conference on Artificial Life and Robotics (ICAROB2021) ( Artificial Life and Robotics (ICAROB) )   283 - 286   2021年01月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Hiroshima, Japan (online)   January 22 - 23, 2021

    Kyutacar

  • Bus Line Number Detection Employing MY VISION 招待有り 査読有り

    Ye Zhou, Joo Kooi Tan

    Proc. of The 2021 International Conference on Artificial Life and Robotics (ICAROB2021) ( Artificial Life and Robotics (ICAROB) )   291 - 294   2021年01月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Hiroshima, Japan (online)   January 22 - 23, 2021

  • Abnormal Human Action Detection Based on GAN 招待有り 査読有り

    Tomoya Sano, Seiji Ishikawa, Joo Kooi Tan

    Proc. of The 2021 International Conference on Artificial Life and Robotics (ICAROB2021) ( Artificial Life and Robotics (ICAROB) )   287 - 290   2021年01月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Hiroshima, Japan (online)   January 22 - 23, 2021

    Kyutacar

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85108852379&origin=inward

  • Development of a Pedestrian Crossing Navigation System for a Visually Impaired Person Using MY VISION 査読有り

    Kitagawa Kohei, Ishikawa Seiji, Tan Joo Kooi

    Proceedings of International Conference on Artificial Life and Robotics ( ALife Robotics Co., Ltd. )   26   283 - 286   January 2021

    Role: Last author, corresponding author   Language: English   Type: Research paper (international conference proceedings)

    In this paper, we propose a system for a visually impaired person to cross a pedestrian crossing safely by the employment of the state of a traffic light and the remaining distance to the other side obtained from the analysis of MY VISION images. The traffic light at a pedestrian crossing is detected by a discriminator using HOG (Histograms of Oriented Gradients) features and Random Forest. The effectiveness of the proposed method was verified by experiments.

    DOI: 10.5954/icarob.2021.os14-2

    CiNii Research

  • Detection of a Fallen Person and Estimation of the Head Position from UAV Images 査読有り

    Egawa Haruka, Ishikawa Seiji, Tan Joo Kooi

    Proceedings of International Conference on Artificial Life and Robotics ( ALife Robotics Co., Ltd. )   26   279 - 282   January 2021

    Role: Last author, corresponding author   Language: English   Type: Research paper (international conference proceedings)

    In order to search for victims in the event of a disaster, we propose a method of detecting a fallen person from UAV images and estimating its head position. Rotation-invariant HOG and rotation-invariant LBP are used so that a fallen person may be detected regardless of its body orientation. In addition, the position of the head of a fallen person is estimated using the peak of the gradient histogram. Experimental results show satisfactory performance of the proposed method.

    DOI: 10.5954/icarob.2021.os14-1

    CiNii Research

  • Bus Line Number Detection Employing MY VISION 査読有り 国際誌

    Zhou Ye, Hamasaki Yosiki, Tan Joo Kooi

    Proceedings of International Conference on Artificial Life and Robotics ( ALife Robotics Co., Ltd. )   26   291 - 294   January 2021

    Role: Last author, corresponding author   Language: English   Type: Research paper (international conference proceedings)

    In response to the requests from visually impaired people that they wish better assistance tools in their daily activities, and to the facts that they find difficulty in using public transportation, we propose a bus boarding support system using MY VISION. For the support, we detect a bus approaching a bus stop from MY VISION images employing Haar-like filters, and acquire the moving object area with the pyramidal Lucas-Kanade method. Then, we extract the frontal upper area displaying bus line number and the destination by random forest. Having obtained the area, a method of connecting area marking is used to obtain the bus line number area. The effectiveness of the proposed method is shown by experiments.

    DOI: 10.5954/icarob.2021.os14-5

    CiNii Research

  • Abnormal Human Action Detection Based on GAN 査読有り 国際誌

    Sano Tomoya, Ishikawa Seiji, Tan Joo Kooi

    Proceedings of International Conference on Artificial Life and Robotics ( ALife Robotics Co., Ltd. )   26   287 - 290   January 2021

    Role: Last author, corresponding author   Language: English   Type: Research paper (international conference proceedings)

    One of the important roles of a camera surveillance system is to detect abnormal human actions or events. In this study, we propose a method of abnormal human actions/events detection method using Generative Adversarial Nets (GAN). In anomaly action detection, the main problem is that the image data of abnormal human actions is more difficult to obtain than normal human actions. To solve this difficulty, we use only normal human action data in the employed training network and those actions not recognized as normal are judged as abnormal. Experimental results show effectiveness of the proposed method.

    DOI: 10.5954/icarob.2021.os14-3

    CiNii Research

  • Abnormal human action detection based on GAN 査読有り

    Sano T., Ishikawa S., Tan J.K.

    Proceedings of International Conference on Artificial Life and Robotics   2021   287 - 290   January 2021

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    One of the important roles of a camera surveillance system is to detect abnormal human actions or events. In this study, we propose a method of abnormal human actions/events detection method using Generative Adversarial Nets (GAN). In anomaly action detection, the main problem is that the image data of abnormal human actions is more difficult to obtain than normal human actions. To solve this difficulty, we use only normal human action data in the employed training network and those actions not recognized as normal are judged as abnormal. Experimental results show effectiveness of the proposed method.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85108852379&origin=inward

  • Detection of a fallen person and estimation of the head position from UAV images 査読有り

    Egawa H., Ishikawa S., Tan J.K.

    Proceedings of International Conference on Artificial Life and Robotics   2021   279 - 282   January 2021

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    In order to search for victims in the event of a disaster, we propose a method of detecting a fallen person from UAV images and estimating its head position. Rotation-invariant HOG and rotation-invariant LBP are used so that a fallen person may be detected regardless of its body orientation. In addition, the position of the head of a fallen person is estimated using the peak of the gradient histogram. Experimental results show satisfactory performance of the proposed method.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85108801829&origin=inward

  • Bus line number detection employing MY VISION 査読有り

    Zhou Y., Hamasaki Y., Tan J.K.

    Proceedings of International Conference on Artificial Life and Robotics   2021   291 - 294   January 2021

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    In response to the requests from visually impaired people that they wish better assistance tools in their daily activities, and to the facts that they find difficulty in using public transportation, we propose a bus boarding support system using MY VISION. For the support, we detect a bus approaching a bus stop from MY VISION images employing Haar-like filters, and acquire the moving object area with the pyramidal Lucas-Kanade method. Then, we extract the frontal upper area displaying bus line number and the destination by random forest. Having obtained the area, a method of connecting area marking is used to obtain the bus line number area. The effectiveness of the proposed method is shown by experiments.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85108842591&origin=inward

  • Development of a pedestrian crossing navigation system for a visually impaired person using MY VISION 査読有り

    Kitagawa K., Ishikawa S., Tan J.K.

    Proceedings of International Conference on Artificial Life and Robotics   2021   283 - 286   January 2021

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    In this paper, we propose a system that enables a visually impaired person to cross a pedestrian crossing safely, using the state of the traffic light and the remaining distance to the other side, both obtained from the analysis of MY VISION images. The traffic light at a pedestrian crossing is detected by a discriminator using HOG (Histograms of Oriented Gradients) features and Random Forest. The effectiveness of the proposed method was verified by experiments.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85108797473&origin=inward

  • 回転不変特徴量を用いたマルチコプター画像からの倒伏人物の検出

    江川 遥香, 石川 聖二, タン ジュークイ

    バイオメディカル・ファジィ・システム学会大会講演論文集 ( バイオメディカル・ファジィ・システム学会 )   33 ( 0 )   91 - 94   2020年10月

     詳細を見る

    担当区分:最終著者   記述言語:日本語   掲載種別:研究論文(研究会,シンポジウム資料等)

    In recent years, aerial photography has been used to search for victims in the event of a disaster. Searching from the sky enables quick search activities in places that are difficult to enter. In this paper we propose a method of detecting a person fallen on the ground from images taken by a camera mounted on a UAV (multicopter). Unlike pedestrians, a fallen person takes various postures, and the orientation of the head in an image is not identical. Therefore, it is necessary to develop a method which is robust to various orientations of a fallen person. In the proposed method, Ri-HOG features and Ri-LBP features invariant to object orientation are employed for representing a fallen person, and the fallen person is detected by a classifier constructed using Random Forest. The effectiveness of the proposed method was verified by experiments.

    DOI: 10.24466/pacbfsa.33.0_91

    CiNii Article

    CiNii Research

  • Detection of Circulating Tumor Cells in Fluorescence Microscopy Images Based on ANN Classifier 査読有り

    Tsuji K., Lu H., Tan J.K., Kim H., Yoneda K., Tanaka F.

    Mobile Networks and Applications   25 ( 3 )   1042 - 1051   2020年06月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Circulating tumor cells (CTCs) are a clinical biomarker for cancer metastasis. CTCs are cells circulating in the body of a patient after separating from the primary cancer and entering a blood vessel. CTCs spread to various positions in the body, and this is one of the causes of cancer metastasis. By analyzing them, pathologists obtain information about metastasis without an invasive test. A CTC test is conducted by analyzing a blood sample from the patient. The fluorescence microscope generates a large number of images per sample, and the images contain many cells. There are only a few CTCs in the images, and cells often have blurry boundaries, so CTC identification is not easy work for pathologists. In this paper, we develop an automatic CTC identification method for fluorescence microscopy images. The proposed method has three steps. First, we perform cell segmentation in the images using filtering methods. Next, we compute feature values from each CTC candidate region. Finally, we identify CTCs using an artificial neural network algorithm. We apply the proposed method to 5895 microscopy images (7 samples) and evaluate its effectiveness using leave-one-out cross-validation. In performance tests, we achieve a true positive rate of 92.57% and a false positive rate of 9.156%. (An illustrative code sketch follows this entry.)

    DOI: 10.1007/s11036-018-1121-0

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85053481889&origin=inward
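
    A hedged sketch of the classification stage in Python/scikit-learn: an artificial neural network (MLP) is evaluated with leave-one-out cross-validation, as in the paper's protocol. The feature count, network size and the random stand-in data are illustrative assumptions; the segmentation and feature-extraction stages are not shown.

    ```python
    # Hedged sketch: ANN classifier for CTC candidate regions with leave-one-out evaluation.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.random((60, 12))          # stand-in: 12 features per candidate region (area, intensity, ...)
    y = rng.integers(0, 2, size=60)   # 1 = CTC, 0 = other cell / artifact

    ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    pred = cross_val_predict(ann, X, y, cv=LeaveOneOut())

    tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
    fp = np.sum((pred == 1) & (y == 0)); tn = np.sum((pred == 0) & (y == 0))
    print("TPR:", tp / max(tp + fn, 1), "FPR:", fp / max(fp + tn, 1))
    ```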

  • Detection of Circulating Tumor Cells in Fluorescence Microscopy Images Based on ANN Classifier 査読有り 国際誌

    K. Tsuji, H. Lu, J. K. Tan, H. Kim, K. Yoneda, F. Tanaka

    Mobile Networks and Applications   25 ( 3 )   1042 - 1051   2020年06月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Ego-posture estimation for a pedestrian using a monocular wearable camera 査読有り

    Liu Z., Chihara T., Tan J.K.

    2020 5th International Conference on Computer and Communication Systems, ICCCS 2020   409 - 412   2020年05月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    In recent years, the increase of the aging population in developed countries has become a significant social problem. A considerable number of accidents are caused by reduced walking ability. The improvement of the walking ability of the elderly and post-recovery patients is the main concern of this paper. For this purpose, it is necessary to analyze and estimate their walking posture. In this paper, we propose a posture analysis and estimation method and a gait index acquisition method using a monocular wearable camera. The idea of this paper is to employ an ego-camera mounted on a user. His/her walking posture is analyzed from the frontal-view video provided by the camera. The intention is that the proposed system will be used on a daily basis in the future by those who want to improve their walking posture to keep a healthy life.

    DOI: 10.1109/ICCCS49078.2020.9118537

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85087496874&origin=inward

  • Detecting Pedestrians and Moving Directions by a MY VISION System 査読有り

    Kenta Hori, Seiji Ishikawa, Joo Kooi Tan

    Proc. of The 2020 International Conference on Artificial Life and Robotics (ICAROB2020) ( Artificial Life and Robotics (ICAROB) )   775 - 778   2020年01月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Beppu   2020年01月13日  -  2020年01月16日

  • Development of an Environmentally Adaptable Autonomous Mobile Robot 査読有り

    Naokazu Iwata, Joo Kooi Tan

    Proc. of The 2020 International Conference on Artificial Life and Robotics (ICAROB2020) ( Artificial Life and Robotics (ICAROB) )   469 - 472   2020年01月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Beppu   2020年01月13日  -  2020年01月16日

  • Human motion recognition using TMRIS 査読有り

    Jing Cao, JooKooi Tan

    Proc. of The 2020 International Conference on Artificial Life and Robotics (ICAROB2020) ( Artificial Life and Robotics (ICAROB) )   779 - 782   2020年01月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Beppu   2020年01月13日  -  2020年01月16日

  • Estimation of Self-Posture of a Pedestrian Using MY VISION and Deep Learning 査読有り

    Tomoyuki Kurosaki, Joo Kooi Tan

    Proc. of The 2020 International Conference on Artificial Life and Robotics (ICAROB2020) ( Artificial Life and Robotics (ICAROB) )   485 - 489   2020年01月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Beppu   2020年01月13日  -  2020年01月16日

  • Supervoxel Graph Cuts: An Effective Method for GGO Candidate Regions Extraction on CT Images 査読有り

    Lu H., Kondo M., Li Y., Tan J., Kim H., Murakami S., Aoki T., Kido S.

    IEEE Consumer Electronics Magazine   9 ( 1 )   61 - 66   2020年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    In this article, a method to reduce artifacts on temporal difference images is introduced. The proposed method uses a nonrigid registration method for ground glass opacification (GGO), which is light in concentration and difficult to detect early. In this method, global matching, local matching, and three-dimensional (3D) elastic matching are performed on the current and previous images, and an initial temporal subtraction image is generated. After that, we use an Iris filter, which is a gradient vector concentration filter, to determine the initial GGO candidate regions, and use supervoxels and graph cuts to segment regions of interest in the 3D images. For each extracted region, a support vector machine is used to reduce the oversegmentation. Voxel matching is applied to generate the final temporal difference image, emphasizing the GGO regions while reducing the artifacts.

    DOI: 10.1109/MCE.2019.2941468

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85076374234&origin=inward

  • Development of an environmentally adaptable autonomous mobile robot 査読有り

    Naokazu I., Tan J.K.

    Proceedings of International Conference on Artificial Life and Robotics   2020   469 - 472   2020年01月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    To realize a service-robot system for supporting human life, we propose techniques for estimating the self-position of a robot using local image features, segmenting an image to find a movable area, and planning a route to the destination. In the route planning, junctions are labelled with numbers so that the robot can travel through them in order. The developed mobile robot travels to the destination employing the information on the estimated road region, its self-position and the planned route. (An illustrative code sketch follows this entry.)

    DOI: 10.5954/ICAROB.2020.GS5-2

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85108847330&origin=inward
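
    A hedged sketch of the junction-label route planning idea in plain Python: the environment is an assumed graph of numbered junctions, and a breadth-first search returns the ordered junction sequence for the robot to travel. The graph itself is a made-up example, not the paper's test environment.

    ```python
    # Hedged sketch: plan an ordered sequence of numbered junctions with BFS.
    from collections import deque

    # assumed junction graph (adjacency by junction label)
    GRAPH = {1: [2], 2: [1, 3, 4], 3: [2, 5], 4: [2], 5: [3]}

    def plan_route(start, goal):
        """Return the ordered list of junction labels the robot should travel through."""
        queue, visited = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in GRAPH[path[-1]]:
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(plan_route(1, 5))   # e.g. [1, 2, 3, 5]
    ```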

  • Supervoxel Graph Cuts: An Effective Method for GGO Candidate Regions Extraction on CT Images 査読有り 国際誌

    H. Lu, M. Kondo, Y. Li, J. K. Tan, H. Kim, S. Murakami, T. Aoki, S. Kido

    IEEE Consumer Electronics Magazine   9 ( 1 )   61 - 66   2020年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Detecting pedestrians and their moving directions by a my vision system 査読有り

    Hori K., Ishikawa S., Tan J.K.

    Proceedings of International Conference on Artificial Life and Robotics ( 株式会社ALife Robotics )   2020   775 - 778   2020年01月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper proposes a pedestrian detection method using a MY VISION system, which is an image processing system employing an ego-camera. First, a flow area different from the camera movement is extracted on the images from a user's head-mounted camera. This area is a candidate area where pedestrians may exist. Second, MSC-HOG features are calculated to detect pedestrians. HOF features are then computed to recognize the moving direction of the detected pedestrians. Experimental results show the effectiveness of the proposed method.

    DOI: 10.5954/ICAROB.2020.GS3-1

    Scopus

    CiNii Research

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85108810327&origin=inward

  • Human motion recognition using tmris 査読有り

    Jing C., Yamashita Y., Tan J.K.

    Proceedings of International Conference on Artificial Life and Robotics ( 株式会社ALife Robotics )   2020   779 - 782   2020年01月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    With the aggravation of the aging trend in Japan, the number of elderly people living alone has gradually increased, and the development of care systems for the elderly has begun to receive attention. Considering this trend, the paper proposes an MHI (Motion History Image)-based method called TMRIs (Triplet Motion Representation Images) that solves, in recognition, the self-occlusive motion problem, particularly in the depth direction, using a single camera. The performance and effectiveness of the method are verified by experiments. (An illustrative code sketch of the MHI update follows this entry.)

    DOI: 10.5954/ICAROB.2020.GS3-2

    Scopus

    CiNii Research

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85108809693&origin=inward
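
    A hedged sketch in Python/NumPy of the Motion History Image update that TMRIs builds on: pixels with motion in the current frame take the maximum value tau, and older motion decays frame by frame. Frame size, tau and the stand-in motion masks are illustrative; the triplet-representation details are in the paper.

    ```python
    # Hedged sketch of the basic MHI update rule.
    import numpy as np

    def update_mhi(mhi, motion_mask, tau=30):
        """motion_mask: boolean array marking pixels that moved in the current frame."""
        return np.where(motion_mask, float(tau), np.maximum(mhi - 1.0, 0.0))

    # usage: accumulate an MHI over a sequence of frame-difference masks
    h, w = 120, 160
    mhi = np.zeros((h, w))
    rng = np.random.default_rng(0)
    for _ in range(10):
        mask = rng.random((h, w)) > 0.97        # stand-in for a thresholded frame difference
        mhi = update_mhi(mhi, mask)
    print(mhi.max(), (mhi > 0).sum())
    ```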

  • Estimation of self-posture of a pedestrian using my vision and deep learning 査読有り

    Kurosaki T., Tan J.K.

    Proceedings of International Conference on Artificial Life and Robotics ( 株式会社ALife Robotics )   2020   485 - 489   2020年01月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    A system is proposed that performs gait analysis of a pedestrian to prevent falls. In the system, a user walks with a chest-mounted camera. His/her walking posture is estimated using a pair of images obtained from the camera. Normally it is difficult to estimate the camera movement when the parallax of the image pair is small. Therefore, the system uses a convolutional neural network. Optical flow and camera movement, and depth images, are estimated alternately. Satisfactory results were obtained experimentally.

    DOI: 10.5954/ICAROB.2020.GS1-2

    Scopus

    CiNii Research

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85108785176&origin=inward

  • 物流の自動化のためのエピポーラ幾何を用いた貨物の位置計測

    福田 和樹, 前岡 拓樹, タン ジュークイ

    バイオメディカル・ファジィ・システム学会大会講演論文集 ( バイオメディカル・ファジィ・システム学会 )   32 ( 0 )   B1-3   2019年11月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(研究会,シンポジウム資料等)

    In recent years, the labor shortage has become a serious problem in logistics industries. In particular, among the five major functions of logistics, most of the storage and cargo handling operations are performed manually using forklifts. This problem can be solved by automating the process. In this paper, we propose a method to acquire the position of cargo from the images taken by two fixed cameras in a warehouse and the relative distance information between a transport vehicle and the cargo using the camera attached to the vehicle. The effectiveness of the proposed methods is shown by experiments.

    DOI: 10.24466/pacbfsa.32.0_b1-3

    CiNii Article

    CiNii Research

  • Saliency Detection using the Combination of Boundary Aware Color-map and Seam-map 査読有り

    Aminul Islam, Sk. Md. Masudul Ahsan, Joo Kooi Tan

    IEEE Proceedings of International Conference on Computer Communication Chemical Materials and Electronic Engineering(IC4ME2-2019)2019   1 - 4   2019年07月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Bangladesh   Rajshahi   2019年07月11日  -  2019年07月12日

  • Saliency Detection using the Combination of Boundary Aware Color-map and Seam-map 査読有り

    Islam A., Ahsan S.M.M., Tan J.K.

    5th International Conference on Computer, Communication, Chemical, Materials and Electronic Engineering, IC4ME2 2019   2019年07月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    The salient region of an image is usually detected using contrast and boundary priors. Along with those cues, the use of a seam importance map has previously shown promising output. In this study, better results are obtained by further exploiting the seam map using spatial distance and color information in combination with the boundary prior. Color and seam maps are also down-weighted using the average spatial distance to other regions. Moreover, passing the superpixelized version of the input image into the seam and color map generation procedure improves the output. Experimental results based on the MSRA 1k dataset are presented together with ten state-of-the-art methods. F-beta measures are presented along with precision-recall curves to better understand the outcome. The performance comparison with these methods proves the superiority of the proposed method.

    DOI: 10.1109/IC4ME247184.2019.9036637

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85082984951&origin=inward

  • A deep unified framework for suspicious action recognition 査読有り 国際誌

    Ilidrissi A., Tan J.

    Artificial Life and Robotics   24 ( 2 )   219 - 224   2019年06月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)

    As action recognition undergoes change as a field under the influence of the recent deep learning trend, and while research in areas such as background subtraction, object segmentation and action classification is steadily progressing, experiments devoted to evaluating a combination of the aforementioned fields, be it from a speed or a performance perspective, are few and far between. In this paper, we propose a deep, unified framework targeted towards suspicious action recognition that takes advantage of recent discoveries, fully leverages the power of convolutional neural networks and strikes a balance between speed and accuracy not accounted for in most research. We carry out performance evaluation on the KTH dataset and attain 95.4% accuracy in 200 ms computational time, which compares favorably to other state-of-the-art methods. We also apply our framework to a video surveillance dataset and obtain 91.9% accuracy for suspicious actions in 205 ms computational time.

    DOI: 10.1007/s10015-018-0518-y

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85071613991&origin=inward

  • Walk environment analysis using my vision: Toward a navigation system providing visual assistance 査読有り 国際誌

    Tan J.K., Ishimine T., Arimasu S.

    International Journal of Innovative Computing, Information and Control ( ICIC International 学会 )   15 ( 3 )   861 - 871   2019年06月

     詳細を見る

    担当区分:筆頭著者, 責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)

    This paper proposes a method of analyzing a human walk environment using MY VISION. MY VISION is an ego-camera and a computer system which analyzes a video obtained from the ego-camera to acquire certain visual information useful for human daily activities. The system is expected to be a virtual eye of a visually impaired person or the third eye of a pedestrian absorbed in a mobile phone. The proposed method keeps in a database the background images of key points along a sidewalk and judges if a MY VISION user is walking along the sidewalk or if he/she has come to a crosswalk by referring to the backgrounds. If the former, the method finds a safe road region on the sidewalk, whereas, if the latter, it searches for the crosswalk for finding an appropriate walk direction and a traffic light to know the proper timing to cross it. Experimental results show the effectiveness of the proposed method.

    DOI: 10.24507/ijicic.15.03.861

    Scopus

    CiNii Article

    CiNii Research

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85067556972&origin=inward

  • Saliency Detection using Boundary Aware Regional Contrast Based Seam-map 査読有り

    Islam A., Ahsan S.M.M., Tan J.K.

    2018 International Conference on Innovation in Engineering and Technology, ICIET 2018   2019年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Most saliency detection methods use contrast and boundary priors to extract the salient region of an input image. These two approaches are followed in Boundary Aware Regional Contrast Based Visual Saliency Detection (BARC) [1], along with spatial distance information, to achieve state-of-the-art results. In this research, a more interesting cue is introduced to extract the salient region from an input image: a combination of a seam map and BARC [1] is presented to produce the saliency output. A seam importance map with the boundary prior is also presented to measure the performance of this combination. Experiments with ten state-of-the-art methods reveal that better saliency output is obtained by combining the seam information of an input image with BARC [1].

    DOI: 10.1109/CIET.2018.8660825

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85063582435&origin=inward

  • Recognition of surrounding environment from electric wheelchair videos based on modified YOLOv2 査読有り

    Y. Sakai, H. Lu, Joo Kooi Tan, Hyoungseop Kim

    Future Generation Computer Systems   92 ( 3 )   157 - 161   2019年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Development of a Human-Robot Cooperative System Based on Visual Information 査読有り

    Akitoshi Sato, Joo Kooi Tan, Yuta Ono

    Proceedings of International Workshop on Advanced Image Technology(IWAIT2019)   1 - 6   2019年01月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Singapore   Singapore   2019年01月06日  -  2019年01月09日

  • Finding a rush-out human employing a human body direction detector 査読有り

    Yuta Ono, Joo Kooi Tan, Akitoshi Sato

    Proceedings of International workshop on Advance Image Technology (IWAIT 2019)   1 - 6   2019年01月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Singapore   Singapore   2019年01月06日  -  2019年01月09日

    Kyutacar

  • Employment of Uncalibrated Cameras for 3-D Shape Recovery

    Tan Joo Kooi, Ishikawa Seiji

    バイオメディカル・ファジィ・システム学会大会講演論文集 ( バイオメディカル・ファジィ・システム学会 )   32 ( 0 )   B1 - 1   2019年01月

     詳細を見る

    担当区分:筆頭著者   記述言語:英語   掲載種別:研究論文(研究会,シンポジウム資料等)

    A method of 3-D shape recovery employing uncalibrated cameras is described. Three-dimensional shape recovery has long been studied to date and has various application fields. An established method for 3-D shape recovery is based on stereo cameras which necessitates camera calibration in advance. In this paper, a 3-D shape recovery method by uncalibrated cameras is presented, which ranges from the recovery of partial shape of an object to its entire shape. One of the main advantages of the present method is that the shape recovery is not restricted to a video capture site such as a studio, if only videos are able to be taken by cameras.

    DOI: 10.24466/pacbfsa.32.0_B1-1

    CiNii Article

    CiNii Research

    その他リンク: https://ci.nii.ac.jp/naid/130007980073

  • A Method of Describing a Self-occlusive Motion - A Reverse Motion History Image 査読有り

    TAN Joo Kooi, OKAE Sayaka, YAMASHITA Youtaro, ONO Yuta

    International Journal of Biomedical Soft Computing and Human Sciences: the official journal of the Biomedical Fuzzy Systems Association ( バイオメディカル・ファジィ・システム学会 )   24 ( 1 )   1 - 7   2019年01月

     詳細を見る

    担当区分:筆頭著者, 責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)

    This paper proposes a new method of describing a self-occlusive human motion, particularly in the depth direction, which has received little attention in motion/action recognition studies to date in spite of its importance in our daily life. A Motion History Image (MHI) is a well-known method of describing a motion by a single gray-value image, but it suffers from a self-occlusion problem in which present motion overwrites past motion. To solve this difficulty, a Reverse description MHI (RMHI) is proposed in the paper. RMHI and the original MHI are both employed for motion representation in the proposed method; the former for approach motion, the latter for leave motion. In the experiment on motion recognition, motions are described by RMHI or MHI according to motion direction, transformed into Hu moment vectors, and finally recognized employing the k-nearest neighbor. Experimental results show the effectiveness of the RMHI description. (An illustrative code sketch of the Hu-moment/k-NN step follows this entry.)

    DOI: 10.24466/ijbschs.24.1_1

    CiNii Article

    CiNii Research

    その他リンク: https://ci.nii.ac.jp/naid/130007998971
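
    A hedged sketch in Python (OpenCV / scikit-learn) of the recognition path named above: a gray-value history image (MHI or RMHI, whose reverse-description rule is defined in the paper) is reduced to a log-scaled Hu-moment vector and classified by k-nearest neighbour. The stand-in history images and labels are random placeholders.

    ```python
    # Hedged sketch: history image -> Hu moment vector -> k-NN classification.
    import cv2
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def hu_vector(history_image):
        """7-dimensional, log-scaled Hu moment descriptor of a gray-value history image."""
        hu = cv2.HuMoments(cv2.moments(history_image.astype(np.float32))).ravel()
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

    rng = np.random.default_rng(0)
    X = np.stack([hu_vector(rng.random((64, 64))) for _ in range(30)])   # stand-in history images
    y = rng.integers(0, 3, size=30)                                      # motion class labels
    knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    print(knn.predict(X[:5]))
    ```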

  • Development of a human-robot cooperative system based on visual information 査読有り

    Sato A., Tan J.K., Ono Y.

    Proceedings of SPIE - The International Society for Optical Engineering   11049   2019年01月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    In recent years, along with the aging of society worldwide, the decrease of the working population has become a serious problem. For this reason, robots are expected to substitute for human work, automate distribution, and support human daily life, especially elderly care assistance. In this research, we focus on the support of those who need care in everyday life, and, in this paper, we propose a human-robot cooperative system that supports the acquisition of objects in cooperation with a human. The outline of the action of this robot is as follows: (i) it moves to the location designated by a user autonomously (not by remote control); (ii) on arrival, the robot exchanges information by video with the user, who is at a remote place, and acquires the objects designated by the user among those placed there; (iii) after the acquisition, the robot moves again autonomously to the user and hands over the objects. In this paper, we focus on step (ii) and show methods and some experimental results.

    DOI: 10.1117/12.2522001

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85063913144&origin=inward

  • Saliency Detection using Boundary Aware Regional Contrast Based Seam-map 査読有り

    Aminul Islam, Sk. Md. Masudul Ahsan, JooKooi Tan

    IEEE Proceedings of International Conference on Innovation in Engineering and Technology(ICIET)2018   1 - 5   2018年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Bangladesh   Dhaka   2018年12月27日  -  2018年12月29日

    Kyutacar

  • Registration of phalange region from CR images based on genetic algorithm 査読有り

    Kawagoe K., Murakami S., Lu H., Tan J., Kim H., Aoki T.

    International Conference on Control, Automation and Systems   2018-October   1464 - 1467   2018年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    In Japan, the number of patients with osteoporosis and rheumatoid arthritis is increasing. Image diagnosis using CR images is effective for osteoporosis and rheumatoid arthritis, and the development of a CAD system is important for reducing the burden on doctors. In this paper, we propose an automatic registration algorithm for the CAD system. In the proposed method, a genetic algorithm is used to register bone regions between identical parts of the same subject at different points in a time series. In the experiment, the proposed method was applied to 176 bone areas, and a TPR of 98.14% and an FPR of 1.85% were obtained. Even when the area difference is used as the fitness of the genetic algorithm, it provides cross-correlation and positioning accuracy equivalent to mutual information. (An illustrative code sketch follows this entry.)

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85060451078&origin=inward
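
    A hedged sketch in Python/NumPy of registration by a genetic-style search (selection and mutation only, no crossover): individuals encode a 2-D shift, and the fitness is the negative mean area difference between the shifted previous image and the current image. Image size, population size and generation count are illustrative, not the paper's settings.

    ```python
    # Hedged sketch: evolutionary search for the 2-D shift aligning two images.
    import numpy as np

    rng = np.random.default_rng(0)
    current = rng.random((64, 64))
    previous = np.roll(current, shift=(3, -5), axis=(0, 1))   # stand-in: same bone, shifted

    def fitness(shift):
        dy, dx = int(shift[0]), int(shift[1])
        moved = np.roll(previous, shift=(dy, dx), axis=(0, 1))
        return -np.abs(moved - current).mean()                # higher is better (area difference)

    pop = rng.integers(-8, 9, size=(30, 2)).astype(float)
    for _ in range(40):                                       # generations
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-10:]]               # selection: keep the best 10
        children = parents[rng.integers(0, 10, size=(20,))] + rng.normal(0, 1, size=(20, 2))  # mutation
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("estimated shift:", np.round(best).astype(int))     # should be close to (-3, 5)
    ```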

  • Detection of phalange region based on U-Net 査読有り

    Hatano K., Murakami S., Lu H., Tan J., Kim H., Aoki T.

    International Conference on Control, Automation and Systems   2018-October   1338 - 1342   2018年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Osteoporosis is one of the well-known bone diseases. It is a major cause of deterioration in quality of life, and early detection and early treatment are becoming socially important. Visual screening using Computed Radiography (CR) images is effective for the diagnosis of osteoporosis, but there are problems of an increasing burden on doctors, variation in diagnostic results due to differences in doctors' experience, and undetected lesions. In order to solve these problems, we are working on a computer-aided diagnosis (CAD) system for osteoporosis. In this paper, we propose segmentation methods for the phalange region from phalangeal CR images as preprocessing for the classification of osteoporosis. In the proposed method, we construct a segmentation model using U-Net, which is a type of deep convolutional neural network (DCNN). The proposed method was applied to input images generated from CR images of both hands of 101 patients and evaluated using Intersection over Union (IoU) values. The result was an IoU of 0.914. (An illustrative code sketch follows this entry.)

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85060468143&origin=inward
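
    A hedged sketch in Python/PyTorch: a strongly reduced U-Net-style encoder-decoder with a single skip connection, together with the IoU metric used for evaluation. Channel counts, depth and input size are illustrative and far smaller than an actual U-Net.

    ```python
    # Hedged sketch: tiny U-Net-like segmentation model plus the IoU metric.
    import torch
    import torch.nn as nn

    def block(cin, cout):
        return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

    class TinyUNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc1, self.enc2 = block(1, 16), block(16, 32)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec1 = block(32, 16)                # concatenated skip connection: 16 + 16 channels
            self.out = nn.Conv2d(16, 1, 1)

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
            return torch.sigmoid(self.out(d1))

    def iou(pred, target, thresh=0.5):
        p, t = pred > thresh, target > 0.5
        inter, union = (p & t).sum().item(), (p | t).sum().item()
        return inter / union if union else 1.0

    # usage with random stand-in data
    x = torch.rand(1, 1, 64, 64)
    y = (torch.rand(1, 1, 64, 64) > 0.5).float()
    print(iou(TinyUNet()(x), y))
    ```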

  • Swallowing motion analyzing from dental MR imaging based on AKAZE and particle filter algorithm 査読有り

    Suetani K., Lu H., Tan J., Kim H., Tanaka T., Kitou S., Morimoto Y.

    International Conference on Control, Automation and Systems   2018-October   1343 - 1346   2018年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    In recent years, dysphagia has been a problem among elderly people. Therefore, it is necessary to accurately evaluate swallowing function in order to prevent swallowing disorders beforehand or to detect them early, and the evaluation of swallowing function using Magnetic Resonance Imaging (MRI) is considered useful. In order to accurately analyze the swallowing motion using a computer-aided diagnosis (CAD) system on MR imaging, automatic extraction of the esophagus region, which is the region of interest, is required, and extraction of the spinal region is required as a preliminary step of the esophagus region extraction. Therefore, in this paper, we develop an analysis method for swallowing movement in three steps: extraction of the spinal region, extraction of the esophageal region, and analysis of the swallowing movement. For the analysis of swallowing movement, we emphasize the liquid part at the time of swallowing using an emphasis map, then track the liquid using AKAZE features and a particle filter algorithm, and analyze the swallowing motion. (An illustrative code sketch of the AKAZE step follows this entry.)

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85060441189&origin=inward
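
    A hedged sketch in Python/OpenCV of the AKAZE step only: keypoints are detected and matched between two consecutive frames, giving displacement vectors that could feed the liquid-following stage; the emphasis map and particle filter of the paper are omitted. File names are placeholders.

    ```python
    # Hedged sketch: AKAZE keypoint matching between two frames.
    import cv2
    import numpy as np

    def akaze_displacements(frame0, frame1):
        akaze = cv2.AKAZE_create()
        k0, d0 = akaze.detectAndCompute(frame0, None)
        k1, d1 = akaze.detectAndCompute(frame1, None)
        if d0 is None or d1 is None:
            return np.empty((0, 2))
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)   # AKAZE descriptors are binary
        matches = matcher.match(d0, d1)
        return np.array([np.array(k1[m.trainIdx].pt) - np.array(k0[m.queryIdx].pt) for m in matches])

    f0 = cv2.imread("mri_t0.png", cv2.IMREAD_GRAYSCALE)
    f1 = cv2.imread("mri_t1.png", cv2.IMREAD_GRAYSCALE)
    if f0 is not None and f1 is not None:
        disp = akaze_displacements(f0, f1)
        print(disp.mean(axis=0) if len(disp) else "no matches")
    ```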

  • Extraction of median plane from facial 3D point cloud based on symmetry analysis using ICP algorithm 査読有り

    Yamada S., Lu H., Tan J.K., Kim H., Kimura N., Okawachi T., Nozoe E., Nakamura N.

    International Conference on Control, Automation and Systems   2018-October   1347 - 1350   2018年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Cleft lip is a kind of congenital facial morphological abnormality. In the clinical field of cleft lip, it is necessary to analyze the symmetric shape; however, there is no method to analyze the cleft lip from a symmetry viewpoint. On the other hand, our previous method to find a symmetry axis using a 2D image extracted the middle line only from the frontal-view moire image of the face, so low accuracy was obtained under slight rotation of the face and 3D information could not be considered. In this paper, we propose a method to extract the median plane of the face by analyzing bilateral symmetry using a 3D point cloud of the front of the face. By extracting the median plane, we believe that not only can surgical assistance for doctors become possible, but it also becomes a clue for the development of simulation software, which is the end goal. (An illustrative code sketch follows this entry.)

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85060498535&origin=inward
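
    A hedged sketch in Python (NumPy/SciPy) of the symmetry idea: the point cloud is mirrored across a trial plane (x = 0 here), a small nearest-neighbour + SVD ICP aligns the mirrored copy back onto the original, and the fixed plane of the combined reflection-plus-rigid transform approximates the median plane. This is a simplified stand-in, not the paper's pipeline; the point cloud is random.

    ```python
    # Hedged sketch: mirror the cloud, align it back with a minimal ICP.
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iters=20):
        """Rigid ICP: returns rotation R and translation t aligning source to target."""
        R, t = np.eye(3), np.zeros(3)
        src = source.copy()
        for _ in range(iters):
            idx = cKDTree(target).query(src)[1]
            tgt = target[idx]
            mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
            U, _, Vt = np.linalg.svd((src - mu_s).T @ (tgt - mu_t))
            R_step = Vt.T @ U.T
            if np.linalg.det(R_step) < 0:                     # avoid reflections in the rigid step
                Vt[-1] *= -1
                R_step = Vt.T @ U.T
            t_step = mu_t - R_step @ mu_s
            src = src @ R_step.T + t_step
            R, t = R_step @ R, R_step @ t + t_step
        return R, t

    rng = np.random.default_rng(0)
    face = rng.normal(size=(500, 3))                          # stand-in facial point cloud
    mirrored = face * np.array([-1.0, 1.0, 1.0])              # reflect across x = 0
    R, t = icp(mirrored, face)
    # The composition "reflect, then (R, t)" maps the face onto itself;
    # its fixed plane approximates the median plane of the face.
    print(np.round(R, 2), np.round(t, 2))
    ```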

  • Detection of phalange region based on U-Net 査読有り

    Hatano K., Murakami S., Lu H., Tan J.K., Kim H., Aoki T.

    International Conference on Control, Automation and Systems   1338 - 1342   2018年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   PyeongChang   2018年10月17日  -  2018年10月20日

  • Swallowing motion analyzing from dental MR imaging based on AKAZE and particle filter algorithm 査読有り

    Suetani K., Lu H., Tan J.K., Kim H., Tanaka T., Kitou S., Morimoto Y.

    International Conference on Control, Automation and Systems   1343 - 1346   2018年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   PyeongChang   2018年10月17日  -  2018年10月20日

  • Extraction of median plane from facial 3D point cloud based on symmetry analysis using ICP algorithm 査読有り

    Yamada S., Lu H., Tan J.K., Kim H., Kimura N., Okawachi T., Nozoe E., Nakamura N.

    International Conference on Control, Automation and Systems   1347 - 1350   2018年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   PyeongChang   2018年10月17日  -  2018年10月20日

  • Extraction of GGO Candidate Regions on Thoracic CT Images using SuperVoxel-Based Graph Cuts for Healthcare Systems 査読有り

    H. Lu, M. Kondo, Y. Li, J. K. Tan, H. Kim, S. Murakami, T. Aoki, S. Kido

    Mobile Networks and Applications   23 ( 6 )   1169 - 1679   2018年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    In this paper, we propose a method to reduce artifacts on temporal difference images by improving the conventional method using a non-rigid registration method for ground glass opacification (GGO), which is light in concentration and difficult to detect early. In this method, global matching, local matching, and 3D elastic matching are performed on the current image and past image, and an initial temporal difference image is generated. After that, we use an Iris filter, which is the gradient vector concentration degree filter, to determine the initial GGO candidate regions and perform segmentation using SuperVoxel and Graph Cuts in which a superpixel is extended to three dimensions for each region of interest. For each extracted region, a support vector machine (SVM) is used to reduce the over-segmentation. Finally, in the method that greatly reduces artifacts other than the remaining GGO candidate regions, Voxel Matching is applied to generate the final temporal difference image, emphasizing the GGO regions while reducing the artifact. The resulting ratio of artifacts to lung volume is 0.101 with an FWHM of 28.3, which is an improvement over the conventional method and shows the proposed method’s effectiveness.

    DOI: 10.1007/s11036-018-1111-2

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85053445997&origin=inward

  • Segmentation of Spinal Canal Region in CT Images using 3D Region Growing Technique 査読有り

    Fu G., Lu H., Tan J.K., Kim H., Zhu X., Lu J.

    2018 International Conference on Information and Communication Technology Robotics, ICT-ROBOT 2018   2018年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    A tumor of the spinal cord (Spinal Cord Neoplasms, SCN) is considered one of the life-threatening diseases. Early detection of SCN plays an important role in the management of the lesions. To analyze the treatment, it is necessary to segment the spinal canal based on an accurate three-dimensional image processing technique. This paper presents a segmentation algorithm based on 3D region growing for extracting the spinal canal from CT images with high accuracy. Intersection over Union (IoU) is used to compare the segmentation results with manual segmentation results. In the experiment, the proposed method was tested on 3373 CT slices of 10 patients. The proposed method has an average accuracy of 0.7732 and a variance of 0.0061. Satisfactory results have been achieved rapidly, which demonstrates the effectiveness and superiority of the proposed method. (An illustrative code sketch follows this entry.)

    DOI: 10.1109/ICT-ROBOT.2018.8549913

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85060055563&origin=inward
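
    A hedged sketch in Python/NumPy of 3-D region growing: starting from a seed voxel, 6-connected neighbours are added while their intensity stays within a tolerance of the seed value. The synthetic volume, seed and tolerance are illustrative; the paper defines its own growing criteria.

    ```python
    # Hedged sketch: breadth-first 3-D region growing with a 6-connected neighbourhood.
    import numpy as np
    from collections import deque

    def region_growing_3d(volume, seed, tol=60):
        grown = np.zeros(volume.shape, dtype=bool)
        ref = float(volume[seed])
        queue = deque([seed])
        grown[seed] = True
        offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not grown[n]:
                    if abs(float(volume[n]) - ref) <= tol:
                        grown[n] = True
                        queue.append(n)
        return grown

    # usage on a synthetic CT-like volume with a bright tubular structure
    vol = np.zeros((40, 64, 64), dtype=np.int16)
    vol[:, 28:36, 28:36] = 300                                 # stand-in canal
    mask = region_growing_3d(vol, seed=(20, 32, 32))
    print(mask.sum())
    ```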

  • Object Detection on Video Images Based on R-FCN and GrowCut Algorithm 査読有り

    Mouri K., Lu H., Tan J.K., Kim H.

    2018 International Conference on Information and Communication Technology Robotics, ICT-ROBOT 2018   2018年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Owing to the declining birthrate and the aging of society, there is concern about a labor shortage in Japan. There is a movement to compensate for the labor shortage by automating factories with robots. Automation techniques are widely promoted in the logistics industry, while there are few studies on object picking. To address this issue, we develop an image detection scheme for robotic picking from a video image. It is difficult to recognize and grasp different types of objects in the robot vision field. Therefore, in the proposed method, object detection and object recognition are performed using Region-based Fully Convolutional Networks, a type of object detection using deep learning. After detecting the objects individually, the final target object can be selected by applying the GrowCut algorithm. As a result, we achieve an average precision of 0.6773 and an Intersection over Union of 0.6395 for the segmentation result, respectively.

    DOI: 10.1109/ICT-ROBOT.2018.8549879

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85060010049&origin=inward

  • Segmentation of Spinal Canal Region in CT Images using 3D Region Growing Technique

    Fu G., Lu H., Tan J., Kim H., Zhu X., Lu J.

    2018 International Conference on Information and Communication Technology Robotics   1 - 4   2018年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Busan   2018年09月06日  -  2018年09月08日

  • Object Detection on Video Images Based on R-FCN and GrowCut Algorithm

    Mouri K., Lu H., Tan J.K., Kim H.

    2018 International Conference on Information and Communication Technology Robotics   1 - 4   2018年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Busan   2018年09月06日  -  2018年09月08日

  • Feature Transform Optimization for Pedestrian Classification 査読有り

    Nakashima Y., Tan J.

    2018 57th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2018   804 - 809   2018年10月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    In this paper, we propose an FTOP (Feature Transform Optimization Problem) and its solution. We propose a method to optimize both the parameters and the processing order of feature transforms simultaneously, not limited to the convolution and pooling included in a CNN (Convolutional Neural Network). In order to realize the optimization, we formulate it as a combinatorial optimization problem and solve it by meta-heuristics. The effectiveness of the proposed method is shown by applying it to pedestrian classification on a benchmark data set.

    DOI: 10.23919/SICE.2018.8492633

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85056737437&origin=inward

  • Detection of circulating tumor cells in fluorescence microscopy Image based on ANN classifier 査読有り

    辻,陸 慧敏,タン ジュークイ,金 亨燮,田中

    Journal of Mobile Networks and Application   2018年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Registration of phalange region from CR Images based on genetic algorithm 査読有り

    Kawagoe K., Murakami S., Lu H., Tan J.K., Kim H., Aoki T.

    Proceedings of the 18th International Conference on Control, Automation and Systems   1464 - 1467   2018年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   PyeongChang   2018年10月17日  -  2018年10月20日

  • Feature Transform Optimization for Pedestrian Classification 査読有り

    中島 祐樹,タン ジュークイ

    Proc. of SICE Annual Conference 2018   804 - 809   2018年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    日本   奈良   2018年09月11日  -  2018年09月14日

    Kyutacar

  • Activity representation by SURF-based templates 査読有り

    Ahad M., Tan J., Kim H., Ishikawa S.

    Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization   6 ( 5 )   573 - 583   2018年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    This paper presents a method to recognise actions that are overlapping and multi-dimensional. A spatio-temporal representation is built on local interest points to compute global features. A motion history image (MHI) is computed while handling the motion-overwriting problem of the MHI. The main contribution of this paper is that it demonstrates a higher discriminative ability for various complex actions when compared to other MHI-based approaches. It selects local interest feature points to capture motion information using Speeded-Up Robust Features (SURF). These key interest points are exploited to compute gradient-based optical flow into four channels. RANSAC is exploited to remove outliers. It incorporates a frame-subtracted accumulated image so that points that are not required can be masked out. Afterwards, feature vectors are computed based on moments. Actions are recognised by employing a nearest neighbour classifier and a leave-one-out cross-validation partitioning scheme. The proposed method provides satisfactory recognition rates over several other approaches for some challenging actions in outdoor scenes.

    DOI: 10.1080/21681163.2017.1298472

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85016247850&origin=inward

  • Automatic identification of Bone Erosions in rheumatoid Arthritis from Hand Radiographs Based on Deep Convolution Neural Network 査読有り

    Seiichi Murakami, Kazuhiro Hatano, Joo Kooi Tan, Hyoungseop Kim, Takatoshi Aoki

    Multimedia Tools and Applications   77 ( 9 )   10921 - 10937   2018年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Environment recognition for electric wheelchair based on YOLOv2 査読有り

    Sakai Y., Lu H., Tan J.K., Kim H.

    ACM International Conference Proceeding Series   112 - 117   2018年08月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    At present, the aging population is growing in Japan, and along with that, the need for welfare equipment is increasing. The electric wheelchair, a convenient means of transportation, is spreading rapidly. However, many accidents have occurred with electric wheelchairs, and the dangers of driving them have been pointed out. Therefore, it is necessary to address accident factors, reduce accidents, and improve the convenience of electric wheelchairs through automation. Environmental recognition is the key technology for developing an autonomous electric wheelchair. Environmental recognition includes self-position estimation; recognition of sidewalks, crosswalks, and traffic lights; moving-object prediction; and so on. In order to solve these problems, this paper describes a system for the detection of sidewalks, crosswalks and traffic lights. We develop object recognition methods using a modified YOLOv2, one of the object detection algorithms applying convolutional neural networks (CNN). We detect objects with YOLOv2 and perform post-processing such as unnecessary bounding box deletion and interpolation. The experimental results demonstrate that the area under the curve (AUC) of the detection rate is 0.620. (An illustrative code sketch of the interpolation step follows this entry.)

    DOI: 10.1145/3278229.3278231

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85058546828&origin=inward
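
    A hedged sketch in Python/NumPy of the interpolation step mentioned above: when the detector misses an object for a few frames, its bounding box is linearly interpolated between the surrounding detections. The frame-indexed box representation (one [x, y, w, h] array or None per frame) is an assumption for illustration.

    ```python
    # Hedged sketch: linear interpolation of bounding boxes over missed frames.
    import numpy as np

    def interpolate_boxes(track):
        """track: list where each element is a length-4 box array or None (missed frame)."""
        boxes = list(track)
        known = [i for i, b in enumerate(boxes) if b is not None]
        for i, b in enumerate(boxes):
            if b is None and known and known[0] < i < known[-1]:
                lo = max(k for k in known if k < i)
                hi = min(k for k in known if k > i)
                w = (i - lo) / (hi - lo)
                boxes[i] = (1 - w) * np.asarray(boxes[lo]) + w * np.asarray(boxes[hi])
        return boxes

    track = [np.array([10, 20, 30, 40]), None, None, np.array([16, 26, 30, 40])]
    print([np.round(b, 1) if b is not None else None for b in interpolate_boxes(track)])
    ```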

  • Environment recognition for electric wheelchair based on YOLOv2 査読有り

    Y. Sakai, H.,Lu, Joo Kooi Tan, Hyoungseop Kim

    The 3rd International Conference on Biomedical Signal and Image Processing, ICBIP 2018   112 - 117   2018年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2018年08月22日  -  2018年08月24日

  • 物体識別における特徴量変換構造の最適化 査読有り

    中島祐樹,タンジュークイ

    バイオメディカル・ファジィ・システム学会誌   20 ( 1 )   1 - 13   2018年05月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • Automatic identification of bone erosions in rheumatoid arthritis from hand radiographs based on deep convolutional neural network 査読有り

    Murakami S., Hatano K., Tan J., Kim H., Aoki T.

    Multimedia Tools and Applications   77 ( 9 )   10921 - 10937   2018年05月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Although radiographic assessment of joint damage is essential in characterizing disease progression and prognosis in patients with rheumatoid arthritis (RA), it is often difficult even for trained radiologists to find radiographic changes on hand and foot radiographs because lesion changes are often subtle. This paper proposes a novel quantitative method for automatically detecting bone erosion on hand radiographs to assist radiologists. First, the proposed method performs with the crude segmentation of phalanges regions from hand radiograph and extracts the detailed phalanges regions by the multiscale gradient vector flow (MSGVF) Snakes method. Subsequently, the region of interest (ROI; 40 × 40 pixels) is automatically set on the contour line of the segmented phalanges by the MSVGF algorithm. Finally, these selected ROIs are identified by the presence or absence of bone erosion using a deep convolutional neural network classifier. This proposed method is applied to the hand radiographs of 30 cases with RA. The true-positive rate and the false-positive rate of the proposed method are 80.5% and 0.84%, respectively. The number of false-positive ROIs is 3.3 per case. We believe that the proposed method is useful for supporting radiologists in imaging diagnosis of RA.

    DOI: 10.1007/s11042-017-5449-4

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85056120147&origin=inward

  • Detection of a specific moving object from head-mounted camera images 査読有り

    Ishitobi K., Tan J.K., Kim H., Ishikawa S.

    SII 2017 - 2017 IEEE/SICE International Symposium on System Integration   2018-January   817 - 822   2018年02月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    In this paper, a method is proposed for detecting and tracking a specific moving object (e.g., a bus) on the road from images of a camera attached to the head of a user, aiming at developing a system to support daily lives of visually impaired people. The proposed method traces feature points on the images, extracts a moving object region, and detects a bus by applying Haar-like feature and random trees to the region. The effectiveness of the proposed method is shown experimentally.

    DOI: 10.1109/SII.2017.8279323

    Kyutacar

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85050871522&origin=inward

  • Development of a Support System for Human Communication Based on the Visualization of Words 査読有り

    Kazuya Motoyama, Tan Joo Kooi, Ishikawa Seiji

    The 23rd International Symposium on Artificial Life and Robotics 2018   329 - 332   2018年01月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    日本   大分   2018年01月18日  -  2018年01月20日

  • Bone Erosions Detection on Hand CR Images Based on DCNN 査読有り

    Seiichi Murakami, Kazuhiro Hatano, Huimin Lu, Joo Kooi Tan, Hyoungseop Kim, Takatoshi Aoki

    The 23rd International Symposium on Artificial Life and Robotics 2018/The Third International Sysposium onBioComplexity(ISBC)   357 - 360   2018年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    日本   大分   2018年01月18日  -  2018年01月20日

  • A unified action recognition framework 査読有り

    Amine Ilidrissi, Tan Joo Kooi, Kim Hyoungseop,Ishikawa Seiji

    The 23rd International Symposium on Artificial Life and Robotics 2018   57 - 62   2018年01月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    日本   大分   2018年01月18日  -  2018年01月20日

  • DCNNによる指骨CR画像からの骨粗しょう症の自動識別

    畠野 和裕, 村上 誠一, 植村 知規, 陸 慧敏, タン ジュークイ, 金 亨燮, 青木 隆敏

    Medical Imaging Technology   36 ( 2 )   90 - 95   2018年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Osteoporosis is one of the major bone diseases. Image diagnosis is effective for osteoporosis, but many images exhibit similarly low bone mass, which raises problems of objectivity and reproducibility in image diagnosis. In this paper, we propose a method for automatically identifying osteoporosis from phalangeal computed radiography (CR) images. The proposed method builds a classifier using a deep convolutional neural network (DCNN) to identify the presence or absence of osteoporosis. For DCNN training and classification, three types of images are generated from a CR image, ROIs are extracted from inside each phalange region, and pseudo-color images created by assigning the three ROI types to the R, G, and B channels are used. In experiments, the proposed method was applied to 101 cases and achieved a true positive rate (TPR) of 75.5[%] and a false positive rate (FPR) of 13.9[%].

    DOI: 10.11409/mit.36.90

    CiNii Article

    その他リンク: https://ci.nii.ac.jp/naid/130006588793

  • Describing a Self-occlusive Motion by a Reverse Descriptive Motion History Image

    TAN Joo Kooi, OKAE Sayaka, YAMASHITA Youtaro, ONO Yuta, ISHIKAWA Seiji

    バイオメディカル・ファジィ・システム学会大会講演論文集 ( バイオメディカル・ファジィ・システム学会 )   31 ( 0 )   25 - 26   2018年01月

     詳細を見る

    担当区分:筆頭著者   記述言語:日本語   掲載種別:研究論文(研究会,シンポジウム資料等)

    This paper proposes a new method of describing a human motion based on Motion History Image (MHI) representation. MHI representation is a well-known way of describing a motion by a single image, but it cannot deal with a self-occlusive motion in a proper way. This paper focuses on the motion in depth direction and proposes Reverse Descriptive MHI (RD-MHI). The RD-MHI and the original MHI are both employed for motion recognition, since the former well describes the motion toward an observing camera, whereas the latter away from the camera. Experimental results support the combination in human motion recognition.

    DOI: 10.24466/pacbfsa.31.0_25

    CiNii Article

    その他リンク: https://ci.nii.ac.jp/naid/130007979978

  • A human care system by a mobile robot employing cooperative objects recognition 査読有り

    Takaki S., Tan J.K., Ishikawa S.

    IEEE Region 10 Annual International Conference, Proceedings/TENCON   2017-December   1148 - 1152   2017年12月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper proposes a system for caring for a physically impaired person by a mobile robot employing cooperative object recognition. It is a well-known fact that object recognition is not an easy task for a robot, particularly in a real environment; it is, on the contrary, easy for a human. In the proposed system, a mobile robot goes to a spot where a user wishes to go in place of him/her and transfers images containing various objects in the spot. Having seen the images, the user chooses one of those objects and sends a request to the robot to get it. The robot performs the request by taking the object and bringing it back to the user. As an example, the robot is asked to go to a store, buy goods which the user wants, and bring them back to the user. In this process, the goods in the captured images are segmented by the robot, they are recognized by the user, and one of them is chosen. In this way, cooperative object recognition is realized to achieve a certain purpose. A laboratory experiment was done with promising results. The idea is different from remote control of a mobile robot, because the robot is autonomous and the user takes the role of only recognizing objects and sending his/her request to the robot.

    DOI: 10.1109/TENCON.2017.8228030

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85044202825&origin=inward

  • Automatic identification of circulating tumor cells in fluorescence microscopy images based on AdaBoost 査読有り

    Tsuji K., Lu H., Tan J., Kim H., Yoneda K., Tanaka F.

    International Conference on Control, Automation and Systems   2017-October   1449 - 1454   2017年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Circulating tumor cells (CTCs) are a useful biomarker for cancer metastasis. The blood from a cancer patient is analyzed with a fluorescence microscope. Each case produces a large number of images, which usually contain many cell regions. Thus, analyzing the images is hard work for pathologists, and misdiagnosis may happen. In this paper, we develop an automatic CTC identification method for fluorescence microscopy images. The proposed method consists of three steps. First, we extract cell regions in the images using filtering methods. Second, we compute features of each CTC candidate region. Finally, we identify the CTCs using the AdaBoost algorithm, and we analyze the features to find which ones are effective for characterizing CTCs and normal cells. We apply the proposed method to 5040 microscopy images and evaluate its effectiveness using leave-one-out cross validation. We achieve a true positive rate of 97.30 [%] and a false positive rate of 12.82 [%].

    DOI: 10.23919/ICCAS.2017.8204219

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85044436784&origin=inward

  • Image registration of vertebral region from CT images based on salient region feature 査読有り

    Sato S., Lu H., Tan J., Kim H., Murakami S., Ueno M., Terasawa T., Aoki T.

    International Conference on Control, Automation and Systems   2017-October   1597 - 1600   2017年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    In recent years, the development of computer-aided diagnosis (CAD) systems to support radiologists has been attracting attention in the medical research field. The temporal subtraction technique, one of the CAD techniques, generates images emphasizing temporal changes in lesions by performing a differential operation between current and previous images of the same subject. In this paper, we propose an image registration method for aligning the current and previous images in order to generate temporal subtraction images from CT images and enhance bone metastasis regions. The proposed registration method is composed of three main steps: i) segmentation of the region of interest (ROI) using graph cuts, ii) global image matching to select pairs from previous and current images, and iii) final image matching based on salient region features. We applied the proposed method to synthesized images and obtained satisfactory registration results. The rotated synthesized image gave a TP of 100.0[%] and an FP of 12.16[%]. The synthesized image obtained by applying a Gaussian filter gave a TP of 70.40[%] and an FP of 0.00[%]. The synthesized image obtained by adding an artificial pseudo-lesion region gave a TP of 99.45[%] and an FP of 17.89[%]. The synthesized image obtained by adding 5[%] random noise gave a TP of 83.05[%] and an FP of 16.95[%].

    DOI: 10.23919/ICCAS.2017.8204242

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85044465807&origin=inward

  • Image analysis of cleft lip from moire image based on symmetry analysis 査読有り

    Yamada S., Lu H., Tan J., Kim H., Kimura N., Okawachi T., Nozoe E., Nakamura N.

    International Conference on Control, Automation and Systems   2017-October   1586 - 1589   2017年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    About 4% of babies in Japan suffer from congenital anomalies. Cleft lip is the most common disorder among external malformations, occurring at a rate of 1 in 500 people. The goal of treatment is to recover symmetric and functional lip and nose forms. However, in the case of a unilateral cleft lip, where the lips and nose themselves are shifted from the midline of the face, it is difficult to set the midline which serves as the symmetry axis for evaluating the degree of symmetry of the face. In this paper, we propose an image processing method for extracting the midline using a zebra image and a shadow image, which are two-dimensional images created based on three-dimensional analysis. We also evaluate the degree of asymmetry of the face by using the midline as the axis. The proposed method was applied to 25 cases, and the results indicated that it is useful.

    DOI: 10.23919/ICCAS.2017.8204239

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85044463365&origin=inward

  • Extraction of spinal candidate region from a dental MR imaging 査読有り

    Suetani K., Lu H., Tan J., Kim H., Tanaka T., Kitou S., Morimoto Y.

    International Conference on Control, Automation and Systems   2017-October   1601 - 1604   2017年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    In recent years, dysphagia has been a problem among elderly people. Therefore, it is necessary to accurately evaluate swallowing function in order to prevent swallowing disorders beforehand or to detect them early; the evaluation of swallowing function using MRI is considered useful, and the demand for the development of a CAD system using MRI is increasing. In order to accurately analyze the swallowing motion, automatic extraction of the region of interest by an image analysis method is necessary. In this paper, as preprocessing for the automatic extraction of the esophageal region in dental MR images, we develop an automatic extraction method for the spinal candidate region. We first extract the posterior region of the spine and then extract the intervertebral disc region. We then detect the anterior region of the spine and finally extract the region between the anterior and posterior regions of the spine as the spinal region. The proposed method was applied to 5 cases of dental MR images, and results of TP 91.2 [%] and FP 8.8 [%] were obtained.

    DOI: 10.23919/ICCAS.2017.8204243

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85044477204&origin=inward

  • Extraction of GGO regions from chest CT images using deep learning 査読有り

    Hirayama K., Miyake N., Lu H., Tan J., Kim H., Tachibana R., Hirano Y., Kido S.

    International Conference on Control, Automation and Systems   2017-October   351 - 355   2017年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Lung cancer is the leading cause of cancer death in the world, so early detection and early treatment are regarded as important. In particular, ground glass opacity (GGO) is a shadow regarded as a pre-cancerous lesion, but it is difficult for radiologists to detect because of its haziness and complicated shape. Therefore, in recent years, computer-aided diagnosis (CAD) systems have been developed for the purpose of improving detection accuracy for early detection and reducing the burden on radiologists. In this paper, we extract GGOs using a Deep Convolutional Neural Network (DCNN) based on emphasized images. Before detecting a GGO region, we apply preprocessing such as isotropic voxel conversion to the original images, and extraction of the lung area. Next, we remove the vessel and bronchial regions with a 3D line filter based on the Hessian matrix, and extract the initial candidate regions using density gradient, volume and sphericity. Subsequently, we segment the candidate regions, extract features, and reduce false positive shadows. Finally, we create emphasized images and identify GGOs with the DCNN using those images. As a result of applying the proposed method to 31 cases of the Lung Image Database Consortium (LIDC), we obtained a true positive rate (TP) of 86.05 [%] and a false positive number (FP) of 4.81 [/case]. (An illustrative code sketch of the vessel-removal step follows this entry.)

    DOI: 10.23919/ICCAS.2017.8204464

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85044442253&origin=inward
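
    The 3D line filter above is described only at a high level in the entry. The following is a minimal sketch, assuming a NumPy/SciPy environment, of a Hessian-eigenvalue "lineness" score for suppressing bright tubular (vessel-like) structures in a CT volume; the function name and parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_lineness(volume, sigma=1.5):
    """Return a per-voxel line-likeness score for a 3-D array."""
    v = volume.astype(np.float64)
    # Second derivatives via Gaussian derivative filters (one per axis pair).
    d = {}
    for i in range(3):
        for j in range(i, 3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            d[(i, j)] = gaussian_filter(v, sigma=sigma, order=order)
    # Assemble the Hessian as an (..., 3, 3) array and take eigenvalues in batch.
    H = np.stack([np.stack([d[(min(i, j), max(i, j))] for j in range(3)], axis=-1)
                  for i in range(3)], axis=-2)
    lam = np.sort(np.linalg.eigvalsh(H), axis=-1)  # ascending eigenvalues
    l1, l2 = lam[..., 0], lam[..., 1]
    # Bright tubular structures: the two smallest eigenvalues are strongly negative.
    return np.where((l1 < 0) & (l2 < 0), np.abs(l2), 0.0)
```

    Voxels with a high score can then be masked out before the GGO candidate extraction step.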

  • Environment recognition for navigation of autonomous wheelchair from a video image 査読有り

    Nakayama Y., Lu H., Tan J., Kim H.

    International Conference on Control, Automation and Systems   2017-October   1439 - 1443   2017年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2017 Institute of Control, Robotics and Systems - ICROS. The Japanese population has aged rapidly, and the number of elderly persons with reduced physical ability has increased. The development of medical and healthcare devices is therefore expected. Wheelchair users require care support in most cases, so the development of an autonomous wheelchair is meaningful: it can be expected to improve convenience and to reduce the burden on caregivers. An autonomous wheelchair requires several techniques, and our research aims to develop a navigation system based on image processing. Specifically, we assume that the system instructs an appropriate direction to head towards the destination when the wheelchair user comes to a crossing. Meanwhile, deep learning, a kind of artificial neural network, has attracted attention in the field of machine learning in recent years. This paper proposes a methodology for supporting autonomous driving by use of a classifier trained on video images with deep learning. We also apply visual odometry to generate training data.

    DOI: 10.23919/ICCAS.2017.8204217

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85044473264&origin=inward

  • Detection of lung nodules on temporal subtraction images Using 3D sparse coding 査読有り

    Tanaka T., Miyake N., Lu H., Tan J., Kim H., Murakami S., Aoki T., Hirano Y., Kido S.

    International Conference on Control, Automation and Systems   2017-October   1455 - 1457   2017年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2017 Institute of Control, Robotics and Systems - ICROS. In recent years, the death rate caused by lung cancer has been increasing. To detect lung cancer, multi detector-row computed tomography (MDCT) images are used in visual screening. Lung cancer can be detected relatively easily on chest MDCT images; however, the enormous number of images places a heavy burden on radiologists. Research and development of computer aided diagnosis (CAD) systems have therefore assisted the diagnosis. As one of the CAD technologies, the temporal subtraction technique can emphasize interval changes on CT images by subtracting a previous CT image from the current CT image of the same patient. On the other hand, pattern recognition using image reconstruction by sparse coding has attracted attention. This technique mathematically models the information processing of the human primary visual cortex and represents images by a linear combination of a small number of basis vectors. In this paper, candidate nodules under 20 [mm] were segmented from temporal subtraction images based on 3D sparse coding, a three-dimensional expansion of sparse coding. We then classified the final candidate nodules with a support vector machine (SVM) based on the coefficient matrix obtained by the 3D sparse coding. We applied the proposed method to 31 cases of chest MDCT images, each containing at least one nodule, and achieved a true positive rate (TPR) of 70.2 [%] and a false positive rate (FP) of 34.7 [/scan]. A small sketch of the sparse coding and SVM step follows this entry.

    DOI: 10.23919/ICCAS.2017.8204220

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85044482344&origin=inward
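
    A minimal sketch of the coding-plus-classification idea, assuming scikit-learn and pre-extracted, flattened 3-D candidate patches; the shapes and hyperparameters are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import SVC

def fit_sparse_svm(patches, labels, n_atoms=64, n_nonzero=8):
    """patches: (n_samples, px*py*pz) flattened candidate patches from the
    temporal subtraction volume; labels: 1 = true nodule, 0 = false positive."""
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_nonzero,
                                       random_state=0)
    codes = dico.fit(patches).transform(patches)  # sparse coefficient matrix
    clf = SVC(kernel="rbf").fit(codes, labels)
    return dico, clf

def classify(dico, clf, patches):
    # New candidates are encoded with the learned dictionary, then classified.
    return clf.predict(dico.transform(patches))
```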

  • Automatic liver segmentation from multiphase CT images by using level set method 査読有り

    Saito K., Lu H., Tan J., Kim H., Yamamoto A., Kido S., Tanabe M.

    International Conference on Control, Automation and Systems   2017-October   1590 - 1592   2017年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2017 Institute of Control, Robotics and Systems - ICROS. Segmentation of the liver from multi-phase CT images is one of the essential technologies for computer aided diagnosis. The contrast medium gives multi-phase CT images different intensity features, which enables tumor detection, but it also makes segmenting the liver region from multi-phase CT images a challenging problem. Many approaches have been proposed; however, they depend on other phases or on registration. To address this, we propose an anatomy feature-based method that is largely independent for each phase. The method uses the level set method for the final segmentation. Because the accuracy of a level set segmentation relies on the initial contour, we first estimate an initial liver region from anatomical features, and then introduce a contour constraint using rib information to improve segmentation accuracy. Our segmentation was evaluated on 5 multi-phase CT studies, each with 4 phases, and the experimental results show that the proposed method achieves good accuracy for each phase. A minimal level set sketch follows this entry.

    DOI: 10.23919/ICCAS.2017.8204240

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85044443256&origin=inward
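
    A minimal sketch, assuming scikit-image, of refining a roughly initialized liver mask with a morphological Chan-Vese level set; this is a standard library routine standing in for, not necessarily matching, the level set formulation used in the paper.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def refine_liver(ct_slice, initial_mask, iterations=100):
    """ct_slice: 2-D CT slice (float); initial_mask: rough binary liver region
    derived from anatomical features such as the position of the ribs."""
    result = morphological_chan_vese(ct_slice.astype(float), iterations,
                                     init_level_set=initial_mask.astype(np.int8),
                                     smoothing=2)
    return result.astype(bool)
```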

  • Classification of osteoporosis from phalanges CR images based on DCNN 査読有り

    Hatano K., Murakami S., Lu H., Kooi Tan J., Kim H., Aoki T.

    International Conference on Control, Automation and Systems   2017-October   1593 - 1596   2017年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Osteoporosis is a disease of the bone. Visual screening using Computed Radiography (CR) images is an effective method for detecting osteoporosis; however, many similar diseases also exhibit a state of low bone mass. In this paper, we propose an automatic identification method for osteoporosis from phalange CR images. In the proposed method, we implement a classifier based on a Deep Convolutional Neural Network (DCNN) and identify unknown CR images as normal or abnormal. For training and evaluating the DCNN, we use pseudo-color images. In the experiment, we applied the proposed method to 101 cases, and a TPR of 64.7 [%] and an FPR of 6.51 [%] were obtained.

    DOI: 10.23919/ICCAS.2017.8204241

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85044473373&origin=inward

  • Detection of abnormal candidate regions on temporal subtraction images based on DCNN 査読有り

    Nagao M., Miyake N., Yoshino Y., Lu H., Kooi Tan J., Kim H., Murakami S., Aoki T., Hirano Y., Kido S.

    International Conference on Control, Automation and Systems   2017-October   1444 - 1448   2017年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Cancer is a leading cause of death both in Japan and worldwide, and detection of cancer regions in CT images is the most important task for early detection. Recently, visual screening based on CT images has become a useful tool for cancer detection. However, because of the large number of images and the complexity of the image processing algorithms, image processing techniques with high screening quality are still required. To overcome this problem, several computer aided diagnosis (CAD) algorithms have been proposed. In this paper, we design and develop a framework combining machine learning based on deep convolutional neural networks (DCNN) and a temporal subtraction technique based on a non-rigid image registration algorithm. Our classification method consists of three main steps: i) pre-processing for image segmentation, ii) image matching for registration, and iii) classification of abnormal regions based on machine learning. We applied the proposed technique to 25 thoracic MDCT sets and obtained a true positive rate of 92.31 [%] and a false positive rate of 6.32 [/case]. A minimal sketch of the temporal subtraction step is given after this entry.

    DOI: 10.23919/ICCAS.2017.8204218

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85044482182&origin=inward
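
    A minimal sketch of temporal subtraction, assuming scikit-image/SciPy and translation-only alignment (the paper uses non-rigid registration): the previous slice is aligned to the current one and subtracted so that interval changes stand out.

```python
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def temporal_subtraction(current, previous):
    """current, previous: 2-D slices of the same patient at different times."""
    offset, _, _ = phase_cross_correlation(current, previous)
    warped_previous = shift(previous, offset, mode="nearest")
    return current.astype(float) - warped_previous.astype(float)
```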

  • Detection of a Specific Moving Object from Head-mounted Camera Images 査読有り 国際誌

    Ishitobi Katsuma, Tan Joo Kooi, Kim Hyoungseop, Ishikawa Seiji

    IEEE/SICE International Symposium on System Integration( SII 2017)   817 - 822   2017年12月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Taiwan   Taipei   2017年12月11日  -  2017年12月14日

  • Human motion description and recognition under arbitrary motion direction 査読有り

    Yamashita Y., Tan J., Ishikawa S.

    2017 56th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2017   2017-November   110 - 115   2017年11月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2017 The Society of Instrument and Control Engineers - SICE. With the increase in the number of elderly people living alone, nursing care systems for such people have become much more important than ever. This paper focuses on the detection of abnormal motions, such as falls of the elderly, using a camera installed in a room. Although several human motion representation and recognition methods exist, they all deal with motions performed in a plane perpendicular to the camera view, which indicates a weakness of their motion description. This paper proposes a novel method of describing a human motion independently of the motion direction. The method expands the original 2-D Motion History Image to a 3-D version and computes Hu moments in a 3-D way for motion recognition. The method also proposes a set of three 2-D images derived from the 3D-MHI for motion description, which reduces the computational load. The performance of the proposed method is shown experimentally and discussed. A minimal 2-D MHI sketch follows this entry.

    DOI: 10.23919/SICE.2017.8105563

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85044144989&origin=inward
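
    A minimal sketch, assuming OpenCV and grayscale frames, of the 2-D Motion History Image and its Hu-moment description; the paper extends this to 3-D, and the threshold and decay values here are assumptions.

```python
import cv2
import numpy as np

def update_mhi(mhi, prev_gray, cur_gray, tau=255, delta=32, thresh=30):
    """Decay the Motion History Image and stamp the newest motion at full value."""
    motion = (cv2.absdiff(cur_gray, prev_gray) > thresh)
    mhi = np.maximum(mhi.astype(np.int32) - delta, 0).astype(np.uint8)
    mhi[motion] = tau
    return mhi

def mhi_descriptor(mhi):
    """Seven Hu moment invariants of the motion history image."""
    return cv2.HuMoments(cv2.moments(mhi.astype(np.float32))).flatten()
```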

  • A Human Care System by a Mobile Robot Employing Cooperative Objects Recognition 査読有り

    Takaki Soichiro, Tan Joo Kooi, Ishikawa Seiji

    IEEE Region Ten Conference(2017 IEEE TENCON)   1148 - 1152   2017年11月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Malaysia   Penang   2017年11月05日  -  2017年11月08日

  • Automatic classification of lung nodules on MDCT images with the temporal subtraction technique 査読有り

    Yoshino Y., Miyajima T., Lu H., Tan J., Kim H., Murakami S., Aoki T., Tachibana R., Hirano Y., Kido S.

    International Journal of Computer Assisted Radiology and Surgery   12 ( 10 )   1789 - 1798   2017年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    © 2017, CARS. Purpose: A temporal subtraction (TS) image is obtained by subtracting a previous image, warped so that its structures match those of the related current image, from the current image. The TS technique removes normal structures from a medical image and enhances interval changes such as new lesions and changes in existing abnormalities. However, many artifacts remaining on the TS image can be detected as false positives. Method: This paper presents a novel automatic segmentation of lung nodules using the watershed method and multiscale gradient vector flow snakes, together with a detection method using the extracted features and classifiers for small lung nodules (20 mm or less). Result: Using the proposed method, we conducted an experiment on 30 thoracic multiple-detector computed tomography cases including 31 small lung nodules. Conclusion: The experimental results indicate the efficiency of our segmentation method. A minimal watershed sketch follows this entry.

    DOI: 10.1007/s11548-017-1598-1

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85019139416&origin=inward
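
    A minimal sketch, assuming scikit-image/SciPy, of marker-based watershed segmentation on a candidate ROI from the subtraction image; the multiscale GVF snakes and feature-based classifiers of the paper are omitted here.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def segment_candidate(roi):
    """roi: 2-D array around a nodule candidate on the subtraction image."""
    binary = roi > threshold_otsu(roi)
    distance = ndi.distance_transform_edt(binary)
    peaks = peak_local_max(distance, labels=binary, footprint=np.ones((5, 5)))
    markers = np.zeros(roi.shape, dtype=int)
    for label, (r, c) in enumerate(peaks, start=1):
        markers[r, c] = label
    # Flood the inverted distance map from the markers, restricted to the blob.
    return watershed(-distance, markers, mask=binary)
```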

  • Automatic identification of circulating tumor cells in fluorescence microscopy images based on AdaBoost 査読有り

    Tsuji K., Lu H., Tan J.K., Kim H., Yoneda K., Tanaka F.

    Proc. of International Conference on Control,Automation and Systems   1449 - 1454   2017年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

  • Extraction of GGO regions from Chest CT image using deep learning 査読有り

    Hirayama K., Miyake N., Lu H., Tan J.K., Kim H., Tachibana R., Hirano Y., Kido S.

    2017 International Conference on Control, Automation and Systems   351 - 355   2017年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

  • Automatic identification of circulating tumor cells in fluorescence microscopy images based on ANN 査読有り

    Tsuji K., Lu H., Tan J.K., Kim H., Yoneda K., Tanaka F.

    ACM International Conference Proceeding Series   1 - 6   2017年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Circulating tumor cells (CTCs) are a useful biomarker since they may carry information about cancer metastasis. The blood from a cancer patient is analyzed with a fluorescence microscope. A large number of photos is taken for each case, and many cells are contained in the microscopy images, so analyzing them is hard work for pathologists. This work also tends to depend on the individual skill of the pathologist, so misdiagnosis may happen. In this paper, we develop an automatic CTC identification method for fluorescence microscopy images based on an artificial neural network. We applied the proposed method to 5040 microscopy images (6 cases) and evaluated its effectiveness using leave-one-out cross validation, achieving a true positive rate of 98.65 [%] and a false positive rate of 18.24 [%]. A small classifier sketch follows this entry.

    DOI: 10.1145/3133793.3133798

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85052677091&origin=inward
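
    A minimal sketch, assuming scikit-learn and pre-computed per-cell features, of a small fully connected network standing in for the ANN of the paper, evaluated with leave-one-case-out cross validation; the feature definitions and network size are assumptions.

```python
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.neural_network import MLPClassifier

def evaluate_ctc_classifier(features, labels, case_ids):
    """features: (n_cells, n_features), e.g. per-channel intensity and shape
    statistics; labels: 1 = CTC, 0 = other cell; case_ids: case index per cell."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    return cross_val_score(clf, features, labels,
                           groups=case_ids, cv=LeaveOneGroupOut())
```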

  • Automatic identification of circulating tumor cells in fluorescence microscopy images based on ANN 査読有り

    Tsuji K., Lu H., Tan J.K., Kim H., Yoneda K., Tanaka F.

    Proc. of International Conference on Biomedical Signal and Image Processing   1 - 6   2017年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

  • DCNNによるLIDCデータからのすりガラス陰影の検出

    平山一希, 陸 慧敏, タン ジュークイ, 金 亨燮, 橘 理恵, 平野 靖,木戸 尚治

    医用画像情報学会雑誌   34 ( 2 )   70 - 74   2017年06月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • An ego-camera based finger-spelling recognition system 査読有り

    Tan J., Hamada S., Hirakawa M., Kim H., Ishikawa S.

    IEEE Region 10 Annual International Conference, Proceedings/TENCON   358 - 363   2017年02月

     詳細を見る

    担当区分:筆頭著者, 責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2016 IEEE. This paper describes a portable system for finger-spelling recognition, employing a PC and video taken by an ego-camera mounted on the body of the person who is finger-spelling. The system is intended to be a useful tool that an orally impaired person can carry to communicate with anyone at any place. Images of the finger-spelling hand of the user carrying the system are captured by the ego-camera. The hand is extracted from arbitrary backgrounds using a Gaussian mixture model and skin color evaluation. The trimmed and normalized image of the extracted hand is recognized in the feature space defined by applying principal component analysis to learning data containing 45 finger-spelled Japanese Hiragana letters, each with 50 samples, followed by the nearest neighbor method. The on-line performance of the proposed system is shown experimentally. A minimal PCA and nearest-neighbor sketch follows this entry.

    DOI: 10.1109/TENCON.2016.7848021

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85015432805&origin=inward
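
    A minimal sketch, assuming scikit-learn and already trimmed and normalized hand images, of the eigenspace-plus-nearest-neighbor recognition outlined above; data shapes and the number of components are assumptions.

```python
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def train_fingerspelling(train_images, train_letters, n_components=50):
    """train_images: (n_samples, h*w) flattened hand images;
    train_letters: the corresponding Hiragana letter labels."""
    model = make_pipeline(PCA(n_components=n_components),
                          KNeighborsClassifier(n_neighbors=1))
    return model.fit(train_images, train_letters)

# Example: letters = train_fingerspelling(X, y).predict(new_hand_images)
```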

  • Extraction of GGO candidate regions from the LIDC database using deep learning 査読有り

    Hirayama K., Tan J., Kim H.

    International Conference on Control, Automation and Systems   724 - 727   2017年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2016 Institute of Control, Robotics and Systems - ICROS. In recent years, the development of computer-aided diagnosis (CAD) systems has advanced, with the aim of reducing false positives in visual screening and improving the accuracy of lesion detection. Lung cancer is the leading cause of cancer death in the world. In particular, GGO (Ground Glass Opacity), which appears as a pale shadow in pre-cancerous lesions and carcinoma in situ, raises concern about lesions being missed during screening. In this paper, we propose an automatic extraction method for GGO candidate regions from chest CT images. The proposed image processing algorithm consists of four main steps: 1) segmentation of the volume of interest from the chest CT image and removal of blood vessel and bronchus regions using a 3D line filter; 2) initial detection of GGO regions based on density and gradient to select initial GGO candidate regions; 3) identification of GGO candidate regions based on a DCNN (Deep Convolutional Neural Network); and 4) calculation of statistical features for reducing false-positive (FP) shadows with a rule-based method and identification of the final GGO candidate regions with an SVM (Support Vector Machine). The proposed method was applied to the 31 cases of the LIDC (Lung Image Database Consortium) database, and a final identification performance of TP: 93.02 [%] and FP: 128.52 [/case] was obtained.

    DOI: 10.1109/ICCAS.2016.7832398

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85014043688&origin=inward

  • Automatic segmentation of cell candidate regions in microscopy images based on an optimization algorithm 査読有り

    Tsuji K., Tan J., Kim H., Yoneda K., Tanaka F.

    International Conference on Control, Automation and Systems   720 - 723   2017年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2016 Institute of Control, Robotics and Systems - ICROS. Circulating tumor cells (CTCs) are an informative biomarker that assists pathologists in early diagnosis and in evaluating the therapeutic effects for patients with malignant tumors. The blood from a cancer patient is analyzed with a microscope, and a large number of pictures including many cells is generated for each case. Analyzing them is therefore time-consuming work for pathologists, and misdiagnosis may happen since the diagnosis of CTCs tends to depend on the individual skill of the pathologist. In this paper, we propose a method that automatically detects cell candidate regions in microscopy images to make quantitative analysis by computer possible. The proposed method consists of three steps. First, we extract initial cell candidate regions in the microscopy images based on a saliency map. Second, we select non-single cell regions from the initial candidates using an SVM. Third, we separate connected regions into single cell regions based on the branch and bound algorithm. We demonstrated the effectiveness of the proposed method using 540 microscopy images, achieving a true positive rate of 99.04 [%] and a false positive rate of 3.95 [%].

    DOI: 10.1109/ICCAS.2016.7832397

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85014007441&origin=inward

  • On detecting sudden rush-out pedestrians 査読有り

    Ono Yuta, Tan Joo Kooi, Kim Hyoungseop, Ishikawa Seiji

    The Twenty-Second International Symposium on Artificial Life and Robotics 2017   446 - 449   2017年01月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    日本   大分   2017年01月19日  -  2017年01月21日

  • Moving objects detection employing iterative update of the background 査読有り

    Setyawan F.X.A., Tan J.K., Kim H., Ishikawa S.

    Artificial Life and Robotics, Springer   22 ( 2 )   168 - 174   2017年01月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)

    Detection of objects from a video is one of the basic issues in computer vision. Moving objects are particularly important to detect, since they are what one must pay attention to when walking, running, or driving a car. This paper proposes a method of detecting moving objects in a video as foreground objects by inferring the background frame by frame. The proposed method can cope with various changes of a scene, including large dynamic changes, in videos taken by a stationary or moving camera. Experimental results show satisfactory performance of the proposed method. A minimal background-update sketch follows this entry.

    DOI: 10.1007/s10015-016-0347-9

    Kyutacar

    Scopus

    その他リンク: https://link.springer.com/article/10.1007/s10015-016-0347-9
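
    A minimal sketch, assuming NumPy and a stationary camera (the paper also handles moving cameras), of inferring the background frame by frame with a running average and taking large deviations as moving foreground; the thresholds are assumptions.

```python
import numpy as np

def detect_foreground(frames, alpha=0.05, thresh=25):
    """frames: iterable of grayscale frames (uint8); yields foreground masks."""
    background = None
    for frame in frames:
        f = frame.astype(np.float32)
        if background is None:
            background = f.copy()
        mask = (np.abs(f - background) > thresh).astype(np.uint8) * 255
        # Update the background only where no motion was detected.
        still = (mask == 0)
        background[still] = (1 - alpha) * background[still] + alpha * f[still]
        yield mask
```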

  • 細胞領域の論理積を用いた蛍光顕微鏡画像からの血中循環がん細胞の自動検出

    辻 幸喜, 陸 慧敏, タン ジュークイ, 金 亨燮, 米田 和恵, 田中 文啓

    医用画像情報学会雑誌   34 ( 4 )   151 - 155   2017年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Circulating tumor cells (CTCs) can be a useful biomarker. They may carry information about the malignant disease, since they are one of the causes of cancer metastasis. A blood sample from a cancer patient is analyzed with a fluorescence microscope, which takes enlarged images under three types of light (red, green and blue) to which specific materials react respectively. The blood contains many cells but very few CTCs, so analyzing the images is not easy work for pathologists. In this study, we develop a method that detects circulating tumor cells in fluorescence microscopy images automatically. The proposed method has three steps. First, we extract cell regions in the microscopy images by filtering. Second, we separate connected cell regions into single cell regions based on the branch and bound algorithm. Finally, we identify CTCs using a logical conjunction method. We demonstrated the effectiveness of the proposed method using 6 cases (5040 microscopy images) and evaluated the CTC identification performance, achieving a true positive rate of 95.27 [%] and a false positive rate of 6.172 [%]. We also confirmed the effectiveness of the logical conjunction for CTC identification.

    DOI: 10.11318/mii.34.151

    CiNii Article

    その他リンク: https://ci.nii.ac.jp/naid/130006267703

  • DCNNによるLIDCデータからのすりガラス状陰影の検出 査読有り

    平山 一希, 陸 慧敏, タン ジュークイ, 金 亨燮, 橘 理恵, 平野 靖, 木戸 尚治

    医用画像情報学会雑誌   34 ( 2 )   70 - 74   2017年01月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    Lung cancer is one of the most important cancers in the world. Among its findings, Ground Glass Opacity (GGO) appears as a hazy area of increased attenuation in the lung image. In recent years, the development of Computer Aided Diagnosis (CAD) systems has advanced in order to reduce the workload and improve the detection rate of lesions. In this paper, we propose a CAD system to extract GGO from CT images. First, we extract the lung region from the input CT images and remove the vessel and bronchial regions based on a 3D line filter algorithm. After that, we extract initial GGO regions using density and gradient information. Next, we calculate statistical features on the segmented regions and classify GGO regions using a support vector machine (SVM). Finally, we detect the final GGO regions using a deep convolutional neural network (DCNN). The proposed method was tested on 31 cases of CT images from the Lung Image Database Consortium (LIDC). The results demonstrate that the proposed method achieves a true positive rate of 86.05 [%] with a false positive number of 39.03 [/case].

    DOI: 10.11318/mii.34.70

    CiNii Article

    その他リンク: https://ci.nii.ac.jp/naid/130006846732

  • An Ego-camera based finger-spelling recognition system 査読有り

    Tan Joo Kooi, Hamada Satoshi , Hirakawa Manabu, Kim Hyoungseop, Ishikawa Seiji

    IEEE Region Ten Conference(2016 IEEE TENCON)   359 - 364   2016年11月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Singapore   Singapore   2016年11月22日  -  2016年11月25日

  • Identify rheumatoid arthritis and osteoporosis from phalange CR images based on image registration and ANN 査読有り

    Kajihara S., Murakami S., Tan J., Kim H., Aoki T.

    ICIC Express Letters   10 ( 10 )   2435 - 2440   2016年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    © 2016 ICIC International. Rheumatoid arthritis and osteoporosis are two of the major diseases related to the phalanges, and diagnostic imaging is often used to diagnose them. In particular, observing temporal changes in the shape or internal structure of the phalanges on phalange CR images is an effective way to detect these diseases. However, image diagnosis has several problems: the evaluation is generally empirical, and the burden of reading images is heavy. In order to solve these problems, we develop a computer aided diagnosis (CAD) system for automatic diagnosis, which includes segmentation of the knuckles, registration of temporal images, and feature analysis of the phalange regions. In the segmentation part, we use multi scale gradient vector flow (MSGVF) snakes to improve the precision of the extracted results. We have also developed an image registration technique based on the salient region features (SRF) method. Two image features are used to train an artificial neural network (ANN) to identify abnormal knuckle regions.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84991355197&origin=inward

  • A method of detecting salient regions employing global and local saliency 査読有り

    Kuwata I., Kooi Tan J., Kim H., Ishikawa S.

    ICIC Express Letters, Part B: Applications   7 ( 3 )   555 - 561   2016年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    © 2016. This paper proposes a method of detecting prominent regions in an image using global and local saliency measures. Detection of prominent regions is important, because it is expected to improve the precision and processing speed of object recognition through image processing. The proposed method employs two spatial redundancies: global saliency and local saliency. Global saliency is calculated by comparing the value of the pixel of interest with the mode of the values of all pixels, whereas local saliency is calculated using the relation among local pixels. In calculating the local saliency, the proposed method employs a human visual characteristic, complementary color harmony, to detect salient regions. A saliency map is made by integrating the global saliency and the local saliency with appropriate weights. Experimental results show the effectiveness of the proposed method.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84975702036&origin=inward

  • A method of detecting salient regions employing global and local saliency 査読有り

    Kuwata I., Tan J.K., Kim H., Ishikawa S.

    ICIC Express Letters Part B: Application   7 ( 3 )   555 - 561   2016年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Detecting pedestrian and extracting their attributes from self-mounted camera views 査読有り

    Sakai R., Tan J.K., Kim H., Ishikawa S.

    ICIC Express Letters, Part B: Applications   7 ( 2 )   279 - 286   2016年02月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)

    © 2016 ICIC International. There are quite a few visually impaired people in our society. When they go out for a walk, it is difficult for them to perceive moving objects around them, and when they walk on the sidewalk, they must pay strong attention to anything coming closer to them. It is therefore indispensable for them to obtain information on the moving objects around them. This paper proposes a method of extracting moving objects, in particular pedestrians, from self-wearable camera images. In the proposed system, a user wears a camera and a PC and, by performing image analysis of the video taken from the camera, the system acquires information on surrounding pedestrians including their particular characteristics. The information is fed back to the user to realize his/her safe walk. Experimental results are shown and the performance of the proposed method is evaluated. The proposed system is primarily intended for visually impaired people, but it may also be used by any pedestrian who is not paying much attention to his/her surroundings.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84959361584&origin=inward

  • Traffic signs and signals detection employing the my vision system for a visually impaired person 査読有り

    Kumano T., Kooi Tan J., Kim H., Ishikawa S.

    ICIC Express Letters, Part B: Applications   7 ( 2 )   385 - 391   2016年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    © 2016 ICIC International. In recent years, equipment supporting the walking of visually impaired persons has spread to a certain extent: studded paving blocks and sound-type signals are installed on sidewalks and roads in many places. However, problems remain, such as the studded paving blocks being installed only in limited places, or the signal sound being obscured by environmental noise such as roaring traffic or heavy rain. Therefore, a system that supports visually impaired persons more effectively is needed, such as one that finds a signal or a traffic sign automatically and provides its information to the user. To realize such a system, this paper proposes a method of detecting pedestrian signals and crosswalk signs with a camera and a computer, which we call the MY VISION system. In the proposed method, color information is first used to restrict the search for traffic signal and sign candidates, and the HOG feature is then introduced to describe the candidates as feature vectors. Recognition of the signals and signs is performed by applying randomized trees to the candidates, and the color state of the pedestrian signal is also discriminated using color information. Experimental results are shown and the method is evaluated. A minimal color-masking and HOG sketch follows this entry.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84959357672&origin=inward
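
    A minimal sketch, assuming OpenCV, of restricting the search with a color mask and describing each candidate with a HOG feature vector; the HSV range for a green "walk" signal and the HOG window geometry are assumptions, and the randomized-tree classification of the paper is omitted.

```python
import cv2
import numpy as np

def signal_candidates_with_hog(bgr_image, min_area=100):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (40, 80, 80), (90, 255, 255))  # rough green range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
    candidates = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h < min_area:
            continue
        patch = cv2.resize(bgr_image[y:y + h, x:x + w], (64, 64))
        gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
        candidates.append(((x, y, w, h), hog.compute(gray).ravel()))
    return candidates
```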

  • SpatioTemporal LBP and shape feature for human activity representation and recognition 査読有り

    Sk. Md. Masudul Ahsan, Tan J.K., Kim H., Ishikawa S.

    International Journal of Innovative Computing, Information and Control   12 ( 1 )   1 - 13   2016年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Kyutacar

  • Traffic signs and signals detection employing the MY VISION system for a visually impaired person 査読有り

    Kumano T., Tan J.K., Kim H., Ishikawa S.

    ICIC Express Letters Part B: Application   7 ( 2 )   385 - 399   2016年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Kyutacar

  • Detecting pedestrian and extracting their attributes from self-mounted camera views 査読有り

    Sakai R., Tan J.K., Kim H., Ishikawa S.

    ICIC Express Letters Part B: Application   7 ( 2 )   279 - 286   2016年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Kyutacar

  • Boundary aware regional contrast based visual saliency detection 査読有り

    Sk. Md. M. Ahsan, Tan, J.K., Kim, H., Ishikawa, S.

    Proceedings of The Twenty-First International Symposium on Artificial Life and Robotics 2016   258 - 262   2016年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Oita   2016年01月20日  -  2016年01月22日

  • Moving objects detection from dynamic backgrounds 査読有り

    FX A Setyawan, Tan, J.K., Kim, H., Ishikawa, S.

    Proceedings of The Twenty-First International Symposium on Artificial Life and Robotics 2016   254 - 257   2016年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Oita   2016年01月20日  -  2016年01月22日

  • Detection method for candidate regions of ground glass opacity on LIDC database using image features 査読有り

    Yokota K., Kim H., Tan J., Ishikawa S., Tachibana R., Hirano Y., Kido S., Aoki T.

    Kyokai Joho Imeji Zasshi/Journal of the Institute of Image Information and Television Engineers   70 ( 8 )   J178 - J184   2016年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    本稿では読影医師への負担軽減と読影精度の向上を目的とし,LIDCデータベース上の胸部CT画像からのGGO候補領域の自動抽出法を提案する.手法としては,まず肺野領域の抽出を行い,得られた肺野領域に対し,3D Line Filterによる血管・気管支領域の除去を行う.その後,濃度・勾配閾値処理により初期GGO候補領域を抽出する.そして初期GGO候補領域のセグメンテーションを行い,統計的特徴量を算出する.最後に,特徴量を基に識別器を構築し,最終的なGGO候補領域を決定する.本稿では,識別器としてニューラルネットワークとサポートベクターマシンの2種類を用い,両者の識別性能を比較する.実験では,提案法をLIDCデータベース上の胸部CT画像31症例に適用し,その結果に対する考察と有用性を述べる.

    DOI: 10.3169/itej.70.J178

    Scopus

    CiNii Article

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84979211524&origin=inward

  • 高齢者見守りのための 3D-MHI を用いた人の動作認識

    山下 陽太郎, タン ジュークイ, 金 亨燮, 石川 聖二

    バイオメディカル・ファジィ・システム学会大会講演論文集 ( バイオメディカル・ファジィ・システム学会 )   29 ( 0 )   33 - 36   2016年01月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(研究会,シンポジウム資料等)

    Nowadays the population of elderly people who live alone has been growing continuously. When they encounter an accident at home, in the worst case they cannot ask for help. Under these circumstances, the development of an automatic system to care for elderly people living alone becomes increasingly important. We have been developing a computer vision system which finds abnormal motions of elderly people among their daily indoor activities. In this paper, we propose a new method of human motion representation called 3D-MHI (3-dimensional Motion History Image) based on the FoE (Focus of Expansion) and a method of human motion recognition employing 3-D Hu moments from a single camera view. The method can recognize not only motions parallel to the camera lens but also motions toward the depth direction. The effectiveness of the proposed method was shown experimentally.

    DOI: 10.24466/pacbfsa.29.0_33

    CiNii Article

    その他リンク: https://ci.nii.ac.jp/naid/130007979895

  • Automatic segmentation of phalanges regions on MR images based on MSGVF snakes 査読有り

    Shigeyoshi K., Murakami S., Kim H., Tan J., Ishikawa S.

    ICCAS 2015 - 2015 15th International Conference on Control, Automation and Systems, Proceedings   1547 - 1550   2015年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2015 Institute of Control, Robotics and Systems - ICROS. In recent years, medical imaging has become important in medical diagnosis for the early detection of lesions. However, the large number of images increases the burden on the radiologist, and CAD (Computer Aided Diagnosis) systems are expected to reduce this burden.

    DOI: 10.1109/ICCAS.2015.7364602

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84966322344&origin=inward

  • Development of image viewer for analyzing of temporal subtraction from chest CT images 査読有り

    Kondo M., Yoshino Y., Kim H., Tan J., Ishikawa S., Murakami S., Aoki T., Tachibana R., Hirano Y., Kido S.

    ICCAS 2015 - 2015 15th International Conference on Control, Automation and Systems, Proceedings   1543 - 1546   2015年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2015 Institute of Control, Robotics and Systems - ICROS. Recently, CT (Computed Tomography) scanners are used for detecting abnormalities because they have gradually become faster and of higher resolution. However, with the improvement in resolution, the number of CT images has become huge. Radiologists therefore have to analyze a huge number of images, and misdiagnoses sometimes happen. To deal with this problem, CAD (Computer Aided Diagnosis) systems have been developed. One of the CAD technologies, the temporal subtraction technique, is useful for detecting abnormalities in the medical field, but there has been no viewer system that displays abnormal regions using this technique. In this paper, we develop a novel viewer based on the temporal subtraction technique to help radiologists reduce diagnostic time and improve diagnostic accuracy.

    DOI: 10.1109/ICCAS.2015.7364601

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84966267281&origin=inward

  • Development of a supporting system for visual inspection of IGBT device based on statistical feature and complex multi-resolution analysis 査読有り

    Yuki D., Kim H., Tan J., Ishikawa S., Tsukuda M., Omura I.

    ICCAS 2015 - 2015 15th International Conference on Control, Automation and Systems, Proceedings   1551 - 1554   2015年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2015 Institute of Control, Robotics and Systems - ICROS. Recently, environmental issues have raised the need for environmental regulation, low fuel consumption, and natural energy development, and the demand for power transistor devices has increased accordingly. However, measurement techniques for the current distribution have not kept up with further miniaturization and integration, so ensuring reliability together with higher functionality is now an issue. The IGBT (Insulated Gate Bipolar Transistor) is a device used in a wide range of power applications, and we are developing an imaging system using non-contact sensor arrays aimed at the IGBT production line. In this paper, we propose a supporting system for visual inspection of IGBT devices based on statistical features and complex multi-resolution analysis. First, signal de-noising is performed on known-good data and measured data. Second, statistical features expressing the difference between the good data and the measured data are calculated. Finally, classification into good and defective devices is performed by thresholding. We applied our algorithm to 28 samples, including 20 good and 8 defective devices.

    DOI: 10.1109/ICCAS.2015.7364603

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84966269121&origin=inward

  • Posture estimation from Kinect image using RVM regression analysis 査読有り

    Fujimura H., Kim H., Tan J., Ishikawa S.

    ICCAS 2015 - 2015 15th International Conference on Control, Automation and Systems, Proceedings   1540 - 1542   2015年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2015 Institute of Control, Robotics and Systems - ICROS. Kinect is commonly used as a device to estimate posture. However, it is difficult to estimate some postures using Kinect alone. Therefore, we propose a method to estimate posture more accurately by combining the posture obtained by Kinect with the posture estimated by regression analysis. In the regression analysis, we associate HOG features with joint parameters consisting of 20 coordinate points. The posture data used for learning the regression model consist of postures that are difficult to obtain with Kinect. The similarity in brightness between frames around each joint of the skeletons obtained by the regression analysis and by Kinect is calculated, and the postures are then combined by a weighted average. In addition, an RVM regression model is used to improve the accuracy of the posture representation in the proposed method.

    DOI: 10.1109/ICCAS.2015.7364600

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84966351140&origin=inward

  • Human action representation and recognition: An approach to histogram of spatiotemporal templates 査読有り

    Sk. Md. Masudul Ahsan, Tan J.K., Kim H., Ishikawa S.

    International Journal of Innovative Computing, Information and Control   11 ( 6 )   1855 - 1867   2015年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Kyutacar

  • Comparison of Feature Extraction Methods for Head Recognition 査読有り

    Mudjirahardjo P., Tan J. K., Kim H., Ishikawa S.

    IEEE Proceedings of International Electronics Symposium (IES2015)   118 - 122   5Pages   2015年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Indonesia   Surabaya   2015年09月29日  -  2015年09月30日

    DOI: 10.1109/ELECSYM.2015.7380826

    Scopus

  • Detection of prominent regions using global and local saliency measurements 査読有り

    Kuwata I., Tan, J.K., Kim, H., Ishikawa, S.

    Proceedings of The Tenth International Conference on Innovative Computing, Information and Control (ICICIC2015)   1 - 4   4Pages   2015年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    China   Dalian   2015年08月20日  -  2015年08月22日

  • A method of traffic sign detection from ego camera images 査読有り

    Kumano, T., Tan, J.K., Kim, H., Ishikawa, S.

    Proceedings of The Tenth International Conference on Innovative Computing, Information and Control (ICICIC2015)   1 - 4   4Pages   2015年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    China   Dalian   2015年08月20日  -  2015年08月22日

  • Pedestrians detection and extracting their attributes by a Self-wearable camera 査読有り

    Sakai, R., Tan, J.K., Kim, H., Ishikawa, S.

    Proceedings of The Tenth International Conference on Innovative Computing, Information and Control (ICICIC2015)   1 - 4   4Pages   2015年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    China   Dalian   2015年08月20日  -  2015年08月22日

  • 2段階AdaBoostを用いた経時的差分像からの結節状候補陰影の識別

    田中 修司, 金 亨燮, タン ジュークイ, 石川 聖二, 村上 誠一, 青木 隆敏, 平野 靖, 木戸 尚治, 橘 理恵

    バイオメディカル・ファジィ・システム学会誌   17 ( 1 )   9 - 16   2015年07月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    近年,肺癌による死亡者数の増加から,病変部の早期発見・治療が重要視されている.そのため,胸部MDCT画像を用いた精密検査などが進められている.MDCT画像は,癌検出が容易である反面,読影を行う画像枚数が多く,医師への負担増が懸念されている.そこでコンピュータ支援診断システムによる,読影医師への負担軽減などが期待されている.このシステムの一つとして,経時的変化を強調する経時的差分像技術がある.この技術より得られた差分像から結節状候補陰影を検出するための研究が行われ,医師への診断支援の実現が期待されている.本稿では,胸部MDCT画像の経時的差分像より直径20[mm]以下の結節状候補陰影を抽出し,特徴量解析に基づく2段階AdaBoostによる病変候補陰影を識別するためのシステムの開発を行い,実画像による性能評価を行ったところ,96.8[%]の識別率を得た.

    CiNii Article

    その他リンク: https://ci.nii.ac.jp/naid/110009987542

  • Identification of a person based on gait recognition 査読有り

    Murayama J., Tan, J.K., Kim, H., Ishikawa, S.

    Proceedings of SICE Annual Conference 2015   84 - 88   2015年07月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    China   Hangzhou   2015年07月28日  -  2015年07月30日

  • Detecting a taxi from a video for visually handicapped people 査読有り

    Nishimura A., Tan, J.K., Kim, H., Ishikawa, S.

    Proceedings of SICE Annual Conference 2015   89 - 92   2015年07月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    China   Hangzhou   2015年07月28日  -  2015年07月30日

    Kyutacar

  • MEMS慣性計測装置を用いた高精度リアルタイム補正音響測深システム 査読有り

    丹 康弘,タン ジュークイ,石川聖二

    日本船舶海洋工学会論文集   21   75 - 81   2015年06月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    Kyutacar

  • Temporal analysis for fast motion detection in crowd 査読有り

    Panca, M., Tan J.K., Kim H., Ishikawa S.

    Journal of Artificial Life Robotics(Springer)   20 ( 1 )   56 - 61   2015年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    DOI: 10.1007/s10015-014-0195-4

    Scopus

  • Human action representation and recognition: An approach to a histogram of spatiotemporal templates 査読有り

    Ahsan S., Tan J., Kim H., Ishikawa S.

    International Journal of Innovative Computing, Information and Control   11 ( 6 )   1855 - 1867   2015年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    © 2015. The motion sequence of a human action has its own discriminating profile that can be represented as a spatiotemporal template such as the Motion History Image (MHI). A histogram is a popular statistic for presenting the underlying information in a template. In this paper, a histogram-oriented action recognition method is presented. In the proposed method, we use Directional Motion History Images (DMHI), their corresponding Local Binary Pattern (LBP) images and the Motion Energy Image (MEI) as spatiotemporal templates. Intensity histograms are extracted from those images and concatenated to form the feature vector for action representation. A linear combination of the histograms taken from the DMHIs and the LBP images is used in the experiments. We evaluated the performance of the proposed method, along with some of its variants, using the well-known KTH action dataset and obtained high accuracies. The results justify the superiority of the proposed method compared to other action recognition approaches found in the literature.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84957063027&origin=inward

  • Groupwise surface correspondence using particle filtering 査読有り

    Li G., Kim H., Tan J., Ishikawa S.

    Proceedings of SPIE - The International Society for Optical Engineering   9443   2015年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2015 SPIE. To obtain an effective interpretation of organ shape using statistical shape models (SSMs), establishing correspondence of the landmarks across all training samples is the most challenging part of model building. In this study, a coarse-to-fine groupwise correspondence method for 3-D polygonal surfaces is proposed. A reference model is prepared in advance, and all training samples are mapped to a unified spherical parameter space. According to the positions of the landmarks of the reference model, candidate regions for correspondence are chosen. Finally, we refine the perceptually correct correspondences between landmarks using a particle filter algorithm, where the likelihood of local surface features is introduced as the criterion. The proposed method was applied to the correspondence of 9 left lung training samples. Experimental results show that the proposed method is flexible and under-constrained.

    DOI: 10.1117/12.2179122

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84925431291&origin=inward

  • Human detection based on salient region 査読有り

    Gu, T. L., Tan, J. K., Kim, H., Ishikawa S.

    Proceedings of 2015 Joint Conference of IWAIT and IFMIA   1 - 4   CD-Rom: 4 Pages   2015年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Taiwan   Tainan   2015年01月11日  -  2015年01月13日

  • Detecting moving objects on a video having a dynamic background 査読有り

    Setyawan F. X. A., Tan J. K., Kim H., Ishikawa S.

    The proceedings of International Conference on Artificial Life and Robotics ICAROB 2015   27 - 31   2015年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Oita   2015年01月10日  -  2015年01月12日

  • Fast motion detection based on correlation 査読有り

    Mudjirahardjo P., Tan J. K., Kim H. S., Ishikawa S.

    The proceedings of International Conference on Artificial Life and Robotics ICAROB 2015   22 - 26   2015年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Oita   2015年01月10日  -  2015年01月12日

  • スケール不変な顕著領域特徴に基づくCR画像からの指骨領域の自動位置合わせ

    梶原 将太, 村上 誠一, 金 亨燮, タン ジュークイ, 石川 聖二

    バイオメディカル・ファジィ・システム学会誌   17 ( 2 )   35 - 42   2015年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    関節リウマチは代表的な骨疾患で,病態が進行すると関節の変形や機能障害が生じ患者のQOL(quality of life)を著しく低下させる.これらの診断には画像診断が有効であるが,医師の主観的な評価による診断精度のばらつきや,画像枚数の増加に伴う医師への負担増加などの問題が現存するため,これらの問題を克服し医師の負担を軽減することが重要な課題である.手のCR画像から定量的な評価支援を行うためのコンピュータ支援診断システムの必要性に応えるため,本稿では,手のCR画像からの指骨領域の自動位置合わせ法を提案する.指骨領域は同一被験者の過去と現在の手のCR画像から指骨領域をそれぞれ自動抽出したものを用いる.位置合わせ法としては,指骨領域の関心領域上のエントロピーに基づく顕著領域特徴を求め,過去と現在画像上の顕著領域特徴間の関係性から最適な変形量を導出し,剛体変形を与えることにより,両画像の位置合わせを行い,経時的な変化部分を検出する.提案手法を3症例の過去と現在画像に適用し,良好な結果を得た.

    DOI: 10.24466/jbfsa.17.2_35

    CiNii Article

    その他リンク: https://ci.nii.ac.jp/naid/110010047578

  • MSGVF SnakesによるMR画像からの指骨領域の自動抽出(視聴覚の基礎と応用,マルチモーダル,感性情報処理,一般)

    重吉 功嗣, 村上 誠一, 金 亨燮, タン ジュー クイ, 石川 聖二

    映像情報メディア学会技術報告   39 ( 0 )   11 - 14   2015年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    手の骨の疾患,特に指骨の疾患における画像診断では,主に骨を直接撮影可能なX線やCTなどが用いられ,骨のすりへりや破壊の有無を診断する.しかし,MRIを用いた画像診断により,CTなどでは観察が困難な症状の診断に有効な場合があり,早期発見・診断が期待されている.このような背景から,近年の医療診断における画像診断の占める割合は向上しているが,画像診断では,医師による診断結果のばらつきや,画像枚数の増大による医師への負担増加が懸念されている.そのため,定量的な解析を行うための,コンピュータ支援診断システムが必要となる.本論文では,手のMR画像からの指骨の疾患の定量的な評価を行うための前段階としてMSGVF Snakesによる指骨領域のセグメンテーション手法を提案する.

    DOI: 10.11485/itetr.39.43.0_11

    CiNii Article

    その他リンク: https://ci.nii.ac.jp/naid/110010015999

  • SVMによる胸部CT画像からのすりガラス状候補陰影の検出

    金 亨燮, 横田 佳祐, タン ジュークイ, 石川 聖二, 橘 理恵, 平野 靖, 木戸 尚冶, 青木 隆敏

    映像情報メディア学会年次大会講演予稿集   2015 ( 0 )   31B - 2   2015年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    In this paper, we develop a CAD (Computer Aided Diagnosis) system for assisting visual screening. In the first stage, we calculate statistical features such as density, shape and texture on the segmented candidate regions. Finally, we implement a classifier based on a support vector machine to distinguish the final candidate regions. We applied the proposed method to 31 CT image sets from the Lung Image Database Consortium (LIDC), which is supplied by the National Cancer Institute (NCI).

    DOI: 10.11485/iteac.2015.0_31B-2

    CiNii Article

    その他リンク: https://ci.nii.ac.jp/naid/130006742266

  • 濃度勾配特徴を用いた下肢CTA画像からの血管領域の抽出

    金 亨燮, 和田 幸大, タン ジュークイ, 石川 聖二, 山本 晃義

    映像情報メディア学会年次大会講演予稿集   2015 ( 0 )   31B - 1   2015年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    In visual screening, it is a difficult task to detect blood vessel regions with a high detection rate since blood vessel regions are in contact with bone regions on CTA images. In this paper, we propose a new method for segmenting arterial areas from CTA images based on gradient features, and satisfactory experimental results are obtained. Some experimental results are shown with discussion.

    DOI: 10.11485/iteac.2015.0_31B-1

    CiNii Article

    その他リンク: https://ci.nii.ac.jp/naid/130006742267

  • Histogram of spatio temporal local binary patterns for human action recognition 査読有り

    Sk, Md. M. Ahsan, Tan, J. K., Kim, H., Ishikawa S.

    Proceedings of Joint6th International Conference on Soft Computing and Intelligent Systems and 15th International Symposium on Advanced Intelligent Systems (SCIS&ISIS2014)   1007 - 1011   2014年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Kitakyushu   2014年12月03日  -  2014年12月06日

    DOI: 10.1109/SCIS-ISIS.2014.7044905

    Scopus

  • A pedestrian detection method using the extension of the HOG feature 査読有り

    Nakashima, Y., Tan, J. K., Kim, H., Ishikawa S.

    Proceedings of Joint6th International Conference on Soft Computing and Intelligent Systems and 15th International Symposium on Advanced Intelligent Systems (SCIS&ISIS2014)   1198 - 1202   2014年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Kitakyushu   2014年12月03日  -  2014年12月06日

    DOI: 10.1109/SCIS-ISIS.2014.7044743

    Kyutacar

    Scopus

  • Human detection using a salient region 査読有り

    Gu, T. L., Tan, J. K., Kim, H., Ishikawa S.

    Proceedings of Joint6th International Conference on Soft Computing and Intelligent Systems and 15th International Symposium on Advanced Intelligent Systems (SCIS&ISIS2014)   1589 - 1590   2014年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Kitakyushu   2014年12月03日  -  2014年12月06日

  • Comparing effectiveness of feature detectors in obstacles detection from video 査読有り

    Qian, S., Tan, J.K., Kim, H., Ishikawa, S.

    Journal of Robotics, Networking and Artificial Life   1 ( 3 )   184 - 188   2014年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    DOI: 10.2991/jrnal.2014.1.3.3

    Kyutacar

  • Detecting moving objects from a video taken by a moving camera using sequential inference of background images 査読有り

    Setyawan F., Tan J., Kim H., Ishikawa S.

    Artificial Life and Robotics   19 ( 3 )   291 - 298   2014年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    © 2014, ISAROB. This paper proposes a method of detecting moving objects using sequential inference of the background in a video taken with a moving camera. In such a video, the positions of all pixels change every frame: the positions of the background pixels in image frame T are not the same as their positions in image frame T + 1. A 2D projective transform can be used to find the change in pixel position from frame to frame, and bilinear interpolation with the four nearest pixels in frame T corresponding to a pixel in frame T + 1 can be used to create a background model at T + 1. Having obtained the background model, each pixel in frame T + 1 can be classified as a background pixel or a foreground pixel. The detection results of the proposed method are compared with the ground truth to determine its effectiveness. A minimal homography-based sketch follows this entry.

    DOI: 10.1007/s10015-014-0168-7

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84939889507&origin=inward
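
    A minimal sketch, assuming OpenCV: a 2-D projective transform is estimated from ORB feature matches, frame T is warped onto frame T + 1 as a background prediction, and large residuals are taken as moving-object candidates. This is a simplification of the paper's sequential background inference.

```python
import cv2
import numpy as np

def moving_object_mask(prev_gray, cur_gray, thresh=30):
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(cur_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = cur_gray.shape
    predicted_background = cv2.warpPerspective(prev_gray, H, (w, h))
    # Pixels that deviate strongly from the predicted background are foreground.
    return (cv2.absdiff(cur_gray, predicted_background) > thresh).astype(np.uint8) * 255
```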

  • Histogram of DMHI and LBP images to represent human actions 査読有り

    Sk, Md. M. Ahsan, Tan, J. K., Kim, H., Ishikawa S.

    Proceedings of IEEE International conference on Images Processing(ICIP2014)   1440 - 1444   2014年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    France   Paris   2014年10月27日  -  2014年10月30日

    DOI: 10.1109/ICIP.2014.7025288

    Scopus

  • A Visualization System of Scaler Stroke Motion 査読有り

    Kuroiwa, M., Tan, J.K., Kim, H., Ishikawa, S.

    Journal of Biomedical Soft Computing and Human Sciences   19 ( 2 )   19 - 26   2014年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Kyutacar

    CiNii Article

  • Multiple-window bag of features for road environment recognition 査読有り

    Morita, S., Tan, J.K., Kim, H., Ishikawa, S.

    Journal of Robotics, Networking and Artificial Life   1 ( 2 )   160 - 163   2014年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    DOI: 10.2991/jrnal.2014.1.2.13

    Kyutacar

  • Head detection and tracking for an intelligent room 査読有り

    Mudjirahardjo P., Tan J. K., Kim H., Ishikawa S.

    Proceedings of SICE Annual Conference 2014   353 - 358   2014年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Hokkaido   2014年09月09日  -  2014年09月12日

    Kyutacar

  • A saliency detection technique considering self- and mutual-information 査読有り

    Boudissa A., Tan J.K., Kim H., Shinomiya T., Ishikawa S.

    Journal of Biomedical Soft Computing and Human Sciences   19 ( 1 )   69 - 73   2014年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Kyutacar

  • Videotaped obstacle extraction from a moving camera 査読有り

    Qian, S., Tan, J.K., Kim, H., Ishikawa, S., Morie, T., Shinomiya, T.

    International Journal of Innovative Computing, Information and Control   10 ( 2 )   717 - 728   2014年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Detection of a bicycle in video images using MSC-HOG feature 査読有り

    Jung, H., Ehara, Y., Tan, J.K., Kim, H., Ishikawa, S.

    International Journal of Innovative Computing, Information and Control   10 ( 2 )   521 - 533   2014年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • M-HOG特徴量と色相の共起情報を用いた人検出法 査読有り

    中島佑樹,タン ジュークイ,金 亨燮,森江 隆,石川聖二

    バイオメディカル・ファジィ・システム学会誌   16 ( 1 )   67 - 74   2014年04月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    Kyutacar

    CiNii Article

  • Detection and tracking of a human on a bicycle using HOG feature and particle filter 査読有り

    Jung, H., Tan, J.K., Kim, H., Morie, T., Ishikawa, S.

    International Journal of Biomedical Soft Computing and Human Sciences   19 ( 1 )   33 - 41   2014年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Kyutacar

  • Classifying seabed sediments using local auto-correlation features 査読有り

    Tan Y., Tan J.K., Kim H., Ishikawa S.

    International Journal of Biomedical Soft Computing and Human Sciences   19 ( 1 )   43 - 50   2014年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Kyutacar

  • High speed image registration of head CT and MR images based on Levenberg-Marquardt algorithms 査読有り

    Kisaki M., Yamamura Y., Kim H., Tan J., Ishikawa S., Yamamoto A.

    2014 Joint 7th International Conference on Soft Computing and Intelligent Systems, SCIS 2014 and 15th International Symposium on Advanced Intelligent Systems, ISIS 2014   1481 - 1485   2014年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2014 IEEE. A medical image fusion technique provides insight into lesion diagnosis and into the spatial relationships among anatomical structures. In radiotherapy planning for CyberKnife treatment, for instance, the area to be irradiated is determined and calculated based on a fusion image of CT and MR images, so the risk of normal tissue being irradiated can be reduced. However, the commonly used mouse-based registration yields variations in accuracy because of the manual operation, and there are further issues such as the increased physical burden on the operators and the substantial time required. Therefore, many recent studies have attempted to automate the registration of multimodality images. In this paper, a CAD (Computer Aided Diagnosis) system is developed to assist radiotherapy planning for CyberKnife treatment. We propose a new automatic image registration technique for fusing head CT (Computed Tomography) and MR (Magnetic Resonance) images. In our method, minimization of the ratio image uniformity and maximization of the normalized mutual information on VOIs (Volumes Of Interest) are performed based on the Levenberg-Marquardt algorithm. We evaluated the proposed method on 5 clinical cases and discuss the accuracy of the registration results together with the computational time. A minimal optimization-based registration sketch follows this entry.

    DOI: 10.1109/SCIS-ISIS.2014.7044694

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84959398969&origin=inward
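
    The entry above drives registration by maximizing normalized mutual information (NMI) over volumes of interest. As a rough, hedged illustration only (not the authors' implementation), the NumPy sketch below computes NMI from a joint intensity histogram; the bin count and the toy CT/MR volumes are arbitrary assumptions, and an optimizer such as Levenberg-Marquardt would then search the rigid-transform parameters that maximize this score.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI = (H(A) + H(B)) / H(A, B), computed from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (h_x + h_y) / h_xy

# Toy usage: NMI of a volume against a noisy copy of itself (stands in for CT vs. MR VOIs).
ct_voi = np.random.rand(32, 32, 32)
mr_voi = ct_voi + 0.05 * np.random.rand(32, 32, 32)
print(normalized_mutual_information(ct_voi, mr_voi))
```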

  • Three-dimensional non-rigid registration of thoracic CT image based on finite element method 査読有り

    Yamada S., Ikeda Y., Maeda S., Kim H., Tan J., Ishikawa S., Murakami S., Aoki T.

    2014 Joint 7th International Conference on Soft Computing and Intelligent Systems, SCIS 2014 and 15th International Symposium on Advanced Intelligent Systems, ISIS 2014   1369 - 1373   2014年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2014 IEEE. Developing of computer aided diagnosis (CAD) system which analyzes the medical image by a computer and provides the doctor as a 'second opinion' the result is becoming increasingly popular. Temporal subtraction technique which is one of the CAD systems supports the observation by performing the differential operation of the current image and the previous one of the same patient and emphasizing temporal changes. Process of registration between images exist in this technique, this accuracy is greatly affects to the temporal difference image. However, registration method adaptable to a variety of images with high accuracy has not been established yet. So in this paper, we propose a three-dimensional non-rigid image registration method generic and high precision based on finite element method (FEM). In the proposed method, in order to improve the high cost is the biggest problem of the FEM, introducing the feature point detection in generating a tetrahedral mesh model. Furthermore, we introduce global matching to achieve an improvement of the execution efficiency. In this paper, we verify the effectiveness by the result applying the model data and 3D thoracic CT Images the proposed method.

    DOI: 10.1109/SCIS-ISIS.2014.7044691

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84946530780&origin=inward

  • Person tracking on Kinect images using particle filter 査読有り

    Yoshida A., Kim H., Tan J., Ishikawa S.

    2014 Joint 7th International Conference on Soft Computing and Intelligent Systems, SCIS 2014 and 15th International Symposium on Advanced Intelligent Systems, ISIS 2014   1486 - 1489   2014年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2014 IEEE. In recent years, technique that estimates the movement of people from images has been studied actively. It is a technique to recognize and understand the intentions of human behaviors, using the results of sensing the state of a person by the time. It is widely applied for example to recognize gesture command of user or to detect suspicious persons using surveillance camera. Moreover, recent studies using Kinect is thriving. Kinect is a device that performs tracking and posture estimation of persons. However, the system has some problems. One of them is that Kinect's tracking system is vulnerable to occlusion. It is necessary to be improved. In this paper, we perform tracking of a person using a particle filter algorithm. We use a feature quantity obtained by combining the color information and depth information as the criteria of the particle filtering. We perform experiments at two situations. And, the results of the proposed method are compared with the results of the conventional method [6] . In certain circumstances, the results indicate that our proposed method is more accurate than the conventional method.

    DOI: 10.1109/SCIS-ISIS.2014.7044772

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84946531166&origin=inward
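
    The tracker described above weights particles by a combined colour-and-depth likelihood. The following is a minimal, generic particle-filter skeleton in NumPy, not the paper's implementation: the random-walk motion model, the Gaussian toy likelihood standing in for the colour/depth score, and all numeric constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, observe, motion_std=5.0):
    """One predict-weight-resample cycle for a 2-D position tracker."""
    particles = particles + rng.normal(0.0, motion_std, particles.shape)   # random-walk prediction
    w = np.array([observe(p) for p in particles])                          # e.g. colour x depth similarity
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)             # resample by weight
    return particles[idx]

# Toy likelihood standing in for the combined colour/depth score of the entry above.
target = np.array([120.0, 80.0])
def toy_likelihood(p):
    return np.exp(-np.sum((p - target) ** 2) / (2 * 15.0 ** 2))

particles = rng.uniform(0.0, 200.0, size=(300, 2))
for _ in range(20):
    particles = particle_filter_step(particles, toy_likelihood)
print(particles.mean(axis=0))   # converges towards the target position
```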

  • An image registration method for head CTA and MRA images using mutual information on volumes of interest 査読有り

    Yamamura Y., Kim H., Tan J., Ishikawa S., Yamamoto A.

    2014 Joint 7th International Conference on Soft Computing and Intelligent Systems, SCIS 2014 and 15th International Symposium on Advanced Intelligent Systems, ISIS 2014   1490 - 1493   2014年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2014 IEEE. Image registration is an important and a fundamental task in computer vision and image processing field. For example, to make a surgical plan for head operation, the surgeons should gain more detailed information from CT angiography (CTA) and MR angiography (MRA) images. And the abnormalities can be easily detected from the fusion image which is obtained from two different modalities. One of the multiple modal image registration methods is matching the CTA and MRA, by which the image of head vascular could be enhanced. In general, the procedure for fusion is completed manually. It is time-consuming and subjective. Particularly the anatomical knowledge is required as well. Therefore, the development of automatic registration methods is expected in medical fields. In this paper, we propose a method for high accurate registration, which concentrates the structure of head vascular. We use 2-D projection images and restrict volume of interests to improve the processing affection. In experiments, we performed our proposed method for registration on five sets of CTA and MRA images and a better result from our previous method is obtained.

    DOI: 10.1109/SCIS-ISIS.2014.7044848

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84988302681&origin=inward

  • Automatic identification of lung candidate nodules on chest CT images based on temporal subtraction images 査読有り

    Tanaka S., Ikeda Y., Kim H., Tan J., Ishikawa S., Murakami S., Aoki T., Tachibana R., Hirano Y., Kido S.

    2014 Joint 7th International Conference on Soft Computing and Intelligent Systems, SCIS 2014 and 15th International Symposium on Advanced Intelligent Systems, ISIS 2014   1364 - 1368   2014年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2014 IEEE. Lung cancer is the most common cause of death from cancer worldwide. Therefore, for the purpose of early detection of cancer, mass screening and thorough examination have been carried out. Lung cancer is detected easily by using chest MDCT (Multi Detector-row Computed Tomography) images. However, radiologists are apprehended burden by many chest MDCT images which are required interpretation of radiograms. So the CAD (Computer Aided Diagnosis) systems that could relieve radiologist's stress and diagnose accuracy could be improved are expected. One of the CAD systems, temporal subtraction technique that emphasized time-dependent change is reported. This technique is used for diagnosis assistance of detected candidate nodules from CT images. In this paper, the candidate nodules under 20[mm] are extracted from temporal subtraction images. We highlighted the candidate nodules based on features analysis of images. We applied proposed method to 31 cases of chest MDCT images in which the number of nodules was more than one. We got a result of TPR:96.9[%], FPR:6.45[/case].

    DOI: 10.1109/SCIS-ISIS.2014.7044780

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84946531131&origin=inward
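
    As a hedged sketch of the kind of candidate extraction the entry above describes (assuming the previous scan has already been registered to the current one), the code below subtracts the two volumes, labels bright residues, and keeps components no larger than a 20 mm nodule; the intensity threshold and voxel spacing are invented placeholders, and the paper's feature-based highlighting is omitted.

```python
import numpy as np
from scipy import ndimage

def nodule_candidates(current, registered_previous, diff_thresh=100,
                      voxel_mm=(1.0, 0.7, 0.7), max_mm=20.0):
    """Subtract the (already registered) previous scan and keep small bright residues."""
    diff = current.astype(np.float32) - registered_previous.astype(np.float32)
    labels, n = ndimage.label(diff > diff_thresh)          # newly appeared high-density voxels
    voxel_vol = np.prod(voxel_mm)
    max_vol = (4.0 / 3.0) * np.pi * (max_mm / 2.0) ** 3    # volume of a 20 mm sphere
    keep = []
    for i in range(1, n + 1):
        component = labels == i
        if component.sum() * voxel_vol <= max_vol:         # discard residues larger than a 20 mm nodule
            keep.append(ndimage.center_of_mass(component))
    return keep

# Toy volumes standing in for a registered chest CT pair.
prev = np.zeros((40, 64, 64)); curr = prev.copy()
curr[20:23, 30:33, 30:33] = 300                            # a small new opacity
print(nodule_candidates(curr, prev))
```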

  • Automatic detection of GGO regions on CT images in LIDC dataset based on statistical features 査読有り

    Yokota K., Maeda S., Kim H., Tan J., Ishikawa S., Tachibana R., Hirano Y., Kido S.

    2014 Joint 7th International Conference on Soft Computing and Intelligent Systems, SCIS 2014 and 15th International Symposium on Advanced Intelligent Systems, ISIS 2014   1374 - 1377   2014年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2014 IEEE. Detection of pulmonary nodules with ground glass opacity (GGO) is a difficult task in radiology. Follow up is often required in medical fields. But diagnosis based on CT images are dependent on ability and experience of radiologists. In addition to that, enormous number of images increase their burden. So, to improve the detection accuracy and to reduce the burden of doctors, a CAD (Computer Aided Diagnosis) system is expected. So, in this paper, we propose an automatic algorithm for GGO detection on CT images. At first, vessel areas are removed from original CT images by using 3D Line Filter and then candidate regions are detected by threshold processing. After that, we calculate statistical features of segmented candidate regions and use artificial neural network (ANN) to distinguish final candidate regions. We applied the proposed method to 31 CT image sets in the Lung Image Database Consortium (LIDC) which is supplied by National Center Institute (NCI). In this paper, we show the experimental results and give discussions.

    DOI: 10.1109/SCIS-ISIS.2014.7044692

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84946531517&origin=inward
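
    The final step above classifies candidate regions with statistical features and an artificial neural network. The sketch below is only an approximation under stated assumptions: a handful of simple region statistics and scikit-learn's MLPClassifier stand in for the paper's feature set and ANN, and the training data are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def region_features(volume, mask):
    """A few statistical features of a segmented candidate region."""
    vals = volume[mask]
    return np.array([vals.mean(), vals.std(), vals.min(), vals.max(), mask.sum()])

# Example feature vector for one toy candidate region.
vol = rng.normal(size=(16, 16, 16))
print(region_features(vol, vol > 1.0))

# Synthetic training set standing in for labelled GGO / non-GGO candidates.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X, y)
print(ann.predict(X[:5]))
```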

  • Detection of a bicycle in video images using MSC-HOG feature 査読有り

    Jung H., Ehara Y., Tan J., Kim H., Ishikawa S.

    International Journal of Innovative Computing, Information and Control   10 ( 2 )   521 - 533   2014年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Traffic accidents are decreasing due to the influence of technology advancement; however accidents still occur due to carelessness of drivers. Therefore, many researchers have been studying how to realize an advanced safety system. The Histograms of Oriented Gradients (HOG) feature is well known as a useful method of detecting a standing human in various kinds of backgrounds. Unlike a person, a bicycle can appear differently from various angles. In this paper, we propose a method of detecting a bicycle on the road using improved HOG feature named MSC-HOG feature and the Real-AdaBoost algorithm. Experimental results and evaluation show satisfactory performance of the proposed method. © 2014 ISSN 1349-4198.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84893415119&origin=inward
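
    A minimal sketch of the general HOG-plus-boosting pipeline the entry above builds on: scikit-image's plain fixed-cell HOG and scikit-learn's (discrete) AdaBoostClassifier are used here as stand-ins for the paper's MSC-HOG feature and Real-AdaBoost, and the image patches are random placeholders.

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import AdaBoostClassifier

def hog_feature(patch_128x64):
    """Plain HOG descriptor; the paper's MSC-HOG additionally varies the cell size."""
    return hog(patch_128x64, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Toy data: random grey patches standing in for bicycle / background windows.
rng = np.random.default_rng(0)
patches = rng.random((40, 128, 64))
labels = rng.integers(0, 2, size=40)

X = np.array([hog_feature(p) for p in patches])
clf = AdaBoostClassifier(n_estimators=100, random_state=0)   # stand-in for Real-AdaBoost
clf.fit(X, labels)
print(clf.predict(X[:5]))
```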

  • Automatic segmentation of phalanges regions on CR images based on MSGVF Snakes 査読有り

    Kajihara S., Murakami S., Kim H., Tan J., Ishikawa S.

    International Conference on Control, Automation and Systems   1290 - 1293   2014年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    © 2014 Institute of Control, Robotics and Systems (ICROS). Rheumatoid arthritis and osteoporosis are two common orthopedic diseases. Rheumatoid arthritis is a disease that inflammation occurs in the joint, which always causes the joints are able to move freely. Osteoporosis is a disease that bone mineral content is reduced and risk of fragility fracture increases. As one of the diagnostic methods, medical imaging by photographed CR equipment has been widely accepted. However, some problems such as mass screening data sets and mis-diagnosis are still remained in visual screening. In order to solve these problems and reduce the burden to physicians, needs of an automatic diagnosis system capable of performing quantitative analysis is anticipated. In this paper, we carry out the development of a segmentation method of phalanges regions from CR images of the hand to perform a quantitative evaluation of rheumatoid arthritis and osteoporosis. The proposed method is carried out crude segmentation of phalanges regions from CR images of the hand, and extracts the detailed phalanges regions by Multi Scale Gradient Vector Flow Snakes (MSGVF) method. In our study, we performed Snakes algorithm to give an initial control points on MSGVF algorithm. We applied our method on three pairs of CR temporal images of phalanges regions, which are called as the previous images and the current images. We got the segmentation results of 5.95 [%] of false-positive rate and 92.9 [%] of true-positive rate.

    DOI: 10.1109/ICCAS.2014.6987755

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84920195728&origin=inward

  • Fast motion detection in a dynamic background 査読有り

    Mudjirahardjo, P., Tan, J.K., Kim, H., Ishikawa, S.

    Proceedings of International Symposium on Artificial Life and Robotics   896 - 900   2014年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Beppu   2014年01月22日  -  2014年01月24日

  • A method of detecting a bicycle using circles 査読有り

    Jung, H., Tan, J.K., Kim, H., Ishikawa, S.

    Proceedings of International Symposium on Artificial Life and Robotics   365 - 368   2014年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Beppu   2014年01月22日  -  2014年01月24日

  • Visualizing scaler stroke motion by a camera-computer system 査読有り

    Kuroiwa, M., Tan, J.K., Kim, H., Ishikawa, S.

    Proceedings of International Symposium on Artificial Life and Robotics   196 - 199   2014年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Beppu   2014年01月22日  -  2014年01月24日

  • Comparison of feature detectors for obstacles detection 査読有り

    Qian, S., Tan, J. K., Kim, H. S., Ishikawa, S., Morie, T., Shinomiya, T.

    Proceedings of International Conference on Artificial Life & Robotics (ICAROB2014)   GS5-5,1 - GS5-5,4   CD-Rom: 4 Pages   2014年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Oita   2014年01月11日  -  2014年01月13日

  • Recognizing a road environment using multiple-window bag of features 査読有り

    Morita, S., Tan, J.K., Kim, H., Ishikawa, S.

    Proceedings of International Conference on Artificial Life & Robotics (ICAROB2014)   GS5-1,1 - GS5-1,4   CD-Rom: 4 Pages   2014年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Oita   2014年01月11日  -  2014年01月13日

    Kyutacar

  • ブースティングによる機械学習に基づく海底物体の検出 査読有り

    丹 康弘,タン ジュークイ,金 亨燮,石川聖二

    日本船舶海洋工学会論文集   18   115 - 121   2013年12月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    DOI: 10.2534/jjasnaoe.18.115

    Kyutacar

    CiNii Article

  • A novel pedestrian detector on low-resolution images: Gradient LBP using patterns of oriented edges 査読有り

    Boudissa, A., Tan, J.K., Kim, H., Shinomiya, T., Ishikawa, S.

    IEICE Transactions on Information & Systems   E96-D ( 12 )   2882 - 2887   2013年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    DOI: 10.1587/transinf.E96.D.2882

    Scopus

    CiNii Article

  • Automatic classification of the seabed sediments based on HLAC 査読有り

    Tan, Y., Tan, J.K., Kim, H., Ishikawa, S.

    Proceedings 2013 IEEE/SICE Int. Sympo. on System Integration (SII2013)   653 - 658   2013年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Kobe   2013年12月15日  -  2013年12月17日

  • Recognizing human actions using histogram of local binary patterns 査読有り

    Sk, Md. M. Ahsan, J.K. Tan, H. Kim, S. Ishikawa

    Proceedings 2013 IEEE/SICE Int. Sympo. on System Integration (SII2013)   54 - 59   2013年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Kobe   2013年12月15日  -  2013年12月17日

  • 3-D recovery of a non-rigid object from a single camera view employing multiple coordinates representation 査読有り

    Ishikawa, Sh., Tan, J.K., Kim, H., Ishikawa, S.

    Proceedings 2013 2nd IAPR Asian Conf. on Pattern Recognition: Recent Advances in Computer Vision and Pattern Recognition (RACVPR)   946 - 950   2013年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Naha   2013年11月05日  -  2013年11月08日

    DOI: 10.1109/ACPR.2013.174

    Scopus

  • Semantic characteristics prediction of pulmonary nodule using Artificial Neural Networks 査読有り

    Li G., Kim H., Tan J., Ishikawa S., Hirano Y., Kido S., Tachibana R.

    Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS   5465 - 5468   2013年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Since it is difficult to choose which computer calculated features are effective to predict the malignancy of pulmonary nodules, in this study, we add a semantic-level of Artificial Neural Networks (ANNs) structure to improve intuition of features selection. The works of this study include two: 1) seeking the relationships between computer-calculated features and medical semantic concepts which could be understood by human; 2) providing an objective assessment method to predict the malignancy from semantic characteristics. We used 60 thoracic CT scans collected from the Lung Image Database Consortium (LIDC) database, in which the suspicious lesions had been delineated and annotated by 4 radiologists independently. Corresponding to the two works of this study, correlation analysis experiment and agreement experiment were performed separately © 2013 IEEE.

    DOI: 10.1109/EMBC.2013.6610786

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84886486763&origin=inward

  • Road region estimation and obstacles extraction using a monocular camera 査読有り

    Qian, S., Tan, J.K., Kim, H., Ishikawa, S., Morie, T., Shinomiya, T.

    International Journal of Innovative Computing, Information and Control   9 ( 9 )   3561 - 3572   2013年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Detection of underwater objects based on machine learning 査読有り

    Tan, Y., Tan, J. K., Kim, H. S., Ishikawa, S.

    Proceedings of SICE Annual Conference 2013   2104 - 2109   2013年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Nagoya   2013年09月14日  -  2013年09月17日

  • Development of a scaler stroke display system using computer vision 査読有り

    Kuroiwa, M., Tan, J. K., Kim, H. S., Ishikawa, S.

    Proceedings of SICE Annual Conference 2013   1722 - 1723   2013年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Nagoya   2013年09月14日  -  2013年09月17日

  • Classifying 2D and 3D objects on a road employing the road plane 査読有り

    Qian, S., Tan, J. K., Kim, H. S., Ishikawa, S., Morie, T., Shinomiya, T.

    Proceedings of SICE Annual Conference 2013   1689 - 1692   2013年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Nagoya   2013年09月14日  -  2013年09月17日

  • Abnormal motion detection in an occlusion environment 査読有り

    Mudjirahardjo, P., Tan, J. K., Kim, H. S., Ishikawa, S.

    Proceedings of SICE Annual Conference 2013   1398 - 1402   2013年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Nagoya   2013年09月14日  -  2013年09月17日

    Kyutacar

  • A global-local approach to saliency detection 査読有り

    Boudissa, A., Tan, J.K., Kim, H., Ishikawa, S., Shinomiya T., Mikolajczyk, K.

    Proc. of Int. Conf. on Computer Analysis of Images and Patterns (CAIP 2013), Lecture Notes in Computer Science Vol. 8048 (Springer)   332 - 337   2013年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    UK   York   2013年08月27日  -  2013年08月29日

    DOI: 10.1007/978-3-642-40246-3_41

    Scopus

  • A novel saliency measure using combined spatial redundancy and local appearance 査読有り

    Boudissa A., Tan J. K., Kim H., Ishikawa S., Shinomiya T.

    Proc. of IAPR Int. Conf. on Machine Vision Applications (MVA2013)   395 - 398   2013年05月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Kyoto   2013年05月21日  -  2013年05月23日

  • 3D organic shape correspondence using spherical conformal mapping 査読有り

    Li G., Kim H., Tan J., Ishikawa S.

    IFMBE Proceedings   39 IFMBE   943 - 946   2013年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Points Distribution Model (PDM) has been established as one of the robust tool for medical image segmentation and shape analysis. A main challenge to apply PDM is model correspondence across a set of training samples primarily. In this study, we propose a direct correspondence method based on spherical conformal mapping theory. Firstly, map the genus zero surface to a spherical domain. Constrain a group of Möbius transformations by three selected datum points on surface. In this way, the attitudes of all training samples are corresponded. Then place the landmarks across the whole training set automatically according to the reference model. To require the statistic data, the distribution of landmarks should be accounted in a uniform framework. We align these three datum points using an analytic method. Our proposed method was performed on the correspondence of 4 cases of left lung training samples. Compared with the manual correspondence results, the mean error of Euclidean distance for each landmark was 12mm. © 2013 Springer-Verlag.

    DOI: 10.1007/978-3-642-29305-4_247

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84876020662&origin=inward

  • Recognizing human actions using histogram of local binary patterns 査読有り

    Ahsan S., Tan J., Kim H., Ishikawa S.

    2013 IEEE/SICE International Symposium on System Integration, SII 2013   54 - 59   2013年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Human action recognition from video clips has become an active research field in recent years. Each action has its unique shape and a motion sequence can be suitably represented by a histogram. In this paper a histogram based action recognition method is presented. Motion history images are a good spatiotemporal template for action representation. In the present method, we use local binary patterns of directional motion history images for the histogram representation. We measured the performance of the proposed method along with some variants of it by employing KTH action dataset and found higher accuracy. The presented results also justify the superiority of the proposed method compared to other approaches for action recognition found in literature. © 2013 IEEE.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84902492748&origin=inward
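
    As a hedged illustration of the histogram representation described above, the sketch below computes uniform local binary patterns of a motion-history-like image with scikit-image and pools them into a normalized histogram; the P/R parameters and the toy image are assumptions, and the directional MHI construction and the classifier are omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(motion_history, points=8, radius=1):
    """Uniform LBP codes of a motion history image, pooled into a normalised histogram."""
    codes = local_binary_pattern(motion_history, P=points, R=radius, method="uniform")
    n_bins = points + 2                         # P+1 uniform patterns plus one non-uniform bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()

# Toy motion history image (recent motion bright, older motion darker).
rng = np.random.default_rng(0)
mhi = (rng.random((120, 160)) * 255).astype(np.uint8)
print(lbp_histogram(mhi))
```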

  • Development of a scaler stroke display system using computer vision 査読有り

    Kuroiwa M., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   1722 - 1723   2013年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper describes a dental training simulation system with a jaw model using a computer vision technique for scaling and root planing (SRP). The SRP is a cornerstone of treatment of periodontal diseases. We have difficulty in watching the movement of the working end of a scaler in a mouth. We propose a method of recovering the movement of the scaler employing a marker attached to the scaler. The performance of the proposed method is shown experimentally.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84888617888&origin=inward

  • Detection of underwater objects based on machine learning 査読有り

    Tan Y., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   2104 - 2109   2013年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Side-scan and forward-looking sonars are some of the most widely used imaging systems for obtaining large scale images of the seafloor, and their use continues to expand rapidly with their increased deployment on autonomous underwater vehicles. However, it is difficult to extract quantitative information from the images generated from these processes, particularly for the detection and extraction of information on the objects within these images. We propose in this paper an algorithm for automatic detection of underwater objects in side-scan images based on machine learning employing adaptive boosting. Experimental results show that the method produces consistent maps of the seafloor.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84888617459&origin=inward

  • Detecting foreground objects by sequential background inference in a video captured by a moving camera 査読有り

    Setyawan F., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   1699 - 1702   2013年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper proposes a technique to separate foreground objects from the background in a video taken with a moving camera. In this technique, the first image frame is considered as the first background. The next image frame is an image containing foreground objects. The next image frame is conditioned to have the same position with the first background. Adjustment of the position between the two image frames is done by determining the image features of the two images using Harris corner detector method. After image features have been obtained, feature point correspondence between the first background image and a subsequent image is searched for using Lucas-Kanade tracker. Outlier pairs are discarded by RANSAC. By the employment of a set of feature point pairs, the 2D projective transform is computed between the two images. Furthermore it is determined whether a pixel is included in the foreground or the background. Experimental results show satisfactory performance of the proposed method.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84888592154&origin=inward
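
    The pipeline above (corner features, Lucas-Kanade tracking, RANSAC, a 2D projective warp, and comparison of the warped frames) maps closely onto standard OpenCV calls. The sketch below is an approximation, not the authors' code: Shi-Tomasi corners replace the Harris detector mentioned in the abstract, and the thresholds are arbitrary.

```python
import cv2
import numpy as np

def foreground_mask(prev_bgr, curr_bgr, diff_thresh=30):
    """Warp the previous frame onto the current one and flag pixels that still differ."""
    prev_g = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_g = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # Corner features in the previous frame (Shi-Tomasi here; the entry uses Harris corners).
    p0 = cv2.goodFeaturesToTrack(prev_g, maxCorners=500, qualityLevel=0.01, minDistance=7)
    # Track them into the current frame with pyramidal Lucas-Kanade.
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_g, curr_g, p0, None)
    good0, good1 = p0[st.ravel() == 1], p1[st.ravel() == 1]
    # Robust 2-D projective transform; RANSAC discards outliers lying on foreground objects.
    H, _ = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)
    h, w = curr_g.shape
    warped = cv2.warpPerspective(prev_g, H, (w, h))
    return (cv2.absdiff(curr_g, warped) > diff_thresh).astype(np.uint8) * 255

# Usage (assuming two consecutive frames from a moving camera; file names are placeholders):
# mask = foreground_mask(cv2.imread("frame0.png"), cv2.imread("frame1.png"))
```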

  • Classifying 2D and 3D objects on a road employing the road plane 査読有り

    Qian S., Tan J., Kim H., Ishikawa S., Morie T., Shinomiya T.

    Proceedings of the SICE Annual Conference   1689 - 1692   2013年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Obstacles detection is an important technology in autonomous collision avoidance systems. We have already proposed an obstacle detection method based on background modeling. But this method detects 2D and 3D objects simultaneously. Since these 2D objects are not dangerous to driving, they will reduce the accuracy of detection if they are detected as obstacles. In order not to detect these 2D objects, this paper proposes a method for classifying 2D objects and 3D objects. The proposed method first estimates the camera motion parameters from the correspondences of feature points between two successive images. We calculate the 3D positions of the feature points on a detected object in the world coordinate system using triangulation. Then we estimate the parameters of the road plane using 3D positions of those feature points. Finally we calculate the distances from the 3D positions of the feature points to the road plane. Based on these distances, we can classify 2D objects and 3D objects. Experimental results show satisfactory performance of the proposed method.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84888611260&origin=inward
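
    A minimal NumPy sketch of the plane-based classification idea above, assuming the 3-D positions of the feature points have already been triangulated: fit the road plane by SVD, measure point-to-plane distances, and treat an object as 3-D if its points stand clearly off the plane. The 0.3 m threshold and the toy geometry are assumptions.

```python
import numpy as np

def fit_plane(points):
    """Best-fit plane through 3-D points: returns (unit normal, point on plane)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid                      # normal = direction of smallest variance

def is_3d_object(object_points, road_points, height_thresh=0.3):
    """An object counts as 3-D if its feature points stand out of the road plane."""
    normal, centroid = fit_plane(road_points)
    dist = np.abs((object_points - centroid) @ normal)
    return float(dist.mean()) > height_thresh

# Toy data: a roughly flat road patch and a box-like obstacle 1 m above it.
rng = np.random.default_rng(0)
road = np.c_[rng.uniform(-5, 5, 200), rng.uniform(0, 20, 200), rng.normal(0, 0.02, 200)]
box = road[:30] + np.array([0.0, 0.0, 1.0])
print(is_3d_object(box, road))          # True  -> 3-D obstacle
print(is_3d_object(road[50:80], road))  # False -> flat (2-D) road marking
```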

  • Automatic classification of seabed sediments based on HLAC 査読有り

    Tan Y., Tan J., Kim H., Ishikawa S.

    2013 IEEE/SICE International Symposium on System Integration, SII 2013   653 - 658   2013年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Understanding the distribution of seafloor sediment using a side-scan sonar is very important to grasp the distribution of seabed resources. This task is traditionally carried out by a skilled human operator. However, with the appearance of Autonomous Underwater Vehicles, automated processing is now needed to tackle the large amount of data produced and to enable on the fly adaptation of the missions and near real time update of the operator. We propose in this paper a method that applies a subspace method and higher-order local auto-correlation feature to the acoustic image provided by the side-scan sonar to classify seabed sediment automatically. In texture classification, the proposed method outperformed other methods such as gray level co-occurrence matrix and Local Binary Pattern operator. Experimental results show that the proposed method produces a consistent map of a seafloor. © 2013 IEEE.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84902467004&origin=inward

  • Abnormal motion detection in an occlusive environment 査読有り

    Mudjirahardjo P., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   1398 - 1402   2013年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    We present a motion classification approach to detect movements of interest (abnormal motion) based on optical flow. By tracking all feature points of a moving human in successive frames, we calculate the coordinate space and create feature space. This is done directly from the intensity information without explicitly computing the underlying motions. It requires no foreground segmentation, no prior learning of activities, no motion recognition and no object detection. First, we determine the abnormal scene and speed by using the velocity histogram. Then by using k-means clustering over velocity orientation and magnitude, we determine the abnormal direction. The performance of the proposed method is experimentally shown.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84888608440&origin=inward
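
    As a rough sketch of the clustering step described above (not the authors' implementation), the code below builds an orientation/magnitude feature for each optical-flow vector and clusters them with k-means; the cluster count and the synthetic flow field are assumptions, and the velocity-histogram stage is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def flow_clusters(flow_vectors, k=3):
    """Cluster optical-flow vectors by orientation and magnitude."""
    dx, dy = flow_vectors[:, 0], flow_vectors[:, 1]
    features = np.c_[np.arctan2(dy, dx), np.hypot(dx, dy)]
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    return km.labels_, km.cluster_centers_

# Toy flow field: most vectors drift slowly to the right, a few move fast the other way.
rng = np.random.default_rng(0)
normal = np.c_[rng.normal(2, 0.3, 200), rng.normal(0, 0.3, 200)]
abnormal = np.c_[rng.normal(-6, 0.5, 10), rng.normal(0, 0.5, 10)]
labels, centers = flow_clusters(np.vstack([normal, abnormal]))
print(centers)   # the small, fast, opposite-direction cluster stands out as abnormal
```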

  • A parameterization based correspondence method for PDM building 査読有り

    Li G., Kim H., Tan J., Ishikawa S.

    Journal of Advanced Computational Intelligence and Intelligent Informatics   17 ( 1 )   18 - 26   2013年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Place-march of corresponding landmarks is one of the major factors influencing 3D Points Distribution Model (PDM) quality. In this study, we propose a semi-automatic correspondence method based on surface parameterization theory. All the training sets are mapped into a spherical domain previously. The rotation transformation of training samples is regarded as spherical rotation of their maps. We solve it by comparing the density distribution of surface map of training sample with respect to the reference model. Simultaneously, the corresponding landmarks across the whole training set are marketed depending on the spherical coordinates on parameter domain. In this paper, we also compared the corresponding results with two constraint conditions of spherical conformal mapping: 3 datum points constrain and zero-mass constrain. Experimental results are given for left lung training sets of 3D shapes. The mean result with the 3 datum points constraint and the zero mass-center constraint was 21.65 mm and 20.19 mm respectively.

    DOI: 10.20965/jaciii.2013.p0018

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84879455289&origin=inward

  • Detection of artery regions in lower extremity arteries from non-enhanced MR imaging based on particle filter algorithms 査読有り

    Koga Y., Yamamoto A., Kim H., Tan J., Ishikawa S.

    Journal of Advanced Computational Intelligence and Intelligent Informatics   17 ( 2 )   318 - 323   2013年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Recently, the arteries sclerosis obliterans (ASO) or called peripheral arterial disease (PAD) typically caused by chronic ischemia of limbs increases remarkably. As one of the diagnosis methods, the image diagnosis methods such as MR image are applied in medical fields. In this paper, we propose a vascular extraction method using fresh blood imaging (FBI) method, as well as apply it to computer aided diagnosis (CAD) system. Especially, to prevent the spread outside of the region and improve the segment accuracy of peripheral artery areas, we introduce particle filter algorithms. We performed our method on automatic artery regions detection using non-enhanced MR images. Furthermore, we compared the extracted results to gold standard data and analyzed accuracy by receiver operating characteristic (ROC). The effectiveness of our proposed method and satisfactory of its detected accuracy were confirmed.

    DOI: 10.20965/jaciii.2013.p0318

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84879329594&origin=inward

  • Temporal subtraction method for abdominal contrast and non-contrast image based on image matching techniques 査読有り

    Minashima M., Ogihara S., Kim H., Tan J., Ishikawa S., Murakami S., Aoki T.

    International Conference on Control, Automation and Systems   1805 - 1808   2012年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Recently, the death rate due to the liver cancer rises remarkably. To reduce the rate, the early detection of the disease is important. To detect diseases in early stage which are concern cancer, image diagnosis such as CT image is used in medical fields. On the other hand, the burden to a radiologist becomes increase. Therefore, the development of a system reducing the burden of the radiologist is important. In order to diagnose abnormalities based on medical imaging there are some reports. But, there is no report which is concern with detecting abnormality on liver disease based on temporal subtraction technique for abdominal CT image. As one of the methods to analyze abnormalities on visual screening, temporal subtraction technique is useful. This technique subtracts past image to current one. To obtain the good performance based temporal subtraction technique, image registration is most important task. In this paper, we propose a registration method for liver CT image using voxel matching techniques. We describe our registration method from two CT image which obtained deference time series and shows experimental results with discussion. © 2012 ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84872554258&origin=inward

  • Surface orientation driven 3D rigid registration method 査読有り

    Li G., Kim H., Tan J., Ishikawa S.

    Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics   1418 - 1421   2012年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    In this paper, we propose a new surface-based method to solve the rotation transformation of 3D rigid registration. Different from the criterion that minimize the sum of the squares of the errors between the corresponding parts of objects, here we estimate the spatial modalities of objects by their orientations. In details, Gauss map of surface is introduced to measure the distribution of surface curvature. To depict the characters of surface, we add sign weights to Gauss sphere according to the local total curvature. The stereographic projection is utilized to transform the spherical statistics to bifacial plane. As well, an optimization solution is done by iterated algorithm. We applied our proposed method on lung triangular surface registration. The experiments were completed by mutual information of block statistic. © 2012 IEEE.

    DOI: 10.1109/ICSMC.2012.6377933

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84872393612&origin=inward

  • Detection of bone regions on legs from CTA image using concentration gradients 査読有り

    Wada K., Yamamoto A., Kim H., Tan J., Ishikawa S.

    International Conference on Control, Automation and Systems   1818 - 1821   2012年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Recently, radiologists can easily screening with high accuracy based on CTA (Computed Tomography Angiography) images and/or MRI (Magnetic Resonance Imaging) which is supported by medical imaging technology. On the other hand, visual screening makes burden to doctors which cause increasing the images and limited times. To avoid these problems, CAD (Computer Aided Diagnosis or Detection) system is developed by reducing the burdens and to improve the diagnostic accuracy. Problems of diagnosis on legs from CTA are burdens to doctors and detection of blood vessel by manual. In the field for visual screening, it is difficult task to detect blood vessel regions with high accuracy detection rates since blood vessel regions are in contact with bone region on the CTA images. In this paper, we propose a new method for segmentation of arterial area, which are obtained CTA based on image processing by concentration gradients, and satisfactory experimental results are obtained. Some experimental results are shown with discussion. © 2012 ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84872535204&origin=inward

  • Applying MSC-HOG feature to the detection of a human on a bicycle 査読有り

    Jung H., Ehara Y., Tan J., Kim H., Ishikawa S.

    International Conference on Control, Automation and Systems   514 - 517   2012年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Traffic accidents are decreasing under the influence of technology advancement. But the problems still remain that accidents occur due to carelessness of drivers. Therefore many researchers have been still studying to realize an advanced safety system. The Histograms of Oriented Gradients (HOG) feature is well known as a useful method of detecting a standing human in various kinds of the background. Unlike a human, a bicycle changes its appearance variously according to viewpoints. Hence, it is more difficult than detecting a human. In this paper, we propose a method of detecting a human on a bicycle using the Multiple-size Cell HOG (MSC-HOG) feature and the RealAdaboost algorithm. Experimental results and evaluation show satisfactory performance of the proposed method. © 2012 ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84872579239&origin=inward

  • A camera-computer system to support safe walking of a blind person 査読有り

    Kanayama A., Tan J., Kim H., Ishikawa S.

    International Conference on Control, Automation and Systems   511 - 513   2012年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    It is necessary for a blind person to know the locations of the pedestrians or moving objects around him/her on a walk road for his/her safe walking. This paper proposes a system for detecting pedestrians or moving objects in front of a person on a walk road by a camera mounted on his/her body. The system judges if a moving object in front of the person is going to hit him/her in the near future by analyzing the motion vectors acquired on the camera images. In case there is a fear of hit, the system tells it to the person by sound. In this way, the proposed system helps safe walking of a blind person. Some experimental results are shown with discussion. © 2012 ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84872511199&origin=inward

  • 3-D recovery of a non-rigid object from a single camera view by piecewise recovery and synthesis 査読有り

    Ishikawa S., Tan J., Kim H., Ishikawa S.

    Proceedings - International Conference on Pattern Recognition   1443 - 1446   2012年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper proposes a novel technique for 3-D recovery of a non-rigid object, such as a human in motion, from a single camera view. To achieve the 3-D recovery, the proposed technique performs segmentation of an object under deformation into respective parts which are regarded as rigid. For high accuracy segmentation, multi-stage learning and local subspace affinity are employed in this stage. Each part recovers its 3-D shape by applying the factorization method to it. This is the initial solution. The shape of each part is then refined by applying a quadratic model to the initial solution. The entire 3-D recovery of the object is finally performed using the common points among the segmented parts. The experiments employing a synthetic non-rigid object and real human motion data show effectiveness of the proposed technique. © 2012 ICPR Org Committee.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84874574364&origin=inward
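
    The per-part recovery above relies on the factorization method. The sketch below shows the classic rank-3 factorization of a centred 2F x P measurement matrix under an affine camera, up to an affine ambiguity; the metric upgrade, the quadratic refinement, and the piecewise synthesis described in the abstract are not included, and the data are synthetic.

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a 2F x P measurement matrix (affine camera model).

    Returns motion M (2F x 3) and shape S (3 x P), defined up to an affine ambiguity.
    """
    W0 = W - W.mean(axis=1, keepdims=True)        # register image coordinates to their centroid
    U, s, Vt = np.linalg.svd(W0, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])                 # best rank-3 approximation
    S = np.sqrt(s[:3])[:, None] * Vt[:3]
    return M, S

# Toy example: 20 random 3-D points observed by 5 random affine cameras.
rng = np.random.default_rng(0)
S_true = rng.normal(size=(3, 20))
W = np.vstack([rng.normal(size=(2, 3)) @ S_true for _ in range(5)])
M, S = factorize(W)
print(np.allclose(M @ S, W - W.mean(axis=1, keepdims=True), atol=1e-8))   # True
```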

  • Nonrigid image registration method for thoracic CT images using vessel structure information 査読有り

    Maeda S., Kim H., Tan J., Ishikawa S., Murakami S., Aoki T.

    Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics   1413 - 1417   2012年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Temporal Subtraction Technique is one of effective tools for diagnosis of lung cancer from thoracic CT images. By comparing two images of the same subject but shot at different time, the detection of temporal changing becomes facilitated. To acquire a more accurate subtraction image, the registration of these two images is critical. However it is not easy as the influence of the slight structures such as lung blood vessel. In this paper, a novel nonrigid image registration method based on vessel structure information is proposed. The similarity of the vessels structure is defined by means of the likelihood function of vessels structure and their direction. We combine this similarity of vessels with the intensity information of images. And the metrics are used as similarity measure in registration procedure. The proposed method has been applied to thoracic MDCT images, and the improvement of registration accuracy was investigated. The efficiency of our proposed method was indicated. © 2012 IEEE.

    DOI: 10.1109/ICSMC.2012.6377932

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84872395110&origin=inward

  • Classification of lung nodules on temporal subtraction image based on statistical features and improvement of segmentation accuracy 査読有り

    Miyajima T., Tokisa T., Maeda S., Kim H., Tan J., Ishikawa S., Murakami S., Aoki T.

    International Conference on Control, Automation and Systems   1814 - 1817   2012年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Recently, thorax MDCT images are used in visual screening for early detection of lung nodules. Radiologists can easily detect lung nodules on images, but it has enormous images and load of radiologist for visual screening. To reduce the load of radiologist and improve the detection accuracy, a CAD (Computer Aided Diagnosis) system is expected from medical fields. In the medical image processing fields, some related works are reported to develop the CAD system including temporal subtraction technique as helpful technical issues. In this paper, we propose a classification of lung nodules on temporal subtraction image based on image processing technique. At first, the candidate regions including nodules are detected by the multiple threshold technique in terms of the pixel value on the temporal subtraction images. Then, we remove vessel regions on nodules by the most suitable threshold technique and watershed method. Also we remove the false positives which are caused by mis-registration using selective enhancement filter, rule-base method and artificial neural networks. In this paper, we illustrate some experimental result which applied our algorithm to 31 chest MDCT cases including lung nodules. © 2012 ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84872550943&origin=inward

  • A temporal subtraction method for thoracic CT images using non rigid warping technique 査読有り

    Tokisa T., Kim H., Tan J., Ishikawa S., Moon Y., Yoon S., Kim W.

    International Conference on Control, Automation and Systems   1809 - 1813   2012年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    A temporal subtraction technique which is subtracted from previous image to current one is introduced as powerful tools in medical fields to diagnose abnormalities. It provided a computer aided diagnosis (CAD) tools on visual screening. Radiologist can detect lesions on image by compare the two images. It is because the subtraction image can enhance the temporal changes, such as shaped of new lesions and/or the temporal changes in existing abnormalities by removing most of the normal background structures by subtraction of a previous image from a current one. There are some technical reports to register the different images until now. But subtraction artifacts are still remained which are caused by mis-registration. In this paper, we propose a new method for temporal subtraction method on thoracic MDCT images using non-rigid image warping techniques based on free form deformation (FFD). We applied our method to two clinical cases of chest CT image sets and compare to conventional methods in terms of computational cost and accuracy. © 2012 ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84872541007&origin=inward

  • A surface mapping based alignment method for statistical shape model building 査読有り

    Li G., Kim H., Tan J., Ishikawa S., Yamamoto A.

    Proceedings of the 2012 7th IEEE Conference on Industrial Electronics and Applications, ICIEA 2012   803 - 806   2012年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    The fundamental step to get a Statistical Shape Model (SSM) is to align all the training samples to the same spatial modality. In this paper, we propose a new 3D alignment method using surface parameterization theory to solve the rotation transformation of 3D rigid registration. It is a feature based alignment method which matches two models depending on comparing the distribution of spherical conformal map of vertices. Moreover, the stereographic projection is utilized to transform the spherical statistics to bifacial plane. The optimal solution is obtained by an iterated algorithm. We tested the rigid registration of left lung training samples. The availability of our proposed method was confirmed. © 2012 IEEE.

    DOI: 10.1109/ICIEA.2012.6360834

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84871706167&origin=inward

  • A method for cardiac region segmentation on ultrasound images based on particle filter algorithm 査読有り

    Kim H., Sugandi B., Tan J., Ishikawa S.

    Proc. of the IADIS Int. Conf. Computer Graphics, Visualization, Computer Vision and Image Processing 2012, CGVCVIP 2012, Part of the IADIS MCCSIS 2012   157 - 162   2012年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper presents a contour segmentation of cardiac region on ultrasound images viewed as a tracking framework. We develop a tracking model for contour segmentation and estimate the image contour using particle filter algorithm. The tracking model is initialized by projecting equip spaced radii from center point of the contour to the tracking boundary. The motion of each particle on the radii is governed by a system model. The samples likelihood is measured based on the gradient intensity or edge of the contour. Then the estimated contour is measured based on the mean estimate of the samples likelihood. Our proposed method is implemented to the ultrasound images of a cardiac and the satisfactory results are achieved. © 2012 IADIS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84887293035&origin=inward

  • Obstacles extraction from a video taken by a moving camera 査読有り

    Qian S., Tan J. K., Kim H., Ishikawa S.

    Proceedings of International Conference on Connected Vehicles (ICCVE2012)   USB:1 - USB:6   USB: 6Pages   2012年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    China   Beijing   2012年12月12日  -  2012年12月16日

    DOI: 10.1109/ICCVE.2012.59

    Scopus

  • Three-dimensional recovery of buildings environment under Manhattan-world constraint 査読有り

    Ishikawa S., Tan J. K., Kim H., Ishikawa S.

    Proc. of the IIEEJ Image Electronics and Visual Computing Workshop 2012   USB:1 - USB:4   USB:4Pages   2012年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Malaysia   Kuching   2012年11月21日  -  2012年11月24日

    Kyutacar

  • Human detection from an image employing reduced number of weak-classifiers 査読有り

    Ehara Y., Tan J. K., Kim H., Ishikawa S.

    Proc. of the IIEEJ Image Electronics and Visual Computing Workshop 2012   USB:1 - USB:5   USB:5Pages   2012年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Malaysia   Kuching   2012年11月21日  -  2012年11月24日

  • Arterial hemodynamic analysis on non-enhanced magnetic resonance angiogram using optical flow 査読有り

    Yamamoto A., Kim H., Tan J., Ishikawa S.

    Artificial Life and Robotics   17 ( 1 )   102 - 106   2012年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    © ISAROB 2012. Peripheral arterial disease (PAD) is one of the reasons caused to the lower extremity atherosclerotic disease. Its diagnosis is needed to obtain much kind of the information of vascular morphology as well as the blood flow information based on hemodynamics. The diagnosis of the PAD using magnetic resonance imaging (MRI) equipment without contrast medium is available as a useful visual screening in clinical practice. In this paper, we propose a novel method for visualizing hemodynamics to arterial images obtained by a non-contrast enhanced magnetic resonance angiography (MRA) based on the Lucas–Kanade optical flow with the image pyramid processing, and satisfied experimental results are obtained.

    DOI: 10.1007/s10015-012-0022-8

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84863670045&origin=inward

  • A camera-computer system to support safe walking of a blind person 査読有り

    Kanayama A., Tan J. K., Kim H., Ishikawa S.

    12th International Conference on Control, Automation and Systems (ICCAS)   511 - 513   2012年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2012年10月19日  -  2012年10月21日

  • Applying MSC-HOG feature to the detection of a human on a bicycle 査読有り

    Jung H., Tan J. K., Kim H., Ishikawa S.

    12th International Conference on Control, Automation and Systems (ICCAS)   514 - 517   2012年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2012年10月19日  -  2012年10月21日

  • A simple pedestrian detection using LBP-based patterns of oriented edges 査読有り

    Boudissa A., Tan J. K., Kim H., Ishikawa S.

    Proc. of Int. Conf. on Image Processing (ICIP2012)   469 - 472   2012年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    USA   Florida   2012年09月30日  -  2012年10月03日

    DOI: 10.1109/ICIP.2012.6466898

    Scopus

  • Human motion recognition employing infrared radiation camera images 査読有り

    Hiroshima T., Tan J. K., Kim H., Ishikawa S.

    Proc. SICE Annual Conf. 2012   392 - 395   2012年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Akita   2012年08月21日  -  2012年08月23日

  • Japanese finger-spelling recognition using a chest-mounted camera 査読有り

    Nagasue A., Tan J. K., Kim H., Ishikawa S.

    Proc. SICE Annual Conf. 2012   909 - 912   2012年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Akita   2012年08月21日  -  2012年08月23日

  • An effective directional motion database organization for human motion recognition 査読有り

    S. M. Ashik Eftakhar, Joo Kooi Tan, Hyoungseop Kim,Seiji Ishikawa

    International Journal of Innovative Computing, Information and Control   8 ( 2 )   1359 - 1370   2012年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Arterial Hemodynamic Analysis on Non-enhanced Magnetic Resonance Angiogram Using Optical Flow 査読有り

    Yamamoto, Kim, Tan, Ishikawa

    The 17th International Symposium on Artificial Life and Robotics   926 - 929   2012年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Oita   2012年01月19日  -  2012年01月20日

  • Japanese finger-spelling recognition using a chest-mounted camera 査読有り

    Nagasue A., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   909 - 912   2012年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper proposes a technique for recognizing Japanese finger-spelling using a sign language user's chest-mounted camera. Unlike existent systems, the technique employs a chest-mounted camera attached to a sign language user himself/herself and recognizes his/her sign language through the captured images of Japanese finger-spelling. We use a hand area picture of his/her hand and the MHIs (Motion History Images) for the Japanese finger-spelling recognition. For the recognition method, we employ the ICA (Independent Component Analysis). Furthermore, in order to recognize the Japanese finger-spelling from an animation (an image sequence), a character segmentation technique is also proposed. The performance of the proposed system is shown experimentally. © 2012 SICE.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84869417788&origin=inward
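
    A hedged sketch of the motion-history-image representation used above, written directly in NumPy (so it does not depend on OpenCV's optional motion-template module): recent motion is stamped at full intensity and older motion fades linearly. The threshold, decay and the toy moving block are assumptions, and the ICA-based recognition stage is omitted.

```python
import numpy as np

def update_mhi(mhi, prev_frame, curr_frame, motion_thresh=30, tau=255, decay=15):
    """Update a motion history image: recent motion bright, older motion fading out."""
    motion = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16)) > motion_thresh
    mhi = np.maximum(mhi - decay, 0)             # fade the previous history
    mhi[motion] = tau                            # stamp current motion at full intensity
    return mhi

# Toy sequence: a bright block moving to the right leaves a decaying trail.
frames = [np.zeros((60, 80), np.uint8) for _ in range(5)]
for t, f in enumerate(frames):
    f[20:40, 10 + 10 * t:30 + 10 * t] = 200
mhi = np.zeros((60, 80), np.float32)
for prev, curr in zip(frames, frames[1:]):
    mhi = update_mhi(mhi, prev, curr)
print(mhi.max(), (mhi > 0).sum())
```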

  • Human motion recognition employing infrared radiation camera images 査読有り

    Hiroshima T., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   392 - 395   2012年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper proposes a method of motion recognition in real night environment employing an infrared radiation camera. We record a video employing an infrared radiation camera to perform motion recognition at night. We use Directional Motion History Images (DMHIs) and Directional Motion Energy Images (DMEIs) for human motion representation. For the recognition of the motions, we employ the eigenspace method. In the experiment, it was confirmed that the human motion recognition at night is possible. © 2012 SICE.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84869392576&origin=inward

  • Japanese finger-spelling recognition using a chest-mounted camera 査読有り

    Nagasue A., Tan J. K., Kim H., Ishikawa S.

    Proc. of the 17th Int. Sympo. on Artificial Life and Robotics   1059 - 1062   2012年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Beppu   2012年01月19日  -  2012年01月21日

  • Human detection employing the HOG feature based on multiple scale cells 査読有り

    Ehara Y., Tan J. K., Ishikawa S., Morie T.

    Proc. of the 17th Int. Sympo. on Artificial Life and Robotics   1047 - 1050   2012年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Beppu   2012年01月19日  -  2012年01月21日

  • Obstacle detection using a moving camera 査読有り

    Qian S., Tan J., Ishikawa S., Morie T.

    Proceedings of the 16th International Symposium on Artificial Life and Robotics, AROB 16th'11   767 - 770   2011年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper proposes a method of detecting obstacles from a video taken by a moving camera mounted on a vehicle by background subtraction. The background subtraction is often used to detect moving objects when camera is static. But according to the characteristics of a road, we can also employ Gaussian mixture model to detect all objects (either static or moving objects) on the road in the case of moving camera. Then we use two consecutive image frames, and warps the first image according to the geometrical relationship between these two images. The road area is then extracted by comparing the warped image with the second image. Using this road area, we can delete all things which are not obstacles. In the performed experiments, it is shown that the proposed method is able to detect obstacles such as vehicles and pedestrians on a road. © 2011 ISAROB.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84866717620&origin=inward
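
    OpenCV's Gaussian-mixture background subtractor gives a compact stand-in for the background model discussed above; the sketch below flags foreground blobs large enough to be vehicles or pedestrians. It is illustrative only: the history, variance-threshold and area values are arbitrary, the video path is a placeholder, and the road-area warping step from the abstract is not implemented.

```python
import cv2
import numpy as np

# Gaussian-mixture background model, comparable in spirit to the one in the entry above.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16, detectShadows=False)

def detect_objects(frame_bgr, min_area=400):
    """Return bounding boxes of foreground blobs large enough to be obstacles."""
    mask = subtractor.apply(frame_bgr)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# Usage with a video file (path is a placeholder):
# cap = cv2.VideoCapture("road.mp4")
# ok, frame = cap.read()
# while ok:
#     print(detect_objects(frame))
#     ok, frame = cap.read()
```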

  • Moving objects detection at an intersection by sequential background extraction 査読有り

    Sonoda S., Tan J., Kim H., Ishikawa S., Morie T.

    International Conference on Control, Automation and Systems   1752 - 1755   2011年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Recently, there are many traffic accidents in turning right at an intersection. They are mainly caused by a driver's oversight of pedestrians and motorcycles that are occluded by oncoming cars. Therefore a system is necessary to detect moving objects such as oncoming cars and pedestrians at an intersection, and warn a vehicle driver. This paper describes a technique for detecting moving objects in turning right at an intersection when vehicle is stopping. Moving objects are detected by Mixture of Gaussians (MoG). In addition, we distinguish cars from pedestrians using the difference of the area size and the aspect ratio of detected objects. The object which is classified as a pedestrian is tracked using Lucas-Kanade Tracker. If the detected cars and pedestrians overlap or a car completely obscures pedestrians, we perform the estimation of pedestrian's location by using the information on past frames. By doing this, it is possible to detect pedestrians that drivers are actually difficult to see. The performance of the proposed technique was examined employing car videos and satisfactory results were obtained. © 2011 ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84863041517&origin=inward

  • Extraction of individual pedestrians employing stereo camera images 査読有り

    Kawabe M., Tan J., Kim H., Ishikawa S., Morie T.

    International Conference on Control, Automation and Systems   1744 - 1747   2011年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Traffic accidents occur mainly because of driver carelessness. To prevent such accidents, many systems that detect pedestrians from a vehicle have been proposed. However, most of these systems give a warning to the driver every time a pedestrian is detected, and the effect of the warning decreases as its frequency increases in a busy area. The objective of this study is to develop a technique for detecting pedestrians from a vehicle and ranking them according to their potential risk by analyzing their behaviors. In this paper, we propose a technique for segmenting pedestrians using stereo camera images in order to analyze a pedestrian's behavior. First, foreground regions are extracted by background estimation using a mixture of Gaussians model. After extracting the foreground regions in an input image, these regions are segmented using distance information. With this method, the shape of each detected pedestrian can be obtained even when pedestrians overlap with each other. The proposed technique was examined experimentally on real video images and satisfactory results were obtained. © 2011 ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84863049839&origin=inward

  • Detection of pedestrians employing a wide-angle camera 査読有り

    Matsuda R., Tan J., Kim H., Ishikawa S.

    International Conference on Control, Automation and Systems   1748 - 1751   2011年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Recently, the number of accidents in Japan in which the pedestrian violated a traffic law has been decreasing. However, accidents caused by pedestrians crossing or dashing into a crosswalk still occur at a high rate, and these two accident sources account for 15% of all accidents caused by pedestrians. Although much ITS research on detecting pedestrians from in-vehicle cameras has been conducted to address these problems, it usually employs standard cameras, so pedestrians outside the camera's field of view cannot be detected. In this paper, we employ a wide-angle camera, which has a wider view than a general camera, and propose a technique for detecting pedestrians in the wide-angle image. Since every object appears smaller in a wide-angle camera image, we propose a technique for detecting pedestrians using optical flows converging to a FOE (Focus of Expansion). Experimental results show satisfactory performance of the technique. © 2011 ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84863031172&origin=inward
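    The detection principle in the abstract above rests on the Focus of Expansion: for a forward-moving camera, background optical-flow vectors lie on lines through the FOE, so the FOE can be estimated by least squares, and flow vectors that disagree with it hint at independently moving pedestrians. The sketch below is a generic formulation of that idea; estimate_foe, inconsistent_with_foe, and the 20-degree threshold are illustrative assumptions, not the paper's algorithm.

    import numpy as np

    def estimate_foe(points, flows):
        """points: (N, 2) pixel positions; flows: (N, 2) optical-flow vectors (u, v)."""
        u, v = flows[:, 0], flows[:, 1]
        px, py = points[:, 0], points[:, 1]
        # Each background flow defines the line v*(x - px) - u*(y - py) = 0 through the FOE (x, y).
        A = np.stack([v, -u], axis=1)
        b = v * px - u * py
        foe, *_ = np.linalg.lstsq(A, b, rcond=None)
        return foe  # (x, y) in pixels

    def inconsistent_with_foe(points, flows, foe, angle_thresh_deg=20.0):
        """Flag flow vectors whose direction deviates from the radial direction out of the FOE."""
        radial = points - foe
        cos = np.sum(radial * flows, axis=1) / (
            np.linalg.norm(radial, axis=1) * np.linalg.norm(flows, axis=1) + 1e-9)
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) > angle_thresh_deg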

  • Detection of a bicycle and its driving directions using HOG feature 査読有り

    Jung H., Tan J., Kim H., Ishikawa S.

    Proceedings of the 16th International Symposium on Artificial Life and Robotics, AROB 16th'11   781 - 784   2011年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Studies on car vision have recently focused intensively on recognizing humans. The Histograms of Oriented Gradients (HOG) feature has been proposed as a useful feature for recognizing a human standing against various kinds of background. On the other hand, although the bicycle is an important means of transportation in urban environments, its automatic recognition or detection is not an easy task for a computer vision system, because a bicycle's appearance can change dramatically with the viewpoint and the person riding the bicycle is a non-rigid object. Automatic bicycle detection is therefore an important research subject for an intelligent perception system using car vision. In this paper, we propose a method of detecting a bicycle and its driving direction using the HOG feature and the RealAdaboost algorithm. Experimental results show satisfactory performance of the proposed method. © 2011 ISAROB.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84866651335&origin=inward

  • Detecting human flows on a road different from main flows 査読有り

    Park M., Tan J., Nakashima Y., Kim H., Ishikawa S.

    Proceedings of the 16th International Symposium on Artificial Life and Robotics, AROB 16th'11   793 - 796   2011年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Automatic detection of human flows on a road by a computer vision system is of great importance mainly in surveillance systems, where human flows are observed by a camera and a computer analyzes the videos that the camera provides to detect a person having a different flow of movement, such as a person walking toward a certain direction while most of the people walk in the opposite direction, or a person running in a group of walking people. This paper describes a technique for finding a person having a different behavior or motion from others. The idea of the paper is to classify motion flows (or optical flows) extracted from a video into respective groups having respective directions of the motion by analyzing the motion flows. Experimental results show effectiveness of the proposed technique. © 2011 ISAROB.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84866643383&origin=inward

  • Applying HOG feature to the detection and tracking of a human on a bicycle 査読有り

    Jung H., Tan J., Ishikawa S., Morie T.

    International Conference on Control, Automation and Systems   1740 - 1743   2011年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Detection of a human on a bicycle is an important research subject for advanced safety vehicle driving systems aimed at decreasing traffic accidents. The Histograms of Oriented Gradients (HOG) feature has been proposed as a useful feature for detecting a standing human against various kinds of background, so many researchers currently use the HOG feature to detect humans. Detecting a human on a bicycle is more difficult than detecting a human on foot because a bicycle's appearance can change dramatically with the viewpoint. In this paper, we propose a method of detecting a human on a bicycle using the HOG feature and the RealAdaboost algorithm. When detecting a human on a bicycle, occlusion is a cause of decreased detection performance, and it is a serious matter in car vision research because occlusions occur in real transportation environments. We therefore estimate the next position of a human on a bicycle using object tracking. Experimental results and evaluation show satisfactory performance of the proposed method. © 2011 ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84856534795&origin=inward

  • Automatic segmentation of liver region from non-contrast and contrast CT images employing tree-structural image transformations 査読有り

    Komatsu M., Li G., Kim H., Tan J., Ishikawa S., Yamamoto A.

    Proceedings of the 16th International Symposium on Artificial Life and Robotics, AROB 16th'11   763 - 766   2011年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    In the medical image processing field, segmentation from CT images is one of the most important problems for analyzing abnormalities and for diagnosis by visual screening. Many segmentation techniques have been developed for automatic extraction of the ROI. However, there are still no fully automatic segmentation methods that are generally applicable to ROIs in CT image sets. In this paper, we present a technique for automatic extraction of the liver region on MDCT images employing automatic construction of tree-structural image transformation (ACTIT). We propose a new technique for extracting organs that applies ACTIT to a non-contrast and contrast image set in order to introduce temporal change information. We apply the proposed technique to three abdominal image sets and satisfactory segmentation results are achieved. © 2011 ISAROB.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84866687359&origin=inward

  • A 3D Matching Method for Organic Training Samples Alignment Based on Surface Curvature Distribution 査読有り

    Li, Kim, Tan, Ishikawa, Yamamoto

    Open Journal of Medical Imaging   1 ( 2 )   43 - 47   2011年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Improvement of a structured motion database for high accuracy human motion recognition 査読有り

    S.M.Ashik Eftakhar,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    International Journal of Biomedical Soft Computing and Human Sciences   17 ( 1 )   1 - 10   2011年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Direction-oriented human motion recognition with prior estimation of directions 査読有り

    Eftakhar S. M. A., Tan J. K., Kim H., Ishikawa S.

    Proc. IEEE Int. Conf of Industrial Electronics Society (IECON)   4081 - 4086   2011年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Australia   Melbourne   2011年11月07日  -  2011年11月10日

    DOI: 10.1109/IECON.2011.6120002

    Scopus

  • A non-rigid alignment method for triangular mesh surface of lung field 査読有り

    Li, Kim, Tan, Ishikawa, Yamamoto

    2011 International Workshop on Smart Info-Media Systems in Asia   150 - 153   2011年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Nagasaki   2011年10月31日  -  2011年11月

  • Moving objects detection at an intersection by sequential background extraction 査読有り

    Sonoda S., Tan J. K.,Kim H., Ishikawa S., Morie T.

    Proc. of 2011 11th Int. Conf. on Control, Automation and Systems   1752 - 1755   2011年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Gyeonggi-do   2011年10月26日  -  2011年10月29日

  • Extraction of individual pedestrians employing stereo camera images 査読有り

    Kawabe M., Tan J. K.,Kim H., Ishikawa S., Morie T.

    Proc. of 2011 11th Int. Conf. on Control, Automation and Systems   1744 - 1747   2011年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Gyeonggi-do   2011年10月26日  -  2011年10月29日

    Student Paper Award Finalist

  • Detection of pedestrians employing a wide-angle camera 査読有り

    Matsuda R., Tan J. K., Ishikawa S., Morie T.

    Proc. of 2011 11th Int. Conf. on Control, Automation and Systems   1748 - 1751   2011年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Gyeonggi-do   2011年10月26日  -  2011年10月29日

  • Applying HOG feature to the detection and tracking of a human on a bicycle 査読有り

    Jung H., Tan J. K., Ishikawa S., Morie T.

    Proc. of 2011 11th Int. Conf. on Control, Automation and Systems   1740 - 1743   2011年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Gyeonggi-do   2011年10月26日  -  2011年10月29日

  • 階層的画像位置合わせによる胸部マルチスライスCT画像の経時的差分処理の高速化 査読有り

    前田,金,タン,石川,山本

    バイオメディカル・ファジィ・システム学会誌   13 ( 2 )   1 - 7   2011年10月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • Action Dataset – A Survey 査読有り

    Ahad Md A. R,Tan J. K., Kim H., Ishikawa S.

    Proc. SICE Annual Conf. 2011   1650 - 1655   2011年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Tokyo   2011年09月13日  -  2011年09月18日

  • Improved approach for action recognition based on local and global features 査読有り

    Ahad Md A. R,Tan J. K., Kim H., Ishikawa S.

    Proc. SICE Annual Conf. 2011   1645 - 1649   2011年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Tokyo   2011年09月13日  -  2011年09月18日

  • Multiple persons' action recognition by fast human detection 査読有り

    Eftakhar S. M. A., Tan J. K., Kim H., Ishikawa S.

    Proc. SICE Annual Conf. 2011   1639 - 1644   2011年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Tokyo   2011年09月13日  -  2011年09月18日

    2011年度計測自動制御学会 学術奨励賞技術奨励賞

  • 3-D recovery of a non-rigid object from a single camera view 査読有り

    Ishikawa Shota, Tan J. K., Kim H., Ishikawa S.

    Proc. SICE Annual Conf. 2011   447 - 450   2011年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Tokyo   2011年09月13日  -  2011年09月18日

  • 3-D modeling of dynamic remote environments employing the images from cell-phone cameras and a communication network 査読有り

    Ohyama Y., Tan J. K., Kim H., Ishikawa S.

    Proc. SICE Annual Conf. 2011   48 - 51   2011年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Tokyo   2011年09月13日  -  2011年09月18日

  • Buildings Recovery Employing Manhattan-world Constraint 査読有り

    Ohyama Y., Tan J. K., Kim H., Ishikawa S.

    Proc. SICE Annual Conf. 2011   431 - 434   2011年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Tokyo   2011年09月13日  -  2011年09月18日

  • Detecting a human body direction using multiple-HOG 査読有り

    Nakashima Y., Tan J. K., Ishikawa S., Morie T.

    Proc. First Int. Sympo. on Future Active Safety Technology toward Zero-traffic-accident   1 - 6   CD-Rom   2011年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Japan   Tokyo   2011年09月05日  -  2011年09月09日

  • A Shape-Based Segmentation Method of Multi-organs on CT Images Using Genetic Algorithm 査読有り

    Li, Takahashi, Kim, Tan, Ishikawa,Yamamoto

    ICIC Express Letters   5 ( 9(A) )   3189 - 3194   2011年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Automatic Detection of Phalangeal Region on CR Images Using Active Contour Model 査読有り

    Yamakawa, Murakami, Kim, Tan, Ishikawa, Aoki

    The 12th International Symposium on Advanced Intelligent Systems   362 - 365   2011年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2011年09月28日  -  2011年10月01日

  • Detection of Calcification on Carotid Artery in Dental CT Image 査読有り

    Shimizu, Kim, Tan, Ishikawa, Tanaka, Kitou, Morimoto

    The 12th International Symposium on Advanced Intelligent Systems   358 - 361   2011年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2011年09月28日  -  2011年10月01日

  • Segmentation Method for Mandibular Region in Dental CT Images Using MPR Image Information 査読有り

    Kuroki, Kim, Tan, Ishikawa, Tanaka, Kitou, Morimoto

    The 12th International Symposium on Advanced Intelligent Systems   63 - 66   2011年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2011年09月28日  -  2011年10月01日

  • Detection of Artery Regions on Fresh Blood Imaging Using Particle Filter Algorithms 査読有り

    Koga, Yamamoto, Kim, Tan, Ishikawa

    The 12th International Symposium on Advanced Intelligent Systems   59 - 62   2011年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2011年09月28日  -  2011年10月01日

  • A Method for Image Registration on Head CT and MR Image by Using Real-coded Genetic Algorithm 査読有り

    Hayata, Yamamura, Kim, Tan, Ishikawa, Yamamoto

    The 12th International Symposium on Advanced Intelligent Systems   55 - 58   2011年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2011年09月28日  -  2011年10月01日

  • Automatic Detection Method for Candidate Regions of Lung Nodule from the Temporal Subtraction Images 査読有り

    Tokisa, Miyake, Maeda, Kim, Tan, Ishikawa, Murakami, Aoki

    The 12th International Symposium on Advanced Intelligent Systems   51 - 54   2011年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2011年09月28日  -  2011年10月01日

  • Automatic Segmentation Method of Internal Organ Regions Using Graph Cuts 査読有り

    Tani, Li, Kim, Tan, Ishikawa, Yamamoto

    The 12th International Symposium on Advanced Intelligent Systems   47 - 50   2011年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2011年09月28日  -  2011年10月01日

  • 胸部CT画像のための経時的差分像生成法と臨床用アプリケーションの開発 査読有り

    金,三宅,前田,タン,石川,村上,青木

    CT検診   18 ( 2 )   107 - 113   2011年08月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • Detection of Lung Nodules in Thoracic MDCT Images Based on Temporal Changes from Previous and Current Images 査読有り

    Maeda, Tomiyama, Kim, Miyake, Itai, Tan, Ishikawa, Yamamoto

    Journal of Advanced Computational Intelligence and Intelligent Informatics   15 ( 6 )   707 - 713   2011年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • オトガイ孔の位置情報とヘッセ行列を用いた歯科CT画像からの下顎管領域の抽出 査読有り

    黒木,三戸,金,タン,石川,田中,鬼頭,森本

    バイオメディカル・ファジィ・システム学会誌   13 ( 1 )   109 - 112   2011年06月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • 過去・現在の胸部MDCT像セットを用いた経時的差分像技術の開発 査読有り

    三宅,金,前田,タン,石川,村上,青木,山本

    バイオメディカル・ファジィ・システム学会誌   13 ( 1 )   73 - 80   2011年06月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • Various aspects of a structured motion database for efficient human motion recognition 査読有り

    Eftakhar, S.M.A., Tan, J. K., Kim, H., Ishikawa, S.

    International Journal for Computational Vision and Biomechanics   4 ( 1 )   11 - 20   2011年06月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Reduction of processing times for temporal subtraction on lung CT image employing octree algorithms 査読有り

    Shinya Maeda, Hyoungseop Kim, Yoshinori Itai, Joo Kooi Tan, Seiji Ishikawa, Akiyoshi Yamamoto

    International Journal of Innovative Computing,Information and Control   7 ( 5 )   2603 - 2610   2011年05月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • A Registration Method for Temporal Subtraction on Thoracic MDCT Images 査読有り

    Kim, Miyake, Maeda, Tan, Ishikawa, Murakami, Aoki

    Proceedings of the 4th Annual Meeting of Asian Association for Algorithms and Computation   64   2011年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Taiwan   Taiwan   2011年04月16日  -  2011年04月17日

  • Registration of head ct and mr image sets based on optimal retrieval of volume of interests 査読有り

    Yamamura Y., Harada K., Kim H., Kooi Tan J., Ishikawa S., Yamamoto A.

    ICIC Express Letters, Part B: Applications   2 ( 1 )   101 - 106   2011年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Image registration is one of the important techniques for diagnosis or treatment using medical images. In particular, it is very efficient to detect abnormalities by comparing two or more images. Images provided by multiple modalities, or taken at several different times, are used for diagnosis. CT and MR imaging of the head is one such multi-modal combination: for diagnosis and surgical planning, physicians and surgeons gain important information from these modalities. In general, to simulate the procedure before a surgical operation, registration of head MR and CT images is mainly performed manually. However, manual registration of the object may require several hours of analysis based on anatomical knowledge. In this paper, we describe a new image registration method for head images obtained from CT and MRI. Our method uses optimal retrieval of neighboring volumes of interest (VOIs) and maximization of mutual information. The method can also reduce the processing time without decreasing the registration accuracy. © 2011 ISSN 2185-2766.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=79251503899&origin=inward
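    The registration described above maximizes mutual information over volumes of interest. As a reference point, the sketch below shows one common way to estimate mutual information from the joint histogram of two equally sized VOIs; the bin count and the function name mutual_information are assumptions for illustration, and the paper's optimal VOI retrieval strategy is not reproduced.

    import numpy as np

    def mutual_information(ct_voi, mr_voi, bins=32):
        """Mutual information between two equally sized image volumes (or slices)."""
        joint, _, _ = np.histogram2d(ct_voi.ravel(), mr_voi.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal over CT intensities
        py = pxy.sum(axis=0, keepdims=True)   # marginal over MR intensities
        nz = pxy > 0                          # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # A registration loop would translate/rotate the MR VOI, recompute this score,
    # and keep the transform that maximizes it.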

  • Registration of head CT and MR image sets based on optimal retrieval of volume of interests 査読有り

    Shinya Maeda, Hyoungseop Kim, Yoshinori Itai, Joo Kooi Tan, Seiji Ishikawa, Akiyoshi Yamamoto

    ICIC Express Letters Part B: Applications   2 ( 1 )   101 - 106   2011年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • 3-D modeling of dynamic remote environments employing the images from cell-phone cameras and a communication network 査読有り

    Ozaki M., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   48 - 51   2011年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper describes a system for 3-D modeling of dynamic remote environments employing the images provided from cell-phone cameras and a communication network. In the system, we employ two cell-phones to acquire images. Each image is sent to a lab by an e-mail and employed for the 3-D modeling. For the algorithm of 3-D recovery, we employ a mobile stereo vision method, since this method is based on the factorization method and a steady solution can be expected. In the experiment, we tested the influence of the restoration error by using two cell-phones and obtained an excellent result. © 2011 SICE.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=81255179892&origin=inward

  • Multiple persons' action recognition by fast human detection 査読有り

    Eftakhar S., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   1639 - 1644   2011年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Recognition of the actions of several persons in a scene is gaining importance for human action analysis. Such a kind of system capable of recognizing human actions in a scene is proposed in this paper. We have adopted a fast human detector using Histograms of Oriented Gradients (HOG) and Support Vector Machine (SVM) to robustly trace individual persons. Person-wise bounding blobs are extracted and individual features are tracked in subsequent frames. The recognition is performed individually by comparing with the known motion templates using a high-speed action database. The technique guarantees improved performance in terms of detection and recognition. © 2011 SICE.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=81255147356&origin=inward

  • Improved approach for action recognition based on local and global features 査読有り

    Ahad M., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   1645 - 1649   2011年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper presents an improved spatio-temporal (XYT) approach for local interest point-based global action representation that considers the history of moving points in an action. The presented spatio-temporal representation demonstrates robust results, and we compare the developed method with other previous methods. It is a SURF-based method in which visual features are extracted to select candidate points using the SURF detector. Afterwards, motion features are extracted by exploiting the local interest points and employing optical flow. RANSAC is employed to reduce unwanted outliers and improve the performance of the method. On an outdoor action dataset, the developed method demonstrates satisfactory recognition results. © 2011 SICE.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=81255210497&origin=inward

  • Buildings recovery employing Manhattan-world constraint 査読有り

    Ohyama Y., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   431 - 434   2011年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    The structure from Motion (SfM) algorithm is an established method of shape recovery from a single video sequence. However, limitation of the method is the accuracy of the recovery due to poor texture of a recovered plane or a large scale object. We propose a technique for improving the precision of the recovery by applying the Manhattan-world constraint to the SfM algorithm, which assumes that the buildings are composed of vertical and horizontal planes. We show its effectiveness by the experiments performed in a real-life environment. © 2011 SICE.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=81255178501&origin=inward

  • Action dataset - A survey 査読有り

    Ahad M., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   1650 - 1655   2011年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Human action understanding and recognition have various demands for different applications in the fields of computer vision and human-machine interaction. For more than a decade, extensive research has therefore been going on in this arena to recognize various actions and activities. Researchers have been exploiting various action datasets, and some of them have become prominent. Although there are some good datasets, a thorough survey of them has unfortunately been long overdue. This paper attempts such a survey: it presents the key datasets and analyzes them from different perspectives. © 2011 SICE.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=81255147357&origin=inward

  • 3-D recovery of a non-rigid object from a single camera view 査読有り

    Ishikawa S., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   447 - 450   2011年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper proposes a technique for 3-D recovery of a non-rigid object, such as a moving person, from a single camera view. Recovery of a non-rigid object is not possible from a single camera view without any condition. In this paper, we propose a single camera technique for recovering a non-rigid object under the condition that the object is composed of a set of rigid objects. The experiments employing real motion data show effectiveness of the proposed technique. © 2011 SICE.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=81255128065&origin=inward

  • Solving boundary problem of the motion database for improved human motion recognition 査読有り

    S.M.Ashik Eftakhar,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    Proc. of the 6th International Conference on Electrical and Computer Engineering (ICECE2010)   698 - 701   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Bangladesh   Dhaka   2010年12月18日  -  2010年12月20日

    主要論文集(会議)

    DOI: 10.1109/ICELCE.2010.5700789

    Scopus

  • Recognition human motions from surrounding viewpoints employing hierarchical eigenspaces 査読有り

    S.M.Ashik Eftakhar,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    Proc. of the 11th International Conference on Control, Automation, Robotics and Vision (ICARCV2010)   2179 - 2184   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Singapore   Singapore   2010年12月05日  -  2010年12月08日

    主要論文集(会議)

  • Recognizing human motions from surrounding viewpoints employing hierarchical eigenspaces 査読有り

    Eftakhar S., Tan J., Kim H., Ishikawa S.

    11th International Conference on Control, Automation, Robotics and Vision, ICARCV 2010   2179 - 2184   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    The development of an automatic human motion recognition system leads to the solution to the problems concerning the video-based applications in recognizing human activities. Such a system is to be investigated in the context of human motion analysis. Although there were a large number of researches in this area for a long time, there was little attention given to the development of a structured database for successful retrieval of motion data incorporating the time-space trade-off. We have proposed a system which is capable of dealing with large set of motion data employing an efficient database structure with improved performance. We have analyzed two motion representation techniques to realize the effectiveness of the system. Performance evaluation is performed by synthesized 3D human motions observed from eight camera directions. Finally, our results show that the proposed recognition scheme performs well for the captured motions. ©2010 IEEE.

    DOI: 10.1109/ICARCV.2010.5707401

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=79952382487&origin=inward

  • On detecting a human and its body direction from a video 査読有り

    Nakashima Y., Tan J., Ishikawa S., Morie T.

    Proceedings of the 15th International Symposium on Artificial Life and Robotics, AROB 15th'10   294 - 297   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper describes a novel technique for detecting a human and the body direction using the HOG feature. The HOG feature is a well-known feature for judging the presence of a human, but it normally contains the background feature, which has a negative influence on the judgment. This paper proposes the use of a HOG feature based on a human model. The feature is also employed for detecting the human body direction. Experimental results show the effectiveness of the proposed technique compared to the conventional one. © 2010 ISAROB.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78649252315&origin=inward

  • Modeling of human postures using stereo camera 査読有り

    Yoshida S., Tan J., Kim H., Ishikawa S.

    ICCAS 2010 - International Conference on Control, Automation and Systems   1432 - 1435   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper proposes an automatic human posture acquisition technique for a robot. A decrease in the working population will be a vital problem in our society, and avoiding dangerous work is also important for human beings to raise their quality of life (QOL). To address these issues, it would be quite useful to realize a robot that works in place of a human. Based on this background, our research aims at providing a technique for a robot to acquire various human postures and motions automatically. In this paper, a technique is presented for obtaining human postures automatically by employing a human skeleton model and a stereo camera system. Performance of the technique is shown experimentally. ©ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78751531236&origin=inward

  • Human motion recognition using directional motion history images 査読有り

    Murakami M., Tan J., Kim H., Ishikawa S.

    ICCAS 2010 - International Conference on Control, Automation and Systems   1445 - 1449   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Nowadays many persons are needed to observe the images from surveillance cameras, because many such cameras are installed in towns and buildings. The observers work under strain, because they must constantly watch the images to find a person with abnormal motion, so reduction of this load is necessary. Much existing research on human motion recognition stops at recognition, and the information in the result is not used further. We assume that a human motion is a set of basic motions, so that an overall motion can be understood from the recognition results of the basic motions. The goal of the present research is to develop a method of human motion representation and translation using directional motion history images (DMHIs). In this paper, we describe a method of recognizing basic motions using the DMHIs. Recognition is performed with the Histograms of Oriented Gradients (HOG) feature. In the experiment, the number of bins and the local area (cell) sizes for calculating the HOG feature are varied and the most suitable values are determined. ©ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78751548634&origin=inward

  • Human behavior analysis with optical flow and median-filtered Temporal Motion Segmentation method 査読有り

    Ahad M., Tan J., Kim H., Ishikawa S.

    Proceedings of the 15th International Symposium on Artificial Life and Robotics, AROB 15th'10   1022 - 1025   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    We focus on human activity analysis so that an intelligent system (e.g., a robot) can easily understand important activities and provide help accordingly. We present an improved method for activity analysis, called the Median-filtered Temporal Motion Segmentation (MfTMS) method, which can temporally segment and understand motion from a video sequence. It is based on the computation of optical flow, which is then split into four different channels. Median filtering is applied and four motion-history templates are produced from the directional motion vectors. Based on the total pixel volumes of these history templates and their variations, the various directions of the action primitives are segmented temporally. We conducted experiments in both indoor and outdoor environments and achieved sound performance. This segmentation method can assist an intelligent system or a robot in understanding activities and making decisions afterwards. It is a simple but robust approach. © 2010 ISAROB.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84866648695&origin=inward
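    The method above splits the optical-flow field into four directional channels and maintains a motion-history template per channel. The sketch below shows one plausible per-frame update of such directional templates; the channel conventions (u > 0 as rightward, v < 0 as upward in image coordinates), the decay rate, and the parameter names are assumptions for illustration rather than the paper's exact formulation.

    import numpy as np

    def directional_history_update(history, flow, tau=30.0, mag_thresh=1.0):
        """history: (4, H, W) templates (right, left, up, down); flow: (H, W, 2) dense (u, v) field."""
        u, v = flow[..., 0], flow[..., 1]
        moving = np.hypot(u, v) > mag_thresh
        channels = [moving & (u > 0), moving & (u < 0), moving & (v < 0), moving & (v > 0)]
        for h, active in zip(history, channels):
            h[active] = tau                                  # stamp the current "time" where motion occurred
            h[~active] = np.maximum(h[~active] - 1.0, 0.0)   # let older motion decay elsewhere
        return history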

  • Detection of blood vessel regions using weighted MIP from non-enhanced MR imaging 査読有り

    Koga Y., Yamamoto A., Kim H., Tan J., Ishikawa S.

    ICCAS 2010 - International Conference on Control, Automation and Systems   2078 - 2081   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Recently, arteriosclerosis obliterans (ASO), caused by chronic ischemia of the limbs, has been increasing remarkably. As one of the diagnosis methods for this disease, image diagnosis such as MR imaging is used in the medical field. In this paper, we propose a method for a medical diagnostic support system using the fresh blood imaging (FBI) method, in which the structure of the blood vessel regions in the legs is analyzed from non-enhanced MR images. In the method, an initial point of the vessel region in the artery area is acquired automatically from weighted MIP images, and the artery area is then extended using a region growing method. In particular, the positional relationship of crossing blood vessels and the segmentation accuracy in peripheral artery areas are improved by using weighted MIP images, a result that could not be obtained with conventional MIP images. The proposed method was evaluated experimentally and satisfactory results were achieved. ©ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78751514149&origin=inward

  • Detecting a human body direction using a feature selection method 査読有り

    Nakashima Y., Tan J., Ishikawa S., Morie T.

    ICCAS 2010 - International Conference on Control, Automation and Systems   1424 - 1427   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper describes a novel technique for detecting a human body direction using an SVM constructed from HOG features selected by AdaBoost. The HOG feature is a well-known feature for the robust judgment of a human, and we employ it for detecting the human body direction. We compared several feature selection methods with the previous one. Experimental results show the effectiveness of the proposed method. ©ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78751524891&origin=inward

  • Automatic detection of pedestrians from stereo camera images 査読有り

    Tan J., Inumaru K., Ishikawa S., Morie T.

    Artificial Life and Robotics   15 ( 4 )   459 - 463   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    We propose a technique for detecting pedestrians by employing stereo camera images and based on probabilistic voting. From a disparity map, each pixel on the image is voted on a depth map employing a 2-D Gaussian distribution. The region having the peak value in the vote is chosen as the foot of an object. The object is specified by a rectangle on the right image, which is referred to as the region of interest (ROI). This ROI is described by HOG features, and is judged by SVM if it contains a person. With an ROI containing a person, a Kalman filter is applied to track the person through successive image frames. The performance of the detection of people was evaluated by employing ground truth data. The ratio of people detected to the ground truth data, called the recall rate, was 80%. This is a satisfactory result. © 2010 International Symposium on Artificial Life and Robotics (ISAROB).

    DOI: 10.1007/s10015-010-0843-2

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78651345756&origin=inward
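    The abstract above verifies each disparity-voted ROI with HOG features and an SVM and then tracks the confirmed person with a Kalman filter. The sketch below covers only those two stages: OpenCV's pretrained people detector stands in for the authors' own SVM, the disparity voting is not reproduced, and the noise covariances are placeholder values.

    import cv2
    import numpy as np

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def roi_contains_person(image, roi):
        """Judge whether the ROI (x, y, w, h) proposed from the disparity vote contains a person."""
        x, y, w, h = roi
        patch = cv2.resize(image[y:y + h, x:x + w], (64, 128))   # canonical HOG window size
        gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY) if patch.ndim == 3 else patch
        locations, _ = hog.detect(gray)
        return len(locations) > 0

    def make_kalman(x, y):
        """Constant-velocity Kalman filter over the ROI centre: state (x, y, vx, vy)."""
        kf = cv2.KalmanFilter(4, 2)
        kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                        [0, 1, 0, 1],
                                        [0, 0, 1, 0],
                                        [0, 0, 0, 1]], np.float32)
        kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
        kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
        kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
        kf.statePost = np.array([[x], [y], [0], [0]], np.float32)
        return kf

    # Per frame: predicted = kf.predict(); when the detector re-confirms the person at (mx, my),
    # call kf.correct(np.array([[mx], [my]], np.float32)).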

  • Automatic detection of pedestrians from stereo camera images 査読有り

    Inumaru K., Tan J., Ishikawa S., Morie T.

    Proceedings of the 15th International Symposium on Artificial Life and Robotics, AROB 15th'10   1018 - 1021   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    We propose a technique for detecting pedestrians employing stereo camera images based on probabilistic voting. From a disparity map, each pixel on the image is voted onto a depth map employing a 2-D Gaussian distribution. The region having a peak value of the vote is chosen as the foot of an object. The object is specified by a rectangle on the right image, which is referred to as a region of interest (ROI). This ROI is described by HOG features, and an SVM judges whether it contains a person. For an ROI containing a person, a Kalman filter is applied to track the person through successive image frames. Performance of the detection of persons was evaluated employing ground truth data. The rate of detected persons to the ground truth data, called the recall rate, was 80%. This is a satisfactory result. © 2010 ISAROB.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84866676321&origin=inward

  • A color-based particle filter for multiple objects tracking in outdoor environment 査読有り

    Sugandi B., Kim H., Tan J., Ishikawa S.

    Proceedings of the 15th International Symposium on Artificial Life and Robotics, AROB 15th'10   604 - 609   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Tracking multiple objects is more challenging than tracking a single object. Problems arise in multiple object tracking that do not exist in single object tracking, such as object occlusion, the appearance of new objects, the disappearance of already existing objects, and updating occluded objects. In this paper, we present an approach to multiple object tracking in the presence of occlusion, background clutter and appearance change. Occlusion is handled by considering the predicted trajectories of the objects based on a dynamic model and likelihood measures. We also propose target-model update conditions that ensure proper tracking of multiple objects. The proposed method is implemented in a probabilistic framework, a particle filter, in conjunction with a color feature. The particle filter has proven very successful for non-linear and non-Gaussian estimation problems. It approximates the posterior probability density of the state, such as the object position, by samples or particles, each denoting a hypothetical state of the tracked object together with its weight. The observation likelihood of the objects is modeled by a color histogram, and the sample weight is measured by the Bhattacharyya coefficient between each sample's histogram and the specified target model. The algorithm can successfully track multiple objects in the presence of occlusion and noise. Experimental results show the effectiveness of our method in tracking multiple objects. © 2010 ISAROB.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84866680510&origin=inward
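    In the color-based particle filter described above, each particle's weight comes from the similarity between the candidate region's color histogram and the target model, measured with the Bhattacharyya coefficient. The sketch below shows that likelihood computation only; the hue-only histogram, the 16-bin setting, and the Gaussian width sigma are assumptions for illustration.

    import cv2
    import numpy as np

    def hue_histogram(bgr_patch, bins=16):
        """Normalized hue histogram of an image patch (the colour model of a tracked object)."""
        hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180]).ravel()
        return hist / (hist.sum() + 1e-9)

    def particle_weight(frame, particle, target_hist, sigma=0.2):
        """particle = (x, y, w, h): weight from the Bhattacharyya distance to the target model."""
        x, y, w, h = [int(v) for v in particle]
        candidate = hue_histogram(frame[y:y + h, x:x + w])
        bhattacharyya_coeff = np.sum(np.sqrt(candidate * target_hist))
        distance = np.sqrt(max(1.0 - bhattacharyya_coeff, 0.0))
        return float(np.exp(-distance ** 2 / (2.0 * sigma ** 2)))

    # Resampling then draws particles in proportion to these weights, which is what lets the
    # tracker keep separate hypotheses alive while an object is temporarily occluded.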

  • Segmentation method for phalanges in CR image by use of DCT 査読有り

    Hozu Y., Murakami S., Kim H., Tan J., Ishikawa S., Aoki T.

    ICCAS 2010 - International Conference on Control, Automation and Systems   2082 - 2085   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    In this paper, we propose a CAD (Computer aided diagnosis) system to analyze the RA (rheumatoid arthritis) and osteoporosis by using image processing techniques from the CR images. To analyze the RA, we develop a segmentation method for phalanges in CR Image by use of DCT (Discrete Cosine Transform) for detection of temporal change. The temporal change is detected using the difference image between previous image and current one. The DCT is performed to emphasize the edge of the difference image. Finally, the phalanges are extracted by performing Snakes. The primary objective of this study is to segment phalanges by making temporal subtraction images. We apply our proposed technique to eight cases of CR images and satisfactory segmentation results are achieved. A new index that diagnoses the progress level of the disease of phalanges can be offered as a second opinion. ©ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78751512171&origin=inward

  • Segmentation of artery areas on none-enhanced fresh blood imaging based on dot enhancement filter and 3-D region growing method 査読有り

    Yamamoto A., Nishizaki T., Kim H., Tan J., Ishikawa S.

    Proceedings of the 15th International Symposium on Artificial Life and Robotics, AROB 15th'10   493 - 496   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Peripheral arterial disease (PAD) is characterized by arterial obstruction of the lower extremities due to atherosclerosis and manifests in the lower extremities as intermittent claudication, limb ischemia, or gangrene. Diagnosis of PAD using magnetic resonance imaging (MRI) equipment without contrast medium is available as useful visual screening in clinical practice. In this paper, we propose a new method for segmentation of arterial images obtained from non-contrast enhanced magnetic resonance angiography (MRA), based on a dot enhancement filter and a 3-D region growing method, and satisfactory experimental results are obtained. © 2010 ISAROB.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84866686962&origin=inward

  • Segmentation method for cardiac region in CT images based on active shape model 査読有り

    Takahashi H., Komatsu M., Kim H., Tan J., Ishikawa S., Yamamoto A.

    ICCAS 2010 - International Conference on Control, Automation and Systems   2074 - 2077   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Recently, multi detector row computed tomography (MDCT) has been introduced into medical fields. By the development of MDCT, images with high quality are provided into medical fields. So many related image processing techniques are proposed into medical image processing fields for extraction of abnormal area. In the medical image processing field, segmentation is one of the most important problems for analyzing the abnormalities and recognition of internal structures before the operation. For this reason, many approaches are proposed for detection of abnormal area on CT images. Before detection of abnormal areas, segmentation of organs in CT images is one of the most important problems for analyzing of disease. However, poor contrast, image noises and motion artifacts make this segmentation problem difficult in particular in cardiac region. Moreover, there are still no fully automatic segmentation methods for cardiac region on CT images. In this paper, we present automatic extraction technique for detection of cardiac region. Our proposed technique combines active shape model (ASM) and genetic algorithm (GA). We apply our proposed technique to five real CT images and satisfactory segmentation results are achieved. ©ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78751513157&origin=inward

  • Method for reduction of metal artifacts in dental CT images by use of frequency analysis 査読有り

    Mito K., Kim H., Tan J., Ishikawa S., Tanaka T., Morimoto Y.

    ICCAS 2010 - International Conference on Control, Automation and Systems   1271 - 1274   2010年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    CAD (Computer Aided Diagnosis) systems with a 3-D viewer function to display DICOM (Digital Imaging and Communication in Medicine) images have been developed in recent years. The functions of the CAD systems for artificial implants now under development include removal of artifacts in the input DICOM image, region extraction of bone, major vessels and nerves, and estimation of the place where the implant is to be inserted. However, high-performance software is still required, because the series of operations carried out before simulating implant insertion takes considerable processing time at a clinical site. In this paper, we propose a function that automatically extracts the bone area separately for the teeth and the mandible, as an addition to a system that can perform this series of operations before implant simulation. In addition, we also propose a function to extract each tooth separately. The main aim of this paper is to develop an integrated system for dental screening that can be operated on a personal computer. ©ICROS.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78751559566&origin=inward

  • HOG特徴量と人マスクを用いた人物及び身体方向の検出 査読有り

    中島祐樹,タンジュークイ,石川聖二,森江隆

    画像電子学会誌   39 ( 6 )   1104 - 1111   2010年11月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • Segmentation method for phalanges in CR image by use of DCT 査読有り

    Yoshimichi Hozu,Seiichi Murakami,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa,Takatoshi Aoki

    Proc. of International Conference on Control, Automation and Systems 2010   2082 - 2085   2010年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2010年10月27日  -  2010年10月30日

  • Detection of blood vessel regions using weighted MIP from non-enhanced MR imaging 査読有り

    Yuiko Koga,Akiyoshi Yamamoto,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa

    Proc. of International Conference on Control, Automation and Systems 2010   2078 - 2081   2010年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2010年10月27日  -  2010年10月30日

  • Segmentation method for cardiac region in CT images based on active shape model 査読有り

    Hiroki Takahashi,Masafumi Komatsu,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa,Akiyoshi Yamamoto

    Proc. of International Conference on Control, Automation and Systems 2010   2074 - 2077   2010年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2010年10月27日  -  2010年10月30日

  • Method for reduction of metal artifacts in dental CT images by use of frequency analysis 査読有り

    Keishi Mito,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa,Tatsuro Tanaka,Yasugiro Morimoto

    Proc. of International Conference on Control, Automation and Systems 2010   1271 - 1274   2010年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2010年10月27日  -  2010年10月30日

  • Modeling of human postures using stereo camera 査読有り

    Shigeo Yoshida ,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    Proc. of International Conference on Control, Automation and Systems 2010   1432 - 1435   2010年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2010年10月27日  -  2010年10月30日

  • Detecting a human body direction using a feature selection method 査読有り

    Yuki Nakashima,Joo Kooi Tan,Seiji Ishikawa,Takashi Morie

    Proc.of International Conference on Control, Automation and Systems 2010   1424 - 1427   2010年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2010年10月27日  -  2010年10月30日

    主要論文集(会議)

  • Human motion recognition using directional motion history images 査読有り

    Makoto Murakami,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    Proc.of International Conference on Control, Automation and Systems 2010   1445 - 1449   2010年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2010年10月27日  -  2010年10月30日

    主要論文集(会議)

  • Motion History Image: Its Variants and Applications 査読有り

    Md. Atiqur Rahman Ahad,Joo Kooi Tan,H.S. Kim,Seiji Ishikawa

    Machine Vision and Applications   1 - 27   2010年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • A robust face tracking method employing color-based particle filter 査読有り

    Yuji Nishina, Md. Atiqur Rahman Ahad,Joo Kooi Tan,H.S. Kim,Seiji Ishikawa

    International Journal of Biomedical Soft Computing and Human Sciences   16 ( 1 )   127 - 134   2010年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • Viewpoint-oriented human activity recognition in a cluttered outdoor environment 査読有り

    S.M.Ashik Eftakhar,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    Proc.of SICE Annual Conference 2010   1506 - 1511   2010年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Taiwan   Taipei   2010年08月18日  -  2010年08月21日

    主要論文集(会議)

  • Three-dimensional modelling of a non-rigid object employing a single camera 査読有り

    Heewook Jung,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    Proc. of SICE Annual Conference 2010   2972 - 2975   2010年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Taiwan   Taipei   2010年08月18日  -  2010年08月21日

  • Creating an entire object model employing virtual see-through cameras 査読有り

    Toshimasa Sone,Meishan Piao,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    Proc. of SICE Annual Conference 2010   2504 - 2506   2010年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Taiwan   Taipei   2010年08月18日  -  2010年08月21日

    主要論文集(会議)

  • On detecting a human body direction using an image information 査読有り

    Yuki Nakashima,Joo Kooi Tan,Seiji Ishikawa,Takashi Morie

    Proc.of SICE Annual Conference 2010,   1521 - 1522   2010年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Taiwan   Taipei   2010年08月18日  -  2010年08月21日

  • Human motion recognition using directional motion history images 査読有り

    Makoto Murakami,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    Proc.of SICE Annual Conference 2010   1512 - 1514   2010年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Taiwan   Taipei   2010年08月18日  -  2010年08月21日

    主要論文集(会議)

  • Tracking of multiple moving objects under outdoor environment using color-based particle filter 査読有り

    Budi Sugandi,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa

    Proc. of 2010 3rd IEEE International Conference on Computer Science and Informastion Technology   103 - 107   2010年07月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    China   Chengdu   2010年07月07日  -  2010年07月10日

    DOI: 10.1109/ICCSIT.2010.5563884

    Scopus

  • Lower-Dimensional Feature Sets for Template-based Motion Recognition Approaches 査読有り

    Md. Atiqur Rahman Ahad,Joo Kooi Tan,H.S. Kim,Seiji Ishikawa

    Journal of Computer Science, Science Publication   6 ( 8 )   920 - 927   2010年07月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)

    主要雑誌

    DOI: 10.3844/jcssp.2010.920.927

    Scopus

  • A Simple Approach for Low-Resolution Activity Recognition 査読有り

    Md. Atiqur Rahman Ahad,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    International Journal for Computational Vision and Biomechanics (IJCVB)   3 ( 1 )   17 - 24   2010年06月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌

  • 3次元リージョングローイングとテンプレート・マッチングを用いたCT画像からの肺臓・心臓領域の自動抽出 査読有り

    小松昌史,高橋弘樹,金亨燮,タンジュークイ,石川聖二,山本晃義

    バイオメディカル・ファジィ・システム学会誌   12 ( 1 )   93 - 98   2010年05月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • 車載カメラ映像からの横断歩道上の歩行者の検出法 査読有り

    タンジュークイ,三好誠,石川聖二,森江隆

    バイオメディカル・ファジイ・システム学会誌   12 ( 1 )   99 - 103   2010年05月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • 仮想透視カメラ群による物体の全周囲復元法と人動作の立体モデル化への応用 査読有り

    タンジュークイ,曽根俊昌,石川聖二,金亨燮,四宮孝史

    バイオメディカル・ファジイ・システム学会誌   12 ( 1 )   71 - 77   2010年05月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • Automatic detection of pedestrians from stereo camera images 査読有り

    J.K. Tan,K. Inumaru,S. Ishikawa,T. Morie

    Journal of Artificial Life and Robotics   15 ( 4 )   459 - 463   2010年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • On detecting a human and its body direction from a video 査読有り

    Y. Nakashima,J.K. Tan,S. Ishikawa,T. Morie

    Journal of Artificial Life and Robotics   15 ( 4 )   455 - 458   2010年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌

    DOI: 10.1007/s10015-010-0841-4

    Scopus

  • Approaches for Global-based action representations for games and action undesstanding 査読有り

    Md. Atiqur Rahman Ahad,Joo Kooi Tan,H.S. Kim,Seiji Ishikawa

    2nd International workshop on Facial and Bodily Expressions for Control and Adaptation of Games (ECAG'11) in conjunction with the IEEE International conference on Face and Gesture Recognition   CD-R   2010年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    America   California ,Santa Barbara   2010年03月20日  -  2010年03月25日

    DOI: 10.1109/FG.2011.5771342

    Scopus

  • Action Recognition by Employing Combined Directional Motion History and Energy Images 査読有り

    Md. Atiqur Rahman Ahad,Joo Kooi Tan,H.S. Kim,Seiji Ishikawa

    Proceedings of the IEEE Int. Conf. on Computer Vision and Pattern Recognition workshop on Computer Vision and Computer Games (CVCG10)   73 - 78   2010年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    America   San Francisco   2010年03月13日  -  2010年03月18日

    DOI: 10.1109/CVPRW.2010.5543160

    Scopus

  • SURF-based spatio-temporal history image method for action Representation 査読有り

    Md. Atiqur Rahman Ahad,Joo Kooi Tan,H.S. Kim,Seiji Ishikawa

    Proc. of 2011 Joint IEEE International Conference on Industrial Technology & 43rd Southeastern Symposium on System Theory   397 - 402   2010年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Alabama   Auburn   2010年03月14日  -  2010年03月16日

    DOI: 10.1109/ICIT.2011.5754412

    Scopus

  • Reduced Feature Vector Analysis for Motion Recognition 査読有り

    Md. Atiqur Rahman Ahad,Joo Kooi Tan,H.S. Kim,Seiji Ishikawa

    Proceedings of International Symposium on Robotics and Intelligent Sensors (IRIS)   315 - 320   2010年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    名古屋   2010年03月08日  -  2010年03月11日

  • Analysis of motion self-occlusion problem due to motion overwriting for human activity recognition 査読有り

    Ahad M., Tan J., Kim H., Ishikawa S.

    Journal of Multimedia   5 ( 1 )   36 - 46   2010年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Various recognition methodologies aim to recognize and understand varieties of human activities. However, motion self-occlusion due to motion overlapping in the same region is a daunting problem, and existing motion-recognition methods either bypass it or solve it in a complex manner. Appearance-based template matching paradigms are simpler, and hence these approaches are faster for activity analysis. In this paper, we concentrate on the motion self-occlusion problem caused by motion overlapping in various complex activities. In the Motion History Image (MHI) method, self-occlusion is evident and should be solved. Therefore, this paper compares our directional motion history image concept with the basic Motion History Image, the Multi-level Motion History representation, and the Hierarchical Motion History Histogram representation with respect to the self-occlusion problem of the basic Motion History Image representation. We employ some complex aerobics exercises and find our method robust compared to other methods on this self-occlusion problem. We employ the seven higher-order Hu moments to compute the feature vector for each activity; the k-nearest neighbor method is then utilized for classification with a leave-one-out paradigm. The comparative results clearly demonstrate the superiority of our method over other recent approaches. We also present several experiments to demonstrate the performance and strength of the DMHI method in recognizing various complex actions. © 2010 ACADEMY PUBLISHER.

    DOI: 10.4304/jmm.5.1

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78651574909&origin=inward
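    The paper above summarizes each (directional) motion-history template with the seven Hu moments and classifies actions by k-nearest neighbours under a leave-one-out protocol. The sketch below reproduces that generic recipe; the log-scaling of the Hu moments and the helper names are practical assumptions, not details taken from the paper.

    import cv2
    import numpy as np

    def hu_features(history_image):
        """Seven Hu moments of one motion-history template, log-scaled for comparable magnitudes."""
        hu = cv2.HuMoments(cv2.moments(history_image.astype(np.float32))).ravel()
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

    def knn_leave_one_out(features, labels, k=1):
        """Leave-one-out accuracy of a k-NN classifier on the Hu-moment feature vectors."""
        features, labels = np.asarray(features), np.asarray(labels)
        correct = 0
        for i in range(len(features)):
            d = np.linalg.norm(features - features[i], axis=1)
            d[i] = np.inf                                   # leave the test sample out
            nearest = labels[np.argsort(d)[:k]]
            values, counts = np.unique(nearest, return_counts=True)
            correct += int(values[np.argmax(counts)] == labels[i])
        return correct / len(features)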

  • Automatic Detection of Pedestrians from Stereo Camera Images 査読有り

    Kazuki Inumaru,Joo Kooi Tan,Seiji Ishikawa,Takashi Morie

    Proceedings of the 15th International Symposium on Artificial Life and Robotics   CD-R   2010年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府   2010年02月04日  -  2010年02月06日

  • A color-based particle filter for multiple objects tracking in outdoor environment 査読有り

    Sugandi,Kim,Tan,Ishikawa

    Proceedings of the 15th International Symposium on Artificial Life and Robotics   604 - 609   2010年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府   2010年02月04日  -  2010年02月06日

  • Segmentation of artery areas on none-enhanced fresh blood imaging based on dot enhancement filter and 3-D region growing method 査読有り

    A. Yamamoto,T. Nishizaki,H.S.Kim,Joo Kooi Tan,Seiji Ishikawa

    Proceedings of the 15th International Symposium on Artificial Life and Robotics   493 - 496   2010年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府   2010年02月04日  -  2010年02月06日

  • Human behavior analysis with optical flow and median-filtered temporal motion segmentation method 査読有り

    Md. Atiqur Rahman Ahad,Joo Kooi Tan,H.S. Kim,Seiji Ishikawa

    Proceedings of the 15th International Symposium on Artificial Life and Robotics   CD-R   2010年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府   2010年02月04日  -  2010年02月06日

  • On detecting a human and its body direction from a video 査読有り

    Yuki Nakashima,Joo Kooi Tan,Seiji Ishikawa,Takashi Morie

    Proceedings of the 15th International Symposium on Artificial Life and Robotics   CD-R   2010年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府   2010年02月04日  -  2010年02月06日

  • Analysis of Motion Self-Occlusion Problem for Human Activity Recognition 査読有り

    Md. Atiqur Rahman Ahad,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    Journal of Multimedia, Academy Publisher   5 ( 1 )   36 - 46   2010年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • A color-based particle filter for multiple objects tracking in outdoor environment 査読有り

    Sugandi,Kim,Tan,Ishikawa

    Journal of Artificial Life and Robotics   15 ( 1 )   41 - 47   2010年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    DOI: 10.1007/s10015-010-0762-2

    Scopus

  • Segmentation of artery areas on non-enhanced fresh blood imaging based on a dot enhancement filter and the 3-D region-growing method 査読有り

    A. Yamamoto,T. Nishizaki,H. Kim,J.K. Tan,S. Ishikawa

    Journal of Artificial Life and Robotics   15 ( 1 )   48 - 52   2010年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    DOI: 10.1007/s10015-010-0764-0

    Scopus

  • Obstacle detection using a moving camera 査読有り

    Qian Shao Hua,Tan JooKooi,Ishikawa Seiji,Morie Takashi

    Proc. of the 16th Int. Sympo. on AROB   767 - 770   2010年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府   2010年01月27日  -  2010年01月29日

  • Detecting human flows on a road different from main flows 査読有り

    Park Meishan,Tan Joo Kooi,Nakashima Yuuki,Kim Hyoungseop,Ishikawa Seiji

    Proc. of the 16th Int. Sympo. on AROB,   793 - 796   2010年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府   2010年01月27日  -  2010年01月29日

  • Detection of bicycle and bicycle's driving direction using HOG features 査読有り

    Jung Heewook,Tan Joo Kooi,Kim Hyoungseop ,Ishikawa Seiji

    Proc. of the 16th Int. Sympo. on AROB   781 - 784   2010年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府   2010年01月27日  -  2010年01月29日

  • Separating Pedestrian at a traffic junction from the background detected from a car video 査読有り

    Tan Joo Kooi,Miyoshi Makoto,Ishikawa Seiji,Morie Takashi

    Proc. of the International Workshop on Advanced Image Technology 2010   CD-R   2010年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Indonesia   Jakarta   2010年01月06日  -  2010年01月08日

    主要論文集(会議) 代表的研究業績

  • Creating an entire object model employing virtual see-through cameras 査読有り

    Sone T., Piao M., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   2504 - 2506   2010年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper proposes a novel 3-D modeling technique for an object. Existing 3-D modeling techniques such as stereo vision or factorization recover the 3-D shape of the part of an object that is commonly observed from multiple orientations; registration among the recovered partial shapes is then required to obtain an entire 3-D model, which introduces geometrical errors into the linked shape. The proposed technique creates the entire shape of an object at once, without partial-shape registration, by employing virtual see-through cameras that surround the object and observe its rear as well as its frontal part. All the surrounding cameras are calibrated initially using the captured images. The 3-D shape of the frontal side of the object is recovered by the factorization method applied to the image data obtained from the frontal cameras, and the recovered frontal shape is virtually projected onto the image planes of the rear cameras. In this way every surrounding camera observes the whole shape of the object, whether frontal or rear, and applying the factorization method again to this configuration yields an entire 3-D model of the object. A nonlinear optimization method is also employed to achieve higher modeling precision. The proposed technique is applied to the 3-D modeling of a mini-basketball match, and the players' activity is successfully modeled in 3-D. © 2010 SICE.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78649296463&origin=inward
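
    The factorization step that this abstract builds on can be sketched as below: the generic rank-3 (Tomasi-Kanade-style) factorization of a measurement matrix of tracked points under an affine camera model. The paper's actual contributions (virtual see-through cameras, reprojection onto the rear cameras, nonlinear refinement) are not reproduced, and the function and variable names are assumptions.

    ```python
    # Minimal sketch of affine-camera factorization (Tomasi-Kanade style),
    # not the virtual see-through camera method of the paper itself.
    import numpy as np

    def factorize(W):
        """W: (2F x P) matrix stacking the x- and y-coordinates of P tracked points
        over F views. Returns motion M (2F x 3) and shape S (3 x P), up to an
        affine ambiguity; no metric upgrade is performed here."""
        W0 = W - W.mean(axis=1, keepdims=True)       # remove per-row centroids (translation)
        U, s, Vt = np.linalg.svd(W0, full_matrices=False)
        U3, s3, V3t = U[:, :3], s[:3], Vt[:3, :]     # rank-3 approximation
        M = U3 @ np.diag(np.sqrt(s3))                # split singular values between
        S = np.diag(np.sqrt(s3)) @ V3t               # motion and shape factors
        return M, S
    ```

    In the setting the abstract describes, this step would be applied twice: first to the frontal-camera observations, and again after the recovered frontal shape has been projected onto the virtual rear cameras.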

  • Viewpoint-oriented human activity recognition in a cluttered outdoor environment 査読有り

    Eftakhar S., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   1506 - 1511   2010年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Motion analysis in a cluttered outdoor scenario is a truly challenging task. In this paper, a human activity recognition approach is proposed that adapts to a noisy environment and to variable viewpoints. The approach combines several motion-processing steps, each with high precision for its specific task: motion segmentation, optical flow computation, Motion History Image (MHI) generation, structured database development, and so on. After estimating the subject's moving body region, 4-directional MHIs of each activity are generated, and an efficient pre-built motion database aids in recognizing the activities. The proposed approach achieves significant recognition precision for various activities observed from varying viewpoints. © 2010 SICE.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78649234809&origin=inward

  • Three-dimensional modeling of a non-rigid object employing a single camera 査読有り

    Horinouchi T., Jung H., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   2972 - 2975   2010年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper proposes a technique for recovering the 3-D shape of a non-rigid object using a single camera. Normally two or more cameras are necessary to recover the 3-D shape of a non-rigid object; single-camera recovery has recently been studied by a few researchers, but only for small deformations. The technique proposed here handles larger deformations by regarding a non-rigid object as a set of rigid parts: the 3-D shape of each rigid part is recovered by the factorization method, and the parts are connected to represent the overall deformation of the object. The technique also considers the recovery of parts occluded as a result of ego motion. The effectiveness of the proposed technique is shown by experiment. © 2010 SICE.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78649236530&origin=inward

  • On detecting a human body direction using an image information 査読有り

    Nakashima Y., Tan J., Ishikawa S., Morie T.

    Proceedings of the SICE Annual Conference   1521 - 1522   2010年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    This paper describes a novel technique for detecting the direction of a human body using HOG features. The HOG feature is well known for judging the presence of a human, but it normally also encodes the background, which negatively influences the judgment. This paper therefore proposes using HOG features chosen by a feature-selection method and applies the selected features to detecting the direction of a human body. Experimental results show the effectiveness of the proposed technique compared with the conventional one. © 2010 SICE.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78649291098&origin=inward
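
    A minimal sketch of the kind of HOG-plus-classifier setup this abstract refers to is given below, assuming scikit-image and scikit-learn. The feature-selection step that suppresses background-dominated HOG components, which is the paper's actual proposal, is not reproduced, and LinearSVC is only a stand-in classifier.

    ```python
    # Minimal sketch: HOG descriptors on fixed-size person windows plus a linear
    # classifier over discretized body-direction classes. Not the paper's method.
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    def hog_descriptor(gray_window):
        """HOG of a fixed-size grayscale window (e.g. 128x64), Dalal-Triggs settings."""
        return hog(gray_window, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm='L2-Hys')

    def train_direction_classifier(windows, direction_labels):
        """windows: equally sized grayscale arrays; direction_labels: int class per window."""
        X = np.stack([hog_descriptor(w) for w in windows])
        clf = LinearSVC()                 # stand-in for the classifier used in the paper
        clf.fit(X, np.asarray(direction_labels))
        return clf
    ```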

  • Human motion recognition using directional motion history images 査読有り

    Murakami M., Tan J., Kim H., Ishikawa S.

    Proceedings of the SICE Annual Conference   1512 - 1514   2010年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Because many surveillance cameras are installed in towns and buildings, many people are needed to watch their images, and these observers work under stress since they must constantly monitor the video to find a person behaving abnormally. Reducing this load is therefore necessary. Much existing research on human motion recognition stops at recognition, and the resulting information is not used further. We assume that a human motion is a set of basic motions, so an overall motion can be translated using the recognition results of the basic motions. The goal of the present research is to develop a method of human motion representation and translation using directional motion history images (DMHIs). In this paper, we describe a method of recognizing basic motions using DMHIs. Recognition is performed by graph matching; during the matching, the interval between the selected image frames is varied and the most suitable interval is determined. © 2010 SICE.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78649280229&origin=inward

  • Automatic detection of GGO candidate regions by using density and shape features 査読有り

    Kim H., Katsumata Y., Itai Y., Tan J., Ishikawa S.

    International Journal of Innovative Computing, Information and Control   6 ( 1 )   255 - 262   2010年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    Various kinds of imaging equipment have been introduced into the medical field. In particular, high-resolution helical computed tomography (HRCT) is one of the most useful diagnostic systems because it provides clear, high-resolution images from which radiologists can detect abnormalities. Nevertheless, detecting abnormal areas such as lung nodules and ground-glass opacities on multi-detector computed tomography (MDCT) images is a difficult task for radiologists, because subtle lesions such as small lung nodules tend to be low in contrast and the large number of CT images requires long visual screening times. To detect such abnormalities with a computer-aided diagnosis (CAD) system, several technical methods have been proposed in the medical field. Despite these efforts, previous approaches have not succeeded, owing to the difficulty of exactly detecting the ground-glass opacity (GGO) areas by image processing, and they have not reached the stage of automatic detection on unknown thoracic MDCT data sets. In this paper, we develop a CAD system for automatically detecting GGO areas from thoracic MDCT images using five statistical features, namely four density features and one shape feature. The proposed technique was applied to 31 MDCT image sets; a recognition rate of 79.4 [%] and a false-positive rate of 1.07 were achieved. Some experimental results are shown along with a discussion. © 2010 ISSN.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=76549095216&origin=inward
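
    As a rough illustration of the five-feature scheme mentioned here (four density features plus one shape feature) feeding a classifier, a sketch follows. The exact features, candidate extraction, and classifier of the CAD system are not reproduced; the skewness/kurtosis and bounding-box-fill choices, and the linear discriminant, are stand-ins.

    ```python
    # Minimal sketch: density statistics + one shape feature per GGO candidate,
    # classified with a linear discriminant. Not the paper's CAD system.
    import numpy as np
    from scipy.stats import skew, kurtosis
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def candidate_features(hu_values, region_mask):
        """hu_values: 1-D array of CT numbers (HU) inside a candidate region.
        region_mask: 2-D boolean mask of the (non-empty) region on one slice."""
        ys, xs = np.nonzero(region_mask)
        bbox_fill = region_mask.sum() / ((ys.ptp() + 1) * (xs.ptp() + 1))  # crude shape feature
        return np.array([hu_values.mean(), hu_values.std(),
                         skew(hu_values), kurtosis(hu_values), bbox_fill])

    def train_ggo_classifier(feature_rows, labels):
        """labels: 1 for a true GGO region, 0 for a false-positive candidate."""
        clf = LinearDiscriminantAnalysis()
        clf.fit(np.asarray(feature_rows), np.asarray(labels))
        return clf
    ```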

  • Automatic detection and tracking of moving object employing a particle filter 査読有り

    Sugandi,Kim,Tan,Ishikawa

    International Conference on Innovative Computing, Information and Control   CD-R   2009年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Taiwan   Kaohsiung   2009年12月07日  -  2009年12月09日

  • Automatic detection of lung nodules in temporal subtraction image by use of shape and density features 査読有り

    Miyake,Kim,Itai,Tan,Ishikawa,Katsuragawa

    International Conference on Innovative Computing, Information and Control   CD-R   2009年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Taiwan   Kaohsiung   2009年12月07日  -  2009年12月09日

  • Automatic classification of GGO regions employing linear discriminant function from the thoracic CT images 査読有り

    Seiji Shozawa,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa

    ICIC Express Letters, An International Journal of Research and Surveys   3 ( 4(A) )   963 - 968   2009年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Three-dimensional Human Motion Modeling by Back Projection Based on Image-based Camera Calibration 査読有り

    Masaoka S.,Tan J.K.,Kim H.S.,Ishikawa S.,Shinomiya T.

    Journal of Artificial Life and Robotics, Springer Tokyo   14 ( 4 )   515 - 518   2009年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • Development of registration method for head CTA and MRA images based on 2-D projection image 査読有り

    Shunsuke Horita,Kouhei Harada,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa,Akiyoshi Yamamoto

    ICIC Express Letters, An International Journal of Research and Surveys   3 ( 4(A) )   957 - 962   2009年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • An image registration method for head CT and MR imaging based on optimal retrieval 査読有り

    Yutaro Yamamura,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa,Akiyoshi Yamamoto

    International Journal of Innovative Computing, Information and Control   5 ( 11(B) )   4249 - 4254   2009年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Detection of GGO candidate regions by using edge enhancement filter and statistical features 査読有り

    Hyoungseop Kim,Syed Faruk Amed ,Joo Kooi Tan,Seiji Ishikawa

    International Journal of Innovative Computing, Information and Control   5 ( 11(B) )   4267 - 4274   2009年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Development of Image Viewer for Dental CT Images 査読有り

    Kizuka,Mito,Kim,Tan,Ishikawa,Tanaka,Morimoto

    2nd Japan-Korea Joint Symposium on Bio-microsensing Technology   29 - 30   2009年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Kitakyushu   2009年11月13日  -  2009年11月13日

  • A moving object tracking based on color information employing a particle filter algorithm 査読有り

    Sugandi,Kim,Tan,Ishikawa

    Journal of Artificial Life and Robotics   14 ( 1 )   39 - 42   2009年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • A Particle Filter with Adaptive Model for Object Tracking 査読有り

    Sugandi Budi,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa

    ICROS-SICE International Joint Conference 2009   657 - 662   2009年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Fukuoka   2009年08月18日  -  2009年08月21日

  • Human Activity Analysis: Concentrating on Motion History Image and Its Variants 査読有り

    Md. Atiqur Rahman Ahad,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    ICROS-SICE International Joint Conference 2009   5401 - 5406   2009年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Fukuoka   2009年08月18日  -  2009年08月21日

  • Robust Human Motion Recognition Employing Adaptive Database Structure 査読有り

    S. M. A Eftakhar,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    ICROS-SICE International Joint Conference 2009   3989 - 3994   2009年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Fukuoka   2009年08月18日  -  2009年08月21日

  • Recovering 3-D Shape of a Non-rigid Object by a Single Mobile Camera 査読有り

    Tsukasa Horinouchi,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    ICROS-SICE International Joint Conference 2009   3846 - 3849   2009年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Fukuoka   2009年08月18日  -  2009年08月21日

  • Entire 3-D Modeling of an Object by Surround Cameras 査読有り

    Toshimasa Sone,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    ICROS-SICE International Joint Conference 2009   4011 - 4014   2009年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Fukuoka   2009年08月18日  -  2009年08月21日

  • Recognizing Facial Expression for Man-machine Interaction 査読有り

    Wataru Hirata,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    ICROS-SICE International Joint Conference 2009   1621 - 1624   2009年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Fukuoka   2009年08月18日  -  2009年08月21日

  • Detecting Pedestrians from a Video by Stereo Cameras 査読有り

    Kazuki Inumaru,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    ICROS-SICE International Joint Conference 2009   663 - 666   2009年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Fukuoka   2009年08月18日  -  2009年08月21日

  • Reduction of processing times for temporal subtraction from the thoracic MDCT image employing octree algorithms 査読有り

    Shinya Maeda,Hyoungseop Kim,Yoshinori Itai,Joo Kooi Tan,Seiji Ishikawa,Akiyoshi Yamamoto

    The 6th International Symposium on Management Engineering   CD-R   2009年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    China   Dalian   2009年08月05日  -  2009年08月07日

  • Automatic detection of GGO in CT lung images by using statistical features and neural networks 査読有り

    Hyoungseop Kim,Yoshifumi Katsumata,Joo Kooi Tan,Seiji Ishikawa

    The 24th International Technical Conference on Circuits/Systems, Computers and Communications   705 - 708   2009年07月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Jeju   2009年07月05日  -  2009年07月08日

  • Automatic Detection of Spinal Deformity Based on Statistical Features from the Moire Topographic Images 査読有り

    Kim,Tan,Ishikawa,Shinomiya

    International Journal of Computing   8 ( 1 )   72 - 78   2009年07月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • A color features-based method for object tracking employing a particle filter algorithm 査読有り

    Budi Sugandi,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa

    International Conference on Power Control and Optimization   206 - 211   2009年06月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Indonesia   Bali   2009年06月01日  -  2009年06月03日

  • カメラ動き補償のための複数特徴点追跡 査読有り

    上村浩文,タンジュークイ,金 亨燮,石川聖二

    バイオメディカル・ファジイ・システム学会誌   11 ( 1 )   1 - 9   2009年06月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • 仮想透視カメラ群を用いた立体の全周形状一括復元 査読有り

    守 啓祐,曽根俊昌,タン ジュークイ,金 亨燮,石川聖二

    バイオメディカル・ファジイ・システム学会誌   11 ( 1 )   17 - 22   2009年06月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • Temporal Motion Recognition and Segmentation Approach 査読有り

    Ahad M.A.R.,Tan JooKooi,Kim Hyoungseop,Ishikawa Seiji

    Int. J. of Imaging Systems and Technology (IJIST), Wiley Publications   19 ( 2 )   91 - 99   2009年06月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • Human motion recognition employing large motion-database structure 査読有り

    S.M. Ashik Eftakhar,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    International Journal of Advanced Computer Engineering   2 ( 1 )   17 - 23   2009年06月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • Real Time Tracking and Identification of moving person by using a Camera in outdoor environment 査読有り

    Budi Sugandi,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa

    International Journal of Innovative Computing, Information and Control   5 ( 5 )   1179 - 1188   2009年05月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Automatic detection of blood vessels CTA images employing morphological operation 査読有り

    Itai Y.,Yamamoto A.,Kim H.S.,Tan J.K.,Ishikawa S.

    Journal of Artificial Life and Robotics   13 ( 2 )   428 - 433   2009年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Media-art employing virtual shadows with shape recognition 査読有り

    Hashimoto Sunao,Tan JooKooi,Kim Hyoungseop,Ishikawa Seiji

    International Journal of Innovative Computing, Information and Control   5 ( 3 )   601 - 607   2009年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • 拡張現実感技術の新しい展開―インタラクティブ・メディアとしての三次元空間の生成

    石川聖二,タンジュークイ,橋本直

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー年報   ( 13 )   25 - 28   2009年02月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(大学,研究機関等紀要)

  • Entire shape recovery employing virtual see-through cameras 査読有り

    Mori K.,Sone T.,Tan J.K.,Kim H.S.,Ishikawa S.

    Proc. of the 14th Int. Sympo. on AROB,   577 - 580   2009年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Oita,Beppu   2009年02月05日  -  2009年02月07日

  • A moving object tracking based on color information employing a particle filter algorithm 査読有り

    Sugandi,Kim,Tan,Ishikawa

    Proc. of the 14th Int. Sympo. on AROB,   194 - 197   2009年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Oita,Beppu   2009年02月05日  -  2009年02月07日

  • Three-dimensional human motion modeling by back projection based on image-based camera calibration 査読有り

    Masaoka S.,Tan J.K.,Kim H.S.,Ishikawa S.

    Proc. of 14th Int. Sympo. on AROB   734 - 737   2009年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Oita,Beppu   2009年02月05日  -  2009年02月07日

  • Action recognition with various speeds and timed-DMHI feature vectors 査読有り

    Md Atiqur Rahman Ahad,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    The 11th International Conference on Computer and Information Technology(ICCIT2008)   2008年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Bangladesh   Khulna   2008年12月25日  -  2008年12月27日

  • Performance analysis on an efficient human motion database with various motion representations 査読有り

    Ashik E.,Tan J.K.,Kim H.S.,Ishikawa S.

    The 11th International Conference on Computer and Information Technology(ICCIT2008)   207 - 212   2008年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Bangladesh   Khulna   2008年12月25日  -  2008年12月27日

  • Solutions to motion self-occlusion problem in human activity analysis 査読有り

    Md Atiqur Rahman Ahad,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    The 11th International Conference on Computer and Information Technology(ICCIT2008)   2008年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Bangladesh   Khulna   2008年12月25日  -  2008年12月27日

  • Directional Motion History Templates for Low Resolution Motion Recognition 査読有り

    Md Atiqur Rahman Ahad,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    The 34th Annual Conference of the IEEE Industrial Electronics Society (IECON 2008)   2008年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    America   Orlando,Florida   2008年11月10日  -  2008年11月13日

  • Extraction of multi organs by use of level set method from CT images 査読有り

    Komatsu M.,Kim H.,Tan J.K.,Ishikawa S.,Yamamoto A.

    Proc. Int. Conf. on Control, Automation and Systems 2008   1260 - 1263   2008年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2008年10月14日  -  2008年10月17日

  • Human activity recognition: Various paradigms 査読有り

    Ahad M.A.R.,Tan J.K.,Kim H.S.,Ishikawa S.

    Proc. Int. Conf. on Control, Automation and Systems 2008   1896 - 1901   2008年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2008年10月14日  -  2008年10月17日

  • Development of the MI-Viewer KIT for Medical Image Viewer 査読有り

    Kizuka T.,Kim H.,Tan J.K.,Ishikawa S.,Yamamoto A.

    Proc. Int. Conf. on Control, Automation and Systems 2008   1272 - 1275   2008年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2008年10月14日  -  2008年10月17日

  • Detection of blood vessels on CTA images of the legs 査読有り

    Kozono K.,Yamamoto A.,Itai Y.,Kim H.,Tan J.K.,Ishikawa S.

    Proc. Int. Conf. on Control, Automation and Systems 2008   1276 - 1279   2008年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2008年10月14日  -  2008年10月17日

  • Face direction estimation based on eigenspace technique 査読有り

    Okubo J.,Sugandi B.,Kim H.,Tan J.K.,Ishikawa S.

    Proc. Int. Conf. on Control, Automation and Systems 2008   1264 - 1267   2008年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2008年10月14日  -  2008年10月17日

  • Optimal registration method based on ICP algorithm from head CT and MR image sets 査読有り

    Harada K.,Kim H.,Tan J.K.,Ishikawa S.,Yamamoto A.

    Proc. Int. Conf. on Control, Automation and Systems 2008   1268 - 1271   2008年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2008年10月14日  -  2008年10月17日

  • Template-based human motion recognition for complex activities 査読有り

    Ahad M.A.R.,Ogata T.,Tan J.K.,Kim,H.S.,Ishikawa S.

    Proc. 2008 IEEE Int. Conf. Systems, Man and Cybernetics   673 - 678   2008年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Singapore   Singapore   2008年10月12日  -  2008年10月15日

  • Three-dimensional information projection system using a hand-held screen 査読有り

    Hashimoto S.,Tan J.K.,Kim H.S.,Ishikawa S.

    Proc. 2008 IEEE Int. Conf. Systems, Man and Cybernetics   1385 - 1389   2008年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Singapore   Singapore   2008年10月12日  -  2008年10月15日

  • Extracting moving objects from a video by sequential background detection employing a local correlation map 査読有り

    Miyoshi M.,Tan J.K.,Ishikawa S.

    Proc. 2008 IEEE Int. Conf. Systems, Man and Cybernetics   3365 - 3369   2008年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Singapore   Singapore   2008年10月12日  -  2008年10月15日

  • FBI法による非造影MR画像からの血管領域の抽出と3次元表示 査読有り

    山本晃義,西崎孝志,金 亨燮,タンジュークイ,石川聖二,中村克己

    バイオメディカル・ファジイ・システム学会誌   10 ( 2 )   73 - 79   2008年10月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • 可変照明下における逐次更新型色ヒストグラムを用いた適応的色抽出とそれを用いた人物の追跡 査読有り

    上村浩文,タンジュークイ,金 亨燮,石川聖二

    バイオメディカル・ファジイ・システム学会誌   10 ( 2 )   47 - 55   2008年10月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    代表的研究業績

  • 胸部MDCT像からのスリガラス状陰影の自動抽出 査読有り

    金亨燮,板井善則,タンジュークイ,石川聖二

    バイオメディカル・ファジイ・システム学会誌   10 ( 2 )   57 - 63   2008年10月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • 濃度特徴を用いた胸部CT画像からのGGOの自動抽出 査読有り

    勝間田淑史,板井善則,金 亨燮,タンジュークイ,石川聖二

    バイオメディカル・ファジイ・システム学会誌   10 ( 2 )   81 - 86   2008年10月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • 包囲カメラ群を用いた立体の全周復元法 査読有り

    守啓祐,曽根俊昌,タンジュークイ,金亨燮,石川聖二

    バイオメディカル・ファジイ・システム学会誌   10 ( 2 )   39 - 45   2008年10月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    代表的研究業績

  • 腹部CT像からの血管領域の自動抽出法 査読有り

    小松昌史,金亨燮,石川聖二,タンジュークイ,山本晃義

    バイオメディカル・ファジイ・システム学会誌   10 ( 2 )   65 - 72   2008年10月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • Motion recognition approach to solve overwriting in complex actions 査読有り

    Ahad M. A. R.,Ogata T.,Tan J. K.,Kim H. S.,Ishikawa S.

    Proc. 8th Int. Conf. Automatic Face and Gesture Recognition (IEEE FG2008)   ( CD-ROM )   6 pages   2008年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    The Netherlands   Amsterdam   2008年09月17日  -  2008年09月19日

  • ICP法を用いた頭部CT・MR画像の位置合わせ 査読有り

    原田康平,金 亨燮,タン ジュークイ,石川聖二,山村雄太郎,山本晃義

    Medical Imaging Technology   26 ( 4 )   246 - 250   2008年09月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • A simple real-time approach for action separation into action primitives 査読有り

    Ahad M.A.R.,Uemura H.,Tan J.K.,Kim H.S.,Ishikawa S.

    Proc. 1st Int. W. on Tracking Humans for the Evaluation of their Motion in Image Sequences (THEMIS’2008)   69 - 78   2008年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    United Kingdom   Leeds   2008年09月05日  -  2008年09月05日

  • Moment-based human motion recognition from the representation of DMHI templates 査読有り

    Ahad M.A.R.,Ogata T.,Tan J. K.,Kim H.S.,Ishikawa S.

    Proc. of SICE Annual Conf.   578 - 583   2008年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    東京都,調布市   2008年08月20日  -  2008年08月22日

  • An efficient approach to human motion recognition employing large motion-database structure 査読有り

    Ashik E.,Tan J.K.,Kim H.S.,Ishikawa S.

    Proc. of SICE Annual Conf.   2239 - 2243   2008年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    東京都,調布市   2008年08月20日  -  2008年08月22日

  • A complex motion recognition technique employing directional motion templates 査読有り

    Ahad M.A.R.,Ogata T.,Tan J.K.,Kim H.S.,Ishikawa S.

    International Journal of Innovative Computing, Information and Control   4 ( 8 )   1943 - 1954   2008年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Automatic detection of spinal deformity by use of density features from moire topographic images 査読有り

    Kim H. S.,Nakano S.,Tan J. K.,Ishikawa S.

    Proc. IADIS Int. Conf. Computer Graphics and Visualization 2008   239 - 243   2008年07月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    The Netherlands   Amsterdam   2008年07月24日  -  2008年07月26日

  • Real time object tracking and identification using a camera 査読有り

    Sugandi B.,Kim H.,Tan J.K.,Ishikawa S.

    The 23rd Int. Tech. Conf. Circuits/Systems, Computers and Communications   1585 - 1588   2008年07月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    下関   2008年07月06日  -  2008年07月09日

  • Automatic detection of multi organs on the CT images using the ribs information and a level set method 査読有り

    Komatsu M.,Toyota S.,Kim H.,Tan J.K.,Ishikawa S.,Yamamoto A.

    The 23rd Int. Tech. Conf. Circuits/Systems, Computers and Communications   5 - 12   2008年07月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    下関   2008年07月06日  -  2008年07月09日

  • A method for reduction of computational time on multi-modal image registration employing wavelet transformation from a set of head CT and MR imaging 査読有り

    Yamamura Y.,Kim H. S.,Tan J. K.,Ishikawa S.,Yamamoto A.

    International Journal of Innovative Computing, Information and Control   4 ( 7 )   1527 - 1536   2008年07月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • A multi modal image registration method by use of optimal retrieval on the VOIs 査読有り

    Yamamura Y.,Kim H. S.,Tan J. K.,Ishikawa S.,Yamamoto A.

    Int. Conf. Innovative Computing, Information and Control   ( 4 pages )   CD-ROM   2008年06月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    China   Dalian   2008年06月18日  -  2008年06月20日

  • Automatic detection of GGO candidate regions by using artificial neural networks from thoracic MDCT 査読有り

    Katsumata Y.,Itai Y.,Kim H.,Tan J. K.,Ishikawa S.

    Int. Conf. Innovative Computing, Information and Control   ( 5 pages )   CD-ROM   2008年06月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    China   Dalian   2008年06月18日  -  2008年06月20日

  • A block matching technique for object tracking employing peripheral increment sign correlation image 査読有り

    Sugandi B.,Kim H.,Tan J.K.,Ishikawa S.

    Proc. Int. Conf. Computer and Communication Engineering 2008   113 - 117   2008年05月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Malaysia   Kuala Lumpur   2008年05月13日  -  2008年05月15日

  • Complex motion separation and recognition using directional motion templates 査読有り

    Ahad M.A.R.,Ogata T.,Tan J.K.,Kim H.S.,Ishikawa,S.

    Proc. of Int. W. on Combinatorial Image Processing   73 - 82   2008年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    America   New York   2008年04月07日  -  2008年04月09日

  • Extracting mobile objects by sequential background detection on a video 査読有り

    Sakai Y.,Miyoshi M.,Tan J. K.,Ishikawa S.

    Artificial Life & Robotics, Springer Tokyo.   13 ( 1 )   302 - 305   2008年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Motion capture employing an uncalibrated camera 査読有り

    Tan J.K.,Ishikawa S.,Kouno K.,Ohbuchi H.,Kim H.S.

    Artificial Life & Robotics, Springer Tokyo (2008).   13 ( 1 )   311 - 314   2008年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • Human motion representation and recognition by directional motion history images 査読有り

    Fukumoto M.,Tan J.K.,Kim H.S.,Ishikawa S.

    Artificial Life & Robotics, Springer Tokyo   13 ( 1 )   326 - 330   2008年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • View-based human motion recognition in the presence of outliers 査読有り

    Md Atiqur Rahman Ahad,Takehito Ogata,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    International Journal of Biomedical Soft Computing and Human Sciences   13 ( 1 )   71 - 78   2008年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • Motion Capture Employing an Uncalibrated Camera 査読有り

    J. K. Tan,S. Ishikawa,K. Kouno,H. Ohbuchi,H.S. Kim

    The 13th International Symposium on Artificial Life and Robotics   191 - 194   2008年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府,大分   2008年01月31日  -  2008年02月02日

  • Extracting moving objects from a video by sequential background detection 査読有り

    Yasuaki Sakai,Makoto Miyoshi,Joo Kooi Tan,Seiji Ishikawa

    The 13th International Symposium on Artificial Life and Robotics   219 - 222   2008年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府,大分   2008年01月31日  -  2008年02月02日

  • A human-machine cooperative interface of a virtual 3-D space using hand gestures 査読有り

    Mori K.,Tan J.K.,Ishikawa S

    The 13th International Symposium on Artificial Life and Robotics   2008年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府,大分   2008年01月31日  -  2008年02月02日

  • Automatic detection of blood vessels CTA images employing morphological operation 査読有り

    Itai Y.,Yamamoto A.,Kim H.S.,Tan J.K.,Ishikawa S.

    The 13th International Symposium on Artificial Life and Robotics   215 - 218   2008年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府,大分   2008年01月31日  -  2008年02月02日

  • Human Motion Representation and Recognition by Directional Motion History Images 査読有り

    Masayuki Fukumoto,Takehito Ogata,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    The 13th International Symposium on Artificial Life and Robotics   236 - 239   2008年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府,大分   2008年01月31日  -  2008年02月02日

  • 左右非対称の特徴を用いたモアレ画像からの脊柱側彎症自動識別 査読有り

    金亨燮,タンジュークイ,石川聖二,大塚嘉則,清水直,四宮孝史

    画像電子学会   37 ( 1 )   57 - 62   2008年01月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • 高速な3次元物体復元法の開発 査読有り

    松田淳平,タン ジュークイ,金 亨燮,石川聖二

    バイオメディカル・ファジイ・システム学会誌   10 ( 1 )   58 - 64   2008年01月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • MRRF法を用いた複雑背景下における人物の姿勢認識 査読有り

    大場統,タン ジュークイ,金 亨燮,石川聖二

    バイオメディカル・ファジイ・システム学会誌   10 ( 1 )   51 - 57   2008年01月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • アフィンカメラモデルに基づく逆投影を用いた物体の立体モデル化 査読有り

    正岡哲,タン ジュークイ,金 亨燮,石川聖二

    バイオメディカル・ファジイ・システム学会誌   10 ( 1 )   122 - 125   2008年01月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • Tracking of Moving Persons using Multi-camera Employing Peripheral Increment Sign Correlation Image 査読有り

    Budi Sugandi,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa

    ICIC Express Letters, An International Journal of Research and Surveys   1 ( 2 )   177 - 184   2007年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Comparative analysis between two view-based methods: MHI and DMHI 査読有り

    Md Atiqur Rahman Ahad,Ogata Takehito,Tan Joo Kooi,Kim HyoungSeop,Ishikawa Seiji

    Proc. of ICCIT   2007年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Bangladesh   Dhaka   2007年12月25日  -  2007年12月25日

  • A smart automated complex motion recognition technique 査読有り

    Md Atiqur Rahman Ahad,Ogata Takehito,Tan Joo Kooi,Kim HyoungSeop,Ishikawa Seiji

    Proceedings of ACCV Workshop   142 - 149   2007年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Tokyo   2007年11月18日  -  2007年11月22日

  • Performance of Multi-directional MHI for Human Motion Recognition in the Presence of Outliers 査読有り

    Md Atiqur Rahman Ahad,Ogata Takehito,Tan Joo Kooi,Kim HyoungSeop,Ishikawa Seiji

    Proceedings of IEEE IECON 2007   2366 - 2370   2007年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Taiwan   Taipei   2007年11月05日  -  2007年11月08日

  • High Accuracy and real time recognition of a human activity system 査読有り

    Tan Joo Kooi,Ishikawa Seiji

    Proceedings of IEEE IECON 2007   2377 - 2382   2007年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Taiwan   Taipei   2007年11月05日  -  2007年11月08日

    主要論文集(会議) 代表的研究業績

  • Automatic detection of ground glass opacity from the thoracic MDCT images by using density features 査読有り

    Hyoungseop Kim,Tooru Nakashima,Yoshinori Itai,Shinya Maeda,Joo Kooi Tan,Seiji Ishikawa

    Proceedings of International Conference on Control, Automation and Systems   1274 - 1277   2007年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2007年10月17日  -  2007年10月20日

  • Recognizing Human Postures in a Complex Background Employing MRRF Method 査読有り

    Osamu Oba,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    Proceedings of International Conference on Control, Automation and Systems   1159 - 1162   2007年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2007年10月17日  -  2007年10月20日

    主要論文集(会議)

  • Automatic detection of GGO candidate regions employing four statistical features on thoracic MDCT image 査読有り

    Yoshifumi Katsumata,Yoshinori Itai,Shinya Maeda,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa

    Proceedings of International Conference on Control, Automation and Systems   1278 - 1281   2007年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2007年10月17日  -  2007年10月20日

  • Development of a Fast 3-D Object Reconstruction Technique 査読有り

    Junpei Matsuda,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    Proceedings of International Conference on Control, Automation and Systems   2303 - 2306   2007年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2007年10月17日  -  2007年10月20日

    主要論文集(会議)

  • An automatic detection method of spinal deformity from moire topographic images employing asymmetric degree of shoulder and waistline 査読有り

    Toyoaki Tanoue,Satoshi Nakano,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa,Yoshinori Otsuka,Hisashi Shimizu,Takashi Shinomiya

    Proceedings of International Conference on Control, Automation and Systems   1282 - 1285   2007年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2007年10月17日  -  2007年10月20日

  • Development of an autonomous robot for face tracking 査読有り

    Yuji Nishina,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    Proceedings of International Conference on Control, Automation and Systems   1178 - 1181   2007年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2007年10月17日  -  2007年10月20日

    主要論文集(会議)

  • A method for reducing of computational time on image registration employing wavelet transformation 査読有り

    Yutaro Yamamura,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa,Akiyoshi Yamamoto

    Proceedings of International Conference on Control, Automation and Systems   1286 - 1291   2007年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2007年10月17日  -  2007年10月20日

  • A study on the operation of a virtual 3-D space using hand gestures 査読有り

    Kesuke Mori,Joo Kooi Tan,Seiji Ishikawa

    Proceedings of The First International Symposium on Information and Computer Elements   285 - 290   2007年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2007年09月12日  -  2007年09月14日

  • Tracking of moving persons using multi camera under the network environment 査読有り

    Budi Sugandi,Takuji Yamamoto,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa

    Proceedings of The First International Symposium on Information and Computer Elements   309 - 314   2007年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2007年09月12日  -  2007年09月14日

  • Directional motion templates: new appearance-based recognition method 査読有り

    Md Atiqur Rahman Ahad,Takehito Ogata,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    Proceedings of The First International Symposium on Information and Computer Elements,   303 - 308   2007年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2007年09月12日  -  2007年09月14日

  • Tracking of Moving Objects by Using a Low Resolution Image 査読有り

    Budi Sugandi,Hyoungseop Kim,Joo Kooi Tan,Seiji Ishikawa

    Proceedings of International Conference on Innovative Computing, Information and Control   CD-ROM   2007年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2007年09月05日  -  2007年09月07日

  • Robust human motion recognition based on multi-directional motion representation 査読有り

    Md Atiqur Rahman Ahad,Takehito Ogata,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    Proceedings of International Conference on Innovative Computing, Information and Control   CD-ROM   2007年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2007年09月05日  -  2007年09月07日

  • モーションデータベースを用いた人の動作の高速認識 査読有り

    タン ジュークイ,河野 憲亮,石川 聖二,金 亨燮,四宮 孝史

    画像電子学会誌   36 ( 5 )   738 - 746   2007年09月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

    Kyutacar

  • A media-art employing virtual shadows with shape recognition 査読有り

    Sunao Hashimoto,Joo Kooi Tan,Hyoungseop Kim,Seiji Ishikawa

    Proceedings of The Second International Conference on Innovative Computing, Information and Control   CD-ROM   2007年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2007年08月05日  -  2007年08月07日

    主要論文集(会議)

  • Ground-Glass Opacity Detection by using Correlation between Successive Slice Images 査読有り

    H. Kim,M. Maekado,J. K. Tan,S. Ishikawa,M. Tsukuda

    International Journal on Artificial Intelligence Tools - Special Issue on selected papers from 17th IEEE International Conference (ICTAI-2005)   16 ( 4 )   583 - 592   2007年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • 三次元道路地図製作のためのグラフィカルな対話システムの開発 査読有り

    藤 耕平,入江 徹,タン ジュークイ,石川 聖二

    バイオメディカル・ファジイ・システム学会誌   9 ( 1 )   1 - 10   2007年07月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • 複合現実感と3次元モデル化を用いた遮蔽空間の可視化 査読有り

    橋本 直,タン ジュークイ,金 亨燮,石川 聖二

    バイオメディカル・ファジイ・システム学会誌   9 ( 1 )   43 - 51   2007年07月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    代表的研究業績

  • 経時差分法を用いた腹部CT画像上の血管領域の抽出 査読有り

    小松昌史,金 亨燮,山本晃義,タン ジュークイ,石川聖二

    バイオメディカル・ファジイ・システム学会   9 ( 1 )   11 - 17   2007年07月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • Automatic classification of spinal deformity by using four symmetrical features on the moire images 査読有り

    Kim H. S.,Nakano S.,Tan J. K.,Ishikawa S.,Otsuka Y.,Shimizu H.,Shinomiya T.

    Proceedings of 3rd International Workshop on Artificial Neural Networks and Intelligent Information Processing   99 - 106   2007年05月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    France   Angers   2007年05月  -  2007年05月

  • 影を使ったメディア・アートの開発

    橋本直,次村浩一,松田淳平,タンジュークイ,石川聖二

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー研究進捗状況発表会資料   71 - 74   2007年03月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(大学,研究機関等紀要)

  • 立体モデル化と複合現実感の技法を用いた遮蔽空間の可視化

    石川聖二,タンジュークイ,金亨燮,橋本直

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー研究進捗状況発表会資料   27 - 30   2007年03月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(大学,研究機関等紀要)

  • A graphic system for creating a 3-D road environment 査読有り

    Tou K.,Irie T.,Tan J. K.,Ishikawa S.

    Journal of Biomedical Soft Computing and Human Sciences   12 ( 1 )   39 - 44   2007年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • ブースティングとモーションディスクリプタの組合せによる人の動作検出 査読有り

    緒方健人,ウィリアム クリスマス,ジョセフ キットラー,タンジュークイ,石川聖二

    情報処理学会論文誌   48 ( 3 )   1166 - 1175   2007年03月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    主要雑誌

  • An interactive graphic system for creating a 3-D road environment 査読有り

    Tou K.,Irie T.,Tan J. K.,Ishikawa S.

    Journal of Biomedical Soft Computing and Human Sciences   12 ( 1 )   39 - 44   2007年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Development of a robot following human using color information 査読有り

    Kouji Tsugimura,Joo Kooi Tan,Seiji Ishikawa

    Proceedings of Artificial Life and Robotics   CD-ROM   2007年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2007年01月25日  -  2007年01月27日

    主要論文集(会議)

  • An interactive system for creating a 3-D graphical road map 査読有り

    Kouhei Tou,Touru Irie,Joo Kooi Tan,Seiji Ishikawa

    Proceedings of Artificial Life and Robotics   CD-ROM   2007年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2007年01月25日  -  2007年01月27日

  • Human Activity Recognition Independent of Observation Orientation 査読有り

    Tan Joo Kooi,Ishikawa Seiji,Kouno Kensuke

    Proceedings of The International Workshop on Advanced Image Technology   655 - 660   2007年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Thailand   Bangkok   2007年01月08日  -  2007年01月09日

  • Human Motion Recognition Based on Directional Motion History Images 査読有り

    Takehito Ogata,Joo Kooi Tan,Seiji Ishikawa

    Proceedings of The International Workshop on Advanced Image Technology   857 - 862   2007年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Thailand   Bangkok   2007年01月08日  -  2007年01月09日

    主要論文集(会議)

  • A method of allocating video road data for producing a three-dimensional map 査読有り

    Kouhei Tou,Toru Irie,Joo Kooi Tan,Seiji Ishikawa

    Proceedings of The International Workshop on Advanced Image Technology   429 - 434   2007年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Thailand   Bangkok   2007年01月08日  -  2007年01月09日

  • Human motion recovery employing a motion database 査読有り

    Hirofumi Ohbuchi,Joo Kooi Tan,Seiji Ishikawa

    Proceedings of The International Workshop on Advanced Image Technology   535 - 540   2007年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Thailand   Bangkok   2007年01月08日  -  2007年01月08日

  • A method of creating a transparent space based on mixed reality 査読有り

    Hashimoto Sunao,Tan Joo Kooi,Ishikawa Seiji

    Journal of Artificial Life and Robotics   11 ( 1 )   57 - 60   2007年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • 3-D modeling of remote dynamic situations employing mobile stereo camera 査読有り

    Yamaguchi Iku,Tan Joo Kooi,Ishikawa Seiji

    Journal of Artificial Life and Robotics   11 ( 1 )   96 - 100   2007年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • Automatic Judgment of Spinal Deformity Based on Back Propagation on Neural Network 査読有り

    Hyoung Seop Kim,Joo Kooi Tan,Seiji Ishikawa,Marzuki Khalid,Yoshinori Otsuka,Hisashi Shimizu,Takasi Shinomiya

    International Journal of Innovative Computing, Information and Control (IJICIC)   2 ( 6 )   1271 - 1279   2006年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

  • A Robust Color Tracker Employing a Two-dimensional Color Histogram with an Anisotropy Asymmetric Gaussian Distribution Model 査読有り

    Hirofumi Uemura,Joo Kooi Tan,Seiji Ishikawa

    TENCON 2006   CD-ROM   2006年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    China   HongKong   2006年11月14日  -  2006年11月17日

    主要論文集(会議)

  • Tracking an object using an update type two-dimensional color histogram 査読有り

    Uemura H.,Tan J. K.,Ishikawa S.

    Proc. of AROB 2006   CD-ROM   2006年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    HongKong   2006年11月14日  -  2006年11月17日

  • A Color Tracker Employing a Two- dimensional Color Histogram Under Changeable Illumination 査読有り

    Hirofumi Uemura,Joo Kooi Tan,Seiji Ishikawa

    The 32nd Annual Conference of the IEEE Industrial Electronics Society (IECON-2006)   3273 - 3278   2006年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    France   Paris   2006年11月07日  -  2006年11月10日

    主要論文集(会議)

  • High-speed Human Motion Recovery Employing Back Projection 査読有り

    Uchinoumi M.,Tan J. K.,Ishikawa S.,Naito T,Yokota M

    Journal of Artificial Life and Robotics   10 ( 2 )   112 - 115   2006年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • Recovering Human Activity by Using Eigenspace 査読有り

    Hirofumi Ohbuchi,Joo Kooi Tan,Seiji Ishikawa

    SICE-ICCAS International Joint Conference 2006   2698 - 2701   2006年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Busan   2006年10月18日  -  2006年10月21日

  • Detection of Difference on a Road by Geometric Analysis of a Scene 査読有り

    Aoyagi,Joo Kooi Tan,Seiji Ishikawa

    SICE-ICCAS International Joint Conference 2006   1156 - 1159   2006年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Busan   2006年10月18日  -  2006年10月21日

  • High-speed Data Retrieval in an Eigenspace Employing a B-tree Structure 査読有り

    Kensuke Kouno,Joo Kooi Tan,Seiji Ishikawa

    SICE-ICCAS International Joint Conference 2006   2717 - 2720   2006年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Busan   2006年10月18日  -  2006年10月21日

    主要論文集(会議)

  • Producing a Three-Dimensional Road Map for Car Navigation 査読有り

    Kouhei Tou,Toru Irie,Joo Kooi Tan,Seiji Ishikawa

    SICE-ICCAS International Joint Conference 2006   4776 - 4779   2006年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Busan   2006年10月18日  -  2006年10月21日

  • A Color Tracker employing a Two-dimensional Color Histogram Under Unstable Illumination 査読有り

    Hirofumi Uemura,Joo Kooi Tan,Seiji Ishikawa

    SICE-ICCAS International Joint Conference 2006   2725 - 2728   2006年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Busan   2006年10月18日  -  2006年10月21日

  • Extracting a Human Area by Background Detection 査読有り

    Yasuaki Sakai,Joo Kooi Tan,Seiji Ishikawa

    Proc. of SICE-ICCAS International Joint Conference 2006   2296 - 2299   2006年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Busan   2006年10月18日  -  2006年10月21日

    主要論文集(会議)

  • A Mobile Motion Capture Technique Excelling in 3-D Modeling of Temporary Events 査読有り

    Yamaguchi I.,Tan J. K.,Ishikawa S.

    2006 IEEE Int., Con. On System, Man and Cybernetics Society   1613 - 1617   2006年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Taiwan   Taipei   2006年10月08日  -  2006年10月11日

  • Spinal Deformity Detection from Moire Topographic Image Based on Evaluating Asymmetric Degree 査読有り

    Hyoung Seop Kim,Hideki Ushijima,Joo Kooi Tan,Seiji Ishikawa,Yoshinori Otsuka,Hisashi Shimizu,Takasi Shinomiya

    World Congress on Medical Physics and Biomedical Engineering 2006 - Imaging the future medicine (COEX)   2192 - 2195   2006年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Korea   Seoul   2006年08月27日  -  2006年09月01日

  • Analysis and evaluation of human gaits employing motion capture system 査読有り

    Ni Ran,Noda K.,Tan J. K.,Ishikawa S

    Proc. of 2006 Int. Conf. on Artificial Intelligence   CD-R   2006年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    China   Beijing   2006年08月  -  2006年08月

    主要論文集(会議)

  • 3-D Recovery of Human Motions by Mobile Cameras 査読有り

    Tan J. K.,Ishikawa S.,Yamaguchi I.

    Journal of Artificial Life and Robotics   10 ( 1 )   64 - 68   2006年07月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • MRの技法を用いた実空間への情報融合とその応用―第III期報告

    石川聖二,タンジュークイ,山口生,橋本直

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー年報第10号   47 - 50   2006年06月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(大学,研究機関等紀要)

  • スライス間相関を用いた胸部CT画像上の異常陰影の抽出 査読有り

    金亨燮,前門雅岐,タンジュークイ,石川聖二

    画像電子学会誌   2006年05月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • MRの技法を用いた実空間への情報融合とその応用―第Ⅲ期報告

    石川聖二,タンジュークイ,山口生,橋本 直

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー研究進捗状況発表会資料   47 - 50   2006年03月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)

  • High-Speed Human Motion Recognition Based on a Motion History Image and an Eigenspace 査読有り

    Takehito Ogata,Joo Kooi Tan,Seiji Ishikawa

    IEICE Transactions on Information System   E89-D ( 1 )   281 - 289   2006年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

    Kyutacar

  • Asymmetry analysis of human motions for examining rehabilitation training 査読有り

    K Noda,JK Tan,S Ishikawa,N Kito,K Okumura

    Proceedings of 11th International Symposium on Artificial Life & Robotics   CD-R   2006年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府市   2006年01月  -  2006年01月

    代表的研究業績

  • 3-D modeling of remote dynamic situations using mobile stereo cameras 査読有り

    I Yamaguchi,JK Tan,S Ishikawa

    Proceedings of 11th International Symposium on Artificial Life and Robotics   CD-R   2006年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府   2006年01月  -  2006年01月

  • Tracking an object using an update type two-dimensional color histogram 査読有り

    H Uemura,JK Tan,S Ishikawa

    Proceedings of 11th International Symposium on Artificial Life & Robotics   CD-R   2006年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府市   2006年01月  -  2006年01月

    主要論文集(会議) 代表的研究業績

  • Visualization of an invisible space based on the MR technique 査読有り

    S Hashimoto,JK Tan,S Ishikawa

    Proceedings of 11th International Symposium on Artificial Life & Robotics   CD-R   2006年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    別府市   2006年01月  -  2006年01月

  • A mobile motion capture system employing image transfer 査読有り

    I Yamaguchi,JK Tan,S Ishikawa

    Proceedings of TENCON2005   CD-R   2005年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Australia   Melbourne   2005年11月24日  -  2005年11月26日

    代表的研究業績

  • Human motion representation using eigenspace 査読有り

    JK Tan,S Houman,S Ishikawa

    Proceedings of TENCON2005   CD-R   2005年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Australia   Melbourne   2005年11月21日  -  2005年11月24日

    主要論文集(会議) 代表的研究業績

  • Automatic extraction of ground-glass opacity shadows on CT images of the thorax by correlation between successive slices 査読有り

    HS Kim,M Maekado,JK Tan,S Ishikawa,M Tsukuda

    Proceedings of International Conference on Tools with Artificial Intelligence   607 - 612   2005年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    China   Hong Kong   2005年11月14日  -  2005年11月16日

  • 可動ステレオカメラを用いた動的シーンの3次元モデル化法 査読有り

    山口 生,タンジュークイ,石川聖二

    バイオメディカル・ファジィ・システム学会誌   7 ( 1 )   113 - 121   2005年10月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    代表的研究業績

  • Spinal deformity detection employing back propagation on neural network 査読有り

    HS Kim,JK Tan,S Ishikawa,M Khalid,M Viergever,Y Otsuka,T Shinomiya

    Proceedings of International Conference on Advances in Pattern Recognition   719 - 725   2005年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    UK   Bath   2005年08月22日  -  2005年08月25日

  • Creating transparent entities employing an MR technique 査読有り

    Sunao Hashimoto,Joo Kooi Tan,Seiji Ishikawa

    Proceedings of SICE Annual Conference   CD-R   2005年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    岡山   2005年08月08日  -  2005年08月10日

  • MRの技法を用いた実空間への情報融合とその応用―遮蔽された空間の透過表示

    石川聖二,タンジュークイ,山口生,安倍晋介,橋本直

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー年報,第9号,   39 - 44   2005年06月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(大学,研究機関等紀要)

  • On describing human motions in an eigenspace 査読有り

    Homan S.,Ogata T.,Tan J. K.,Ishikawa S

    Proceedings of IAPR Conference on Machine Vision Applications   CD-R(27-30)   2005年05月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Tsukuba   2005年05月16日  -  2005年05月18日

    代表的研究業績

  • MRの技法を用いた実空間への情報融合とその応用―遮蔽領域の透過表示法の開発

    石川聖二,タンジュークイ,山口生,安倍晋介,橋本直

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー研究進捗状況発表会資料   35 - 38   2005年03月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(大学,研究機関等紀要)

  • A high-speed human motion recovery based on back projection 査読有り

    Uchinoumi M.,Tan J. K.,Ishikawa S.,Naito T.,Yokota M.

    Proceedings of the International Symposium on Artificial Life and Robotics   CD-ROM GS25-3   2005年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)

    Oita   2005年02月04日  -  2005年02月06日

  • Human motion recovery by mobile stereoscopic cameras 査読有り

    Tan J. K.,Yamaguchi I.,Ishikawa S.,Naito T.,Yokota M.

    Proceedings of the International Symposium on Artificial Life and Robotics   CD-ROM GS25-4   2005年02月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Oita   2005年02月04日  -  2005年02月06日

    主要論文集(会議)

  • Various Image Taking Strategies for 3-D Object Modeling Based on Multiple Cameras 査読有り

    Tan J. K.,Yamaguchi I.,Ishikawa S.

    Proceedings of 2004 IEEE International Conference on Image Processing   2487 - 2490   2004年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    シンガポール   シンガポール   2004年10月24日  -  2004年10月27日

    主要論文集(会議) 代表的研究業績

  • A Simple-Structured Real Time Motion Capture System Employing Silhouette Images 査読有り

    Uchinoumi M.,Tan J. K.,Ishikawa S.

    Proceedings of 2004 the IEEE International Conference on Systems, Man and Cybernetics   3094 - 3098   2004年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    The Netherlands   The Hague   2004年10月10日  -  2004年10月13日

    主要論文集(会議)

  • Real Time Human Motion Recognition by an Aerial Robot 査読有り

    Ogata T.,Matsuda S.,Tan J. K.,Ishikawa S.

    Proceedings of 2004 IEEE International Conference on Systems, Man and Cybernetics   5290 - 5295   2004年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    The Netherlands   The Hague   2004年10月10日  -  2004年10月13日

    主要論文集(会議)

  • 移動カメラを用いた広範囲に移動する動作の3次元計測 査読有り

    田房 友典,タンジュークイ,石川 聖二,横田 臣博

    バイオメディカルファジィシステム学会誌   6 ( 1 )   32 - 41   2004年10月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • Representing human motions in an eigenspace based on a surrounding camera system 査読有り

    Houman S.,M.M. Rahman,Tan J. K.,Ishikawa S.

    Proc. of 2004 IEEE Int. Conference on Control, Automation and Systems   1808 - 1813   2004年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    タイ   Bangkok   2004年08月25日  -  2004年08月27日

  • Real Time Human Motion Recognition Based on a Motion History Image and an Eigenspace 査読有り

    Ogata T.,Rahman M. M.,Tan J. K.,Ishikawa S.

    Proceedings of 43rd SICE Annual Conference   1901 - 1904   2004年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    札幌   2004年08月04日  -  2004年08月06日

  • Automatic Generation of Virtual Human Motions for an Interactive MR System 査読有り

    Yamaguchi I.,Tan J. K.,Ishikawa S.

    Proc. of 43rd SICE Annual Conference   1044 - 1047   2004年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    札幌   2004年08月04日  -  2004年08月06日

  • 多視点情報を用いた人物の動作表現と認識法の開発

    寶満聡,ラハマンマスドゥル,タンジュークイ,石川聖二

    北九州医工学術者協会誌   15   75 - 76   2004年06月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)

  • MRの技法を用いた実空間への情報融合とその応用― モーションキャプチャを用いた仮想人モデルの生成

    石川聖二,タンジュークイ,山口 生,藤耕平,安倍晋介

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー年報   ( 8 )   41 - 44   2004年06月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(大学,研究機関等紀要)

  • 飛行船による人物の実時間動作認識と追跡システムの開発

    石川聖二,タンジュークイ,緒方健人,田脇 一,松田信一

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー年報   ( 8 )   194 - 197   2004年06月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(大学,研究機関等紀要)

  • Shape recovery of non-rigid objects by multiple mobile cameras

    Tan J. K.,Yamaguchi I.,Ishikawa S.

    電子情報通信学会技術研究報告   85 - 90   2004年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)

  • 飛行船による人物の実時間動作認識と追跡システムの開発

    緒方健人,松田信一,田脇 一,タンジュークイ,石川聖二

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー研究進捗状況発表会資料   50 - 53   2004年03月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(大学,研究機関等紀要)

  • MRの技法を用いた実空間への情報融合とその応用―モーションキャプチャを用いた仮想人モデルの生成

    石川聖二,タンジュークイ,山口生,藤耕平,安倍晋介

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー研究進捗状況発表会資料   24 - 27   2004年03月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(大学,研究機関等紀要)

  • Generating a virtual human model for a mixed reality space 査読有り

    Yamaguchi I.,Tan J. K.,Ishikawa S.

    2004 International Symposium on Artificial Life and Robotics   29 - 32   2004年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    大分   2004年01月28日  -  2004年01月30日

  • Realizing a mixed reality space guided by a virtual human ― Creating a virtual human from incomplete 3-D motion data 査読有り

    Abe S.,Yamaguchi I.,Tan J. K.,Ishikawa S.

    International Conference on Control, Automation, and Systems   1625 - 1628   2003年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    韓国   Gyeongju   2003年10月22日  -  2003年10月25日

  • On a multi-agent system for assisting human intention 査読有り

    Tawaki H.,Tan J. K.,Kim H. Y.,Ishikawa S.

    International Conference on Control, Automation, and Systems   1126 - 1129   2003年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    韓国   Gyeongju   2003年10月22日  -  2003年10月25日

    主要論文集(会議)

  • An aerial robot system tracking a moving object 査読有り

    Ogata T.,Tan J. K.,Ishikawa S.

    International Conference on Control, Automation, and Systems   1917 - 1920   2003年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    韓国   Gyeongju   2003年10月22日  -  2003年10月25日

    主要論文集(会議)

  • Generating 3-D models of human motions by motion capture 査読有り

    Yamaguchi I.,Tou K.,Tan J. K.,Ishikawa S.,Naito T.,Yokota M.

    International Conference on Control, Automation, and Systems   1629 - 1632   2003年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    韓国   Gyeongju   2003年10月22日  -  2003年10月25日

  • Development of a Motion Capture System Based on Uncalibrated Cameras 査読有り

    Tou K.,Tan J. K.,Ishikawa S.,Naito T.,Yokota M.

    Proc. of 42nd SICE Annual Conference   1936 - 1939   2003年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Fukui   2003年08月04日  -  2003年08月06日

  • 多視点画像を用いた変形物体の一括3次元モデル化法 査読有り

    タン ジュークイ,石川 聖二,廣川 俊二

    画像電子学会誌   32 ( 4 )   483 - 487   2003年08月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

    Kyutacar

  • 広範囲の動作を行う人の3次元モデル化と解析法

    田房友典,タンジュークイ,石川聖二

    北九州医工学術者協会誌   14 ( 1 )   13 - 14   2003年06月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)

  • モーションキャプチャにより生成した仮想人間の実空間への融合について

    石川聖二,タンジュークイ,山口生

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー年報   ( 7 )   120 - 121   2003年06月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(大学,研究機関等紀要)

  • VRによる融合空間の生成と応用― 第Ⅱ期報告

    石川聖二,金 亨燮,タンジュークイ,廣橋大輔,山口生

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー年報   ( 7 )   18 - 23   2003年06月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(大学,研究機関等紀要)

  • Development of a Remote Motion Capture System Employing Wireless LAN 査読有り

    Tou K.,Tan J. K.,Ishikawa S.,Naito T.,Yokota M.

    The First Malaysia-France Regional Workshop on Image Processing in Vision Systems and Multimedia Communications   CD-ROM   2003年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    マレーシア   Sarawak   2003年04月21日  -  2003年04月22日

  • Recovering Wide Range Human Motions By a Mobile Cameras System 査読有り

    Tabusa T.,Tan J. K.,Ishikawa S.,Hirokawa S.

    The First Malaysia-France Regional Workshop on Image Processing in Vision Systems and Multimedia Communications   CD-ROM   2003年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    マレーシア   Sarawak   2003年04月21日  -  2003年04月22日

  • VRによる融合空間の生成と応用―第2期報告

    石川聖二,タンジュークイ,金亨燮,広橋大輔,山口生

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー研究成果報告資料   65 - 70   2003年03月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(大学,研究機関等紀要)

  • 移動ビデオカメラ群による3次元物体復元システムの開発

    タン ジュー クイ

    実吉奨学会平成13年度研究助成報告集   20   2003年03月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)

    研究助成報告集

  • Unifying a Virtual Human from Motion Capture Data into Real Space 査読有り

    Tan J. K.,Yamaguchi I.,Ishikawa S.

    International Conference on Robotics, Vision, Information, and Signal Processing   602 - 605   2003年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    マレーシア   Penang   2003年01月22日  -  2003年01月24日

    主要論文集(会議)

  • A remote capture system based on mobile cameras 査読有り

    Tan J. K.,Tou K.,Ishikawa S.

    Proc. of IAPR Workshop on Machine Vision Applications   205 - 208   2002年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2002年12月11日  -  2002年12月13日

    主要論文集(会議)

  • Recovering and Analyzing Wide Range Human Motions Based on Mobile Camera 査読有り

    Tabusa T.,Tan J. K.,Ishikawa S.

    Proc. of IAPR Workshop on Machine Vision Applications   162 - 163   2002年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2002年12月11日  -  2002年12月13日

  • On recovering a wide range of human motions 査読有り

    Tabusa T.,Tan J. K.,Ishikawa S.

    Proc. of 41st SICE Annual Conference   1502 - 1503   2002年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Osaka   2002年08月05日  -  2002年08月07日

  • On recovering shape of non rigid objects employing a remote motion capture system 査読有り

    Tan J. K.,Uchinoumi M.,Ishikawa S.

    Proc. of 41st SICE Annual Conference   1308 - 1313   2002年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Osaka   2002年08月05日  -  2002年08月07日

  • Modeling a 3-D Non-rigid Virtual Object by Motion Capture and Its Appearance in the Real Space 査読有り

    Hirohashi D.,Tan J. K.,Ishikawa S.

    Proc. of 41st SICE Annual Conference   3128 - 3131   2002年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Osaka   2002年08月05日  -  2002年08月07日

  • Creating a virtual human from motion capture data

    Hirohashi D.,Tan J. K.,Ishikawa S.,Marzuki K.

    サテライトベンチャービジネスラボラトリー研究報告   ( 6 )   70 - 73   2002年08月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)

  • 3-D Deformable Objects Modeling Employing Mobile Cameras

    Tan J. K.,Ishikawa S.,Marzuki K.

    サテライトベンチャービジネスラボラトリー年報   ( 6 )   104 - 107   2002年08月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(大学,研究機関等紀要)

  • 3-D human motions recovery by uncalibrated mobile video cameras 査読有り

    Tan J. K.,Tabusa T.,Ishikawa S.

    Proc. of the International Conf. Artificial Intelligence in Engineering Technology   515 - 520   2002年06月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    マレーシア   Kota Kinabalu (Sabah)   2002年06月17日  -  2002年06月18日

  • 広範囲の運動の三次元復元と解析法 査読有り

    田房 友典,タンジュークイ,石川 聖二

    画像電子学会誌   31 ( 6 )   1175 - 1182   2002年06月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • ACL線維束の変形・ひずみ挙動の3次元動画像構築に関する研究 査読有り

    廣川俊二,山本耕之,石川聖二,タン ジュークイ,鶴野 玲治

    日本臨床バイオメカニクス学会誌   23   47 - 52   2002年04月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • Three-dimensional human motion recovery employing two mobile cameras

    Tan J. K.,Ishikawa S.

    電子情報通信学会技術研究報告   MBE-101 ( 733 )   67 - 72   2002年03月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)

  • A method of 3-D object modeling employing mobile cameras

    Tan J. K.,Ishikawa S.,Marzuki K.

    九州工業大学サテライトベンチャービジネスラボラトリー, 研究成果報告会資料集   65 - 70   2002年03月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(大学,研究機関等紀要)

  • Recovering human motions by mobile cameras and factorization 査読有り

    Tabusa T.,Tan J. K.,Ishikawa S.

    Proc. of the 5th Asian Conference on Computer Vision   CD-ROM   2002年01月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    オーストラリア   Melbourne   2002年01月22日  -  2002年01月25日

  • Recovering and analyzing 3-D motion of team sports employing uncalibrated video cameras 査読有り

    Tan J. K.,Ishikawa S.

    IEICE Transactions on Information and Systems   E84-D ( 12 )   1728 - 1732   2001年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

    Kyutacar

  • Producing a virtual object with realistic motion for a mixed reality space 査読有り

    Hirohashi D.,Tan J. K.,Ishikawa S.

    Proc. of International Conference on Control, Automation, and System   1084 - 1087   2001年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    韓国   Cheju   2001年10月17日  -  2001年10月21日

  • Shape recovery of non-rigid objects employing factorization-based stereo cameras 査読有り

    Tan J. K.,Ishikawa S.

    Proc. of 2001 IEEE International Conference on Image Processing   145 - 148   2001年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    ギリシャ   Thessaloniki(Greece)   2001年10月07日  -  2001年10月11日

  • On 3-D human motion recovery by mobile video cameras 査読有り

    Tan J. K.,Ishikawa S.,Hirokawa S.

    Proc. of International Symposium on Measurement, Analysis and Modeling of Human Functions   487 - 491   2001年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    Sapporo   2001年09月21日  -  2001年09月23日

  • Three-Dimensional image reconstruction of the human ligaments 査読有り

    Hirokawa S.,Yamamoto K.,Ishikawa S.,Tan J. K.

    Proc. of International Symposium on Measurement, Analysis and Modeling of Human Functions   463 - 468   2001年09月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    2001年09月21日  -  2001年09月23日

  • 被写体とともに移動するカメラを用いた3次元運動のモデル化

    田房友典,タン ジュークイ,石川聖二

    北九州医工学術者協会誌   13 ( 1 )   7 - 8   2001年09月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)

  • Human motion recovery by the factorization based on a spatiotemporal measurement matrix 査読有り

    Tan J. K.,Ishikawa S.

    Computer Vision and Image Understanding   82 ( 2 )   101 - 109   2001年05月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)

    主要雑誌 代表的研究業績

  • 複数のカメラによる人の運動の3次元復元

    タンジュークイ,石川聖二

    北九州医工学術者協会誌   2001年05月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)

  • VRによる融合空間の生成と応用―仮想オブジェクト生成のためのモーションキャプチャ法

    タンジュークイ,石川聖二,金亨燮,加藤了三

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー年報   ( 5 )   106 - 109   2001年05月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(大学,研究機関等紀要)

  • VRによる融合空間の生成と応用―仮想オブジェクトのオクルージョンの実現法

    石川聖二,タン ジュークイ,金 亨燮,山下貴夫,加藤了三

    九州工業大学サテライト・ベンチャー・ビジネス・ラボラトリー年報   ( 5 )   110 - 113   2001年05月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(大学,研究機関等紀要)

  • An Efficient Technique for Modelling 3-D Objects 査読有り

    Tan J. K.,Ishikawa S.

    Information Modelling and Knowledge Bases XII(IOS Press,Ohmsha)   291 - 298   2001年04月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)

    主要雑誌 代表的研究業績

  • Three Dimensional Motion Recovery of Group Sports by Uncalibrated Video Cameras 査読有り

    Tan J. K.,Ishikawa S.,Yoshino H.

    Proc of The Sixth International Conference on Control, Automation, Robotics and Vision   CD-ROM   2000年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    シンガポール   2000年12月05日  -  2000年12月08日

  • A 3-D motion recovery technique for group sports employing uncalibrated video cameras 査読有り

    Tan J. K.,Ishikawa S.,Hayashi K.

    International Workshop on Machine Vision Applications   447 - 450   2000年11月

     詳細を見る

    記述言語:英語