YASUKAWA Shinsuke (安川 真輔)


Position

Associate Professor

Laboratory Address

2-4 Hibikino, Wakamatsu-ku, Kitakyushu, Fukuoka, Japan

Research Fields / Keywords

Image sensing, neuromorphic engineering, field robotics

Laboratory Phone

093-695-6140

Homepage

http://www.brain.kyutech.ac.jp/~s-yasukawa/

Degrees

  • Osaka University - Doctor of Engineering, January 2017

Academic Appointments (within the University)

  • October 2018 - Present

    Kyushu Institute of Technology, Graduate School of Life Science and Systems Engineering, Department of Human Intelligence Systems, Associate Professor

 

Papers

  • Development and Sea Trials of Vision-Based-Control for Sampling-AUV

    Shinsuke Yasukawa, Yuya Nishida, Jonghyun Ahn, Takashi Sonoda, Keisuke Watanabe, Kazuo Ishii

    2019 IEEE Symposium on Underwater Technology, April 2019 [Refereed]


  • Benthos Sampling by Autonomous Underwater Vehicle Equipped a Manipulator with Suction Device

    Jonghyun Ahn, Takashi Sonoda, Kazuo Ishii, Shinsuke Yasukawa, Sotaro Takashima, Yuya Nishida, Tamaki Ura

    2019 IEEE International Underwater Technology Symposium, UT 2019 - Proceedings, April 2019 [Refereed]


  • End-effector for robotic harvesting of a tomato fruit with calyx

    Binghe Li, Shinsuke Yasukawa, Takuya Fujinaga, Kazuo Ishii

    International Conference of Artificial Life and Robotics 2019, January 2019 [Refereed]

  • System Development of AUV’s Sampling Device Controller Employing MATLAB/Simulink Toolboxes

    Takashi Sonoda, Shinsuke Yasukawa, Ahn Jonghyun, Yuya Nishida, Kazuo Ishii

    International Conference of Artificial Life and Robotics 2019, January 2019 [Refereed]

  • Saliency Map Based on Maximization of Difference between Central and Surround Visions and Its Relationship with Image Feature Points

    Ryuugo Mochizuki, Shinsuke Yasukawa, Kazuo Ishii

    International Conference of Artificial Life and Robotics 2019, January 2019 [Refereed]


Oral and Poster Presentations

  • Behavior Strategy of a Tomato Fruit Harvesting Robot Based on Visual Information (4th Report): Mapping of Tomato Fruit Maturity and Position

    Takuya Fujinaga, Shinsuke Yasukawa, Binghe Li, Kazuo Ishii

    Proceedings of the JSME Conference on Robotics and Mechatronics (ROBOMECH), January 2018


    This paper presents a method for generating a tomato growth state map that records the maturity and position of each tomato fruit. The tomato harvesting robot uses this map to automatically plan its harvesting motion. As a first step, a mosaic image representing the tomato cultivation environment is generated. Next, tomatoes from immature to mature are recognized with a recognition method that uses infrared images. Maturity is judged from color information and fruit position is computed from depth information, and both are added to the mosaic image to produce the growth state map. From a map generated from 10 images, 19 immature and 7 mature tomatoes were detected automatically.

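    The mapping step described above (projecting detected fruits, with their maturity, into the coordinates of the mosaic image) can be illustrated with a short sketch. This is a minimal, hypothetical Python example, not the published implementation; the data structures, the frame-offset input, and the duplicate-merging radius are assumptions.

        # Hypothetical sketch: place per-frame tomato detections onto a growth
        # state map in mosaic-image coordinates, merging fruits seen twice in
        # overlapping frames. Detection and mosaicking are assumed already done.
        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class Detection:
            u: float      # pixel x in the source frame
            v: float      # pixel y in the source frame
            mature: bool  # maturity judged from color information

        def add_to_map(growth_map, detections, frame_offset, merge_radius=30.0):
            """growth_map: list of {"x", "y", "mature"} in mosaic coordinates.
            frame_offset: (dx, dy) of the frame's top-left corner in the mosaic."""
            dx, dy = frame_offset
            for det in detections:
                x, y = det.u + dx, det.v + dy
                duplicate = any(np.hypot(x - t["x"], y - t["y"]) < merge_radius
                                for t in growth_map)
                if not duplicate:
                    growth_map.append({"x": x, "y": y, "mature": det.mature})
            return growth_map

        if __name__ == "__main__":
            gmap = []
            add_to_map(gmap, [Detection(120, 80, True), Detection(300, 95, False)], (0, 0))
            add_to_map(gmap, [Detection(20, 78, True)], (100, 0))  # same fruit, next frame
            print(len(gmap), "fruits on the map")  # prints 2: the repeat was merged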

  • Behavior Strategy of a Tomato Fruit Harvesting Robot Based on Visual Information (3rd Report): Feature Point Matching for Generating a Mosaic Image Representing the Cultivation Environment

    Takuya Fujinaga, Shinsuke Yasukawa, Binghe Li, Kazuo Ishii

    Proceedings of the JSME Conference on Robotics and Mechatronics (ROBOMECH), January 2017


    This paper presents a feature point matching method for generating a mosaic image that represents the cultivation environment, as part of the behavior strategy of a tomato harvesting robot. Because the mosaic image must show tomato fruits and stems facing the camera, this study uses infrared (IR) images, which are well suited to detecting objects in the foreground. The proposed method uses the robot's travel distance and the location of the original feature point to define a detection area in which to search for the corresponding point; the features of the points inside this area are compared with those of the original point to decide the match. Feature point matching was performed on two consecutive infrared images, and restricting the search to the detection area reduced false matches.

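    The detection-area idea in the abstract above (using the robot's travel distance to predict where each feature point should reappear, and rejecting matches outside that area) can be sketched as follows. This is a hedged illustration using ORB features in OpenCV; the function name, the shift input, and the radius are assumptions, not the authors' implementation.

        # Hypothetical sketch: match ORB features between two consecutive IR
        # frames, accepting a match only if the new point lies inside the
        # detection area predicted from the robot's motion.
        import numpy as np
        import cv2

        def match_with_detection_area(img_prev, img_next, expected_shift, radius=25.0):
            """expected_shift: (dx, dy) pixel displacement predicted from robot travel."""
            orb = cv2.ORB_create(500)
            kp1, des1 = orb.detectAndCompute(img_prev, None)
            kp2, des2 = orb.detectAndCompute(img_next, None)
            if des1 is None or des2 is None:
                return []
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            dx, dy = expected_shift
            accepted = []
            for m in matcher.match(des1, des2):
                x1, y1 = kp1[m.queryIdx].pt
                x2, y2 = kp2[m.trainIdx].pt
                # keep the match only if it falls in the predicted detection area
                if np.hypot(x2 - (x1 + dx), y2 - (y1 + dy)) <= radius:
                    accepted.append(m)
            return accepted

    Restricting candidates to the predicted area is what suppresses false matches between visually similar fruits and leaves.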

  • An Image Sensing System for a Deep-Sea Benthos Sampling Robot

    Shinsuke Yasukawa, Jonghyun Ahn, Yuya Nishida, Takashi Sonoda, Kazuo Ishii, Tamaki Ura

    Proceedings of the JSME Conference on Robotics and Mechatronics (ROBOMECH), January 2017


    We developed an image sensing system for an autonomous underwater robot with a benthos sampling function (sampling-AUV). The mission of the sampling-AUV consists of four phases: observation phase (OP), return phase (RP), tracking phase (TP), and sampling phase (SP). To accomplish this mission, the proposed image sensing system comprises software modules for image acquisition, image enhancement, object detection and image selection, and object tracking. The camera acquires an image every five seconds during the OP and RP and every second during the TP, and the system completes all processing within the acquisition interval by using high-speed algorithms. The operation of the proposed system was verified in a pool.

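    The phase-dependent acquisition timing described above (one image every five seconds in the observation and return phases, one per second while tracking) can be sketched as a simple loop. The interval table, function names, and stubbed capture/processing below are assumptions for illustration, not the AUV's software.

        # Hypothetical sketch of phase-dependent image acquisition timing.
        import time

        INTERVALS = {"OP": 5.0, "RP": 5.0, "TP": 1.0}  # seconds between captures

        def run_phase(phase, capture, process, duration):
            """Capture and process images at the phase's interval for `duration` seconds."""
            period = INTERVALS[phase]
            end = time.monotonic() + duration
            while time.monotonic() < end:
                start = time.monotonic()
                image = capture()
                process(phase, image)  # enhancement, detection/selection, tracking
                elapsed = time.monotonic() - start
                # all processing must finish inside the acquisition interval
                time.sleep(max(0.0, period - elapsed))

        if __name__ == "__main__":
            run_phase("TP",
                      capture=lambda: b"frame",
                      process=lambda ph, img: print(ph, len(img), "bytes"),
                      duration=3.0)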

  • Development of an End Effector for Tomato Harvesting Using Suction and Cutting Mechanisms

    Binghe Li, Takanori Ura (浦 孝徳), Takuya Fujinaga, Shinsuke Yasukawa, Takashi Sonoda, Kazuo Ishii

    Proceedings of the JSME Conference on Robotics and Mechatronics (ROBOMECH), January 2017


    In recent years, introducing robot technology into primary industry has become important, and its introduction to agriculture is also expected. Among crops, tomatoes require long harvesting hours and impose a heavy labor load, so automated harvesting by robots is desired. This research targets tomato production facilities with large-scale infrastructure; we are developing a tomato harvesting robot intended for practical use at Hibikinada Greenfarm Co., Ltd. This paper focuses on the end effector for harvesting tomatoes. A harvested tomato must retain its stem end (calyx). In our previous study the fruit was plucked from the stem, but the stem end came off the tomato. To harvest a tomato from a cluster with the stem end attached, we developed an end effector that uses a suction mechanism and a cutting mechanism.

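    The harvesting principle above (hold the fruit by suction first, then cut so the calyx stays attached) implies a fixed actuation sequence. The class and method names below are hypothetical placeholders, not the developed hardware's interface; the sketch only illustrates the order of operations.

        # Hypothetical sketch of the suction-then-cut harvest sequence.
        import time

        class EndEffector:
            def suction(self, on):  print("suction", "on" if on else "off")
            def cut(self):          print("cutting the peduncle")
            def retract(self):      print("retracting")

        def harvest(ee, hold_time=0.5):
            ee.suction(True)       # pull the target fruit away from the cluster
            time.sleep(hold_time)  # wait until the fruit is held firmly
            ee.cut()               # cut above the calyx so the stem end is kept
            ee.retract()           # carry the fruit away
            ee.suction(False)      # release into the container

        if __name__ == "__main__":
            harvest(EndEffector())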

  • Behavior Strategy of a Tomato Fruit Harvesting Robot Based on Visual Information (2nd Report): Fruit Detection Using Infrared Images and Specular Reflection

    Shinsuke Yasukawa, Masaya Nishio (西尾 将冶), Binghe Li, Kazuo Ishii

    Proceedings of the JSME Conference on Robotics and Mechatronics (ROBOMECH), January 2016


    This paper presents a novel method for detecting ripe tomato fruits within a cluster, as part of the behavior strategy of a tomato harvesting robot. The proposed method uses infrared images and specular reflection, based on the optical characteristics of tomatoes. The individual fruits of a cluster are detected in the captured infrared images by searching for the strong response at the center of each fruit caused by specular reflection. RGB images are then used to distinguish ripe tomatoes from unripe fruits and other background objects. The method was evaluated on infrared and RGB images captured by an Xbox One Kinect sensor in a greenhouse environment, and it achieved a detection accuracy of 88.1 percent.

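    The two-stage detection above (find the specular highlight at each fruit center in the infrared image, then confirm ripeness from the RGB image) can be sketched with standard OpenCV operations. The thresholds, the red-ratio test, and the function name are assumptions for illustration; this is not the evaluated method itself.

        # Hypothetical sketch: specular peaks in an IR image as fruit-center
        # candidates, kept only if the aligned RGB pixel is clearly red.
        import numpy as np
        import cv2

        def detect_ripe_tomatoes(ir_image, rgb_image, ir_thresh=220, red_ratio=1.3):
            """ir_image: 8-bit grayscale IR frame; rgb_image: aligned 8-bit BGR frame."""
            # 1) specular highlights appear as small, very bright blobs
            _, bright = cv2.threshold(ir_image, ir_thresh, 255, cv2.THRESH_BINARY)
            n, _, stats, centroids = cv2.connectedComponentsWithStats(bright)
            ripe = []
            for i in range(1, n):                   # label 0 is the background
                if stats[i, cv2.CC_STAT_AREA] < 3:  # ignore single-pixel noise
                    continue
                x, y = centroids[i].astype(int)
                b, g, r = rgb_image[y, x].astype(float)
                # 2) ripeness check: the candidate pixel must be clearly red
                if r > red_ratio * max(g, 1.0) and r > red_ratio * max(b, 1.0):
                    ripe.append((int(x), int(y)))
            return ripe

        if __name__ == "__main__":
            ir = np.zeros((100, 100), np.uint8);  ir[49:52, 49:52] = 255
            rgb = np.zeros((100, 100, 3), np.uint8);  rgb[45:56, 45:56] = (20, 30, 200)
            print(detect_ripe_tomatoes(ir, rgb))  # one candidate near (50, 50)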
