Updated 2022/05/25


YASUKAWA Shinsuke (安川 真輔)

Scopus publication data
Total papers: 0   Total citations: 0   h-index: 7

Citation count is the number of citations received by papers published in the given year.

Affiliation
Graduate School of Life Science and Systems Engineering, Department of Human Intelligence Systems
Title
Associate Professor
Laboratory phone
093-695-6140

Research Keywords

  • Neuromorphic engineering

  • Field robotics

  • Image sensing

Degrees

  • Osaka University  -  Doctor of Engineering   January 2017

Employment History

  • October 2018 - present   Kyushu Institute of Technology   Graduate School of Life Science and Systems Engineering   Department of Human Intelligence Systems   Associate Professor

Papers

  • Tomato Harvesting in Greenhouse Considering the Effect of Sunlight (peer-reviewed, international journal)

    Kai Shioji, Shinsuke Yasukawa, Kazuo Ishii

    Proceedings of 2022 International Conference on Artificial Life and Robotics   27   360 - 364   January 2022

    Language: English   Type: Research paper (international conference proceedings)

    January 20, 2022 - January 23, 2022

    Kyutacar

  • Exercise on Environmental Monitoring and Control of Green house by IoT Devices toward Smart Agriculture (peer-reviewed, international journal)

    Yuya Nishida, Ryuugo Mochizuki, Shinsuke Yasukawa, Kazuo Ishii

    Proceedings of 2022 International Conference on Artificial Life and Robotics   27   367 - 373   January 2022

    Language: English   Type: Research paper (international conference proceedings)

    January 20, 2022 - January 23, 2022

    Kyutacar

  • Fall Risk Notification System using LiDAR Sensor for the Visually Impaired People (peer-reviewed, international journal)

    Daigo Katayama, Kazuo Ishii, Shinsuke Yasukawa, Satoshi Nakadomari, Koichi Wada, Akane Befu, Chikako Yamada

    Proceedings of 2022 International Conference on Artificial Life and Robotics   27   745 - 749   January 2022

    Language: English   Type: Research paper (international conference proceedings)

    January 20, 2022 - January 23, 2022

    Kyutacar

  • Development of Remotely Operated Vehicle for Small-size Jellyfish Extermination and its Evaluation of Extermination Motion Control (peer-reviewed, international journal)

    Hiroyuki Yokota, Shinsuke Yasukawa, Jonghyun Ahn

    Proceedings of 2022 International Conference on Artificial Life and Robotics   27   797 - 802   January 2022

    Language: English   Type: Research paper (international conference proceedings)

    January 20, 2022 - January 23, 2022

  • A robotic vision system emulating fixational eye movements and retinal sampling (peer-reviewed, international journal)

    Takanori Yotsumoto, Yuki Hayashida, Shinsuke Yasukawa

    Proceedings of 2022 International Conference on Artificial Life and Robotics   27   519 - 523   January 2022

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    January 20, 2022 - January 23, 2022

  • Development of an Automatic Tomato Fruit Harvesting Robot for Greenhouse Horticulture (invited, peer-reviewed)

    Takuya Fujinaga, Shinsuke Yasukawa, Kazuo Ishii

    Journal of the Robotics Society of Japan (The Robotics Society of Japan)   39 ( 10 )   921 - 925   December 2021

    Language: Japanese   Type: Article, review, commentary (academic journal)

    DOI: 10.7210/jrsj.39.921

    Kyutacar

    CiNii Article

    CiNii Research

    Other link: https://ci.nii.ac.jp/naid/130008130004

  • Development of an Autonomous Underwater Robot for Small Jellyfish Extermination and Performance Evaluation of Its Extermination Device (peer-reviewed)

    Jonghyun Ahn, Shota Chikushi, Shinsuke Yasukawa, Takashi Sonoda

    Journal of Japan Society for Design Engineering (Japan Society for Design Engineering)   57 ( 4 )   181 - 190   October 2021

    Language: Japanese   Type: Research paper (academic journal)

    <p>In recent years, extermination work has been carried out in response to the increasing number of jellyfish. In this research, we designed and developed an AUV (Autonomous Underwater Vehicle) and conducted a jellyfish crushing experiment with the aim of automating jellyfish extermination work. In designing and developing the AUV, the electronic components required for autonomous operation were selected and arranged inside the hull, and a system diagram was created. For the jellyfish extermination device, we designed and developed a mechanism that sucks in jellyfish together with water through a suction port and crushes them. In the experiment, the extermination device was mounted on the developed AUV, and a crushing experiment was conducted using jellyfish samples made of water and gelatin. As a result, we confirmed that a jellyfish sample about 7 cm in diameter and about 11 cm in height could be crushed into small pieces with an average volume of 2885.6 mm<sup>3</sup> in about 8 seconds.</p>

    DOI: 10.14953/jjsde.2021.2931

    CiNii Article

    CiNii Research

    Other link: https://www.jstage.jst.go.jp/article/jjsde/advpub/0/advpub_2021.2931/_article/-char/ja

  • Development and Evaluation of a Tomato Fruit Suction Cutting Device (peer-reviewed)

    Takuya Fujinaga, Shinsuke Yasukawa, Kazuo Ishii

    2021 IEEE/SICE International Symposium on System Integrations (SII) ( IEEE/SICE )   628 - 633   January 2021

    Language: English   Type: Research paper (international conference proceedings)

  • Evaluation of Tomato Fruit Harvestability for Robotic Harvesting (peer-reviewed)

    Takuya Fujinaga, Shinsuke Yasukawa, Kazuo Ishii

    2021 IEEE/SICE International Symposium on System Integrations (SII) ( IEEE/SICE )   35 - 39   January 2021

    Language: English   Type: Research paper (international conference proceedings)

  • Underwater 3D Scanner Using RGB Laser Pattern (peer-reviewed, international journal)

    Yuya Nishida, Shinsuke Yasukawa, Kazuo Ishii

    2021 IEEE/SICE International Symposium on System Integrations (SII) ( IEEE/SICE )   January 2021

    Language: English   Type: Research paper (international conference proceedings)

  • Underwater image reconstruction using convolutional auto-encoder (peer-reviewed)

    Shinsuke Yasukawa, Sreeraman Srinivasa Raghura, Yuya Nishida, Kazuo Ishii

    Proc. of ICAROB2021   262 - 265   January 2021

    Language: English   Type: Research paper (international conference proceedings)

  • Tomato-harvesting-robot competition towards smart agriculture (invited, peer-reviewed)

    Kazuo Ishii, Takayuki Matsuo, Yasunori Takemura, Takashi Sonoda, Yuya Nishida, Shinsuke Yasukawa, Takuya Fujinaga

    Proc. of ICAROB2021   1 - 5   January 2021

    Language: English   Type: Research paper (international conference proceedings)

  • A Greenhouse Project toward Smart Agriculture (peer-reviewed)

    Kazuo Ishii, Yuya Nishida, Shinsuke Yasukawa, Kanako Shirahashi, Yasunori Takemura, Takayuki Matsuo

    Proc. of ICAROB2021   409 - 412   January 2021

    Language: English   Type: Research paper (international conference proceedings)

  • Tomato-Harvesting Robot Competition: Aims and Developed Robot of 6th Competitions (peer-reviewed)

    Takayuki Matsuo, Yasunori Takemura, Takashi Sonoda, Yuya Nishida, Shinsuke Yasukawa, Kazuo Ishii

    Proc. of ICAROB2021   397 - 400   January 2021

    Language: English   Type: Research paper (international conference proceedings)

  • Tomato growth state map for the automation of monitoring and harvesting (peer-reviewed)

    Fujinaga T., Yasukawa S., Ishii K.

    Journal of Robotics and Mechatronics   32 ( 6 )   1279 - 1291   December 2020

    Language: English   Type: Research paper (academic journal)

    © 2020, Fuji Technology Press. All rights reserved. To realize smart agriculture, we engaged in its systematization, from monitoring to harvesting tomato fruits using robots. In this paper, we explain a method of generating a map of the tomato growth states to monitor the various stages of tomato fruits and decide a harvesting strategy for the robots. The tomato growth state map visualizes the relationship between the maturity stage, harvest time, and yield. We propose a generation method of the tomato growth state map, a recognition method of tomato fruits, and an estimation method of the growth states (maturity stages and harvest times). For tomato fruit recognition, we demonstrate that a simple machine learning method using a limited learning dataset and the optical properties of tomato fruits on infrared images outperforms a more complex convolutional neural network, although the results depend on how the training dataset is created. For the estimation of the growth states, we conducted a survey of experienced farmers to quantify the maturity stages into six classifications and the harvest times into three terms. The growth states were estimated based on the survey results. To verify the tomato growth state map, we conducted experiments in an actual tomato greenhouse and herein report the results.

    DOI: 10.20965/jrm.2020.p1279

    Scopus

    Other link: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85098498979&origin=inward
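    The six-stage maturity / three-term harvest-time quantization described in the abstract can be illustrated with a minimal sketch (the stage names, cut-offs, and redness index below are illustrative assumptions, not the values surveyed in the paper):

    ```python
    # Hypothetical six maturity stages and three harvest terms, mirroring the
    # paper's six-class / three-term quantization (labels and cut-offs assumed).
    STAGES = ["green", "breaker", "turning", "pink", "light red", "red"]
    TERMS = {"green": "later", "breaker": "later", "turning": "soon",
             "pink": "soon", "light red": "now", "red": "now"}

    def maturity_stage(redness: float) -> str:
        """Map a redness index in [0, 1] to one of the six maturity stages."""
        idx = min(int(redness * len(STAGES)), len(STAGES) - 1)
        return STAGES[idx]

    def harvest_term(redness: float) -> str:
        """Map a redness index to one of the three harvest terms."""
        return TERMS[maturity_stage(redness)]
    ```

    For example, a fruit with redness 0.95 maps to stage "red" and harvest term "now".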

  • Robot competition for underwater technology researchers and students (peer-reviewed)

    Yuya Nishida, Takashi Sonoda, Takayuki Matsuo, Shinsuke Yasukawa, Masanori Sato, Yasunori Takemura, Kazuo Ishii

    Journal of Advances in Artificial Life Robotics   1 ( 1 )   11 - 15   June 2020

    Language: English   Type: Research paper (academic journal)

  • An Optical Image Transmission System for Deep Sea Creature Sampling Missions Using Autonomous Underwater Vehicle (peer-reviewed)

    Ahn J., Yasukawa S., Sonoda T., Nishida Y., Ishii K., Ura T.

    IEEE Journal of Oceanic Engineering   45 ( 2 )   350 - 361   April 2020

    Language: English   Type: Research paper (academic journal)

    © 1976-2012 IEEE. The exploration of oceans using autonomous underwater vehicles (AUVs) is necessary for activities, such as the sustainable management of fishery resources, extraction of seafloor minerals and energy resources, and inspection of underwater infrastructure. As the next step in ocean exploration, AUVs are expected to employ end-effectors to make physical contact with seafloor creatures and materials. We propose a scenario for realizing a sampling mission using an AUV that is equipped to sample marine life. In this scenario, the sampling AUV observes the seafloor while concurrently transmitting the observed images to a surface vessel for inspection by the AUV operators. If the received images show an object of interest, the object is selected as a candidate of sampling target by the operators, who send a sampling command to the AUV. After receiving the command, the AUV returns to the target area and attempts to sample it. In this paper, we propose a system for transmitting images of the seafloor as part of the sampling-mission scenario. The proposed image transmission system includes a process for enhancing images of the deep seafloor, a process for selecting interesting images, and processes for compressing and reconstructing images. The image enhancement process resolves imaging problems resulting from light attenuation, such as color attenuation and uneven illumination. The process for selecting interesting images selects those that contain interesting objects, such as marine life. The selection process prevents the transmission of meaningless images that contain only flat sand on the seafloor. The proposed image compression method, which is based on color depth compression, reduces the amount of data. The combined process of selecting an interesting image and compressing it reduces various problems in acoustic communication, such as low information density and data loss. 
Instead of an overall image, part of an overall image is transmitted by a set of data packet, and each received data packet is reconstructed onboard the vessel. Because of image compression, the colors of a reconstructed image differ from those of an enhanced image. However, the reconstructed image contains similar colors, and the structural similarity index was found to be 91.4% by evaluating images that were subjected to a 4-b color compression. The proposed image transmission system was tested in the Sea of Okhotsk, and these tests were performed four times in different sea areas (minimum depth 380 m, maximum depth 590 m). The results show that the size of the data for a single image was reduced by a factor of 18 using the proposed image compression process, with each image taking 3.7 s to be transmitted via an acoustic modem (20 kb/s). Of the automatically selected images, 63% contained marine life, and the total transmission success rate was 22%.

    DOI: 10.1109/JOE.2018.2872500

    Scopus

    Other link: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85055860272&origin=inward
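    The color-depth compression step in the abstract can be sketched as follows (the 4-bit depth matches the abstract, but the uniform binning, bin-center reconstruction, and synthetic image are assumptions, not the paper's implementation):

    ```python
    import numpy as np

    def compress_color_depth(img: np.ndarray, bits: int = 4) -> np.ndarray:
        """Quantize an 8-bit image to `bits` bits per channel."""
        step = 256 // (1 << bits)          # e.g. 16 levels per channel for 4-bit
        return (img // step).astype(np.uint8)

    def reconstruct(q: np.ndarray, bits: int = 4) -> np.ndarray:
        """Map quantized levels back to the center of each 8-bit bin."""
        step = 256 // (1 << bits)
        return (q * step + step // 2).astype(np.uint8)

    # A synthetic 8-bit RGB stand-in for a seafloor image
    img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    q = compress_color_depth(img, bits=4)
    rec = reconstruct(q, bits=4)
    ```

    At 4 bits, each channel needs half the raw bits, and two 4-bit values pack into one byte; the reconstruction error per channel is bounded by half a bin (here, 8 gray levels).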

  • Field Experiments of Underwater Image Transmission for AUV (peer-reviewed)

    Shinsuke Yasukawa, Yuya Nishida, Jonghyun Ahn, Takashi Sonoda, Kentaro Yanagise, Keisuke Watanabe, Kazuo Ishii

    In Proceedings of the 2020 International Conference on Artificial Life and Robotics   171 - 174   January 2020

    Role: Lead author   Language: English   Type: Research paper (international conference proceedings)

    Oita, Japan   January 13, 2020 - January 16, 2020

  • Reports on the 7th Underwater Robot Festival in Kitakyushu (peer-reviewed)

    Yuya Nishida, Takashi Sonoda, Takayuki Matsuo, Shinsuke Yasukawa, Masanori Sato, Yasunori Takemura, Kazuo Ishii

    In Proceedings of the 2020 International Conference on Artificial Life and Robotics   110 - 113   January 2020

    Language: English   Type: Research paper (international conference proceedings)

    Oita, Japan   January 13, 2020 - January 16, 2020

  • Sea Trials for Benthos Sampling Using Autonomous Underwater Vehicle (peer-reviewed)

    Yuya Nishida, Shinsuke Yasukawa, Takashi Sonoda, Keisuke Watanabe, Kazuo Ishii

    In Proceedings of the 2020 International Conference on Artificial Life and Robotics   167 - 170   January 2020

    Language: English   Type: Research paper (international conference proceedings)

    Oita, Japan   January 13, 2020 - January 16, 2020

  • Evaluation of Underwater Vehicle's Self-Localization Based on Visual Odometry or Sensor Odometry (peer-reviewed)

    Yoshiki Tanaka, Akio Semmyo, Yuya Nishida, Shinsuke Yasukawa, Jonghyun Ahn, Kazuo Ishii

    In Proc. of the 14th IEEE International Conference on Industrial and Information Systems (ICIIS) 2019   1 - 6   December 2019

    Language: English   Type: Research paper (international conference proceedings)

    Peradeniya, Sri Lanka   December 18, 2019 - December 20, 2019

  • Entropy-Based Exposure Time Setting for an Adaptive Multi-Exposure Image Fusion (peer-reviewed, international journal)

    Shinsuke YASUKAWA, Tomohiro NASUNO, Kazuo ISHII

    In Proc. of the 14th IEEE International Conference on Industrial and Information Systems (ICIIS) 2019   1 - 6   December 2019

    Role: Lead author   Language: English   Type: Research paper (international conference proceedings)

    Peradeniya, Sri Lanka   December 18, 2019 - December 20, 2019

  • Entropy-Based Exposure Time Setting for an Adaptive Multi-Exposure Image Fusion (peer-reviewed)

    Yasukawa S., Nasuno T., Ishii K.

    2019 IEEE 14th International Conference on Industrial and Information Systems: Engineering for Innovations for Industry 4.0, ICIIS 2019 - Proceedings   529 - 534   December 2019

    Language: English   Type: Research paper (international conference proceedings)

    © 2019 IEEE. We investigated a method of setting exposure times for an adaptive multi-exposure image fusion. Because the range of luminance in the natural environment is wide and objects in the captured image have inherent optical properties, the vision system needs to adapt to the input light. The proposed system comprises an image sensor module and a field-programmable gate array (FPGA) containing a control signal generator for the image sensor and an HDR (high-dynamic-range) image generator/evaluator. In this study, we employed and evaluated the maximum entropy of the HDR image as the criterion for setting exposure times.

    DOI: 10.1109/ICIIS47346.2019.9063302

    Scopus

    Other link: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85084139742&origin=inward
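    The maximum-entropy criterion from the abstract can be illustrated with a toy sketch (the sensor model, candidate exposure times, and synthetic scene are assumptions; the actual system evaluates HDR images on an FPGA):

    ```python
    import numpy as np

    def image_entropy(img: np.ndarray) -> float:
        """Shannon entropy (bits) of an 8-bit image's gray-level histogram."""
        hist = np.bincount(img.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def simulate_capture(scene: np.ndarray, exposure: float) -> np.ndarray:
        """Toy sensor: scale scene radiance by exposure time, clip to 8 bits."""
        return np.clip(scene * exposure, 0, 255).astype(np.uint8)

    rng = np.random.default_rng(0)
    scene = rng.uniform(0.0, 1000.0, size=(128, 128))  # wide-dynamic-range scene

    # Pick the candidate exposure that maximizes the entropy of the capture:
    # too short wastes gray levels, too long saturates many pixels at 255.
    candidates = [0.01, 0.05, 0.1, 0.5, 1.0]
    best = max(candidates, key=lambda t: image_entropy(simulate_capture(scene, t)))
    ```

    For this uniform synthetic scene the criterion selects the exposure that spreads the radiance over the most gray levels without clipping.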

  • Evaluation of underwater vehicle's self-localization based on visual odometry or sensor odometry (peer-reviewed)

    Tanaka Y., Semmyo A., Nishida Y., Yasukawa S., Ahn J., Ishii K.

    2019 IEEE 14th International Conference on Industrial and Information Systems: Engineering for Innovations for Industry 4.0, ICIIS 2019 - Proceedings   384 - 389   December 2019

    Language: English   Type: Research paper (international conference proceedings)

    © 2019 IEEE. This research focuses on two self-localization methods for observation of the sea floor and sampling. One is a method of estimating self-localization with a Kalman filter from acceleration data calculated from the equation of motion and velocity data that accounts for the effect of the underwater vehicle's oscillation. The other is visual odometry using a stereo camera. The AUV was deployed in a sea area of 40 m depth to evaluate the performance of self-localization estimated by the two methods. Self-localization estimation using the Kalman filter was less accurate than visual odometry, but it was confirmed that the variance of the estimated velocity was smaller than before estimation. In the visual odometry using a stereo camera, it was confirmed that the estimation error depends on the travel direction of the AUV and the moving direction of the stereo camera.

    DOI: 10.1109/ICIIS47346.2019.9063345

    Scopus

    Other link: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85084155438&origin=inward
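    The Kalman-filter method in the abstract can be sketched in one dimension — predicting velocity from model acceleration and correcting it with a noisy velocity measurement (all noise parameters and the synthetic trajectory below are illustrative assumptions):

    ```python
    import numpy as np

    def kalman_velocity(accels, vel_meas, dt=0.1, q=0.01, r=0.25):
        """1-D Kalman filter over velocity: predict with model acceleration,
        correct with a noisy velocity measurement."""
        v, p = 0.0, 1.0                      # state estimate and its variance
        est = []
        for a, z in zip(accels, vel_meas):
            v, p = v + a * dt, p + q         # predict from the motion model
            k = p / (p + r)                  # Kalman gain
            v, p = v + k * (z - v), (1 - k) * p   # correct with measurement
            est.append(v)
        return np.array(est)

    rng = np.random.default_rng(1)
    true_v = 0.05 * np.arange(1, 101)                 # steadily accelerating AUV
    noisy = true_v + rng.normal(0.0, 0.5, size=100)   # noisy velocity samples
    est = kalman_velocity(np.full(100, 0.5), noisy)   # model accel = 0.5 m/s^2
    ```

    With these settings the filtered error variance comes out well below the raw measurement error variance, mirroring the abstract's observation that the estimated velocity had smaller variance than before estimation.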

  • Sensing and Modeling Methods for an intelligent tomato harvesting platform (peer-reviewed, international journal)

    Shinsuke Yasukawa, Takuya Fujinaga, Tomohiro Nasuno, Kota Sunamoto, Kazuo Ishii

    In Proc. of International Symposium on Applied Science 2019 (ISAS2019)   1 - 2   October 2019

    Role: Lead author   Language: English   Type: Research paper (international conference proceedings)

    October 18, 2019 - October 19, 2019

    Other link: http://www.iciis.org/

  • System Development of Tomato Harvesting Robot Based on Modular Design (peer-reviewed)

    Takuya Fujinaga, Shinsuke Yasukawa, Kazuo Ishii

    In Proc. of Smart Info-Media Systems in Asia (SISA 2019), 2019 International Workshop on SISA 2019   1 - 6   September 2019

    Language: English   Type: Research paper (international conference proceedings)

    Tokyo, Japan   September 4, 2019 - September 6, 2019

    Other link: http://www.ieice-sisa.org/

  • Development and Sea Trials of Vision-Based-Control for Sampling-AUV (peer-reviewed)

    Shinsuke Yasukawa, Yuya Nishida, Jonghyun Ahn, Takashi Sonoda, Keisuke Watanabe, Kazuo Ishii

    2019 IEEE Symposium on Underwater Technology   April 2019

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/UT.2019.8734462

  • Benthos Sampling by Autonomous Underwater Vehicle Equipped a Manipulator with Suction Device (peer-reviewed)

    Jonghyun Ahn, Takashi Sonoda, Kazuo Ishii, Shinsuke Yasukawa, Sotaro Takashima, Yuya Nishida, Tamaki Ura

    2019 IEEE International Underwater Technology Symposium, UT 2019 - Proceedings   April 2019

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/UT.2019.8734330

  • End-effector for robotic harvesting of a tomato fruit with calyx (peer-reviewed)

    Binghe Li, Shinsuke Yasukawa, Takuya Fujinaga, Kazuo Ishii

    International Conference of Artificial Life and Robotics 2019   January 2019

    Language: English   Type: Research paper (international conference proceedings)

  • System Development of AUV's Sampling Device Controller Employing MATLAB/Simulink Toolboxes (peer-reviewed)

    Takashi Sonoda, Shinsuke Yasukawa, Jonghyun Ahn, Yuya Nishida, Kazuo Ishii

    International Conference of Artificial Life and Robotics 2019   January 2019

    Language: English   Type: Research paper (international conference proceedings)

  • Saliency Map Based on Maximization of Difference between Central and Surround Visions and Its Relationship with Image Feature Points (peer-reviewed)

    Ryuugo Mochizuki, Shinsuke Yasukawa, Kazuo Ishii

    International Conference of Artificial Life and Robotics 2019   January 2019

    Language: English   Type: Research paper (international conference proceedings)

  • Inspection System for Underwater Structure of Bridge Pier (peer-reviewed)

    Takumi Ueda, Hiyoyasu Hirai, Kazuki Fuchigami, Ryoma Yuki, Jonghyun Ahn, Shinsuke Yasukawa, Yuya Nishida, Takashi Sonoda, Kazuo Ishii, Katsunori Higashi, Katsunori Tanaka, Tomomasa Ikeda

    International Conference of Artificial Life and Robotics 2019   January 2019

    Language: English   Type: Research paper (international conference proceedings)

  • An Optical Image Transmission System for Deep Sea Creature Sampling Missions Using Autonomous Underwater Vehicle (peer-reviewed)

    Jonghyun Ahn, Shinsuke Yasukawa, Takashi Sonoda, Yuya Nishida, Kazuo Ishii, Tamaki Ura

    IEEE Journal of Oceanic Engineering   October 2018

    Language: English   Type: Research paper (academic journal)

  • Automatic Detection of Location and Maturity of Tomato Fruit with Wide Range Mosaic Image of Cultivation Block on Dutch Venlo Greenhouse (peer-reviewed)

    Takuya Fujinaga, Shinsuke Yasukawa, Kazuo Ishii

    Proceedings of the 9th International Symposium on Machinery and Mechatronics for Agriculture and Biosystems Engineering (ISMAB)   May 2018

    Language: English   Type: Research paper (international conference proceedings)

  • Flounders Measurement System Using Obtained Optical Images by AUV (peer-reviewed)

    Jonghyun Ahn, Takashi Sonoda, Kazuo Ishii, Shinsuke Yasukawa, Sotaro Takashima, Yuya Nishida, Tamaki Ura

    In 2018 OCEANS-MTS/IEEE Kobe Techno-Oceans (OTO)   May 2018

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/OCEANSKOBE.2018.8559296

  • Recognition of Tomato Fruit Regardless of Maturity by Machine Learning Using Infrared Image and Specular Reflection (peer-reviewed)

    Takuya Fujinaga, Shinsuke Yasukawa, Binghe Li, Kazuo Ishii

    International Conference of Artificial Life and Robotics 2018   February 2018

    Language: English   Type: Research paper (international conference proceedings)

  • Seafloor Image Color Enhancement Method based on Retinex model and Experiment Report in the undersea environment (peer-reviewed)

    Jonghyun Ahn, Shinsuke Yasukawa, Yuya Nishida, Takashi Sonoda, Keisuke Watanabe, Kazuo Ishii

    International Conference of Artificial Life and Robotics 2018   February 2018

    Language: English   Type: Research paper (international conference proceedings)

  • Vision System for an Autonomous Underwater Vehicle with a Benthos Sampling Function (peer-reviewed)

    Yasukawa Shinsuke, Ahn Jonghyun, Nishida Yuya, Sonoda Takashi, Ishii Kazuo, Ura Tamaki

    Journal of Robotics and Mechatronics   30 ( 2 )   248 - 256   January 2018

    Language: English   Type: Research paper (academic journal)

    <p>We developed a vision system for an autonomous underwater robot with a benthos sampling function, specifically sampling-autonomous underwater vehicle (AUV). The sampling-AUV includes the following five modes: preparation mode (PM), observation mode (OM), return mode (RM), tracking mode (TM), and sampling mode (SM). To accomplish the mission objective, the proposed vision system comprises software modules for image acquisition, image enhancement, object detection, image selection, and object tracking. The camera in the proposed system acquires images in intervals of five seconds during OM and RM, and in intervals of one second during TM. The system completes all processing stages in the time required for image acquisition by employing high-speed algorithms. We verified the effective operation of the proposed system in a pool.</p>

    DOI: 10.20965/jrm.2018.p0248

    CiNii Article

    Other link: https://ci.nii.ac.jp/naid/130007499244

  • Underwater Platform for Intelligent Robotics and its Application in Two Visual Tracking Systems (peer-reviewed)

    Nishida Yuya, Sonoda Takashi, Yasukawa Shinsuke, Nagano Kazunori, Minami Mamoru, Ishii Kazuo, Ura Tamaki

    Journal of Robotics and Mechatronics   30 ( 2 )   238 - 247   January 2018

    Language: English   Type: Research paper (academic journal)

    <p>A hovering-type autonomous underwater vehicle (AUV) capable of cruising at low altitudes and observing the seafloor using only mounted sensors and payloads was developed for sea-creature surveys. The AUV has a local area network (LAN) interface for an additional payload that can acquire navigation data from the AUV and transmit the target value to the AUV. In the handling process of the AUV's state flow, additional payloads can control the AUV position using the transmitted target value without checking the AUV condition. In this research, water tank tests and sea trials were performed using an AUV equipped with a visual tracking system developed in other laboratories. The experimental results proved that the additional payload can control the AUV position with a standard deviation of 0.1 m.</p>

    DOI: 10.20965/jrm.2018.p0238

    CiNii Article

    Other link: https://ci.nii.ac.jp/naid/130007499268

  • Image Mosaicing Using Multi-Modal Images for Generation of Tomato Growth State Map (peer-reviewed)

    Fujinaga Takuya, Yasukawa Shinsuke, Li Binghe, Ishii Kazuo

    Journal of Robotics and Mechatronics   30 ( 2 )   187 - 197   January 2018

    Language: English   Type: Research paper (academic journal)

    <p>Due to the aging and decreasing number of workers in agriculture, the introduction of automation and precision farming is needed. Focusing on tomatoes, one of the major types of vegetables, we are engaged in the research and development of a robot that can harvest tomatoes and manage their growth state. For the robot to automatically harvest tomatoes, it must be able to automatically detect the positions of harvestable tomatoes and plan the harvesting motions. Furthermore, it is necessary to grasp the positions and maturity of tomatoes in the greenhouse, and to estimate their yield and harvesting period so that the robot and workers can manage the tomatoes. The purpose of this study is to generate a tomato growth state map of a cultivation lane, which consists of a row of tomatoes, aimed at achieving automatic harvesting and management of tomatoes in a tomato greenhouse equipped with production facilities. Information such as the positions and maturity of the tomatoes is attached to the map. As the first stage, this paper proposes a method of generating a greenhouse map (a wide-area mosaic image of a tomato cultivation lane). Using the infrared image eases the correspondence point problem of feature points when the mosaic image is generated. Distance information is used to eliminate the cultivation lane behind the targeted one as well as the background scenery, allowing the robot to focus on only those tomatoes in the targeted cultivation lane. To verify the validity of the proposed method, 70 images captured in a greenhouse were used to generate a single mosaic image from which tomatoes were detected by visual inspection.</p>

    DOI: 10.20965/jrm.2018.p0187

    CiNii Article

    Other link: https://ci.nii.ac.jp/naid/130007499459

  • Enhancement of Deep-Sea Floor Images Obtained by an Underwater Vehicle and its Evaluation by Crab Recognition (peer-reviewed)

    Jonghyun Ahn, Shinsuke Yasukawa, Takashi Sonoda, Yuya Nishida, Tamaki Ura, Kazuo Ishii

    Journal of Marine Science and Technology   December 2017

    Language: English   Type: Research paper (academic journal)

    DOI: 10.1007/s00773-017-0442-1

  • Generation of Mosaic Image in Tomato Greenhouse Using Infrared Image (peer-reviewed)

    Takuya Fujinaga, Shinsuke Yasukawa, Binghe Li, Kazuo Ishii

    Smart Info-Media Systems in Asia (SISA 2017)   September 2017

    Language: English   Type: Research paper (international conference proceedings)

  • Sea-floor image transmission system for AUV (peer-reviewed)

    Jonghyun Ahn, Shinsuke Yasukawa, Tharindu Weerakoon, Takashi Sonoda, Yuya Nishida, Tamaki Ura, Kazuo Ishii

    In OCEANS 2017 - Aberdeen, IEEE   June 2017

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/OCEANSE.2017.8084926

  • Development of a Tomato Harvesting Robot (peer-reviewed)

    Shinsuke Yasukawa, Binghe Li, Takashi Sonoda, Kazuo Ishii

    International Conference of Artificial Life and Robotics 2017   January 2017

    Language: Japanese   Type: Research paper (international conference proceedings)

  • Development of an Autonomous Underwater Vehicle with Human-aware Robot Navigation (peer-reviewed)

    Yuya Nishida, Takashi Sonoda, Shinsuke Yasukawa, Jonghyun Ahn, Kazunori Nagano, Kazuo Ishii, Tamaki Ura

    MTS/IEEE Oceans'16   September 2016

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/OCEANS.2016.7761471

  • Real-time Object Tracking Based on Scale-invariant Features Employing Bio-inspired Hardware (peer-reviewed)

    Shinsuke Yasukawa, Hirotsugu Okuno, Kazuo Ishii, Tetsuya Yagi

    Neural Networks   September 2016

    Language: English   Type: Research paper (academic journal)

    DOI: 10.1016/j.neunet.2016.05.002

  • Image Enhancement and Compression of Deep-Sea Floor Image for Acoustic Transmission (peer-reviewed)

    Jonghyun Ahn, Shinsuke Yasukawa, Takashi Sonoda, Yuya Nishida, Kazuo Ishii, Tamaki Ura

    MTS/IEEE Shanghai, OCEANS '16 Conference   April 2016

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/OCEANSAP.2016.7485453

  • A vision sensor system with a real time multi scale filtering function (peer-reviewed)

    Shinsuke Yasukawa, Hirotsugu Okuno, Seiji Kameda, Tetsuya Yagi

    International Journal of Mechatronics and Automation   January 2014

    Language: English   Type: Research paper (academic journal)

    DOI: 10.1504/IJMA.2014.066367

  • High-Speed Multiple Spatial Band-Pass Filtering using a Resistive Network (peer-reviewed)

    Shinsuke Yasukawa, Hirotsugu Okuno, Seiji Kameda, Tetsuya Yagi

    Mechatronics and Automation (ICMA), 2013 IEEE International Conference on   August 2013

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/ICMA.2013.6618101

  • Detection of Scale-Invariant Key Points Employing a Resistive Network (peer-reviewed)

    Shinsuke Yasukawa, Hirotsugu Okuno, Tetsuya Yagi

    in Proc. of System Integration (SII), 2012 IEEE/SICE International Symposium on   December 2012

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/SII.2012.6427366

  • An intelligent vision system with adaptive edge enhancement filter

    Shinsuke Yasukawa, Hirotsugu Okuno, Tetsuya Yagi

    2nd Global COE Student Conference on Innovative Electronic Topics (SCIENT 2010)   July 2010

    Language: English   Type: Research paper (international conference proceedings)


Oral and Poster Presentations

  • Research and Development of a White-Cane-Mounted Electronic Travel Aid for Reducing Fall Accidents of Visually Impaired People

    Daigo Katayama, Kazuo Ishii, Shinsuke Yasukawa, Satoshi Nakadomari, Koichi Wada, Akane Befu, Chikako Yamada

    The 46th Sensory Substitution Symposium

    Dates: December 7, 2021 - December 8, 2021   Language: Japanese

  • Study of a 3D Plane Detection Algorithm for Detecting Station Platform Edges

    Daigo Katayama, Kazuo Ishii, Shinsuke Yasukawa, Satoshi Nakadomari, Koichi Wada, Akane Befu, Chikako Yamada

    Proceedings of the JSME Robotics and Mechatronics Conference (ROBOMECH), The Japan Society of Mechanical Engineers

    Dates: June 6, 2021 - June 8, 2021   Language: Japanese

    <p>In this paper, in order to evaluate a plane detection algorithm for detecting the edge of a station platform, we verified the algorithm by running it on 3D distance data measured at an actual station platform. As a result, we were able to detect the plane with an error small enough to distinguish it from steps such as stairs. However, the accuracy rate and the true negative rate were greatly reduced by distant point clouds and by the movement of the sensor itself, so it is necessary to remove distant point clouds and to reduce the error caused by the sensor's own movement in the current algorithm. In the future, we will verify the algorithm in other situations, and implement and verify additional processing.</p>
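    The plane detection step can be illustrated with a common technique, RANSAC plane fitting on a point cloud (RANSAC is used here purely as an illustrative stand-in — the presentation does not specify its algorithm — and the synthetic scene and tolerance are assumptions):

    ```python
    import numpy as np

    def ransac_plane(points, iters=200, tol=0.02, seed=0):
        """Fit a dominant plane to 3-D points with RANSAC.
        Returns the unit normal n, offset d (n.x + d = 0), and the inlier mask."""
        rng = np.random.default_rng(seed)
        best_mask, best_plane = None, None
        for _ in range(iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue                       # degenerate (collinear) sample
            n = n / norm
            d = -n @ p0
            mask = np.abs(points @ n + d) < tol   # point-to-plane distance test
            if best_mask is None or mask.sum() > best_mask.sum():
                best_mask, best_plane = mask, (n, d)
        return best_plane[0], best_plane[1], best_mask

    # Synthetic platform surface (z ≈ 0) plus scattered non-planar clutter
    rng = np.random.default_rng(2)
    surface = np.c_[rng.uniform(-1, 1, (300, 2)), rng.normal(0, 0.005, 300)]
    clutter = rng.uniform(-1, 1, (60, 3))
    pts = np.vstack([surface, clutter])

    n, d, mask = ransac_plane(pts)
    ```

    The recovered normal is close to the surface's true normal (0, 0, ±1), and the inlier mask separates the platform surface from the clutter, which is the separation a platform-edge detector would build on.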

  • Study of a Motion Estimation Method for Visual Targets Using an Event Vision System

    Shinsuke Yasukawa, 吉松 秀峻, Kazuo Ishii

    Proceedings of the JSME Robotics and Mechatronics Conference (ROBOMECH), The Japan Society of Mechanical Engineers

    Dates: June 6, 2021 - June 8, 2021   Language: Japanese

    <p>We proposed a method for estimating the angular velocity of a rotating visual target robustly to changes in illumination conditions using event-based vision and spiking neural networks (SNNs). By using the output of the event-based vision directly as input to the SNN, the image processing function can be achieved while keeping the advantages of the vision sensor in the previous stage. A board printed with natural images was placed at a fixed distance from the vision system as the visual target. The vision system captured rotating targets at various angular velocities under multiple lighting conditions to generate a dataset. Using this dataset, we verified whether the SNN can estimate the motion of the visual target. We achieved about 80% accuracy on both the training and test datasets in both bright and dark lighting environments.</p>

  • Development of a Tomato Harvesting Robot with Reconfigurable Hardware

    Takuya Fujinaga, Binghe Li, Kota Sunamoto, Rei Tanaka, Tomohiro Nasuno, Fuyuki Muto, Shinsuke Yasukawa, Kazuo Ishii

    Robotics and Mechatronics Conference (ROBOMECH) 2019

    Dates: June 5, 2019 - June 8, 2019   Language: Japanese

  • Study of an Exposure Time Adjustment Method for a Multi-Exposure Camera

    Tomohiro Nasuno, Shinsuke Yasukawa, Kazuo Ishii

    Robotics and Mechatronics Conference (ROBOMECH) 2019

    Dates: June 5, 2019 - June 8, 2019   Language: English

  • Behavior Strategy of a Tomato Fruit Harvesting Robot Based on Visual Information (4th Report): Mapping of Tomato Fruit Maturity and Position

    Takuya Fujinaga, Shinsuke Yasukawa, Binghe Li, Kazuo Ishii

    Proceedings of the JSME Robotics and Mechatronics Conference (ROBOMECH)

    Dates: January 2018   Language: Japanese

    <p>This paper presents a generation method for a tomato growth state map that contains the maturity and position of tomatoes. The tomato growth state map is used by the tomato harvesting robot to automatically plan its harvesting movement. As a first step in generating the tomato growth state map, a mosaic image expressing the tomato cultivation environment is generated. Next, tomatoes from immature to mature are recognized by a recognition method using infrared images. Based on color and depth information, the maturity of each tomato is judged and its position is calculated. This information is added to the mosaic image, and the tomato growth state map is generated. 19 immature tomatoes and 7 mature tomatoes were automatically detected from the tomato growth state map generated from 10 images.</p>

    CiNii Article

  • Behavior Strategy of a Tomato Fruit Harvesting Robot Based on Visual Information (3rd Report): Feature-Point Matching for Generating a Mosaic Image of the Cultivation Environment

    藤永 拓矢, 安川 真輔, 李 冰賀, 石井 和男

    ロボティクス・メカトロニクス講演会講演概要集 

    Dates: January 2017   Language: Japanese

    <p>This paper presents a feature-point matching method for generating a mosaic image of the cultivation environment, as part of the behavior strategy of a tomato harvesting robot. Because the mosaic image must show tomato fruits and stems facing the camera, infrared images, which are well suited to detecting foreground objects, are used. The proposed method uses the robot's travel distance and the location of each original feature point to define a search area for the corresponding point; features within this area are compared with the original feature to decide the matching point. Feature-point matching was performed on pairs of consecutive infrared images, and restricting the search area reduced false matches.</p>

    CiNii Article
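
    The search-area idea above, predict where each feature should reappear from the robot's motion and compare only candidates inside that window, can be sketched as follows. The window radius and the Euclidean descriptor distance are illustrative choices, not the paper's.

```python
import numpy as np

def match_with_search_area(pts1, desc1, pts2, desc2, shift, radius=30.0):
    """Match feature points between consecutive frames, restricting the
    search to a window around each point's predicted location.

    pts*:  (N, 2) keypoint coordinates
    desc*: (N, D) feature descriptors
    shift: expected image-space displacement from the robot's motion
    Returns a list of (i, j) index pairs.
    """
    matches = []
    for i, (p, d) in enumerate(zip(pts1, desc1)):
        pred = p + shift                              # predicted location
        dists = np.linalg.norm(pts2 - pred, axis=1)   # spatial gating
        cand = np.where(dists <= radius)[0]
        if cand.size == 0:
            continue
        ddist = np.linalg.norm(desc2[cand] - d, axis=1)
        matches.append((i, int(cand[np.argmin(ddist)])))
    return matches
```

    Gating candidates spatially before comparing descriptors is what suppresses the false matches between visually similar fruits.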

  • An Image Sensing System for a Deep-Sea Benthic-Organism Sampling Robot

    安川 真輔, 安 鍾賢, 西田 祐也, 園田 隆, 石井 和男, 浦 環

    ロボティクス・メカトロニクス講演会講演概要集 

    Dates: January 2017   Language: Japanese

    <p>We developed an image sensing system for an autonomous underwater robot with a benthos-sampling function (a sampling-AUV). A sampling-AUV mission consists of four phases: an observation phase (OP), a return phase (RP), a tracking phase (TP), and a sampling phase (SP). To accomplish this mission, the proposed image sensing system comprises software modules for image acquisition, image enhancement, object detection and image selection, and object tracking. The camera acquires an image every five seconds during the OP and RP, and every second during the TP. By employing high-speed algorithms, the system completes all processing within the image-acquisition interval. We verified the operation of the system in a pool.</p>

    CiNii Article
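
    The four-phase schedule and the capture intervals stated above can be sketched as a small scheduler. The phase-transition trigger and the interval during the sampling phase are not given in the abstract, so those parts are assumptions.

```python
# Phase names from the abstract; the SP interval is assumed to be 1 s.
PHASES = ["OP", "RP", "TP", "SP"]   # observation, return, tracking, sampling
CAPTURE_INTERVAL_S = {"OP": 5.0, "RP": 5.0, "TP": 1.0, "SP": 1.0}

class MissionScheduler:
    def __init__(self):
        self.phase = "OP"
        self._next_capture = 0.0

    def advance_phase(self):
        """Move to the next mission phase in order (trigger is assumed)."""
        i = PHASES.index(self.phase)
        if i < len(PHASES) - 1:
            self.phase = PHASES[i + 1]

    def should_capture(self, t):
        """Return True when an image should be acquired at time t (seconds),
        according to the current phase's capture interval."""
        if t >= self._next_capture:
            self._next_capture = t + CAPTURE_INTERVAL_S[self.phase]
            return True
        return False
```

    Keeping per-phase intervals in one table makes the "all processing within the acquisition interval" budget explicit: each module chain must finish within the current phase's interval.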

  • Development of a Tomato-Harvesting End Effector Using Suction and Cutting Mechanisms

    李 冰賀, 浦 孝徳, 藤永 拓矢, 安川 真輔, 園田 隆, 石井 和男

    ロボティクス・メカトロニクス講演会講演概要集 

    Dates: January 2017   Language: Japanese

    <p>In recent years, introducing robot technology into primary industry has become very important, and applications in agriculture are expected. Among crops, tomatoes require long harvesting hours and impose a high labor load, so automated harvesting by robots is desired. This research targets tomato production facilities with large-scale infrastructure; we are developing a tomato harvesting robot aimed at practical use at Hibikinada Greenfarm Co., Ltd. This paper focuses on the end effector for tomato harvesting. A harvested tomato needs to retain its stem end; in a previous study, tomatoes were plucked from the stem, but the stem end detached from the fruit. To harvest a tomato from a cluster with its stem end intact, we developed an end effector that combines a suction mechanism and a cutting mechanism.</p>

    CiNii Article

  • Behavior Strategy of a Tomato Fruit Harvesting Robot Based on Visual Information (2nd Report): Fruit Detection Using Infrared Images and Specular Reflection

    安川 真輔, 西尾 将冶, 李 冰賀, 石井 和男

    ロボティクス・メカトロニクス講演会講演概要集 

    Dates: January 2016   Language: Japanese

    <p>This paper presents a novel method for detecting ripe tomato fruits within a cluster, as part of the behavior strategy of a tomato harvesting robot. The proposed method uses infrared images and specular reflection, exploiting the optical characteristics of tomatoes. Individual fruits in a cluster are detected in the captured infrared images by searching for the strong response at the center of each fruit caused by specular reflection. RGB images are then used to distinguish ripe tomatoes from unripe fruits and other background objects. The method was evaluated using infrared and RGB images captured by an Xbox One Kinect sensor in a greenhouse environment, where it achieved a detection accuracy of 88.1 percent.</p>

    CiNii Article
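
    The core detection step, finding the strong specular response at each fruit center in the infrared image, amounts to a bright local-maximum search. The intensity threshold and window size below are illustrative, not the paper's values.

```python
import numpy as np

def detect_specular_peaks(ir_img, thresh=200, win=5):
    """Find candidate fruit centers as bright local maxima in an IR image
    (the specular highlight at each fruit center). Pure-NumPy sketch.

    ir_img: 2-D array of IR intensities
    Returns an (N, 2) array of (row, col) peak coordinates."""
    h, w = ir_img.shape
    r = win // 2
    peaks = []
    for y in range(r, h - r):
        for x in range(r, w - r):
            v = ir_img[y, x]
            if v < thresh:
                continue
            patch = ir_img[y - r:y + r + 1, x - r:x + r + 1]
            if v >= patch.max():          # local maximum within the window
                peaks.append((y, x))
    return np.array(peaks)
```

    Each detected peak would then be checked against the RGB image to keep only ripe fruits, as the abstract describes.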

  • Image Compression and Reconstruction for Sharing Visual Information from Autonomous Underwater Robots

    安 鍾賢, 安川 真輔, 園田 隆, 西田 祐也, 石井 和男, 浦 環

    ロボティクス・メカトロニクス講演会講演概要集 

    Dates: January 2016   Language: Japanese

    <p>Autonomous underwater vehicles (AUVs) are developed to contribute to various scientific fields, such as ocean biology and geology, through deep-sea observation. Deep-sea observation by AUV yields information such as the lifestyles of deep-sea creatures and seafloor volcanism through captured images. The next class of AUV missions requires sampling targets such as creatures, mud, or stones on the deep-sea floor. To succeed in such missions, the AUV needs support from researchers aboard the support ship; however, underwater acoustic communication offers far less bandwidth than electromagnetic communication. In this research, we propose an image compression and reconstruction method for acoustic communication and evaluate the reconstructed images using the structural similarity (SSIM) index.</p>

    CiNii Article
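
    The evaluation metric named above, structural similarity (SSIM), can be computed in a simplified whole-image form as follows. The standard formulation applies the same formula over a sliding window; the constants are the common defaults, not values from the paper.

```python
import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Global (single-window) SSIM between two grayscale images.

    x, y: 2-D arrays of pixel intensities on [0, L]
    Returns a similarity score in (-1, 1], with 1 for identical images."""
    x = x.astype(float)
    y = y.astype(float)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2       # stabilizing constants
    mx, my = x.mean(), y.mean()                  # luminance terms
    vx, vy = x.var(), y.var()                    # contrast terms
    cov = ((x - mx) * (y - my)).mean()           # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

    Scoring a reconstruction this way rewards preserved structure rather than exact pixel values, which suits lossy compression for a low-bandwidth acoustic link.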

  • Development of an Underwater Robot Manipulator for Deep-Sea Benthic-Organism Sampling (1st Report): Manipulator Structure and System

    園田 隆, 安川 真輔, 安 鐘賢, ナシライ・アミール・アリ・フォロー, 西田 祐也, 石井 和男, 浦 環

    ロボティクス・メカトロニクス講演会講演概要集 

    Dates: January 2016   Language: Japanese

    <p>Underwater robots are employed to study marine organisms in extreme environments. In this study, we aim to automate the sampling of marine organisms with an autonomous underwater vehicle equipped with a manipulator, and we developed a manipulator for picking up deep-sea benthic organisms. The manipulator is driven by a master-slave, cylinder-type hydraulic system: its joints are actuated by hydraulic cylinders, and the linear motion of each cylinder is converted to rotational motion by a cam mechanism. The control system was integrated in MATLAB/Simulink, with Stateflow (a Simulink toolbox) used for the high-level control layer.</p>

    CiNii Article

  • Development of Tomato Harvesting Robots through Competitions

    石井 和男, 安川 真輔, 園田 隆, 李 冰賀

    日本知能情報ファジィ学会 ファジィ システム シンポジウム 講演論文集 

    Dates: January 2015   Language: Japanese

    In Japan, most robots currently in practical use are industrial robots. Robots account for only a small share of the agriculture, forestry, and fisheries sector; although agricultural robots are being researched and developed, few have reached practical use. Tomatoes are one of the major fruit vegetables and are mostly grown in greenhouses and other facilities, but because of the hot, humid working environment and the long harvesting season, the labor load is heavy and labor saving is desired. This paper describes the development of tomato harvesting robots through the Tomato Robot Competition.

    CiNii Article

  • 2P1-C06 Behavior Strategy of a Tomato Fruit Harvesting Robot Based on Visual Information: 3D Reconstruction of Tomato Clusters and Fruit Pose Estimation

    安川 真輔, 李 氷賀, 園田 隆, 石井 和男

    ロボティクス・メカトロニクス講演会講演概要集 

    Dates: January 2015   Language: Japanese

    This paper presents a visual-information-aided behavior strategy for a tomato harvesting robot. In particular, we propose an image analysis method for estimating the pose of the fruits, the most essential information for picking tomatoes. The robot consists of a six-axis serial manipulator with a monocular hand-eye camera and an end effector, mounted on an autonomous mobile cart together with a digital computer. Multiple images of a tomato bunch, captured from different viewpoints by the hand-eye camera, are used to reconstruct a 3D point cloud of the bunch, which provides the basic information for pose estimation. The proposed algorithm uses this information to estimate the pose of tomatoes of different shapes and sizes.

    CiNii Article

  • 2A1-T08 Construction of a Robust Color Correction System for Deep-Sea Environments

    安 鍾賢, 安川 真輔, 石井 和男, 浦 環

    ロボティクス・メカトロニクス講演会講演概要集 

    Dates: January 2015   Language: Japanese

    Observing the deep sea is important for utilizing marine resources, but it is difficult: the deep sea is an extreme environment with high pressure, darkness, and radio attenuation. Recently, autonomous underwater vehicles (AUVs) have been used for deep-seafloor exploration, photographing the seafloor. However, the images captured by AUVs have weak color signals due to light attenuation. In this paper, we propose a method to enhance the color of deep-seafloor images; the proposed method enhances the color at each image pixel.

    CiNii Article
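
    As a baseline for the kind of per-pixel color enhancement described above, a gray-world correction rescales each channel to remove an attenuation-induced color cast. This is a standard baseline sketch, not the authors' method.

```python
import numpy as np

def gray_world_correct(img):
    """Gray-world color correction: scale each channel so that its mean
    matches the global mean intensity, neutralizing a uniform color cast.

    img: (H, W, 3) float array with values in [0, 255]
    Returns the corrected image, clipped to the valid range."""
    means = img.reshape(-1, 3).mean(axis=0)      # per-channel means
    target = means.mean()                        # global mean intensity
    gains = target / np.maximum(means, 1e-6)     # avoid division by zero
    return np.clip(img * gains, 0.0, 255.0)
```

    In underwater imagery the red channel is attenuated most, so its gain comes out largest; a robust method would additionally cope with non-uniform lighting, which this global version does not.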

  • 2A1-N05 Development of a Tomato Fruit Harvesting Robot through the Tomato Robot Competition

    李 冰賀, 林 正明, 西尾 将冶, 安川 真輔, 園田 隆, ナシライ アミール アリ フォロオ, 堀尾 恵一, 石井 和男

    ロボティクス・メカトロニクス講演会講演概要集 

    Dates: January 2015   Language: Japanese

    In recent years, Japanese agriculture has been aging and young people have been moving away from primary industries. To address these problems, "smart agriculture," which uses robots and information and communication technology, is expected to enable labor saving, automation, and improved productivity. Tomatoes are a major vegetable, ranking high in harvest and shipping volume; however, harvesting accounts for 30-40% of the total working time per 10 a. Moreover, many tomatoes are grown indoors in hot, humid environments, so the labor burden is heavy and labor saving is desired. Extensive studies have aimed at commercializing tomato harvesting robots. We develop tomato harvesting robots through tomato robot competitions, which aim at automating vegetable harvesting and bringing it into practical use.

    CiNii Article


Presentations

Academic Awards

  • SICE International Young Authors Award for SII2021

    Kyushu Institute of Technology   Development and Evaluation of a Tomato Fruit Suction Cutting Device   January 14, 2021

    Takuya Fujinaga, Shinsuke Yasukawa, Kazuo Ishii

    Country of award: Japan

Courses Taught (On Campus)

  • AY 2021   Robot Sensing

  • AY 2021   Intelligent Machine Design Exercise

  • AY 2020   Robot Sensing

  • AY 2020   Intelligent Machine Design Exercise

  • AY 2019   Robot Sensing

  • AY 2019   Intelligent Machine Design Exercise
