Updated: 2023/06/01

岡部 孝弘 / OKABE Takahiro

Scopus publication data
Total publications: 0   Total citations: 0   h-index: 15

Citation count: the number of citations received by papers published in the given year.

Affiliation
Faculty of Computer Science and Systems Engineering, Department of Artificial Intelligence
Position
Professor
Email address
Laboratory address
680-4 Kawazu, Iizuka, Fukuoka, Japan
Laboratory phone
0948-29-7629
External links

Research Keywords

  • Computer vision

Research Fields

  • Information and Communication / Perceptual information processing

Education

  • March 1997   The University of Tokyo   Faculty of Science   Department of Physics   Graduated   Japan

Graduate Education

  • December 2000   The University of Tokyo   Graduate School of Science   Department of Physics   Doctoral program   Withdrawn   Japan

  • March 1999   The University of Tokyo   Graduate School of Science   Department of Physics   Master's program   Completed   Japan

Degrees

  • The University of Tokyo  -  Ph.D. in Information Science and Technology   March 2011

  • The University of Tokyo  -  Master of Science   March 1999

Career (Kyushu Institute of Technology)

  • March 2017 - Present   Kyushu Institute of Technology   Faculty of Computer Science and Systems Engineering   Department of Artificial Intelligence   Professor

External Career

  • November 2012 - March 2013   The University of Tokyo   Institute of Industrial Science   Project Associate Professor   Japan

  • April 2007 - October 2012   The University of Tokyo   Institute of Industrial Science   Assistant Professor   Japan

  • January 2007 - March 2007   The University of Tokyo   Institute of Industrial Science   Research Associate   Japan

  • January 2001 - December 2006   The University of Tokyo   Institute of Industrial Science   Technical Staff   Japan

Academic Societies and Committees

  • March 2003 - Present   IEEE   United States

  • January 2002 - Present   The Institute of Electronics, Information and Communication Engineers (IEICE)   Japan

  • October 2001 - Present   Information Processing Society of Japan (IPSJ)   Japan

Papers

  • Separating Partially-Polarized Diffuse and Specular Reflection Components under Unpolarized Light Sources 査読有り

    Kajiyama S., Piao T., Kawahara R., Okabe T.

    Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023   2548 - 2557   2023年01月

    Role: Last author, Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Separating diffuse and specular reflection components observed on an object surface is important for preprocessing of various computer vision techniques. Conventionally, diffuse-specular separation based on the polarimetric and color clues assumes that the diffuse/specular reflection components are unpolarized/partially polarized under unpolarized light sources. However, the diffuse reflection component is partially polarized in fact, because the diffuse reflectance is maximal when the polarization direction is parallel to the outgoing plane. Accordingly, we propose a method for separating partially-polarized diffuse and specular reflection components on the basis of the polarization reflection model and the dichromatic reflection model. In particular, our method enables us not only to achieve diffuse-specular separation but also to estimate the polarimetric properties of the object surface from a single color polarization image. We experimentally confirmed that our method performs better than the method assuming unpolarized diffuse reflection components.

    DOI: 10.1109/WACV56688.2023.00258

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85149009468&origin=inward

  • Event-Based Reflectance Separation 査読有り

    Ryota Kunimasu, Ryo Kawahara, and Takahiro Okabe

    Proc. the 29th International Workshop on Frontiers of Computer Vision (IW-FCV2023)   2023年02月

    Role: Last author, Corresponding author   Language: English   Type: Research paper (international conference proceedings)

  • Online Illumination Planning for Shadow-Robust Photometric Stereo 査読有り

    Hirochika Tanikawa, Ryo Kawahara, and Takahiro Okabe

    Proc. the 28th International Workshop on Frontiers of Computer Vision (IW-FCV2022)   2022年02月

    Role: Last author, Corresponding author   Language: English   Type: Research paper (international conference proceedings)

  • Spectral Absorption from Two-View Hyperspectral Images 査読有り

    Kenta Kageyama, Ryo Kawahara, and Takahiro Okabe

    Proc. the 17th International Conference on Computer Vision Theory and Applications (VISAPP2022)   2022年02月

    Role: Last author, Corresponding author   Language: English   Type: Research paper (international conference proceedings)

  • Per-pixel water detection on surfaces with unknown reflectance   Refereed, International journal

    WANG C., OKUYAMA M., MATSUOKA R., OKABE T.

    IEICE Transactions on Information and Systems   E104D ( 10 )   1555 - 1562   October 2021

    Role: Last author, Corresponding author   Language: English   Type: Research paper (academic journal)

    Water detection is important for machine vision applications such as visual inspection and robot motion planning. In this paper, we propose an approach to per-pixel water detection on unknown surfaces with a hyperspectral image. Our proposed method is based on the water spectral characteristics: water is transparent for visible light but translucent/opaque for near-infrared light and therefore the apparent near-infrared spectral reflectance of a surface is smaller than the original one when water is present on it. Specifically, we use a linear combination of a small number of basis vector to approximate the spectral reflectance and estimate the original near-infrared reflectance from the visible reflectance (which does not depend on the presence or absence of water) to detect water. We conducted a number of experiments using real images and show that our method, which estimates near-infrared spectral reflectance based on the visible spectral reflectance, has better performance than existing techniques.

    DOI: 10.1587/transinf.2021PCP0002

    Scopus

    CiNii Article

    CiNii Research

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85116577979&origin=inward
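
    The following is a minimal NumPy sketch of the basis-projection idea summarized in the abstract above; it is not the authors' implementation, and the basis matrix, the visible/NIR band split, and the detection threshold are illustrative assumptions.

```python
import numpy as np

def detect_water(reflectance, basis, n_vis, threshold=0.8):
    """Flag pixels whose observed near-infrared (NIR) reflectance is much
    lower than the value predicted from their visible reflectance.

    reflectance : (n_pixels, n_bands) apparent spectral reflectance per pixel
    basis       : (n_bands, k) low-dimensional linear basis for reflectance
    n_vis       : number of leading bands that are visible (the rest are NIR)
    threshold   : observed/predicted NIR ratio below which water is assumed
    """
    vis_obs, nir_obs = reflectance[:, :n_vis], reflectance[:, n_vis:]
    vis_basis, nir_basis = basis[:n_vis], basis[n_vis:]

    # Fit basis coefficients from the visible bands only; visible reflectance
    # is assumed unaffected by a thin water layer.
    coeffs, *_ = np.linalg.lstsq(vis_basis, vis_obs.T, rcond=None)

    # Predict the original NIR reflectance and compare with the observation.
    nir_pred = (nir_basis @ coeffs).T
    ratio = nir_obs.sum(axis=1) / np.maximum(nir_pred.sum(axis=1), 1e-6)
    return ratio < threshold  # True where water is likely present
```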

  • Detecting Momentary Shadows from Visible and Thermal Image Pair 査読有り

    Fujita K., Matsuoka R., Okabe T.

    European Signal Processing Conference   2021-August   606 - 610   2021年08月

    Role: Last author, Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Outdoor shadows can be classified into two categories: continuous shadows caused by static objects and momentary shadows caused by moving objects. Since the momentary shadows such as shadows due to a photographer are annoying and do not exist in the original scene, they should be detected and removed for improving image quality. In this paper, we propose a method for detecting momentary shadows from a visible and thermal image pair. The key idea of our proposed method is that the continuous shadows have lower temperature than non-shadow areas, while the momentary shadows have almost the same temperature as the non-shadow areas. Therefore, our method combines the shadow areas detected by using an RGB image and the higher-temperature areas detected by using a thermal image, and then detects the areas of momentary shadows via image segmentation. Through a number of experiments using real visible and thermal image pairs, we show that the combination of visible and thermal images are effective for detecting momentary shadows, and that our method works well for momentary shadows with varying duration time.

    DOI: 10.23919/EUSIPCO54536.2021.9615977

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85123200133&origin=inward
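
    A minimal sketch of the mask combination described in the abstract above; the RGB-based shadow detector and the final segmentation step of the paper are omitted, and the temperature threshold is an illustrative assumption.

```python
import numpy as np

def momentary_shadow_mask(shadow_mask, temperature, temp_threshold):
    """Momentary shadows are regions that look shadowed in the RGB image
    but are NOT cooler than their surroundings in the thermal image.

    shadow_mask    : (H, W) boolean shadow detection from the RGB image
    temperature    : (H, W) thermal image (e.g. degrees Celsius)
    temp_threshold : temperature above which a pixel counts as "warm",
                     i.e. not a long-standing (continuous) shadow
    """
    warm = temperature > temp_threshold
    return shadow_mask & warm  # shadowed in RGB, yet still warm
```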

  • Per-Pixel Water and Oil Detection on Surfaces with Unknown Reflectance 査読有り

    Wang C., Okabe T.

    European Signal Processing Conference   2021-August   601 - 605   2021年08月

    Role: Last author, Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Water and oil detection is important for machine vision applications such as visual inspection and robot motion planning. It is known that water absorbs near infrared light and oil absorbs near ultraviolet and blue light. Therefore, observing at the absorbed wavelengths, the apparent spectral reflectances of surfaces with water/oil are smaller than that without water/oil. We could detect water/oil based on the above absorption features by using a hyperspectral image, if the original spectral reflectances of surfaces are known. However, in general, the spectral reflectances of surfaces are unknown and spatially varying. In this paper, we propose a novel per-pixel water and oil detection method based on the Lambert-Beer's law and a low-dimensional linear model for spectral reflectance. We show that our method enables us to pixelwisely detect water and oil on surfaces with unknown and spatially-varying reflectance at high accuracy by using a hyperspectral image. The effectiveness of our proposed method is confirmed through a number of experiments using real hyperspectral images.

    DOI: 10.23919/EUSIPCO54536.2021.9616011

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85123208792&origin=inward

  • Illumination planning for measuring per-pixel surface roughness 査読有り

    Arieda K., Okabe T.

    Proceedings of MVA 2021 - 17th International Conference on Machine Vision Applications   2021年07月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Measuring per-pixel surface roughness is useful for machine vision applications such as visual inspection. The surface roughness can be recovered from specular reflection components, but a large number of images taken under different lighting and/or viewing directions is required in general so that sufficient specular reflection components are observed at each pixel. In this paper, we propose a robust and efficient method for per-pixel estimation of surface roughness. Specifically, we propose an illumination planning based on noise propagation analysis; it achieves the surface roughness estimation from a small number of images taken under the optimal set of light sources. Through the experiments using both synthetic and real images, we experimentally show the effectiveness of our proposed method and our setup with a programmable illumination and a polarization camera.

    DOI: 10.23919/MVA51890.2021.9511392

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85113980033&origin=inward

  • Fast Separation of Specular, Diffuse, and Global Components via Polarized Pattern Projection 査読有り

    Nisaka Y., Matsuoka R., Amano T., Okabe T.

    Communications in Computer and Information Science   1405   294 - 308   2021年01月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    In this paper, we propose a method for fast separation of specular, diffuse, and global components of a dynamic scene by using a projector-camera system. Both the direct-global separation using spatially high-frequency patterns and the specular-diffuse separation based on polarization have been studied, but a straightforward combination of those methods has limited temporal resolution. Accordingly, our proposed method rapidly changes not only the spatial patterns but also the polarization states of illumination by using a self-build polarization projector, and captures their effects on a scene by using a high-speed camera. Our method is easy-to-implement, because it does not require projector-camera temporal synchronization and it automatically calibrates the correspondence between the projection pattern and camera pixel. In addition, our method is robust due to the optimized and quickly-shifted projection pattern and the weights for incorporating spatial correlation. We implemented the prototype setup and achieved fast separation with 60 fps.

    DOI: 10.1007/978-3-030-81638-4_24

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85112707889&origin=inward

  • Hierarchical Gaussian Descriptors with Application to Person Re-Identification 査読有り

    Matsukawa T., Okabe T., Suzuki E., Sato Y.

    IEEE Transactions on Pattern Analysis and Machine Intelligence   42 ( 9 )   2179 - 2194   2020年09月

    Language: English   Type: Research paper (academic journal)

    Describing the color and textural information of a person image is one of the most crucial aspects of person re-identification (re-id). Although a covariance descriptor has been successfully applied to person re-id, it loses the local structure of a region and mean information of pixel features, both of which tend to be the major discriminative information for person re-id. In this paper, we present novel meta-descriptors based on a hierarchical Gaussian distribution of pixel features, in which both mean and covariance information are included in patch and region level descriptions. More specifically, the region is modeled as a set of multiple Gaussian distributions, each of which represents the appearance of a local patch. The characteristics of the set of Gaussian distributions are again described by another Gaussian distribution. Because the space of Gaussian distribution is not a linear space, we embed the parameters of the distribution into a point of Symmetric Positive Definite (SPD) matrix manifold in both steps. We show, for the first time, that normalizing the scale of the SPD matrix enhances the hierarchical feature representation on this manifold. Additionally, we develop feature norm normalization methods with the ability to alleviate the biased trends that exist on the SPD matrix descriptors. The experimental results conducted on five public datasets indicate the effectiveness of the proposed descriptors and the two types of normalizations.

    DOI: 10.1109/TPAMI.2019.2914686

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85089203367&origin=inward
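
    A minimal sketch of the single-level Gaussian-to-SPD embedding that underlies the descriptor described above; the choice of pixel features is left to the caller, and the two-level (patch/region) hierarchy and the proposed normalizations are omitted.

```python
import numpy as np
from scipy.linalg import logm

def gaussian_to_spd(features, eps=1e-6):
    """Embed the Gaussian of a set of pixel features into an SPD matrix.

    features : (n, d) pixel feature vectors extracted from one patch/region
    Returns the (d+1, d+1) symmetric matrix logarithm of the embedded
    Gaussian, which can be vectorized for a linear classifier/metric learner.
    """
    mu = features.mean(axis=0)
    sigma = np.cov(features, rowvar=False) + eps * np.eye(features.shape[1])

    # Embed N(mu, sigma) as the SPD matrix [[sigma + mu mu^T, mu], [mu^T, 1]].
    d = mu.shape[0]
    spd = np.empty((d + 1, d + 1))
    spd[:d, :d] = sigma + np.outer(mu, mu)
    spd[:d, d] = mu
    spd[d, :d] = mu
    spd[d, d] = 1.0

    # Map to the log-Euclidean tangent space so Euclidean tools apply.
    return np.real(logm(spd))
```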

  • Inverse Lighting from Cast Shadows under Unknown Radiometric Response Function 査読有り

    Takuto Nakashima, Ryo Matsuoka, Takahiro Okabe

    Proc. the 26th International Workshop on Frontiers of Computer Vision (IW-FCV2020)   2020年02月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

  • Image-based material editing for making reflective objects fluorescent 査読有り

    Hidaka D., Okabe T.

    VISIGRAPP 2020 - Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications   1   355 - 360   2020年02月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Copyright © 2020 by SCITEPRESS - Science and Technology Publications, Lda. All rights reserved. Fluorescent materials give us a unique sense of quality such as self-luminous ones, because they absorb light with certain wavelengths and then emit light with longer wavelengths. The existing methods for image-based material editing make objects in an image specular, translucent, and transparent, but they do not address fluorescent materials. In this paper, we propose a method for making reflective objects in a single input image fluorescent by adding photorealistic fluorescent components to the objects of interest. Specifically, we show that photometrically consistent fluorescent components can approximately be represented by using the 3-band (RGB) spectral irradiance on the surface of a reflective object, and then compute the fluorescent components on the basis of intrinsic image decomposition without explicitly estimating the object's shape and the light sources illuminating it from the input image. We conducted a number of experiments using both synthetic and real images, and confirmed that our proposed method is effective for making reflective objects fluorescent.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85083579365&origin=inward

  • NMF vs. ICA for light source separation under AC illumination 査読有り

    Oya R., Matsuoka R., Okabe T.

    VISIGRAPP 2020 - Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications   4   460 - 465   2020年02月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Copyright © 2020 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved Artificial light sources powered by an electric grid change their intensities in response to the grid’s alternating current (AC). Their flickers are usually too fast to notice with our naked eyes, but can be captured by using cameras with short exposure time settings. In this paper, we propose a method for light source separation under AC illumination on the basis of Blind Source Separation (BSS). Specifically, we show that light source separation results in matrix factorization, since the input images of a scene illuminated by multiple AC light sources are represented by the linear combinations of the basis images, each of which is the image of the scene illuminated by only one of the light sources, with the coefficients, each of which is the intensity of the light source. Then, we make use of Non-negative Matrix Factorization (NMF), because both the pixel values of the basis images and the intensities of the light sources are non-negative. We experimentally confirmed that our method using NMF works better than Independent Component Analysis (ICA), and studied the performance of our method under various conditions: varying exposure times and noise levels.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85083557252&origin=inward
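
    A minimal scikit-learn sketch of the matrix-factorization view described in the abstract above; the frame layout, the number of sources, and the NMF settings are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

def separate_ac_light_sources(frames, n_sources):
    """Factor short-exposure frames of an AC-lit scene into per-source basis
    images and per-frame source intensities.

    frames    : (n_frames, height, width) non-negative grayscale frames
    n_sources : number of AC light sources assumed in the scene
    """
    n_frames, h, w = frames.shape
    V = frames.reshape(n_frames, h * w)           # each row is one frame

    # V ~ W @ H with W >= 0 (per-frame source intensities) and
    # H >= 0 (one basis image per light source, flattened).
    model = NMF(n_components=n_sources, init="nndsvda", max_iter=500)
    W = model.fit_transform(V)                    # (n_frames, n_sources)
    H = model.components_                         # (n_sources, h * w)

    basis_images = H.reshape(n_sources, h, w)
    return basis_images, W
```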

  • コンピュテーショナルイルミネーション: 多色LEDドームを用いた研究事例 [Computational illumination: case studies using a multi-color LED dome]

    岡部孝弘

    光アライアンス (Hikari Alliance)   30 ( 10 )   48 - 53   October 2019

    Role: Lead author   Language: Japanese   Type: Article, review, or commentary (trade magazine, newspaper, or web media)

  • Reflective and fluorescent separation under narrow-band illumination 査読有り

    Koyamatsu K., Hidaka D., Okabe T., Lensch H.

    Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition   2019-June   7569 - 7577   2019年06月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    © 2019 IEEE. In this paper, we address the separation of reflective and fluorescent components in RGB images taken under narrow-band light sources such as LEDs. First, we show that the fluorescent color per pixel can be estimated from at least two images under different light source colors, because the observed color at a surface point is represented by a convex combination of the light source color and the illumination-invariant fluorescent color. Second, we propose a method for robustly estimating the fluorescent color via MAP estimation by taking the prior knowledge with respect to fluorescent colors into consideration. We conducted a number of experiments by using both synthetic and real images, and confirmed that our proposed method works better than the closely related state-of-the-art method and enables us to separate reflective and fluorescent components even from a single image. Furthermore, we demonstrate that our method is effective for applications such as image-based material editing and relighting.

    DOI: 10.1109/CVPR.2019.00776

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85078751947&origin=inward
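
    A sketch of the geometric core stated in the abstract above: under each light source, the observed chromaticity lies on the line through the light-source chromaticity and the illumination-invariant fluorescent chromaticity, so two lights determine the fluorescent chromaticity as a line intersection. The paper's MAP prior is omitted and the function names are assumptions.

```python
import numpy as np

def fluorescent_chromaticity(c1, l1, c2, l2):
    """Per-pixel fluorescent chromaticity from observations of the same
    pixel under two light source colors, via a chromaticity line intersection.

    c1, c2 : (2,) observed chromaticities under lights 1 and 2
    l1, l2 : (2,) chromaticities of light sources 1 and 2
    """
    d1, d2 = c1 - l1, c2 - l2          # directions of the two lines
    A = np.column_stack([d1, -d2])     # solve l1 + t1*d1 = l2 + t2*d2
    t, *_ = np.linalg.lstsq(A, l2 - l1, rcond=None)
    return l1 + t[0] * d1              # intersection = fluorescent chromaticity
```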

  • Multispectral direct-global separation of dynamic scenes 査読有り

    Torii M., Okabe T., Amano T.

    Proceedings - 2019 IEEE Winter Conference on Applications of Computer Vision, WACV 2019   1923 - 1931   2019年01月

    Language: English   Type: Research paper (international conference proceedings)

    © 2019 IEEE In this paper, we propose a method for separating direct and global components of a dynamic scene per illumination color by using a projector-camera system; it exploits both the color switch and the temporal dithering of a DLP projector. Our proposed method is easy-to-implement because it does not require any self-built equipment and temporal synchronization between a projector and a camera. In addition, our method automatically calibrates the projector-camera correspondence in a dynamic scene on the basis of the consistency in pixel intensities, and optimizes the projection pattern on the basis of noise propagation analysis. We implemented the prototype setup and achieved multispectral direct-global separation of dynamic scenes in 60 Hz. Furthermore, we demonstrated that our method is effective for applications such as image-based material editing and multispectral relighting of dynamic scenes where wavelength-dependent phenomena such as fluorescence are observed.

    DOI: 10.1109/WACV.2019.00209

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85063569945&origin=inward

  • Coded illumination and imaging for fluorescence based classification 査読有り

    Asano Y., Meguro M., Wang C., Lam A., Zheng Y., Okabe T., Sato I.

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   11212 LNCS   511 - 526   2018年09月

    Language: English   Type: Research paper (international conference proceedings)

    © Springer Nature Switzerland AG 2018. The quick detection of specific substances in objects such as produce items via non-destructive visual cues is vital to ensuring the quality and safety of consumer products. At the same time, it is well-known that the fluorescence excitation-emission characteristics of many organic objects can serve as a kind of “fingerprint” for detecting the presence of specific substances in classification tasks such as determining if something is safe to consume. However, conventional capture of the fluorescence excitation-emission matrix can take on the order of minutes and can only be done for point measurements. In this paper, we propose a coded illumination approach whereby light spectra are learned such that key visual fluorescent features can be easily seen for material classification. We show that under a single coded illuminant, we can capture one RGB image and perform pixel-level classifications of materials at high accuracy. This is demonstrated through effective classification of different types of honey and alcohol using real images.

    DOI: 10.1007/978-3-030-01237-3_31

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85055424143&origin=inward

  • Image-Based Relighting with 5-D Incident Light Fields 査読有り

    Oya S., Okabe T.

    Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017   2018-January   3031 - 3038   2017年10月

    Language: English   Type: Research paper (international conference proceedings)

    © 2017 IEEE. In this paper, we propose a method for image-based relighting with 5-D incident light fields: 4 DoF of the position and direction and 1 DoF of the color of an incident ray. Specifically, we illuminate a scene with various rays by using a two-layer 5 DoF lighting system consisting of a rear-projection display and a transmissive LC panel, and synthesize images under desired 5-D incident light fields by combining the images captured under those rays. Our proposed method efficiently acquires the required images by using coded illumination; it reduces the number of captured images and the measurement time, and enhances their SNRs. In addition, we propose a method for removing the effects of the black offsets due to the projector and the LC panel in the two-layer setup. The experimental results using the prototype system show that our method enables us to synthesize photo-realistic images of scenes where wavelength-dependent phenomena such as fluorescence are observed.

    DOI: 10.1109/ICCVW.2017.358

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85046281183&origin=inward

  • Joint optimization of coded illumination and grayscale conversion for one-shot raw material classification 査読有り

    Wang C., Okabe T.

    British Machine Vision Conference 2017, BMVC 2017   2017年09月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    © 2017. The copyright of this document resides with its authors. Classifying materials and their surface states is important for machine vision applications such as visual inspection. In this paper, we propose an approach to one-shot per-pixel classification of raw materials on the basis of spectral BRDFs; a surface of interest is illuminated by multispectral and multidirectional light sources at the same time. Specifically, we achieve two-class classification from a single color image; it directly finds the linear discriminant hyperplane with the maximal margin in the spectral BRDF feature space by jointly optimizing the non-negative coded illumination and the grayscale conversion. In addition, we extend our method to multiclass classification by exploiting the degree of freedom of the grayscale conversion. The experiments using an LED-based multispectral dome show that the performance of our proposed method with only a single image is better than or comparable to the state-of-the-art methods with multiple images.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85084016056&origin=inward

  • Diffuse-specular separation of multi-view images under varying illumination 査読有り

    Takechi K., Okabe T.

    Proceedings - International Conference on Image Processing, ICIP   2017-September   2632 - 2636   2017年09月

    Language: English   Type: Research paper (international conference proceedings)

    © 2017 IEEE. Separating diffuse and specular reflection components is important for preprocessing of various computer vision techniques such as photometric stereo. In this paper, we address diffuse-specular separation for photometric stereo based on light fields. Specifically, we reveal the low-rank structure of the multi-view images under varying light source directions, and then formulate the diffuse-specular separation as a low-rank approximation of the 3rd order tensor. Through a number of experiments using real images, we show that our proposed method, which integrates the complement clues based on varying light source directions and varying viewing directions, works better than existing techniques.

    DOI: 10.1109/ICIP.2017.8296759

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85045327495&origin=inward
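
    A deliberately simplified illustration of the low-rank modelling idea mentioned above, using a plain matrix SVD on vectorized images instead of the paper's third-order tensor formulation; the rank and the non-negativity clipping are assumptions.

```python
import numpy as np

def lowrank_diffuse_estimate(images, rank=3):
    """Crude low-rank / residual split of images taken under varying lighting
    and viewing conditions, illustrating the low-rank diffuse model.

    images : (n_conditions, n_pixels) rows are vectorized images
    rank   : target rank of the diffuse (low-rank) part
    """
    U, s, Vt = np.linalg.svd(images, full_matrices=False)
    diffuse = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
    specular = np.clip(images - diffuse, 0.0, None)  # non-negative residual
    return diffuse, specular
```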

  • Auto-radiometric calibration in photometric stereo 査読有り

    Mongkulmann W., Okabe T., Sato Y.

    IEICE Transactions on Information and Systems   E99D ( 12 )   3154 - 3164   2016年12月

    Language: English   Type: Research paper (academic journal)

    © Copyright 2016 The Institute of Electronics, Information and Communication Engineers.We propose a framework to perform auto-radiometric calibration in photometric stereo methods to estimate surface orientations of an object from a sequence of images taken using a radiometrically uncalibrated camera under varying illumination conditions. Our proposed framework allows the simultaneous estimation of surface normals and radiometric responses, and as a result can avoid cumbersome and time-consuming radiometric calibration. The key idea of our framework is to use the consistency between the irradiance values converted from pixel values by using the inverse response function and those computed from the surface normals. Consequently, a linear optimization problem is formulated to estimate the surface normals and the response function simultaneously. Finally, experiments on both synthetic and real images demonstrate that our framework enables photometric stereo methods to accurately estimate surface normals even when the images are captured using cameras with unknown and nonlinear response functions.

    DOI: 10.1587/transinf.2016EDP7150

    Scopus

    CiNii Article

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84999293111&origin=inward
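
    For context, a minimal sketch of the standard Lambertian photometric stereo step that the framework above builds on, assuming a radiometrically calibrated (linear) camera; it is not the paper's auto-radiometric calibration itself, and the variable names are assumptions.

```python
import numpy as np

def lambertian_photometric_stereo(intensities, light_dirs):
    """Recover surface normals and albedo by least squares, assuming a
    Lambertian surface and a linear camera response.

    intensities : (n_lights, n_pixels) pixel intensities under each light
    light_dirs  : (n_lights, 3) unit light source directions
    """
    # I = L @ (albedo * normal); solve for the scaled normals in one shot.
    scaled_normals, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(scaled_normals, axis=0)
    normals = scaled_normals / np.maximum(albedo, 1e-8)
    return normals.T, albedo   # (n_pixels, 3) unit normals, per-pixel albedo
```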

  • Separating reflection components in images under multispectral and multidirectional light sources 査読有り

    Kobayashi N., Okabe T.

    Proceedings - International Conference on Pattern Recognition   3210 - 3215   2016年12月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    © 2016 IEEE.The appearance of an object depends on the color as well as the direction of a light source illuminating the object. The progress of LEDs enables us to capture the images of an object under multispectral and multidirectional light sources. Separating diffuse and specular reflection components in those images is important for preprocessing of various computer vision techniques such as photometric stereo, material editing, and relighting. In this paper, we propose a robust method for separating reflection components in a set of images of an object taken under multispectral and multidirectional light sources. We consider the set of images as the 3D data whose axes are the pixel, the light source color, and the light source direction, and then show the inherent structures of the 3D data: the rank 2 structure derived from the dichromatic reflection model, the rank 3 structure derived from the Lambert model, and the sparseness of specular reflection components. Based on those structures, our proposed method separates diffuse and specular reflection components by combining sparse NMF and SVD with missing data. We conducted a number of experiments by using both synthetic and real images, and show that our method works better than some of the state-of-the-art techniques.

    DOI: 10.1109/ICPR.2016.7900129

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85019151303&origin=inward

  • Acquiring multispectral light transport using multi-primary DLP projector 査読有り

    Maeda K., Okabe T.

    2016 6th International Conference on Image Processing Theory, Tools and Applications, IPTA 2016   2016年12月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    © 2016 IEEE.Acquiring the light transport (LT) of a scene is important for various applications such as radiometric analysis, image-based relighting, and controlling appearance of the scene. The multispectral LT, i.e. the LT in multiple primary colors enables us not only to enhance the color gamut but also to investigate wavelength-dependent interactions between light and a scene. In this paper, we propose a method for acquiring the multispectral LT by using a single off-the-shelf multi-primary DLP (Digital Light Processing) projector; it does not require any self-built equipment, geometric registration, and temporal synchronization. Specifically, based on the rapid color switch due to a rotating color wheel in the projector, we present a method for estimating the spectral properties of the projector in a non-destructive manner, and a method for acquiring the images of a scene illuminated only by one of the primary colors. We conducted a number of experiments by using real images, and confirmed that our method works well and the acquired multispectral LT is effective for radiometric analysis and image-based relighting.

    DOI: 10.1109/IPTA.2016.7820966

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85013141129&origin=inward

  • Acquiring 4D light fields of self-luminous extended light sources using programmable filter 査読有り

    Nakamura M., Oya S., Okabe T., Lensch H.

    IEICE Transactions on Information and Systems   E99D ( 9 )   2360 - 2367   2016年09月

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)

    Copyright © 2016 The Institute of Electronics, Information and Communication Engineers.Self-luminous light sources in the real world often have nonnegligible sizes and radiate light inhomogeneously. Acquiring the model of such a light source is highly important for accurate image synthesis and understanding. In this paper, we propose an approach to measuring 4D light fields of self-luminous extended light sources by using a liquid crystal (LC) panel, i.e. a programmable optical filter and a diffuse-reflection board. The proposed approach recovers the 4D light field from the images of the board illuminated by the light radiated from a light source and passing through the LC panel. We make use of the feature that the transmittance of the LC panel can be controlled both spatially and temporally. The approach enables multiplexed sensing and adaptive sensing, and therefore is able to acquire 4D light fields more efficiently and densely than the straightforward method. We implemented the prototype setup, and confirmed through a number of experiments that our approach is effective for modeling self-luminous extended light sources in the real world.

    DOI: 10.1587/transinf.2016EDP7068

    Scopus

    CiNii Article

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84984921032&origin=inward

  • Reflectance and Fluorescence Spectral Recovery via Actively Lit RGB Images 査読有り

    Fu Y., Lam A., Sato I., Okabe T., Sato Y.

    IEEE Transactions on Pattern Analysis and Machine Intelligence   38 ( 7 )   1313 - 1326   2016年07月

    Language: English   Type: Research paper (academic journal)

    © 2015 IEEE.In recent years, fluorescence analysis of scenes has received attention in computer vision. Fluorescence can provide additional information about scenes, and has been used in applications such as camera spectral sensitivity estimation, 3D reconstruction, and color relighting. In particular, hyperspectral images of reflective-fluorescent scenes provide a rich amount of data. However, due to the complex nature of fluorescence, hyperspectral imaging methods rely on specialized equipment such as hyperspectral cameras and specialized illuminants. In this paper, we propose a more practical approach to hyperspectral imaging of reflective-fluorescent scenes using only a conventional RGB camera and varied colored illuminants. The key idea of our approach is to exploit a unique property of fluorescence: the chromaticity of fluorescent emissions are invariant under different illuminants. This allows us to robustly estimate spectral reflectance and fluorescent emission chromaticity. We then show that given the spectral reflectance and fluorescent chromaticity, the fluorescence absorption and emission spectra can also be estimated. We demonstrate in results that all scene spectra can be accurately estimated from RGB images. Finally, we show that our method can be used to accurately relight scenes under novel lighting.

    DOI: 10.1109/TPAMI.2015.2439270

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84976407662&origin=inward

  • Separating Reflective and Fluorescent Components Using High Frequency Illumination in the Spectral Domain 査読有り

    Fu Y., Lam A., Sato I., Okabe T., Sato Y.

    IEEE Transactions on Pattern Analysis and Machine Intelligence   38 ( 5 )   965 - 978   2016年05月

    Language: English   Type: Research paper (academic journal)

    © 1979-2012 IEEE.Hyperspectral imaging is beneficial to many applications but most traditional methods do not consider fluorescent effects which are present in everyday items ranging from paper to even our food. Furthermore, everyday fluorescent items exhibit a mix of reflection and fluorescence so proper separation of these components is necessary for analyzing them. In recent years, effective imaging methods have been proposed but most require capturing the scene under multiple illuminants. In this paper, we demonstrate efficient separation and recovery of reflectance and fluorescence emission spectra through the use of two high frequency illuminations in the spectral domain. With the obtained fluorescence emission spectra from our high frequency illuminants, we then describe how to estimate the fluorescence absorption spectrum of a material given its emission spectrum. In addition, we provide an in depth analysis of our method and also show that filters can be used in conjunction with standard light sources to generate the required high frequency illuminants. We also test our method under ambient light and demonstrate an application of our method to synthetic relighting of real scenes.

    DOI: 10.1109/TPAMI.2015.2473839

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84963805175&origin=inward

  • Hierarchical Gaussian descriptor for person re-identification 査読有り

    Matsukawa T., Okabe T., Suzuki E., Sato Y.

    Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition   2016-January   1363 - 1372   2016年01月

    Language: English   Type: Research paper (international conference proceedings)

    Describing the color and textural information of a person image is one of the most crucial aspects of person reidentification. In this paper, we present a novel descriptor based on a hierarchical distribution of pixel features. A hierarchical covariance descriptor has been successfully applied for image classification. However, the mean information of pixel features, which is absent in covariance, tends to be major discriminative information of person images. To solve this problem, we describe a local region in an image via hierarchical Gaussian distribution in which both means and covariances are included in their parameters. More specifically, we model the region as a set of multiple Gaussian distributions in which each Gaussian represents the appearance of a local patch. The characteristics of the set of Gaussians are again described by another Gaussian distribution. In both steps, unlike the hierarchical covariance descriptor, the proposed descriptor can model both the mean and the covariance information of pixel features properly. The results of experiments conducted on five databases indicate that the proposed descriptor exhibits remarkably high performance which outperforms the state-ofthe-art descriptors for person re-identification.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84986331442&origin=inward

  • Gaze Estimation From Eye Appearance: A Head Pose-Free Method via Eye Image Synthesis 査読有り

    Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato

    IEEE Transactions on Image Processing   2015年11月

    Language: English   Type: Research paper (academic journal)

    DOI: 10.1109/TIP.2015.2445295

    Scopus

  • Is Spectral Lighting Environment Necessary for Photorealistic Rendering? 査読有り

    Megumi Miura, Takahiro Okabe, Imari Sato

    Proc. the 14th ACM SIGGRAPH International Conference on Virtual Reality Continuum and Its Applications in Industry (VRCAI2015)   2015年10月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

  • From Intensity Profile to Surface Normal: Photometric Stereo for Unknown Light Sources and Isotropic Reflectances 査読有り

    Feng Lu, Yasuyuki Matsushita, Imari Sato, Takahiro Okabe, Yoichi Sato

    IEEE Transactions on Pattern Analysis and Machine Intelligence   2015年10月

    Language: English   Type: Research paper (academic journal)

    DOI: 10.1109/TPAMI.2015.2389841

    Scopus

  • Does inverse lighting work well under unknown response function?   Refereed

    Ohta S., Okabe T.

    VISAPP 2015 - 10th International Conference on Computer Vision Theory and Applications; VISIGRAPP, Proceedings   1   652 - 657   2015年01月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Copyright © 2015 SCITEPRESS - Science and Technology Publications All rights reserved.Inverse lighting is a technique for recovering the lighting environment of a scene from a single image of an object. Conventionally, inverse lighting assumes that a pixel value is proportional to radiance value, i.e. the response function of a camera is linear. Unfortunately, however, consumer cameras usually have unknown and nonlinear response functions, and therefore conventional inverse lighting does not work well for images taken by those cameras. In this study, we propose a method for simultaneously recovering the lighting environment of a scene and the response function of a camera from a single image. Through a number of experiments using synthetic images, we demonstrate that the performance of our proposed method depends on the lighting distribution, response function, and surface albedo, and address under what conditions the simultaneous recovery of the lighting environment and response function works well.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84939544780&origin=inward

  • Simultaneous estimation of spectral reflectance and normal from a small number of images 査読有り

    Kitahara M., Okabe T., Fuchs C., Lensch H.

    VISAPP 2015 - 10th International Conference on Computer Vision Theory and Applications; VISIGRAPP, Proceedings   1   303 - 313   2015年01月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    Copyright © 2015 SCITEPRESS - Science and Technology Publications All rights reserved.Spectral reflectance is inherent characteristics of an object surface and therefore useful not only for computer vision tasks such as material classification but also compute graphics applications such as relighting. In this study, by integrating multispectral imaging and photometric stereo, we propose a method for simultaneously estimating the spectral reflectance and normal per pixel from a small number of images taken under multispectral and multidirectional light sources. In addition, taking attached shadows observed on curved surfaces into consideration, we derive the minimum number of images required for the simultaneous estimation and propose a method for selecting the optimal set of light sources. Through a number of experiments using real images, we show that our proposed method can estimate spectral reflectances without the ambiguity of per-pixel scales due to unknown normals, and that, when the optimal set of light sources is used, our method performs as well as the straightforward method using a large number of images. Moreover, we demonstrated that estimating both the spectral reflectances and normals is useful for relighting under novel illumination conditions.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84939519364&origin=inward

  • Multiframe super-resolution for flickering objects 査読有り

    Fukushima A., Okabe T.

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   9257   184 - 194   2015年01月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    © Springer International Publishing Switzerland 2015.In this paper, we propose a MAP-based multiframe superresolution method for flickering objects such as LED electronic message boards. Since LED message boards often flicker at low refresh rates, missing areas where LEDs are off during the exposure time of a camera by chance are observed. To suppress unexpected artifacts due to those missing areas, our proposed method detects outlier pixels on the basis of the spatio-temporal analysis of pixel values, and removes them from the MAP estimation by incorporating the weights of pixels into the likelihood term. We conducted a number of experiments using both real and synthetic images, and qualitatively and quantitatively confirmed that our method works better than the existing methods.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84945962282&origin=inward

  • Fast Spectral Reflectance Recovery Using DLP Projector 査読有り

    Shuai Han, Imari Sato, Takahiro Okabe, Yoichi Sato

    International Journal of Computer Vision   2014年11月

    Language: English   Type: Research paper (academic journal)

    DOI: 10.1007/s11263-013-0687-z

    Scopus

  • Acquiring 4D Light Fields of Self-Luminous Light Sources Using Programmable Filter 査読有り

    Motohiro Nakamura, Takahiro Okabe, Hendrik P. A. Lensch

    Proc. the 1st Workshop on Light Field for Computer Vision (LF4CV2014) in conjunction with ECCV2014, LNCS 8926   2014年09月

    Role: Corresponding author   Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1007/978-3-319-16181-5_45

    Scopus

  • Person Re-identification via Discriminative Accumulation of Local Features 査読有り

    Tetsu Matsukawa, Takahiro Okabe, Yoichi Sato

    Proc. the 22nd IAPR International Conference on Pattern Recognition (ICPR2014)   2014年08月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/ICPR.2014.681

    Scopus

  • Reflectance and Fluorescent Spectra Recovery Based on Fluorescent Chromaticity Invariance under Varying Illumination 査読有り

    Ying Fu, Antony Lam, Yasuyuki Kobashi, Imari Sato, Takahiro Okabe, Yoichi Sato

    Proc. the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR2014)   2014年06月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/CVPR.2014.278

    Scopus

  • Learning Gaze Biases with Head Motion for Head Pose-Free Gaze Estimation 査読有り

    Feng Lu, Takahiro Okabe, Yusuke Sugano, Yoichi Sato

    Image and Vision Computing   2014年03月

    Language: English   Type: Research paper (academic journal)

    DOI: 10.1016/j.imavis.2014.01.005

    Scopus

  • Adaptive Linear Regression for Appearance-Based Gaze Estimation 査読有り

    Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato

    IEEE Transactions on Pattern Analysis and Machine Intelligence   2014年03月

    Language: English   Type: Research paper (academic journal)

    DOI: 10.1109/TPAMI.2014.2313123

    Scopus

  • Separating Reflective and Fluorescent Components Using High Frequency Illumination in the Spectral Domain 査読有り

    Ying Fu, Antony Lam, Imari Sato, Takahiro Okabe, Yoichi Sato

    Proc. the 14th IEEE International Conference on Computer Vision (ICCV2013)   2013年12月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/ICCV.2013.63

    Scopus

  • Head Direction Estimation from Low Resolution Images with Scene Adaptation 査読有り

    Isarun Chamveha, Yusuke Sugano, Daisuke Sugimura, Teera Siriteerakul, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto

    Computer Vision and Image Understanding   2013年10月

    Language: English   Type: Research paper (academic journal)

    DOI: 10.1016/j.cviu.2013.06.005

    Scopus

  • Spectral Imaging Using Basis Lights 査読有り

    Antony Lam, Art Subpa-Asa, Imari Sato, Takahiro Okabe, Yoichi Sato

    Proc. the 24th British Machine Vision Conference (BMVC2013)   2013年09月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.5244/C.27.41

    Scopus

  • Efficient Modeling of Object's BRDF with Planned Sampling 査読有り

    Asad Ali, Imari Sato, Takahiro Okabe, Yoichi Sato

    IPSJ Transactions on Computer Vision and Applications   2013年07月

    Language: English   Type: Research paper (academic journal)

    DOI: 10.2197/ipsjtcva.5.114

    Scopus

    CiNii Article

  • Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances 査読有り

    Feng Lu, Yasuyuki Matsushita, Imari Sato, Takahiro Okabe, Yoichi Sato

    Proc. the 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR2013)   2013年06月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/CVPR.2013.196

    Scopus

  • Converting Near Infrared Facial Images to Visible Light Images using Skin Pigment Model 査読有り

    Kimshing Goh, Tetsu Matsukawa, Takahiro Okabe, Yoichi Sato

    Proc. the 13th IAPR International Conference on Machine Vision Applications (MVA2013)   2013年05月

    Language: English   Type: Research paper (international conference proceedings)

  • Head pose-free appearance-based gaze sensing via eye image synthesis 査読有り

    Lu F., Sugano Y., Okabe T., Sato Y.

    Proceedings - International Conference on Pattern Recognition   1008 - 1011   2012年12月

    Language: English   Type: Research paper (international conference proceedings)

    This paper addresses the problem of estimating human gaze from eye appearance under free head motion. Allowing head motion remains challenging because eye appearance changes significantly for different head poses, and thus new head poses require new training images. To avoid repetitive training, we propose to produce synthetic training images for varying head poses. First, we model pixel displacements between head-moving eye images as 1D pixel flows, and then produce such flows to synthesize new training images from the original training images captured under a fixed default head pose. Specifically, we produce all the required 1D flows by using only four additionally captured images. Our method was successfully tested with extensive experiments to demonstrate its effectiveness. © 2012 ICPR Org Committee.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84874568334&origin=inward

  • Illumination normalization of face images with cast shadows 査読有り

    Matsukawa T., Okabe T., Sato Y.

    Proceedings - International Conference on Pattern Recognition   1848 - 1851   2012年12月

    Language: English   Type: Research paper (international conference proceedings)

    We propose a method for extracting and combining small-scale and large-scale illumination insensitive features for face recognition that can work even in the presence of cast shadows. Although several methods have been proposed to extract such features, they are not designed to handle severe lighting variation on a face and thus fail to work if cast shadows are present. In this paper, we extend quotient image-based illumination normalization by explicitly taking cast shadows into account so that illumination insensitive large-scale features can be obtained. The experimental results show that the proposed method achieves favorable normalization results under difficult illuminations with cast shadows. © 2012 ICPR Org Committee.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84874563451&origin=inward

  • Toward Efficient Acquisition of BRDFs with Fewer Samples 査読有り

    Muhammad Ali, Imari Sato, Takahiro Okabe, Yoichi Sato

    Proc. the 11th Asian Conference on Computer Vision (ACCV2012)   2012年11月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1007/978-3-642-37447-0_5

    Scopus

  • Camera Spectral Sensitivity Estimation from a Single Image under Unknown Illumination by Using Fluorescence 査読有り

    Shuai Han, Yasuyuki Matsushita, Imari Sato, Takahiro Okabe, Yoichi Sato

    Proc. the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR2012)   2012年06月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/CVPR.2012.6247752

    Scopus

  • Bispectral Photometric Stereo Based on Fluorescence 査読有り

    Imari Sato, Takahiro Okabe, Yoichi Sato

    Proc. the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR2012)   2012年06月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/CVPR.2012.6247685

    Scopus

  • Incorporating Visual Field Characteristics into a Saliency Map 査読有り

    Hideyuki Kubota, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki

    Proc. the 7th ACM Symposium on Eye Tracking Research & Applications (ETRA2012)   2012年03月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1145/2168556.2168629

    Scopus

  • Attention Prediction in Egocentric Video Using Motion and Visual Saliency 査読有り

    Kentaro Yamada, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki

    Proc. the 5th Pacific-Rim Symposium on Image and Video Technology (PSIVT2011)   2011年11月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1007/978-3-642-25367-6_25

    Scopus

  • Appearance-Based Head Pose Estimation with Scene-Specific Adaptation 査読有り

    Isarun Chamveha, Yusuke Sugano, Daisuke Sugimura, Teera Siriteerakul, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto

    Proc. the 11th IEEE International Workshop on Visual Surveillance (VS2011)   2011年11月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/ICCVW.2011.6130456

    Scopus

  • Photometric Stereo with Auto-Radiometric Calibration 査読有り

    Wiennat Mongkulman, Takahiro Okabe, Yoichi Sato

    Proc. IEEE Color and Photometry in Computer Vision Workshop (CPCV2011)   2011年11月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/ICCVW.2011.6130328

    Scopus

  • Inferring Human Gaze from Appearance via Linear Regression 査読有り

    Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato

    Proc. the 13th IEEE International Conference on Computer Vision (ICCV2011)   2011年11月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/ICCV.2011.6126237

    Scopus

  • Can saliency map models predict human egocentric visual attention? 査読有り

    Yamada K., Sugano Y., Okabe T., Sato Y., Sugimoto A., Hiraki K.

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   6468 LNCS ( PART1 )   420 - 429   2011年09月

    Language: English   Type: Research paper (international conference proceedings)

    The validity of using conventional saliency map models to predict human attention was investigated for video captured with an egocentric camera. Since conventional visual saliency models do not take into account visual motion caused by camera motion, high visual saliency may be erroneously assigned to regions that are not actually visually salient. To evaluate the validity of using saliency map models for egocentric vision, an experiment was carried out to examine the correlation between visual saliency maps and measured gaze points for egocentric vision. The results show that conventional saliency map models can predict visually salient regions better than chance for egocentric vision and that the accuracy decreases significantly with an increase in visual motion induced by egomotion, which is presumably compensated for in the human visual system. This latter finding indicates that a visual saliency model is needed that can better predict human visual attention from egocentric videos. © 2011 Springer-Verlag Berlin Heidelberg.

    DOI: 10.1007/978-3-642-22822-3_42

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=80053129644&origin=inward

  • 審美的品質識別のための局所領域の組合せによる色彩調和の評価 [Evaluation of color harmony using combinations of local regions for aesthetic quality classification]   Refereed

    西山正志, 岡部孝弘, 佐藤いまり, 佐藤洋一

    IEICE Transactions (Japanese Edition)   August 2011

    Language: Japanese   Type: Research paper (academic journal)

  • A Head Pose-free Approach for Appearance-based Gaze Estimation 査読有り

    Feng Lu, Takahiro Okabe, Yusuke Sugano, Yoichi Sato

    Proc. the 22th British Machine Vision Conference (BMVC2011)   2011年08月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.5244/C.25.126

    Scopus

  • 自己相似性に基づく高時間分解能映像の生成 [Video temporal super-resolution based on self-similarity]   Refereed

    島野美保子, 岡部孝弘, 佐藤いまり, 佐藤洋一

    IEICE Transactions (Japanese Edition)   August 2011

    Language: Japanese   Type: Research paper (academic journal)

  • Aesthetic Quality Classification of Photographs Based on Color Harmony 査読有り

    Masashi Nishiyama, Takahiro Okabe, Imari Sato, Yoichi Sato

    Proc. the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR2011)   2011年06月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/CVPR.2011.5995539

    Scopus

  • Fast Unsupervised Ego-Action Learning for First-Person Sports Videos 査読有り

    Kris M. Kitani, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto

    Proc. the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR2011)   2011年06月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/CVPR.2011.5995406

    Scopus

  • Surface Reconstruction in Photometric Stereo with Calibration Error 査読有り

    Michihiro Kobayashi, Takahiro Okabe, Yasuyuki Matsushita, Yoichi Sato

    Proc. the 1st Joint 3DIM/3DPVT Conference (3DIMPVT2011)   2011年05月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1109/3DIMPVT.2011.13

    Scopus

  • Image enhancement of low-light scenes with near-infrared flash images 査読有り

    Matsui S., Okabe T., Shimano M., Sato Y.

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   5994 LNCS ( PART 1 )   213 - 223   2010年12月

    Language: English   Type: Research paper (international conference proceedings)

    We present a novel technique for enhancing an image captured in low light by using near-infrared flash images. The main idea is to combine a color image with near-infrared flash images captured at the same time without causing any interference with the color image. In this work, near-infrared flash images are effectively used for removing annoying effects that are commonly observed in images of dimly lit environments, namely, image noise and motion blur. Our denoising method uses a pair of color and near-infrared flash images captured simultaneously. Therefore it is applicable to dynamic scenes, whereas existing methods assume stationary scenes and require a pair of flash and no-flash color images captured sequentially. Our deblurring method utilizes a set of near-infrared flash images captured during the exposure time of a single color image and directly acquires a motion blur kernel based on optical flow. We implemented a multispectral imaging system and confirmed the effectiveness of our technique through experiments using real images. © Springer-Verlag 2010.

    DOI: 10.1007/978-3-642-12307-8_20

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=78650466182&origin=inward

  • Image Enhancement of Low-light Scenes with Near-infrared Flash Images 査読有り

    Sosuke Matsui, Takahiro Okabe, Mihoko Shimano, Yoichi Sato

    IPSJ Trans. Computer Vision and Applications   2010年12月

    Language: English   Type: Research paper (academic journal)

    DOI: 10.2197/ipsjtcva.2.215

    Scopus

  • Detecting Forgery From Static-Scene Video Based on Inconsistency in Noise Level Functions 査読有り

    Michihiro Kobayashi, Takahiro Okabe, Yoichi Sato

    IEEE Trans. Information Forensics and Security   2010年12月

    Language: English   Type: Research paper (academic journal)

    DOI: 10.1109/TIFS.2010.2074194

    Scopus

  • Video Temporal Super-Resolution Based on Self-similarity 査読有り

    Mihoko Shimano, Takahiro Okabe, Imari Sato, Yoichi Sato

    Proc. the 10th Asian Conference on Computer Vision (ACCV2010)   2010年11月

    Language: English   Type: Research paper (international conference proceedings)

    DOI: 10.1007/978-3-642-19315-6_8

    Scopus

  • Fast Spectral Reflectance Recovery Using DLP Projector 査読有り

    Shuai Han, Imari Sato, Takahiro Okabe, Yoichi Sato

    Proc. the 10th Asian Conference on Computer Vision (ACCV2010)   2010年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)

    DOI: 10.1007/978-3-642-19315-6_25

    Scopus

  • 陰に基づく符号化による法線推定 査読有り

    岡部孝弘, 佐藤いまり, 佐藤洋一

    電子情報通信学会論文誌   2010年08月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • 歩容特徴と局所的見えを考慮した特徴点軌跡のクラスタリングによる混雑環境下人物追跡 査読有り

    杉村大輔, 木谷クリス真実, 岡部孝弘, 佐藤洋一, 杉本晃宏

    電子情報通信学会論文誌   2010年08月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • 複数の注目領域を用いた写真の主観的品質の識別 査読有り

    西山正志, 岡部孝弘, 佐藤洋一, 佐藤いまり

    電子情報通信学会論文誌   2010年08月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)

  • Recognizing multiple objects based on co-occurrence of categories 査読有り

    Okabe T., Kondo Y., Kitani K., Sato Y.

    Progress in Informatics   ( 7 )   43 - 52   2010年03月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)

    Most previous methods for generic object recognition explicitly or implicitly assume that an image contains objects from a single category, although objects from multiple categories often appear together in an image. In this paper, we present a novel method for object recognition that explicitly deals with objects of multiple categories coexisting in an image. Furthermore, our proposed method aims to recognize objects by taking advantage of a scene's context represented by the co-occurrence relationship between object categories. Specifically, our method estimates the mixture ratios of multiple categories in an image via MAP regression, where the likelihood is computed based on the linear combination model of frequency distributions of local features, and the prior probability is computed from the co-occurrence relation. We conducted a number of experiments using the PASCAL dataset, and obtained the results that lend support to the effectiveness of the proposed method. © 2010 National Institute of Informatics.

    Scopus

    その他リンク: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=77952895152&origin=inward
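
    The entry above models an image's bag-of-local-features histogram as a linear combination of per-category frequency distributions and estimates the mixture ratios via MAP regression. The sketch below illustrates only the likelihood part of that idea with non-negative least squares, leaving out the co-occurrence prior; the function name, the use of SciPy, and the data layout are assumptions, not the paper's implementation.

    import numpy as np
    from scipy.optimize import nnls

    def estimate_mixture_ratios(image_hist, category_hists):
        """Estimate how much each category contributes to an image's visual-word histogram.

        image_hist     : (V,)   visual-word frequency distribution of the test image.
        category_hists : (C, V) per-category frequency distributions from training images.

        Returns a length-C vector of non-negative ratios summing to 1 (likelihood term only;
        the MAP formulation in the paper additionally weights the fit by a co-occurrence prior).
        """
        A = np.asarray(category_hists, dtype=float).T   # (V, C) design matrix
        b = np.asarray(image_hist, dtype=float)
        w, _ = nnls(A, b)                               # non-negative least-squares fit
        s = w.sum()
        return w / s if s > 0 else np.full(len(w), 1.0 / len(w))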


著書

  • ディジタル画像処理 [改訂第二版]

    奥富正敏 編(共著, 範囲:「第16章 光学的解析とシーンの復元」の一部を執筆)

    画像情報教育振興協会  2020年02月  (ISBN:978-4-903474-64-9)

     詳細を見る

    記述言語:日本語

  • ビジュアル情報処理 [改訂新版]

    藤代一成・奥富正敏 編(共著, 範囲:「第8章 第2節 光学的なシーン復元」を執筆)

    画像情報教育振興協会  2017年03月  (ISBN:978-4903474571)

     詳細を見る

    記述言語:日本語

  • ディジタル画像処理 [改訂新版]

    奥富正敏 編(共著, 範囲:「第15章 光学的解析とシーンの復元」の一部を執筆)

    画像情報教育振興協会  2015年03月  (ISBN:978-4903474502)

     詳細を見る

    記述言語:日本語

口頭発表・ポスター発表等

  • I-031 自発光光源の4次元ライトフィールドの適応的計測(I分野:グラフィクス・画像,一般論文)

    中村 基裕, 岡部 孝弘

    情報科学技術フォーラム講演論文集 

     詳細を見る

    開催期間: 2015年08月24日   記述言語:日本語  

    CiNii Article

  • H-013 少数の画像からの物体表面の分光反射率と法線の推定(H分野:画像認識・メディア理解,一般論文)

    北原 雅啓, 岡部 孝弘

    情報科学技術フォーラム講演論文集 

     詳細を見る

    開催期間: 2015年08月24日   記述言語:日本語  

    CiNii Article

  • 辞書学習を用いた圧縮センシングによる多波長・多方向光源下画像の効率的な獲得

    小林 直人, 岡部 孝弘

    情報処理学会研究報告. CVIM, [コンピュータビジョンとイメージメディア] 

     詳細を見る

    開催期間: 2015年05月11日   記述言語:日本語  

    An object's appearance depends on both the wavelength and the direction of the light sources that illuminate it. Recent advances in lighting devices have made it possible to photograph objects under multispectral and multidirectional light sources, but the capture time grows with the number of light sources. This report therefore proposes a method for efficiently acquiring images under multispectral and multidirectional light sources in the framework of sparsity-based compressed sensing. The proposed method restricts the solution space with a learned dictionary in order to improve both the compression ratio and the reconstruction accuracy. In particular, it uses simulated training data, which removes the effort of collecting real training images, and it learns the dictionary efficiently by exploiting the fact that a pixel value is expressed as a sum of products of spectral and geometric terms. (A minimal sketch of the sparse reconstruction step appears after this entry.)

    CiNii Article
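
    As a rough illustration of the reconstruction step in the entry above, the sketch below recovers a pixel's appearance under all light sources from a few multiplexed measurements by sparse coding over a dictionary learned beforehand. Orthogonal matching pursuit is used as one possible sparse solver; the names, shapes, and the omitted dictionary-learning stage (e.g., from simulated data exploiting the spectral-times-geometric structure) are assumptions and are not taken from the paper.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def recover_appearance(y, Phi, D, n_nonzero=8):
        """Recover one pixel's appearance under all L light sources from M << L coded measurements.

        y   : (M,)   measurements under M multiplexed illumination patterns.
        Phi : (M, L) sensing matrix; row m gives the weight of every light source in pattern m.
        D   : (L, K) dictionary learned offline (e.g., from simulated training data).

        Solves y ~= Phi @ D @ a with a sparse code a, then returns x = D @ a.
        """
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
        omp.fit(Phi @ D, y)           # sparse coding in the compressed measurement domain
        return D @ omp.coef_          # full appearance under all L light sources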

  • 多原色DLPプロジェクタを用いたマルチスペクトルライトトランスポートの獲得

    前田 茅乃, 岡部 孝弘

    情報処理学会研究報告. CVIM, [コンピュータビジョンとイメージメディア] 

     詳細を見る

    開催期間: 2015年05月11日   記述言語:日本語  

    Acquiring the light transport, which describes the relationship between the illumination condition and the appearance of a scene, is important for applications such as illumination simulation, control of scene appearance, and optical analysis of scenes. Aiming at better color reproduction through a wider gamut and at the analysis of wavelength-dependent optical phenomena such as refraction, diffraction, and interference, this report proposes a method for acquiring the multispectral light transport with a single N-primary (N > 3) DLP projector. Specifically, after estimating the relationship between the projector's RGB values and the spectral distributions and intensities of the N primaries, the method converts the light transports acquired with various RGB values into the light transports of the N primaries. (A minimal sketch of this conversion step appears after this entry.)

    CiNii Article
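
    The final conversion described in the entry above, from light transports captured under several projector RGB inputs to per-primary light transports, reduces to linear unmixing once the intensity of each primary for every RGB input has been estimated. The sketch below shows that unmixing as ordinary least squares under assumed variable names and shapes; the calibration of the RGB-to-primary response is not shown.

    import numpy as np

    def unmix_primary_transports(T_obs, W):
        """Convert light transports captured under several projector RGB inputs into per-primary transports.

        T_obs : (J, P) row j is the flattened image captured while the projector displays RGB input j.
        W     : (J, N) estimated intensity of each of the N primaries for RGB input j
                (obtained beforehand by calibrating the RGB-to-primary response).

        Each observation is modeled as a linear mix of the N primary transports,
        T_obs ~= W @ T_primary, so the primaries follow from linear least squares.
        """
        T_primary, *_ = np.linalg.lstsq(W, T_obs, rcond=None)  # (N, P)
        return T_primary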

講演

  • Separating reflective and fluorescent components using high frequency illumination in the spectral domain

    JSAP-OSA Joint Symposia, JSAP 2014  2014年01月 

     詳細を見る

    開催期間: 2014年01月01日   発表言語:英語   講演種別:特別講演  

学術関係受賞

  • Best Paper Award

    The 28th International Workshop on Frontiers of Computer Vision (IW-FCV2022)   Online Illumination Planning for Shadow-Robust Photometric Stereo   2022年02月

    Hirochika Tanikawa, Ryo Kawahara, and Takahiro Okabe

     詳細を見る

    受賞国:日本国

  • Best Student Paper Award

    The 26th International Workshop on Frontiers of Computer Vision (IW-FCV2020)   Inverse Lighting from Cast Shadows under Unknown Radiometric Response Function   2020年02月

    Takuto Nakashima, Ryo Matsuoka, Takahiro Okabe

     詳細を見る

    受賞国:日本国

  • MIRU優秀賞

    第21回画像の認識・理解シンポジウム (MIRU2018)   狭帯域光源下画像の反射成分と蛍光成分の分離   2018年08月

    小屋松孝治, 日高大地, 岡部孝弘, Hendrik Lensch

     詳細を見る

    受賞国:日本国

  • Outstanding Reviewer Award

    The 12th European Conference on Computer Vision (ECCV2012)   2012年10月

    Takahiro Okabe

     詳細を見る

    受賞国:イタリア共和国

  • The 50th Anniversary Best Paper Award

    情報処理学会   2010年11月

    Sosuke Matsui, Takahiro Okabe, Mihoko Shimano, Yoichi Sato

     詳細を見る

    受賞国:日本国

  • Sang Uk Lee Award

    The 10th Asian Conference on Computer Vision (ACCV2010)   2010年11月

    Shuai Han, Imari Sato, Takahiro Okabe, Yoichi Sato

     詳細を見る

    受賞国:ニュージーランド

  • 平成21年度 論文賞

    電子情報通信学会   2010年05月

    木谷クリス真実, 岡部孝弘, 佐藤 洋一, 杉本晃宏

     詳細を見る

    受賞国:日本国


科研費獲得実績

  • 多波長・多方向光源による蛍光物体の質感編集

    研究課題番号:16H01676  2016年04月 - 2018年03月   新学術領域研究

  • カメラレスポンス関数未知の単一画像の理解

    研究課題番号:26540088  2014年04月 - 2017年03月   挑戦的萌芽研究

  • 明るさ解析と色解析の統一:任意光源環境における見えの理解・認識・生成に向けて

    研究課題番号:25280057  2013年04月 - 2016年03月   基盤研究(B)

  • 画像の生成と理解のための実世界光源のモデリング

    研究課題番号:24650077  2012年04月 - 2014年03月   挑戦的萌芽研究

  • 未較正光源を用いた物体のモデリングとその画像生成への応用

    研究課題番号:22680015  2010年04月 - 2013年03月   若手研究(A)

  • カテゴリの共起に基づく物体の識別と検出

    研究課題番号:20700153  2008年04月 - 2010年03月   若手研究(B)

  • 近接光源下における物体の見えの解析とその認識・モデリングへの応用

    研究課題番号:18700168  2006年04月 - 2008年03月   若手研究(B)


寄附金・講座

  • コニカミノルタ画像科学奨励賞    2015年03月

海外研究歴

  • 画像に基づく実シーンの幾何学的・光学的特性,および,見えのモデリング

    ウルム大学・テュービンゲン大学  ドイツ連邦共和国  研究期間:  2011年09月 - 2012年09月

担当授業科目(学内)

  • 2022年度   確率・統計

  • 2022年度   情報理論(A)

  • 2022年度   コンピュータビジョンA

  • 2022年度   コンピュテーショナルフォトグラフィ特論AI

  • 2022年度   コンピュテーショナルフォトグラフィ特論MI

  • 2022年度   大学院国際協働演習

  • 2021年度   情報理論(A)

  • 2021年度   コンピュータビジョンA

  • 2021年度   コンピュータビジョン特論Ⅰ

  • 2021年度   コンピュータビジョン特論Ⅱ

  • 2021年度   確率・統計

  • 2020年度   確率・統計

  • 2020年度   情報理論(A)

  • 2020年度   コンピュータビジョンA

  • 2020年度   コンピュータビジョン特論Ⅰ

  • 2020年度   コンピュータビジョン特論Ⅱ

  • 2019年度   確率・統計

  • 2019年度   コンピュータビジョン特論Ⅱ

  • 2019年度   コンピュータビジョンA

  • 2019年度   コンピュータビジョン特論Ⅰ

  • 2018年度   コンピュータビジョン特論Ⅰ

  • 2018年度   確率・統計

  • 2018年度   長期インターンシップ

  • 2018年度   コンピュータビジョンA

  • 2018年度   コンピュータビジョン特論Ⅱ

  • 2017年度   コンピュータビジョン特論Ⅰ

  • 2017年度   確率・統計

  • 2017年度   コンピュータビジョンA

  • 2017年度   コンピュータビジョン特論Ⅱ

  • 2016年度   コンピュータビジョン特論Ⅱ

  • 2016年度   確率・統計

  • 2015年度   コンピュータビジョン特論Ⅱ

  • 2015年度   確率・統計

  • 2014年度   コンピュータビジョン特論Ⅱ

  • 2014年度   確率・統計

  • 2013年度   確率・統計


学会・委員会等活動

  • その他海外学会   The 24th IAPR International Conference on Pattern Recognition (ICPR2018): Reviewer  

    2018年04月 - 2018年05月

  • IEEE   The 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR2018): Reviewer  

    2017年12月 - 2018年02月

  • 電子情報通信学会   第21回画像の認識・理解シンポジウム (MIRU2018): 運営委員長  

    2017年11月 - 2018年08月

  • 電子情報通信学会   和文論文誌(画像の認識・理解特集号): 編集委員  

    2017年10月 - 2018年05月

  • その他海外学会   The 13th International Conference on Computer Vision Theory and Applications (VISAPP2018): Program Committee  

    2017年10月 - 2017年11月

  • その他海外学会   The 13th International Conference on Signal Image Technology and Internet Systems (SITIS2017): Program Committee  

    2017年10月

  • その他海外学会   The 4th Asian Conference on Pattern Recognition (ACPR2017): Reviewer  

    2017年07月 - 2017年08月

  • その他海外学会   The 8th Pacific-Rim Symposium on Image and Video Technology (PSIVT2017): Reviewer  

    2017年07月 - 2017年08月

  • IEEE   The 16th International Conference on Computer Vision (ICCV2017): Reviewer  

    2017年05月 - 2017年07月

  • その他海外学会   The 28th British Machine Vision Conference (BMVC2017): Reviewer  

    2017年05月 - 2017年06月

  • 情報処理学会   情報処理学会コンピュータビジョンとイメージメディア研究会: 運営委員  

    2017年04月 - 2018年03月

  • その他海外学会   The 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI2017): International Program Committee  

    2017年04月 - 2017年05月

  • 電子情報通信学会   第20回画像の認識・理解シンポジウム (MIRU2017): 領域チェア  

    2017年03月 - 2017年05月

  • IEEE   The 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR2017): Reviewer  

    2016年12月 - 2017年02月

  • その他海外学会   The 12th International Conference on Computer Vision Theory and Applications (VISAPP2017): Program Committee  

    2016年10月 - 2016年11月

  • その他海外学会   The 12th International Conference on Signal Image Technology and Internet Systems (SITIS2016): Program Committee  

    2016年10月

  • その他海外学会   The 6th International Conference on Image Processing Theory, Tools and Applications (IPTA2016): Technical Program Committee  

    2016年07月 - 2016年08月

  • その他海外学会   The 13th Asian Conference on Computer Vision (ACCV2016): Reviewer  

    2016年07月 - 2016年08月

  • その他海外学会   The 9th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (SIGGRAPH Asia 2016): Reviewer  

    2016年05月 - 2016年07月

  • その他海外学会   The 23rd IAPR International Conference on Pattern Recognition (ICPR2016): Reviewer  

    2016年04月 - 2016年05月

  • その他海外学会   The 14th European Conference on Computer Vision (ECCV2016): Reviewer  

    2016年03月 - 2016年05月

  • 情報処理学会   Transactions on Computer Vision and Applications MIRU Conference Editorial Board: Technical Program Committee  

    2016年03月 - 2016年04月

  • IEEE   The 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR2016): Reviewer  

    2015年12月 - 2016年02月

  • その他海外学会   The 11th International Conference on Computer Vision Theory and Applications (VISAPP2016): Program Committee  

    2015年10月 - 2015年11月

  • その他海外学会   The 7th Pacific-Rim Symposium on Image and Video Technology (PSIVT2015): Reviewer  

    2015年07月 - 2015年08月

  • その他海外学会   The 3rd International Conference on 3D Vision (3DV2015): Reviewer  

    2015年06月 - 2015年07月

  • IEEE   The 15th International Conference on Computer Vision (ICCV2015): Reviewer  

    2015年05月 - 2015年07月

  • 情報処理学会   Transactions on Computer Vision and Applications MIRU Conference Editorial Board: Technical Program Committee  

    2015年03月 - 2015年04月

  • IEEE   The 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR2015): Reviewer  

    2014年12月 - 2015年02月

  • IEEE   The 11th International Conference on Automatic Face and Gesture Recognition (FG2015): Reviewer  

    2014年11月 - 2014年12月

  • その他海外学会   The 10th International Conference on Computer Vision Theory and Applications (VISAPP2015): Program Committee  

    2014年10月 - 2014年12月

  • その他海外学会   The 2nd International Conference on 3D Vision (3DV2014): Reviewer  

    2014年08月 - 2014年09月

  • その他海外学会   The 12th Asian Conference on Computer Vision (ACCV2014): Reviewer  

    2014年07月 - 2014年08月

  • その他海外学会   The 13th European Conference on Computer Vision (ECCV2014): Reviewer  

    2014年03月 - 2014年05月

  • 情報処理学会   Transactions on Computer Vision and Applications MIRU Conference Editorial Board: Technical Program Committee  

    2014年03月 - 2014年04月

  • IEEE   The 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR2014): Reviewer  

    2013年12月 - 2014年02月

  • その他海外学会   The 9th International Conference on Computer Vision Theory and Applications (VISAPP2014): Program Committee  

    2013年09月 - 2013年11月

  • その他海外学会   The 6th Pacific-Rim Symposium on Image and Video Technology (PSIVT2013): Reviewer  

    2013年07月

  • 電子情報通信学会   電子情報通信学会パターン認識・メディア理解研究会: 専門委員  

    2013年05月 - 現在

  • IEEE   The 14th International Conference on Computer Vision (ICCV2013): Reviewer  

    2013年04月 - 2013年07月

  • 情報処理学会   Transactions on Computer Vision and Applications MIRU Conference Editorial Board: Technical Program Committee  

    2013年03月 - 2013年04月

  • IEEE   The 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR2013): Reviewer  

    2012年12月 - 2013年02月

  • IEEE   The 10th International Conference on Automatic Face and Gesture Recognition (FG2013): Reviewer  

    2012年10月

  • その他海外学会   The 11th Asian Conference on Computer Vision (ACCV2012): Reviewer  

    2012年07月 - 2012年08月

  • その他海外学会   IET Image Processing Conference (IPR2012): Program Committee  

    2012年06月

  • その他海外学会   The 21st IAPR International Conference on Pattern Recognition (ICPR2012): Reviewer  

    2012年04月 - 2012年05月

  • IEEE   The 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2012): Reviewer  

    2012年03月 - 2012年04月

  • 電子情報通信学会   第15回画像の認識・理解シンポジウム (MIRU2012): 査読委員  

    2012年03月 - 2012年04月

  • 電子情報通信学会   和文論文誌(画像の認識・理解特集号): 編集委員  

    2011年10月 - 2012年05月

  • その他海外学会   The 12th European Conference on Computer Vision (ECCV2012): Reviewer  

    2011年03月 - 2012年05月

  • 電子情報通信学会   Transactions on Information and Systems: Associate Editor  

    2010年05月 - 2014年06月

  • 情報処理学会   情報処理学会コンピュータビジョンとイメージメディア研究会: 運営委員  

    2009年04月 - 2013年03月

  • 電子情報通信学会   電子情報通信学会論文誌: 査読委員  

    2005年11月 - 現在


社会貢献活動(講演会・出前講義等)

  • 受賞:平成26年度 電子情報通信学会 情報・システムソサイエティ査読功労賞

    2014年06月

     詳細を見る

    種別:その他