A System Approach to Adaptive Multi-modal Sensor Designs
Recently, a great deal of effort has been put into adaptive and tunable multimodal sensor designs to address the challenging problems of detecting and identifying targets in highly cluttered, dynamic scenes. While these efforts have produced, or will soon produce, novel adaptive multimodal sensors, those sensors still fall short of the expectations of real-world applications.
Historically, the development of such a new sensor system has begun with overall specifications, followed by specifications for the various elements, and then component development, system integration and test. For complex systems, this process is slow, expensive and inflexible because of the large number of requirements, constraints and design options that must be resolved.
Therefore, we propose to demonstrate an iterative system approach to adaptive multimodal sensor design. This approach integrates tools we have developed for the physics-based simulation of complex scenes and targets, and for sensor modeling, with a workflow management system that supports the integration of hardware and software modules. The goal is to reduce development time and system cost while achieving better results through an iterative process of simulation, evaluation and refinement of critical elements.
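The simulate-evaluate-refine loop above can be sketched as follows. Everything here is an illustrative assumption, not part of the project's actual toolchain: the names (SensorDesign, simulate_scene, refine) and the toy scoring model are hypothetical stand-ins for the physics-based simulation and workflow tools.

```python
# Hypothetical sketch of an iterative simulate-evaluate-refine design loop.
# All names and the toy scoring model are illustrative, not the project's tools.

from dataclasses import dataclass
import random

@dataclass
class SensorDesign:
    fovea_bands: int       # number of hyperspectral bands in the fovea
    fovea_fov_deg: float   # angular size of the fovea

def simulate_scene(design: SensorDesign, seed: int) -> float:
    """Stand-in for a physics-based scene/sensor simulation.
    Returns a synthetic detection score in [0, 1]."""
    rng = random.Random(seed)
    # Toy model: more bands help detection; a wider fovea dilutes resolution.
    score = min(1.0, design.fovea_bands / 64) * (10.0 / (10.0 + design.fovea_fov_deg))
    return score + rng.uniform(-0.02, 0.02)

def refine(design: SensorDesign, score: float) -> SensorDesign:
    """Adjust one design parameter per iteration (greedy refinement)."""
    return SensorDesign(design.fovea_bands + 8, design.fovea_fov_deg)

design = SensorDesign(fovea_bands=16, fovea_fov_deg=5.0)
best = (0.0, design)
for i in range(5):                     # iterate: simulate -> evaluate -> refine
    score = simulate_scene(design, seed=i)
    if score > best[0]:
        best = (score, design)
    design = refine(design, score)
print(f"best score {best[0]:.3f} with {best[1].fovea_bands} bands")
```

The point of the sketch is the structure, not the numbers: each pass through the loop replaces a slow hardware build-and-test cycle with a simulation run, so design options can be explored before any component is fabricated.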
We use effective peripheral-fovea designs as examples of how tradeoffs can be made within a system context. The designs are inspired by biological vision systems and aim at real-time imaging with a hyperspectral/range fovea and a panoramic peripheral view. The designs and the related data exploitation algorithms will be simulated and evaluated in our general framework. The results of this project will be an optimized design for the peripheral-fovea structure and a system model for how sensor systems can be developed within a simulation context.
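The peripheral-fovea idea can be made concrete with a minimal data-structure sketch: a low-resolution panorama for the periphery plus a small, steerable, many-band patch for the fovea. The shapes, field names, and the brightest-pixel attention rule below are hypothetical assumptions for illustration only, not the project's actual design.

```python
# Illustrative sketch of a peripheral-fovea frame: a coarse panoramic
# periphery plus a small steerable hyperspectral fovea. Shapes and the
# attention rule are assumptions, not the project's actual design.

from dataclasses import dataclass

@dataclass
class FoveatedFrame:
    periphery: list        # H x W grayscale panorama (list of rows)
    fovea: list            # h x w x B hyperspectral patch
    fovea_center: tuple    # (row, col) of the fovea in panorama coordinates

def steer_fovea(panorama: list) -> tuple:
    """Pick the brightest periphery pixel as the next fovea target
    (a trivial stand-in for a real target-detection cue)."""
    best, where = -1.0, (0, 0)
    for r, row in enumerate(panorama):
        for c, v in enumerate(row):
            if v > best:
                best, where = v, (r, c)
    return where

# 4x6 toy panorama; the "target" is the bright pixel at (2, 4).
pano = [[0.1] * 6 for _ in range(4)]
pano[2][4] = 0.9
center = steer_fovea(pano)
# A 2x2 fovea patch with 8 spectral bands, filled with zeros here.
fovea = [[[0.0] * 8 for _ in range(2)] for _ in range(2)]
frame = FoveatedFrame(periphery=pano, fovea=fovea, fovea_center=center)
print(frame.fovea_center)  # -> (2, 4)
```

The design choice this illustrates is the bandwidth tradeoff: full spectral and range sampling is spent only on the small fovea, while the wide field of view is covered cheaply, mirroring how biological vision allocates acuity.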
In this research we will also study data fusion of the newly designed sensors with other multimodal sensors, in particular novel remote audio/video signal acquisition with laser Doppler vibrometry, and long-range thermal/color sensors.
The PIs and other researchers at CCNY and RIT will combine their expertise, in data simulation and data management at RIT and in sensor design and data exploitation at CCNY, to yield a system approach to adaptive multimodal sensor design. The combined hyperspectral data/sensor simulation and management tools will support detailed system simulation with synthetic image data from virtual instruments. This data can then be used to evaluate design tradeoffs, image processing algorithms and sensor fusion, using performance metrics that can be specified for different scenarios.
Related Publications
- T. Wang, Z. Zhu and H. Rhody, A Smart Sensor with Hyperspectral/Range Fovea and Panoramic Peripheral View, The 6th IEEE Workshop on Object Tracking and Classification Beyond and in the Visible Spectrum (OTCBVS), in conjunction with CVPR'09, June 20, 2009.
- Y. Qu, T. Wang and Z. Zhu, Remote Audio/Video Acquisition for Human Signature Detection, The 3rd IEEE CVPR Biometrics Workshop, June 25, 2009.
- H. Tang and Z. Zhu, Content-Based 3D Mosaics for Representing Videos of Dynamic Urban Scenes, IEEE Transactions on Circuits and Systems for Video Technology, accepted, August 2008.
- Z. Zhu, Mobile Sensors for Security and Surveillance, Journal of Applied Security Research, the Haworth Press, vol. 4, no. 1&2, pp. 79-100, January 2009 (invited paper).
- T. Wang and Z. Zhu, Intelligent Multimodal and Hyperspectral Sensing for Real-Time Moving Target Tracking, AIPR 2008: Multiple Image Information Extraction, Cosmos Club, Washington DC, October 15-17, 2008.
- T. Wang and Z. Zhu, Bio-Inspired Adaptive Hyperspectral Imaging for Target Tracking, 2008 Symposium on Spectral Sensing Research (ISSSR), June 23-27, 2008.
- Z. Zhu, W. Li, E. Molina and G. Wolberg, LDV Sensing and Processing for Remote Hearing in a Multimodal Surveillance System, Chapter 4 in Multimodal Surveillance: Sensors, Algorithms and Systems, Z. Zhu and T. S. Huang (eds), ISBN-10: 1596931841, Artech House Publisher, July 2007, pp. 59-90.
- W. Li, M. Liu, Z. Zhu and T. S. Huang, LDV Remote Voice Acquisition and Enhancement, International Conference on Pattern Recognition (ICPR'06), Hong Kong, China, August 2006.
- Z. Zhu, E. M. Riseman and A. R. Hanson, Generalized Parallel-Perspective Stereo Mosaics from Airborne Videos, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 2, Feb 2004, pp. 226-237.
Related Patents
- Z. Zhu, Y. Qu and T. Wang, Vision-Aided Automated Vibrometry, U.S. Prov. Patent Appl. No. 61/163,169, March 25, 2009.
Principal Investigators:
Professor Zhigang Zhu (PI), Department of Computer Science, City College, City University of New York (CUNY)
Professor Harvey Rhody (Co-PI), Center for Imaging Science, Rochester Institute of Technology
Team Members:
Dr. Yufu Qu, Postdoctoral Fellow, Department of Computer Science, The CUNY City College
Tao Wang, PhD student, Department of Computer Science, The CUNY Graduate Center
Edgardo Molina, PhD student, Department of Computer Science, The CUNY Graduate Center
Hao Tang, PhD student, Department of Computer Science, The CUNY Graduate Center
Bob Krzaczek, Software Architect, Center for Imaging Science, Rochester Institute of Technology
Bill Hoagland, System Programmer, Center for Imaging Science, Rochester Institute of Technology
Related Grant:
- AFOSR Discovery Challenge Thrusts (DCTs), Award #FA9550-08-1-0199, A System Approach to Adaptive Multi-modal Sensor Designs, PI: Professor Zhigang Zhu (CCNY); Co-PI: Professor Harvey Rhody (RIT); 04/01/2008 – 03/31/2011.