Prof. Jenq-Neng Hwang, IEEE Fellow
University of Washington, USA
Dr. Jenq-Neng Hwang received the BS and MS degrees, both in electrical engineering, from the National Taiwan University, Taipei, Taiwan, in 1981 and 1983, respectively. He then received his Ph.D. degree from the University of Southern California. In the summer of 1989, Dr. Hwang joined the Department of Electrical and Computer Engineering (ECE) of the University of Washington in Seattle, where he was promoted to Full Professor in 1999. He served as the Associate Chair for Research from 2003 to 2005 and from 2011 to 2015. He is currently the Associate Chair for Global Affairs and International Development in the ECE Department. He is the founder and co-director of the Information Processing Lab, which has won several AI City Challenge awards in recent years. He has written more than 350 journal papers, conference papers, and book chapters in the areas of machine learning, multimedia signal processing, and multimedia system integration and networking, including the authored textbook "Multimedia Networking: from Theory to Practice," published by Cambridge University Press. Dr. Hwang has close working relationships with industry on multimedia signal processing and multimedia networking.
Dr. Hwang received the 1995 IEEE Signal Processing Society's Best Journal Paper Award. He is a founding member of the Multimedia Signal Processing Technical Committee of the IEEE Signal Processing Society and was the Society's representative to the IEEE Neural Network Council from 1996 to 2000. He is currently a member of the Multimedia Technical Committee (MMTC) of the IEEE Communications Society and of the Multimedia Signal Processing Technical Committee (MMSP TC) of the IEEE Signal Processing Society. He served as associate editor for IEEE T-SP, T-NN, T-CSVT, T-IP and the Signal Processing Magazine (SPM). He is currently on the editorial boards of the ZTE Communications, ETRI, IJDMB and JSPS journals. He served as Program Co-Chair of IEEE ICME 2016, ICASSP 1998 and ISCAS 2009. Dr. Hwang has been a Fellow of the IEEE since 2001.
Title: Electronic Visual Monitoring for the Smart Ocean
Abstract: Cameras are increasingly incorporated into fishery applications, such as underwater fish surveys based on bottom/midwater trawls and/or ROVs, as well as electronic monitoring (EM) for catch accounting and/or compliance with catch retention requirements. Moreover, cameras enable a non-extractive and non-lethal approach to fisheries surveys and abundance estimation. Camera-based monitoring and sampling approaches not only help conserve depleted fish stocks but also provide an effective way to analyze a greater diversity of marine animals and to assess the environment. These approaches, however, generate vast amounts of image/video data very rapidly, so effective machine learning techniques for handling this big visual data are critically required to make such monitoring and sampling practical. Thanks to advanced deep learning and computer vision techniques, along with powerful computing resources, many of these tasks can now be performed reliably and in real time, a big step toward the smart ocean once such monitoring systems are deployed on every fishing vessel, collecting and analyzing data in real time anywhere on the ocean. In this talk, I will report on progress made jointly with NOAA to develop a live fish counting, catch event detection, length measurement and species recognition system, based on data collected using the Camtrawl, chute or rail camera systems.
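To illustrate one piece of such a system, the sketch below shows a minimal line-crossing counting step for a chute or rail camera: given centroid tracks already produced by an upstream detector and tracker, each fish is counted once when its track crosses a virtual counting line. All names, data and the crossing rule here are illustrative assumptions, not the actual NOAA/Camtrawl pipeline.

```python
# Minimal sketch of a chute-camera counting step (illustrative only):
# each track is a time-ordered list of (x, y) centroids from a tracker.

def count_line_crossings(tracks, line_y):
    """Count distinct tracks whose centroid crosses the line y = line_y
    moving downward (increasing y, i.e., along the chute)."""
    count = 0
    for track_id, centroids in tracks.items():
        crossed = any(
            y0 < line_y <= y1
            for (_, y0), (_, y1) in zip(centroids, centroids[1:])
        )
        if crossed:
            count += 1
    return count

# Two fish move down the chute past the line at y=100; one stays above it.
tracks = {
    1: [(10, 40), (12, 80), (15, 120)],   # crosses the line
    2: [(50, 90), (52, 101), (55, 130)],  # crosses the line
    3: [(80, 20), (81, 50), (82, 70)],    # never crosses
}
print(count_line_crossings(tracks, 100))  # -> 2
```

Counting crossings of whole tracks, rather than per-frame detections, avoids double-counting a fish that lingers near the line.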
Prof. Josiane Zerubia, IEEE Fellow
Josiane Zerubia has been a permanent research scientist at INRIA since 1989 and director of research since July 1995 (DR 1st class since 2002). She was head of the PASTIS remote sensing laboratory (INRIA Sophia-Antipolis) from mid-1995 to 1997 and of the Ariana research group (INRIA/CNRS/University of Nice), which worked on inverse problems in remote sensing and biological imaging, from 1998 to 2011. From 2012 to 2016, she was head of Ayin research group (INRIA-SAM) dedicated to models of spatio-temporal structure for high resolution image processing with a focus on remote sensing and skincare imaging. She has been professor (PR1) at SUPAERO (ISAE) in Toulouse since 1999.
Before that, she was with the Signal and Image Processing Institute of the University of Southern California (USC) in Los Angeles as a postdoc. She also worked as a researcher at LASSY (University of Nice/CNRS) from 1984 to 1988 and in the research laboratories of Hewlett-Packard in France and in Palo Alto (CA) from 1982 to 1984. She received the MSc degree from the Department of Electrical Engineering at ENSIEG, Grenoble, France in 1981, and the Doctor of Engineering degree, her PhD and her 'Habilitation' in 1986, 1988 and 1994 respectively, all from the University of Nice Sophia-Antipolis, France.
She is a Fellow of the IEEE (2003- ) and was an IEEE SP Society Distinguished Lecturer (2016-2017). She was a member of the IEEE IMDSP TC (SP Society) from 1997 to 2003, of the IEEE BISP TC (SP Society) from 2004 to 2012, and of the IVMSP TC (SP Society) from 2008 to 2013. She was associate editor of IEEE Trans. on IP from 1998 to 2002, area editor of IEEE Trans. on IP from 2003 to 2006, guest co-editor of a special issue of IEEE Trans. on PAMI in 2003, member of the editorial board of IJCV from 2004 to March 2013, and member-at-large of the Board of Governors of the IEEE SP Society from 2002 to 2004. She was also associate editor of the online resource "Earthzine" (IEEE CEO and GEOSS) from 2006 to mid-2018. She has been a member of the editorial board of the French Society for Photogrammetry and Remote Sensing (SFPT) since 1998 and of Foundations and Trends in Signal Processing since 2007, and member-at-large of the Board of Governors of the SFPT since September 2014. Finally, she has been a member of the senior editorial board of the IEEE Signal Processing Magazine since September 2018.
She was co-chair of two workshops on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR'01, Sophia Antipolis, France, and EMMCVPR'03, Lisbon, Portugal), co-chair of a workshop on Image Processing and Related Mathematical Fields (IPRM'02, Moscow, Russia), technical program chair of a workshop on Photogrammetry and Remote Sensing for Urban Areas (Marne La Vallée, France, 2003), co-chair of the special sessions at IEEE ICASSP 2006 (Toulouse, France) and IEEE ISBI 2008 (Paris, France), publicity chair of IEEE ICIP 2011 (Brussels, Belgium), tutorial co-chair of IEEE ICIP 2014 (Paris, France), general co-chair of the workshop EarthVision at IEEE CVPR 2015 (Boston, USA) and a member of the organizing committee and plenary talk co-chair of IEEE-EURASIP EUSIPCO 2015 (Nice, France). She also organized and chaired an international workshop on Stochastic Geometry and Big Data at Sophia Antipolis, France, in 2015. She was part of the organizing committees of the workshop EarthVision (co-chair) at IEEE CVPR 2017 (Honolulu, USA) and GRETSI 2017 symposium (Juan les Pins, France). She is scientific advisor and co-organizer of ISPRS 2020 congress (Nice, France) and co-technical chair of IEEE-EURASIP EUSIPCO 2021 (Dublin, Ireland).
Her main research interest is in image processing using probabilistic models. She also works on parameter estimation, statistical learning and optimization techniques.
Title: Wide area aerial surveillance: A point process approach to multiple object detection and tracking
Abstract: In this talk, we combine methods from probability theory and stochastic geometry to put forward new solutions to the multiple object detection and tracking problem in high-resolution remotely sensed image sequences. First, we present a spatial marked point process model to detect a pre-defined class of objects based on their visual and geometric characteristics. Then, we extend this model to the temporal domain and create a framework based on spatio-temporal marked point process models to jointly detect and track multiple objects in image sequences. We propose the use of simple parametric shapes to describe the appearance of these objects. We build new, dedicated energy-based models consisting of several terms that take into account both the image evidence and physical constraints such as object dynamics, track persistence and mutual exclusion. We construct a suitable optimization scheme that allows us to find strong local minima of the proposed highly non-convex energy.

As the simulation of such models comes with a high computational cost, we turn our attention to recent filter implementations for multiple object tracking, which are known to be less computationally expensive. We propose a hybrid sampler combining the Kalman filter with standard Reversible Jump MCMC. High-performance computing techniques are also used to increase the computational efficiency of our method. An analysis of the proposed framework shows very good detection and tracking performance at the price of increased model complexity. Tests have been conducted on both high-resolution satellite and drone image sequences.
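The Kalman filter component mentioned above can be sketched in its simplest form: a 1D constant-velocity filter that smooths noisy position measurements of a single tracked object. The process and measurement noise values are illustrative assumptions, not those of the actual tracker, and the real system couples this with RJMCMC moves over a variable number of objects.

```python
# Minimal 1D Kalman filter sketch: state = (position, velocity), dt = 1,
# scalar position measurements. Noise parameters q, r are illustrative.

def kalman_1d(measurements, q=1e-3, r=0.25):
    """Return filtered position estimates for a noisy 1D track."""
    x, v = measurements[0], 0.0          # initial state
    p_xx, p_xv, p_vv = 1.0, 0.0, 1.0     # covariance entries
    estimates = [x]
    for z in measurements[1:]:
        # Predict: x' = x + v; covariance grows by process noise q.
        x = x + v
        p_xx, p_xv, p_vv = (p_xx + 2 * p_xv + p_vv + q,
                            p_xv + p_vv,
                            p_vv + q)
        # Update with measurement z (measurement noise r).
        s = p_xx + r                     # innovation variance
        k_x, k_v = p_xx / s, p_xv / s    # Kalman gains
        residual = z - x
        x += k_x * residual
        v += k_v * residual
        p_xx, p_xv, p_vv = ((1 - k_x) * p_xx,
                            (1 - k_x) * p_xv,
                            p_vv - k_v * p_xv)
        estimates.append(x)
    return estimates

# An object moving at constant velocity 1: the filter locks onto the trend.
est = kalman_1d([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
print(round(est[-1], 2))
```

In the hybrid sampler, per-object filters of this kind handle state estimation cheaply, while the RJMCMC moves handle object births, deaths and track associations.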
Prof. Xudong Jiang
Nanyang Technological University, Singapore
Prof. Xudong Jiang received the B.Sc. and M.Sc. degrees from the University of Electronic Science and Technology of China in 1983 and 1986, respectively, and the Ph.D. degree from Helmut Schmidt University, Hamburg, Germany in 1997, all in electrical and electronic engineering. From 1986 to 1993, he worked as a Lecturer at the University of Electronic Science and Technology of China, where he received two Science and Technology Awards from the Ministry for Electronic Industry of China. He was a recipient of the German Konrad Adenauer Foundation young scientist scholarship. From 1993 to 1997, he was with Helmut Schmidt University, Hamburg, Germany as a scientific assistant. From 1998 to 2004, he worked at the Institute for Infocomm Research, A*STAR, Singapore, as Senior Research Fellow and Lead Scientist, and was appointed Head of the Biometrics Laboratory, where he developed a fingerprint verification algorithm that achieved the fastest and the second most accurate fingerprint verification in the International Fingerprint Verification Competition (FVC2000). He joined Nanyang Technological University, Singapore as a faculty member in 2004 and served as the Director of the Centre for Information Security from 2005 to 2011. Currently, Dr Jiang is a tenured Associate Professor in the School of Electrical and Electronic Engineering, Nanyang Technological University. Dr Jiang has published over a hundred research papers in international refereed journals and conferences, some of which are well cited on the Web of Science. He is also an inventor of 7 patents (3 US patents), some of which have been commercialized. Dr Jiang is a Senior Member of the IEEE and has served as Editorial Board Member, Guest Editor and Reviewer for multiple international journals, and as Program Committee Chair, Keynote Speaker and Session Chair of multiple international conferences.
His research interests include pattern recognition, computer vision, machine learning, image analysis, signal/image processing and biometrics.
Title: Context Contrasted Feature and Gated Multi-scale Aggregation for Scene Segmentation
Abstract: Scene segmentation is a challenging task, as it requires classifying every pixel in the image. It is crucial to exploit discriminative context and to aggregate multi-scale features to achieve better segmentation. In this talk, I first present a novel context contrasted local feature that not only leverages the informative context but also spotlights the local information in contrast to the context. The proposed context contrasted local feature greatly improves parsing performance, especially for inconspicuous objects and background stuff. Furthermore, I will present a gated sum scheme to selectively aggregate multi-scale features for each spatial position. The gates in this scheme control the information flow of features at different scales. Their values are generated from the test image by the proposed network, which is learnt from the training data, so that they are adaptive not only to the training data but also to the specific test image. Finally, I will show the state-of-the-art performance achieved by the presented techniques on three popular scene segmentation datasets: Pascal Context, SUN-RGBD and COCO Stuff.
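The gated sum idea described above can be sketched numerically: each scale's feature map gets a per-position gate in [0, 1] that scales its contribution before summation. This is a minimal NumPy sketch with fixed illustrative gate weights; in the actual network the gates are produced by learnt layers, and the feature maps come from different depths of a CNN.

```python
# Minimal sketch of position-wise gated aggregation of multi-scale features.
# Gate weights/biases stand in for learnt 1x1-conv parameters (illustrative).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_sum(features, gate_weights, gate_biases):
    """Aggregate feature maps of shape (H, W, C), already resized to a
    common spatial size, with a sigmoid gate per scale and position."""
    out = np.zeros_like(features[0])
    for f, w, b in zip(features, gate_weights, gate_biases):
        gate = sigmoid(f @ w + b)      # (H, W): one gate value per position
        out += gate[..., None] * f     # broadcast the gate over channels
    return out

# Two scales of 4x4 feature maps with 3 channels each.
rng = np.random.default_rng(0)
feats = [rng.standard_normal((4, 4, 3)) for _ in range(2)]
ws = [rng.standard_normal(3) for _ in range(2)]
bs = [0.0, 0.0]
agg = gated_sum(feats, ws, bs)
print(agg.shape)  # (4, 4, 3)
```

Because the gate depends on the feature content at each position, a large-scale feature can dominate at one pixel while a fine-scale feature dominates at its neighbour, which is the adaptivity the talk emphasizes.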
Prof. Yuri Rzhanov
University of New Hampshire, United States
Yuri Rzhanov, with a Ph.D. in Physics and Mathematics from the Russian Academy of Sciences, is a Research Professor in the Center for Coastal and Ocean Mapping. He completed his thesis on nonlinear phenomena in solid state semiconductors in 1983. Since joining the center in 2000, he has worked on a number of signal processing problems, including construction of large-scale mosaics from underwater imagery, automatic segmentation of acoustic backscatter mosaics, and accurate measurement of underwater objects from stereo imagery.
His research interests include development of algorithms and their implementation in software for 3D reconstruction of underwater scenes, automatic detection and abundance estimation of various marine species from imagery acquired from ROVs, AUVs, towed and handheld cameras.
Prof. Christine Fernandez-Maloigne
University of Poitiers, France
Christine Fernandez-Maloigne is currently Vice-Rector of Poitiers University, in charge of International Relations, and director of a CNRS research federation (MIRES), which gathers 560 researchers in the south-west of France in the areas of mathematics, image processing, computer graphics, computer science and communication systems. Her research activities focus on colour imaging, including fundamental research on the introduction of human visual system models into multiscale colour image processing, as well as practical applications for biomedical, heritage and audio-visual digital content. Christine Fernandez-Maloigne is an appointed member of the National Council of the French Universities (CNU), secretary of Division 8 (Image Technologies) of the CIE (International Commission on Illumination) and deputy Editor-in-Chief of JOSA A.
Assoc. Prof. Krzysztof Koszela
Poznan University of Life Sciences, Poland
Krzysztof Koszela is Associate Professor at the Institute of Biosystems Engineering, Department of Applied Informatics, Poznan University of Life Sciences. He is also Head of the Department of Applied Informatics, President of the Polish Society for ICT in Agriculture, Forestry and Food production, Board Member of The Food Cluster of Southern Wielkopolska Region.
His research fields and methods include agri-food processing and storage (cereal grains, oil seeds, fruit and vegetables); the theory and applications of computational intelligence, including neural networks, similarity-based systems, relations with fuzzy systems, pattern recognition, artificial intelligence, selection of relevant information, visualization of multidimensional data and relations, and meta-learning techniques; software engineering tools and methods, including object-oriented programming; and business management, including management processes, trends in management, leadership, major approaches to management, and basic manager roles and skills.
He has been involved in several research projects as a project leader and is the author of numerous publications in the area of agri-food processing.
He is the originator of many initiatives connecting science and business, and has participated in numerous conferences, symposia and industry meetings related to the food sector in Poland.
His scientific achievements include over 95 publications, including 85 scientific articles and more than 78 scientific papers in international journals (he is also a reviewer for several international journals).
Title: Artificial intelligence and its influence on our environment. A chance or a threat?
Abstract: Artificial intelligence, wireless connectivity, automation, biotechnology, nanotechnology, big data, autonomous cars - these are just a foretaste of what is still ahead of us. However, it is difficult to predict what the future will look like after Industry 4.0. Robots and artificial intelligence will change the way we work. By 2030 they will affect as many as 800 million workplaces, and up to 75% of the professions that will be practised in 10 years are still unknown to us. Along with progress in robotization and automation in the job market, demand for soft skills among employees is growing. Creativity, the ability to assess a situation critically, leadership, decisiveness, managing priorities and time, problem solving and coordination are just a few of the skills that will be sought in future candidates - assuming, of course, the scenario in which humans manage robots and not the other way round. Routine and repetitive activities, such as data analysis, operational and administrative tasks or manual work, will be less in demand, because robots will be able to perform them with the required precision and accuracy. The ability to select and effectively acquire information will become more and more significant. Even today we cannot absorb all the information that reaches us, and as technology advances there will be more and more of it. Artificial intelligence, robots and digital devices are changing the way we work. Ultimately, robots could replace one fifth of full-time employees. At the same time, however, investment in new technologies is a necessity for companies. As many as half of the biggest companies in the world have already initiated the implementation of process automation with the use of robots. According to various sources, several factors are accelerating the development of cognitive technologies and robotics.
- First of all, there is the growing amount of data that companies have to process. In addition, the development of the internet and the capabilities of cloud computing mean that companies are increasingly present in the digital sphere. Additionally, emerging machine learning algorithms allow robots to be used in new roles. As a result, the growth in spending on artificial intelligence (AI) systems is expected to reach 50% over the years 2017-2021, which means total spending on the order of 200 billion dollars.