The IEEE AP-S Distinguished Lecturer Program
The IEEE AP-S Distinguished Lecturer Program (DLP) provides experts, the Distinguished Lecturers (DLs), who are financially supported to visit active AP-S Chapters around the world and give talks on topics of interest and importance to the AP community.
Each active Chapter can request a maximum of two DL visits per year. One of the two visits may be from a former DL. A third DL per year may be obtained when a Chapter organizes a workshop and requests a DL for it. If a DL visits a particular Chapter, and then also visits one or more nearby chapters as part of the same trip, the visit will normally count towards the DL allotment of the Chapter that originally invited the DL, and not towards the other nearby Chapters.
A visit request by a Chapter must be approved by the Chair of the Distinguished Lecturer Program (DLP) prior to the Chapter making an official commitment to a DL. Permission for additional DL visits to a Chapter is contingent on funds, and needs approval by the DLP Chair and the AP-S Treasurer.
Normally DLs visit AP-S Chapters, but Sections or Councils may also be visited with permission of the DLP Chair, who should receive some assurance that a reasonable number of AP-S members will be present at the meeting. A DL visit to a Student Branch Chapter of AP-S requires special approval by the DLP Chair. Such visits will be allowed if there is evidence of significant potential attendance, as well as approval of the local AP-S Chapter, if one exists in the local area.
It is allowed for a DL to combine a Chapter visit with a visit to another organization or event (such as a company, a conference, etc.), but only the part of the trip that relates to the Chapter visit will be funded by the Distinguished Lecturer Program. The AP-S Society will normally reimburse travel expenses incurred by Distinguished Lecturers up to $1,250 for presentations to AP-S Chapters located inside the DL's IEEE geographic region. Travel expenses for trips outside the DL’s geographic region are reimbursable up to $2,500. For each additional Chapter visited on the same trip, within the same geographical region, travel expenses are reimbursable up to $1,250. There is no limit on the number of Chapters that may be visited during each trip, but approval from the DLP Chair must be obtained for each Chapter visited. A visual representation to assist with the planning of DL talks is available below.
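As a quick sanity check when budgeting a trip, the reimbursement limits above can be sketched as follows (a hypothetical helper for illustration, not an official AP-S tool):

```python
def reimbursement_cap(chapters_in_trip, outside_region=False):
    """Estimate the maximum reimbursable travel expense (USD) for one DL trip.

    Illustrates the published limits: $1,250 inside the DL's IEEE region
    ($2,500 outside) for the first Chapter, plus $1,250 for each additional
    Chapter visited on the same trip.
    """
    if chapters_in_trip < 1:
        return 0
    first = 2500 if outside_region else 1250
    return first + 1250 * (chapters_in_trip - 1)
```

For example, a trip outside the DL's region that covers three Chapters would be reimbursable up to $2,500 + 2 x $1,250 = $5,000, provided each Chapter visit was approved by the DLP Chair.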
The DLP Chair appoints DLs on the advice of the Distinguished Lecturer Committee, which selects them from among candidates who may be invited by the Committee, nominated by a third party, or self-nominated. Chapters are strongly encouraged to use this program as a means to make their local AP community aware of the most recent scientific and technological trends.
The deadline for nominations of candidates to be considered as Distinguished Lecturers is April 1. The documentation required to nominate a candidate can be found here.
The Chair of the Distinguished Lecturer Program is
Danilo Erricolo, Ph.D.
Professor and Director of the Andrew Electromagnetics Laboratory
Adjunct Professor of Bioengineering
Chair, IEEE AP-S Distinguished Lecturer Program
University of Illinois at Chicago
Department of Electrical and Computer Engineering
1020 SEO (MC 154)
851 South Morgan Street
Chicago, IL 60607-7053
Phone: (+1) 312 996 5771
Fax: (+1) 312 996 6465
AP-S Distinguished Lecturer Appointments
Dr. Christophe Caloz
Professor, Electrical Engineering
Canada Research Chair
École Polytechnique de Montréal
Building Lassonde, Office M6025
2500, ch. de Polytechnique
Montréal (Québec), H3T 1J4, Canada
Metamaterials: Past, Present and Future
In the history of humanity, scientific progress has frequently been associated with the discovery of novel substances or materials. Metamaterials represent a recent incarnation of this evolution. As suggested by their prefix “meta”, meaning “beyond” in Greek, metamaterials (artificial materials owing their properties to sub-wavelength but supra-atomic scatterers) even transcend the frontiers of nature, to offer unprecedented properties with far-reaching implications in modern science and technology.
This talk presents some research highlights in electromagnetic metamaterials over the past decade, with emphasis on applications providing performances or functionalities that outperform state-of-the-art technologies. The first part of the talk reviews some history, principles and properties of metamaterials from a global perspective. The second part presents a series of microwave metamaterial applications exploiting these properties, in particular negative refraction, near-zero-index propagation, coupling amplification, full-space scanning leakage radiation, and agile temporal and spatial dispersions. This part culminates with the introduction of the concept of radio real-time signal processing, enabled by “phasers” (components with fully designable group-delay versus frequency responses), which might play a central role in tomorrow’s radio. The third part introduces magnet-less non-reciprocal metamaterials (MNMs), which have recently been invented and developed in the speaker’s group. While non-reciprocal gyrotropic materials, first reported by Faraday in 1845, have to date always required a biasing magnet, MNMs, which are composed of transistor-loaded rings mimicking electron-spin precession in ferrites, require only a biasing voltage and are therefore fully compatible with semiconductor technology. This new class of metamaterials might therefore be considered a breakthrough and seems to have strong potential for commercial electronic and photonic applications. Finally, the talk explores perspectives for the next generation of metamaterials, which will arguably be multi-scale (micro, nano, atomic) and multi-substance (e.g. semiconductors, ferroelectrics, magnetic nanoparticles, multiferroics, carbon nanotubes, graphene, etc.) in nature.
Leaky-Wave Antennas: the Dawn of a New Era!
Leaky-wave antennas (LWAs) have a history of over 70 years. This history started with a patent on a leaky slit waveguide by Hansen in 1940, and the field was then really developed in the late 1950s and 1960s by the Brooklyn Polytechnic (now NYU Poly) microwave group, involving Oliner, Tamir and Hessel. Since then, much LWA research has been carried out by various groups around the world. However, despite some of their unique features, LWAs have been plagued by fundamental issues that have limited their use in practical systems. These issues have recently been solved, bringing us to the doorstep of a new era in LWAs.
The unique benefit of LWAs is that they provide high directivity and (frequency or electronic) beam scanning with a much smaller form factor, lower cost and higher gain than antenna arrays, as they do not require a complex feeding network. In uniform LWAs, these benefits are annihilated by the restriction to forward-only scanning. Periodic LWAs have been capable of radiating in both the forward and backward directions, using leaky space harmonics, since their introduction by Rotman in the late 1950s. However, their aforementioned benefits have been countered by the collapse of the radiation efficiency at broadside. A definitive solution to this persistent issue came in 2002 with the advent of metamaterial Composite Right/Left-Handed (CRLH) LWAs, the first LWAs capable of efficient full-space scanning, which made LWAs potentially superior to arrays. The secrets of this long-sought solution were revealed by the groups of D. R. Jackson and of the speaker over the past decade, and then extended to non-metamaterial LWAs: 1) the presence of two resonators in the unit cell, 2) closure of the open stopband by mutual cancellation of the two resonances, and 3) satisfaction of a Heaviside-like condition to equalize gain through broadside. Moreover, fundamental relations between the (transverse and longitudinal) symmetries of the periodic unit cell and the LWA properties were recently unveiled by the speaker and collaborators at the University of Duisburg, providing prescriptions that complete the broadside-radiation ones for the most efficient and diverse LWA designs. The talk first overviews historical milestones, explains the physics of LWAs (including their fundamental connection with the Smith-Purcell effect in particle physics) and provides basic electromagnetic tools for their analysis. Next, it presents and illustrates the solution to the broadside radiation issue as well as the unit-cell symmetry rules.
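For readers unfamiliar with LWA beam scanning, the basic relation (standard background, not specific to this abstract) is:

```latex
% Main-beam angle of a leaky-wave antenna, measured from broadside:
% \beta(\omega) is the phase constant of the leaky mode, k_0 = \omega/c.
\theta_{MB}(\omega) \approx \arcsin\!\left(\frac{\beta(\omega)}{k_0}\right)
% Forward radiation corresponds to \beta > 0 and backward to \beta < 0;
% broadside (\theta_{MB} = 0) requires \beta = 0, which balanced CRLH
% structures achieve at their transition frequency with nonzero group velocity.
```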
Finally, it demonstrates a number of novel concepts, structures, systems and applications, including active LWA beam forming, gain enhancement via power recycling, LWA direction-of-arrival estimation, non-reciprocal LWA diplexers, direction diversity enhanced MIMO systems, smart reflectors, graphene-tunable THz antennas, real-time spectrogram analyzers, and vortex beam launchers for orbital angular momentum multiplexing.
Radio Analog Signal Processing for Tomorrow’s Radio
Today's exploding demand for faster, more reliable and ubiquitous wireless connectivity poses unprecedented challenges in radio technology. To date, the predominant approach has been to put increasing emphasis on digital signal processing (DSP). However, while offering device compactness and processing flexibility, DSP suffers from fundamental limitations, such as poor performance above the K band, high-cost A/D conversion, low processing speed and high power consumption.
Recently, Radio Analog Signal Processing (R-ASP) has emerged as a novel paradigm to potentially overcome these issues, and hence address the aforementioned challenges. R-ASP processes radio signals in their pristine analog form and in real time, using “phasers”. A phaser is a temporally – and sometimes also spatially – dispersive electromagnetic structure whose group delay is designed so as to exhibit the required (quasi-arbitrary) frequency function to perform a desired operation, such as for instance real-time Fourier transformation. Phasers can be implemented in Bragg-grating, chirped-waveguide, magnetostatic-wave and acoustic-wave technologies. However, much more efficient phasers, based on 2D/3D metamaterial structures and cross-coupled resonator chains, were recently introduced, along with powerful synthesis techniques. These phasers can manipulate the group delay of electromagnetic waves with unprecedented flexibility and precision, and thereby enable a myriad of applications in communication, radar, instrumentation and imaging, with superior performance or/and functionality. This talk presents an overview of R-ASP technology, including dispersion-based processing principles, historical milestones, phasing fundamentals, phaser synthesis, and many applications.
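As standard background (not taken from the abstract), the central quantity of a phaser is its group delay, and a linearly chirped group delay underlies real-time Fourier transformation:

```latex
% Group delay of a phaser with transmission phase \phi(\omega):
\tau(\omega) = -\frac{d\phi(\omega)}{d\omega}
% A linear ("chirped") group-delay response,
% \tau(\omega) = \tau_0 + s\,(\omega - \omega_0),
% maps each spectral component to a distinct arrival time, t \approx \tau(\omega),
% which is the frequency-to-time mapping behind real-time Fourier transformation.
```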
Graphene, a monolayer of carbon atoms arranged in a honeycomb lattice, is the first truly two-dimensional material ever produced by humanity. For this reason, and also due to its exceptional mechanical, thermal, chemical and electronic properties, it won Geim and Novoselov the Nobel Prize in Physics in 2010, only six years after their first experimental report on the topic. Since then, this Holy Grail material has spurred huge interest in both the scientific and engineering communities, with over 1,000 papers published per month on graphene-related topics.
In the area of electronics, during its first lustrum (starting in 2004), graphene research was mostly focused on transport devices (transistors, mixers, switches, etc.), exploiting the high mobility and ambipolarity of the material for higher performance or functionality. However, many researchers have recently directed their attention to the potential of graphene for electromagnetics, due to the discovery of novel phenomena and to the recent availability of large-area graphene sheets. One of the key interests in this area is graphene’s capability to provide tunable material properties via simple or patterned electrostatic gating. Moreover, fascinating and unprecedented effects occur when graphene is immersed in a static magnetic field, in which case the electron and hole charge carriers are drawn into cyclotron orbits described by a tensorial conductivity. This area may be called graphene magneto-plasmonics, as graphene essentially behaves as a two-dimensional electron or hole gas. At microwaves, graphene is a transparent conductor whose phase difference between the right-handed and left-handed circularly polarized eigenstates is so significant that electromagnetic waves traveling across it experience giant Faraday rotation, with the possibility of voltage-induced Faraday reversal based on ambipolarity. This phenomenon enables a diversity of unique Faraday devices, such as gyrators, isolators, non-reciprocal radomes and perfect electromagnetic boundaries. At terahertz frequencies, graphene supports tunable surface magneto-plasmons with exotic properties, such as directional concentration and the splitting of counter-propagating modes, strongly depending on the nature of the doping (chemical or electrical). These magneto-plasmons might pave the way for efficient non-reciprocal terahertz electromagnetic components, which are critically missing today. The talk first recalls the fundamentals of graphene and describes some key electronic applications.
Next, it introduces magneto-plasmonics, and sequentially presents the microwave Faraday rotation and the terahertz surface magneto-plasmonic phenomenology and applications. Finally, it discusses some multi-scale and multi-physics metamaterial structures involving graphene as a gyrotropic element.
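The tensorial conductivity mentioned above has the standard gyrotropic form for a magnetically biased two-dimensional carrier gas (sign conventions vary in the literature; the following is background, not material from the abstract):

```latex
% Gyrotropic sheet conductivity of magnetically biased graphene:
\bar{\bar{\sigma}} =
\begin{pmatrix} \sigma_d & \sigma_o \\ -\sigma_o & \sigma_d \end{pmatrix},
\qquad \sigma_\pm = \sigma_d \pm j\sigma_o
% The circularly polarized eigenstates see the scalar conductivities
% \sigma_\pm; the difference between their transmission phases yields
% the Faraday rotation angle
% \theta_F = \tfrac{1}{2}\,(\arg t_+ - \arg t_-).
```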
Localized Waves or Molding Electromagnetic Waves
Localized waves (LWs), also sometimes called non-diffractive waves or non-diffractive beams, are currently spurring revived interest in the radio and optical communities. An LW is characterized by a strong confinement of the field over a distance that is proportional to the size of the radiating aperture. LWs are solutions to the wave equation. There exists a great diversity of LWs, exhibiting various and fascinating properties. For instance, Bessel LWs exhibit a constant Bessel-function cross section; vortex LWs feature spiral wave fronts, i.e. they carry orbital angular momentum (OAM), which may be applied to OAM multiplexing or particle tweezing; X-LWs are pulsed Bessel waves, of order larger than one and also carrying OAM, that may be designed to produce superluminal centroids; and Airy LWs are accelerating beams that follow prescribed curved trajectories.
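The Bessel LW mentioned above admits a simple closed form (standard textbook result, included here for reference):

```latex
% Scalar Bessel beam of order n (time convention e^{-i\omega t}):
\psi(\rho,\varphi,z) = J_n(k_\rho \rho)\, e^{in\varphi}\, e^{ik_z z},
\qquad k_\rho^2 + k_z^2 = \frac{\omega^2}{c^2}
% The transverse profile J_n(k_\rho\rho) is independent of z, i.e. the beam
% is non-diffractive over the aperture-limited propagation range, and any
% order n \neq 0 carries orbital angular momentum of n\hbar per photon.
```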
Until recently, LWs have been mostly restricted to theoretical studies and have been little exploited in practical applications. However, technological progress in optics, where LWs are generally produced by sophisticated spatial light modulators, has brought the area of LWs to the forefront of the stage. At microwave, millimeter-wave and terahertz frequencies, other approaches are required to generate LWs. Two promising roads, metasurfaces and, more recently, leaky-wave antennas, have recently been opened to meet this challenge. Moreover, the group of the speaker has introduced two systematic techniques to synthesize metasurfaces producing arbitrary LWs within the limits of the laws of physics: a spatial technique, based on electromagnetic boundary conditions, which provides the metasurface susceptibilities and polarizabilities; and a spectral technique, based on the conservation of the total wave momentum, which provides the metasurface transfer function in phase and magnitude. The latter includes a reverse-propagator technique that allows LWs to be controlled at an arbitrary distance from the source. The talk will first present the fundamentals of LWs and describe some of the most common LWs. It will next introduce the aforementioned spatial and spectral synthesis techniques, and demonstrate their unprecedented capabilities via several examples of exotic waves existing either in the Fresnel region or in the Fraunhofer region of the aperture. Then, a number of metasurface and antenna implementations and applications will be presented. Applications pertaining to communications, security, sensing, imaging, spectroscopy, biotechnology, nanotechnology and astronomy will be presented or discussed.
Christophe Caloz received the Diplôme d'Ingénieur en Électricité and the Ph.D. degree from École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, in 1995 and 2000, respectively. From 2001 to 2004, he was a Postdoctoral Research Fellow at the Microwave Electronics Laboratory, University of California at Los Angeles (UCLA). In June 2004, Dr. Caloz joined École Polytechnique of Montréal, where he is now a Full Professor, the holder of a Canada Research Chair (CRC) in Metamaterials and the head of the Electromagnetics Research Group. He has authored and co-authored over 500 technical conference, letter and journal papers, 12 books and book chapters, and he holds many patents. His works have generated over 11,000 citations. In 2009, he co-founded the company ScisWave, which develops CRLH smart antenna solutions for WiFi. Dr. Caloz received several awards, including the UCLA Chancellor’s Award for Post-doctoral Research in 2004, the MTT-S Outstanding Young Engineer Award in 2007, the E.W.R. Steacie Memorial Fellowship in 2013, the Prix Urgel-Archambault in 2013, and many best paper awards with his students at international conferences. He is an IEEE Fellow. His research interests include all fields of theoretical, computational and technological electromagnetics, with strong emphasis on emergent and multidisciplinary topics, including particularly metamaterials, nanoelectromagnetics, exotic antenna systems and real-time radio.
Prof. Steven Gao
Chair of RF and Microwave Engineering
School of Engineering and Digital Arts
University of Kent
Canterbury CT2 7NZ
Low-Cost Smart Antennas
Smart antennas are a key technology for wireless communications and radars. They can adjust their radiation patterns adaptively, i.e., form maximum radiation towards the desired users and nulls towards interference sources. Thus, they can significantly improve the capacity of wireless communication networks, increase spectrum efficiency and reduce transmit power. Traditional smart antennas are, however, too complicated, bulky, heavy and expensive for civil applications. For commercial applications, it is very important to reduce the cost, size, mass and power consumption of smart antennas.
This lecture will first give an introduction to smart antennas and their types such as passive and active phased arrays, digital beamforming smart antennas, adaptive arrays, multi-beam antennas, beam-switching antennas, multiple inputs and multiple outputs (MIMO) antenna systems, etc. The basic principles of each type of smart antennas will be explained. The advantages and disadvantages of each type of smart antennas will be highlighted.
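As minimal background on the beam-steering principle these array types build on (a sketch with illustrative names, not material from the lecture), a uniform linear array points its main beam by phase-conjugate weighting:

```python
import cmath
import math

def array_factor(weights, d_over_lambda, theta_deg):
    """|AF| of a uniform linear array with element spacing d:
    AF(theta) = sum_n w_n * exp(j * 2*pi*n*(d/lambda) * sin(theta))."""
    k_d = 2 * math.pi * d_over_lambda
    s = math.sin(math.radians(theta_deg))
    return abs(sum(w * cmath.exp(1j * n * k_d * s)
                   for n, w in enumerate(weights)))

def steering_weights(n_elem, d_over_lambda, theta0_deg):
    """Phase-conjugate weights that co-phase all elements at angle theta0,
    placing the main beam there."""
    k_d = 2 * math.pi * d_over_lambda
    s0 = math.sin(math.radians(theta0_deg))
    return [cmath.exp(-1j * n * k_d * s0) for n in range(n_elem)]

# Steer an 8-element, half-wavelength-spaced array to 30 degrees:
w = steering_weights(8, 0.5, 30.0)
```

At the steered angle all eight terms add in phase, giving |AF| = 8, while at broadside the same weights sum to nearly zero; adaptive arrays generalize this by optimizing the weights against interference.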
The lecture will then describe different types of low-cost smart antenna technologies, such as Electrically-Steerable Parasitic Array Radiator (ESPAR) antenna, compact MIMO antennas, beam-switching array antennas and low-cost phased arrays. Many practical examples of antenna configurations and designs will be shown, explained and their performance discussed. These will include folded-monopole ESPAR (FM-ESPAR) for wireless communications, high-gain ESPAR using small director array, small-size MIMO, beam-switching reflectarray antennas for satellite communications, low-cost phased array antennas, etc.
Due to the special environment of space and the dynamics of the launch vehicle that takes them there, spacecraft antenna requirements and designs are quite different from those of terrestrial antennas. Onboard a satellite, there are a number of different antennas and arrays for various functions, such as Telemetry, Tracking and Command (TT&C), high-speed data downlink, GPS navigation and positioning, remote sensing, inter-satellite links, deep-space communications, etc. Since the launch of the first man-made satellite, Sputnik, in 1957, a large variety of antennas and arrays have been developed for space applications, employing frequency bands including the UHF/VHF, L, S, C, X, Ku, Ka and V bands.
This lecture will first explain satellite types, orbits, the space environment and the special requirements of space antennas. Space environment effects such as extreme thermal conditions, material outgassing, radiation, multipaction, passive inter-modulation, the corona phenomenon, electrostatic charging and atomic oxygen will be discussed, and their impact on antenna designs will be explained. Other issues, e.g., the interactions amongst antennas, satellite bodies and solar panels, will also be described. Key challenges for space antenna design will be illustrated.
The lecture will then provide an overview of space antennas developed for different applications. This part will show many examples of the real-world space antennas for different applications such as TTC, navigations, high-speed data downlink, GPS reflectometry remote sensing, inter-satellite links, deep-space communications, etc. The operating principles of each antenna will be explained and their performance will be discussed. Finally, an outlook to the future development of space antennas will be presented.
Multi-Band Antennas for Global Navigation Satellite Systems (GNSS) Receivers
A Global Navigation Satellite System (GNSS) is a satellite-based radio navigation system that provides precise information about the spatial coordinates (longitude, latitude and altitude) of an object anywhere on the earth or in the air. The Global Positioning System (GPS) is the only fully operational navigation system available to commercial and military users around the globe, while Galileo, GLONASS and COMPASS (European, Russian and Chinese, respectively) are in the development stage, with GLONASS operating with partial capability. GNSS operates at different frequency bands including L1, L2, L5, E5, etc. The use of a compact multi-band antenna instead of multiple single-band antennas can significantly reduce the size, mass and cost of GNSS receivers. In recent years, a variety of multi-band antennas have been developed for GNSS receivers.
This lecture will give an introduction to the GNSS system and the antenna design requirements for GNSS receivers. Various issues such as multipath mitigation, phase center stability, compact size, multi-band operation, etc, will be discussed. Techniques of multipath mitigation such as choke rings, electromagnetic-band-gap (EBG) antennas, etc, will be presented and their principles will be explained.
The lecture will then give a review of compact multi-band antennas and arrays for GNSS receivers. Many examples of GNSS antenna designs will be shown, and the antenna configurations and design principles will be explained. These will include the dual-band multipath mitigating GNSS antenna using the cross plate reflector ground plane (CPRGP), multi-band QHA antennas, active multi-band antennas, small multi-band GNSS array antennas, high-gain beam-switching multi-band GNSS arrays, etc. The performance of each antenna will be described.
Antennas for Synthetic Aperture Radars
Synthetic aperture radar (SAR) is an imaging radar that produces high-resolution radar images of the earth’s surface using microwave signals. Unlike optical sensors, which are limited by daylight and weather conditions, SAR can be used day and night and can see through clouds. SAR has important wide-ranging applications for earth observation in remote sensing and mapping of the surfaces of both the Earth and other planets. SAR is used in various fields of research ranging from oceanography and geology to archaeology. The antenna for a SAR is usually very complicated and expensive, and is often one of the most expensive components onboard the aircraft or spacecraft.
This lecture will first give an introduction to SAR systems and how SAR works. Key parameters of SAR systems such as range resolution, azimuth resolution, frequency bands, etc, will be explained. Different SAR modes such as stripmap, scanSAR, spotlight and interferometric SAR (InSAR) will be described. SAR system design considerations and key challenges for SAR antenna designs will also be presented.
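Two of the key SAR parameters mentioned above obey simple, standard relations (textbook background, not taken from the lecture):

```latex
% Slant-range resolution, set by the transmitted bandwidth B:
\delta_r = \frac{c}{2B}
% Stripmap azimuth resolution, set by the physical antenna length L_a
% along the flight track (remarkably, independent of range and wavelength):
\delta_{az} \approx \frac{L_a}{2}
% Example: B = 150 MHz gives \delta_r = 1 m; a 10 m antenna gives
% \delta_{az} \approx 5 m.
```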
The lecture will then provide a review of antennas for SAR. An overview of antenna development for space-borne SAR will be illustrated and some examples will be given. The design principles of each example antenna will be explained and their performance discussed. Finally the lecture will give a discussion of future development such as the digital beam-forming SAR for satellite constellations, etc.
Steven (Shichang) Gao was born in Anhui, P.R. China. He received a PhD from Shanghai University, China, in 1999. He is a Professor and Chair of RF and Microwave Engineering at the School of Engineering and Digital Arts, University of Kent, UK. His research covers antennas, smart antennas, phased arrays, space antennas, RF/microwave and mm-wave circuits and systems, satellite communications, synthetic aperture radars, UWB radars and GNSS receivers.
He started his career at the China Research Institute of Radiowave Propagation (1994-1996). Afterwards, he worked as a Post-doctoral Research Fellow at the National University of Singapore (1999-2001), a Research Fellow at Birmingham University, UK (2001-2002), a Senior Lecturer (2002-2006), Reader (2006-2007) and Head of the Active Antenna and RF Group (2002-2007) at Northumbria University, UK, and a Senior Lecturer and Head of the Space Antennas and RF System Group (2007-2012) at the Surrey Space Centre, University of Surrey, UK. He was also a Visiting Scientist at the Swiss Federal Institute of Technology (ETHZ, Switzerland) in 2003, a Visiting Professor at the University of California at Santa Barbara (US) in Jan-July 2005, and a Visiting Fellow at Chiba University (Japan) in Aug-Sept 2005 and June-July 2013. In Jan. 2013 he joined the University of Kent as a Full Professor, and he became Chair of RF and Microwave Engineering in Feb. 2014.
He was General Co-Chair of the Loughborough Antennas and Propagation Conference (LAPC), UK, in 2013, and Chair of the Special Session on “Satellite Communication Antennas” at the IEEE/IET International Symposium on Communication Systems and Networks, 2012. He is a Guest Editor of the IEEE Transactions on Antennas and Propagation for a Special Issue on “Antennas for Satellite Communication” (Feb. 2015 issue). He has been an invited speaker at IWAT 2014 (Sydney, 2014), SOMIRES 2013 (Japan, 2013), APCAP 2014 (Harbin, 2014), etc. He is an Associate Editor of Radio Science and the Editor-in-Chief of the Wiley Book Series in Microwave and Wireless Technologies. He is a Fellow of the Institution of Engineering and Technology (IET), UK.
He has published two books, Space Antenna Handbook (Wiley, 2012; co-editors: Imbriale and Boccia) and Circularly Polarized Antennas (Wiley-IEEE Press, 2014; co-authors: Luo and Zhu), as well as over 180 technical papers and 10 book chapters, and he holds three patents in smart antennas and RF. He received the URSI Young Scientist Award (2002), the JSPS Fellowship Award, Japan (2005 and 2013), the Best Paper Award at LAPC, UK (2012), etc. He has been a leader and principal investigator of a number of research projects in the areas of smart antennas for satellite communications on the move, space antennas, compact-size low-cost smart antennas for wireless communications, phased arrays for synthetic aperture radars, active integrated antennas for mobile communications, millimeter-wave antenna arrays, high-efficiency RF/microwave power amplifiers, UWB radars, GNSS receiver front ends, adaptive small-size multi-band antennas for mobile phones, etc.
Prof. Qing Huo Liu
Department of Electrical and Computer Engineering
Duke University
Durham, NC 27708, USA
Multiscale Computational Electromagnetics and Applications
Electromagnetic sensing and system-level design problems are often multiscale and very challenging to solve. They will remain a significant barrier to system-level sensing and design optimization for the foreseeable future. Such multiscale problems often contain three electrical scales, i.e., the fine scale (geometrical feature size much smaller than a wavelength), the coarse scale (geometrical feature size greater than a wavelength), and the intermediate scale between the two extremes. Most existing commercial solvers are based on a single methodology (such as the finite element method or the finite-difference time-domain method) and are unable to solve large multiscale problems. We will present our recent work in solving realistic multiscale system-level EM design simulation problems in the time domain. The discontinuous Galerkin method is used as the fundamental framework for interfacing multiple scales with the finite element method, the spectral element method, and the finite difference method. Numerical results show significant advantages of the multiscale method.
Subsurface Sensing and Super-Resolution Imaging: Application of Computational Acoustics and Electromagnetics
Acoustic/seismic and electromagnetic waves have widespread applications in geophysical subsurface sensing and imaging. In these applications, often the problems of understanding the underlying wave phenomena, designing the sensing and imaging measurement systems, and performing data processing and image reconstruction require large-scale computation in acoustics and electromagnetics. It is very challenging to solve such problems with the traditional finite difference and finite element methods. In this presentation, several high-performance computational methods and super-resolution imaging in acoustics and electromagnetics will be discussed along with their applications in oil exploration and subsurface imaging.
Progress and Challenges in Microwave Imaging and Microwave Induced Thermoacoustic Tomography
Breast cancer imaging by microwaves has been investigated intensively over the past two decades due to the potentially high contrasts in permittivity and conductivity between malignant tumors and normal breast tissue. In comparison with conventional ultrasound imaging, where the acoustic impedance contrast between malignant tumors and normal breast tissue is low (typically a few percent), the dielectric contrast is indeed one to two orders of magnitude higher. Nevertheless, progress toward a clinically mature system for microwave breast imaging is painfully slow, primarily due to the low resolution of microwaves, which can provide adequate penetration only at relatively low frequencies. We will describe challenges in achieving such a system, and ways to improve the resolution of microwave imaging. In the meantime, recent progress in microwave induced thermoacoustic tomography (MITAT) provides a new impetus for combining the microwave and ultrasound modalities. In MITAT, we use millisecond-pulsed microwaves to produce ultrasound through thermal expansion; the induced ultrasound source thus represents the high contrast in electrical conductivity, while the collected ultrasound signals provide the high resolution corresponding to the short wavelength of ultrasound. We will describe our recent progress in both microwave imaging and MITAT, in both computational methods and system development.
Spectral Element Method for Nanophotonics
Nanophotonics is a major technological frontier with numerous new applications. However, a significant challenge in the design optimization of nanophotonic devices is the huge computational cost of large-scale simulations. Advances in high-precision, high-efficiency computational methods will have significant impact on this emerging area. In this presentation, we will discuss our recent efforts to improve the methods for computational electromagnetics in nanophotonics. Particular topics will include the spectral element method and spectral integral method in the frequency domain for Maxwell's equations, with applications in photonic crystals and plasmonics, and for nonlinear effects such as second harmonic generation. We use the spectral element method in the frequency domain for the simulation of nonlinear optical effects and the associated second harmonic generation (SHG). In most materials the SHG effect is weak because their nonlinear optical coefficients are usually small. Moreover, as optical materials are usually dispersive, there is a phase mismatch between the fundamental-frequency and second-harmonic fields, further weakening the SHG effect. With our accurate and efficient computational method, we design an air-bridge multiple-layer photonic crystal slab based on the structure of a GaAs/AlAs distributed Bragg reflector. We show that the SHG effect can be enhanced by ten orders of magnitude.
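The SHG mechanism and the phase-mismatch limitation discussed above follow standard nonlinear-optics relations (textbook background, included for reference):

```latex
% Second-order nonlinear polarization driving SHG:
P_i(2\omega) = \varepsilon_0 \sum_{j,k} \chi^{(2)}_{ijk}\, E_j(\omega)\, E_k(\omega)
% Material dispersion causes a phase mismatch between the fundamental
% and the second harmonic, limiting the useful interaction length to
% the coherence length:
\Delta k = k(2\omega) - 2k(\omega), \qquad L_c = \frac{\pi}{|\Delta k|}
```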
Qing Huo Liu (S’88-M’89-SM’94-F’05) received his B.S. and M.S. degrees in physics from Xiamen University in 1983 and 1986, respectively, and his Ph.D. degree in electrical engineering from the University of Illinois at Urbana-Champaign in 1989. His research interests include computational electromagnetics and acoustics, and their applications in inverse problems, geophysics, nanophotonics, and biomedical imaging. He has published over 230 refereed journal papers and over 300 papers in conference proceedings. His h-index is 42, and his publications have been cited over 7,000 times (Google Scholar). He was with the Electromagnetics Laboratory at the University of Illinois at Urbana-Champaign as a Research Assistant from September 1986 to December 1988, and as a Postdoctoral Research Associate from January 1989 to February 1990. He was a Research Scientist and Program Leader with Schlumberger-Doll Research, Ridgefield, CT from 1990 to 1995. From 1996 to May 1999 he was an Associate Professor with New Mexico State University. Since June 1999 he has been with Duke University, where he is now a Professor of Electrical and Computer Engineering.
Dr. Liu is a Fellow of the IEEE and a Fellow of the Acoustical Society of America. Currently he serves as the Deputy Editor-in-Chief of Progress in Electromagnetics Research, an Associate Editor for IEEE Transactions on Geoscience and Remote Sensing, and an Editor for the Journal of Computational Acoustics. He was recently a Guest Editor-in-Chief of the Proceedings of the IEEE for a 2013 special issue on large-scale electromagnetics computation and applications. He received the 1996 Presidential Early Career Award for Scientists and Engineers (PECASE) from the White House, the 1996 Early Career Research Award from the Environmental Protection Agency, and the 1997 CAREER Award from the National Science Foundation.
Dr. Edmund K. Miller
597 Rustic Ranch Lane
Lincoln, CA 95648
Using Model-Based Parameter Estimation to Increase the Efficiency and Effectiveness of Computational Electromagnetics
Science began, and largely remains, an activity of making observations and/or collecting data about various phenomena in which patterns may be perceived and for which a theoretical explanation is sought in the form of mathematical prescriptions. These prescriptions may be non-parametric, first-principles generating models (GMs), such as Maxwell’s equations, that represent fundamental, irreducible descriptions of the physical basis for the associated phenomena. In a similar fashion, parametric fitting models (FMs) might be available to provide a reduced-order description of various aspects of the GM or observables that are derived from it. The purpose of this lecture is to summarize the development and application of exponential series and pole series as FMs in electromagnetics. The specific approaches described here, while known by various names, incorporate a common underlying procedure that is called model-based parameter estimation (MBPE).
MBPE provides a way of using data derived from a GM or taken from measurements to obtain the FM parameters. The FM can then be used in place of the GM for subsequent applications to decrease data needs and computation costs. An especially important attribute of this approach is that windowed FMs overlapping over the observation range make it possible to adaptively minimize the number of samples of an observable needed to develop a parametric model of it to a prescribed uncertainty. Two specific examples of using MBPE in electromagnetics are the modeling of frequency spectra and far-field radiation patterns. An MBPE model of a frequency response can provide a continuous representation to a specified estimation error of a percent or so using 2 or even fewer samples per resonance peak, a procedure sometimes called a fast frequency sweep, an example of which is shown below. Comparable performance can be similarly achieved using MBPE to model a far-field pattern. The adaptive approach can also yield an estimate of the data dimensionality, or rank, so that the FM order can be maintained below some threshold while achieving a specified FM accuracy. Alternatively, the data rank can be estimated from a singular-value decomposition of the data matrix. FMs can also be used to estimate the uncertainty of data while it is being generated, or of pre-sampled data that is being used for the FM computation. Topics to be discussed include: a preview of model-based parameter estimation; fitting models for waveform and spectral data; function sampling and derivative sampling; adaptive sampling of frequency spectra and far-field patterns; and using MBPE to estimate data uncertainty.
Conductance and susceptance of a fork monopole having unequal length arms from NEC and an MBPE model using a frequency sample and 4 frequency derivatives at the 2 different wavelengths (WLs) shown by the solid circles.
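As an illustrative sketch of the fitting-model idea (written for this summary, not taken from the lecture), a low-order rational function can be fitted to a few samples of a frequency response using Levy's linearized least squares. The single-resonance Lorentzian response below is a hypothetical stand-in for GM data from a full EM solver, and all names and parameters are assumptions for the demo:

```python
import numpy as np

# Hypothetical single-resonance "generating model" (GM) response standing in
# for data from a full EM solver; f0 and q are illustrative parameters.
def gm(f, f0=1.0, q=20.0):
    return 1.0 / (1.0 + 1j * q * (f / f0 - f0 / f))

# Pole-series "fitting model" (FM): a rational function N(f)/D(f).
# Levy's linearization: H(f) * D(f) = N(f) is linear in the coefficients.
def fit_rational(f, h, nn=2, nd=2):
    a_cols = [f**k for k in range(nn + 1)]          # numerator terms a_k f^k
    b_cols = [-h * f**k for k in range(1, nd + 1)]  # denominator terms (b_0 = 1)
    A = np.stack(a_cols + b_cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    return coef[:nn + 1], np.concatenate(([1.0], coef[nn + 1:]))

def fm(f, a, b):
    return np.polyval(a[::-1], f) / np.polyval(b[::-1], f)

# Seven sparse GM samples across the resonance yield a continuous FM whose
# error on a much denser frequency grid is negligible.
fs = np.linspace(0.8, 1.2, 7)
a, b = fit_rational(fs, gm(fs))
fd = np.linspace(0.8, 1.2, 201)
err = np.max(np.abs(fm(fd, a, b) - gm(fd)))
```

The key MBPE property appears here: a handful of GM samples near a resonance suffice to evaluate the response continuously through the FM, instead of re-running the expensive GM at every frequency.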
An Exploration of Radiation Physics
All external electromagnetic fields arise from the process of radiation. There would be no radiated, propagated or scattered fields were it not for this phenomenon. In spite of this self-evident truth, our understanding of how and why radiation occurs seems relatively superficial from a practical viewpoint. It’s true that physical reasoning and mathematical analysis via the Lienard-Wiechert potentials show that radiation occurs due to charge acceleration. It’s also true that it is possible to determine the near and far fields of rather complex objects subject to arbitrary excitation, making it possible to perform analysis and design of EM systems. However, if the task is to determine the spatial distribution of radiation from the surface of a given object from such solutions, the answer becomes less obvious.
One way to think about this problem might be to ask, were our eyes sensitive to X-band frequencies and capable of resolving source distributions a few wavelengths in extent, what would be the image of such simple objects as dipoles, circular loops, conical spirals, log-periodic structures, continuous conducting surfaces, etc. when excited as antennas or scatterers? Various kinds of measurements, analyses and computations have been made over the years that bear on this question. This lecture will summarize some relevant observations concerning radiation physics in both the time and frequency domains for a variety of observables, noting that there is no unanimity of opinion about some of these issues. Included in the discussion will be various energy measures related to radiation, the implications of Poynting-vector fields along and near wire objects, and the inferences that can be made from far radiation fields. Associated with the latter, a technique developed by the author called FARS (Far-field Analysis of Radiation Sources) will be summarized and demonstrated in both the frequency and time domains for a variety of simple geometries. Also to be discussed is the so-called E-field kink model, an approach that illustrates graphically the physical behavior encapsulated in the Lienard-Wiechert potentials as illustrated below. Brief computer movies based on the kink model will be included for several different kinds of charge motion to demonstrate the radiation process.
Depiction of the E-field lines for an initially stationary charge (a) that's abruptly accelerated from the origin to a speed v = 0.3c to then coast along the positive x-axis until time t1 (b) when it is abruptly stopped (c).
Verification and Validation of Computational Electromagnetics Software
Over the past several decades, a computing resource of exponentially expanding capability, now called computational electromagnetics (CEM), has grown into a tool that both complements and relies on measurement and analysis for its development and validation. The growth of CEM is demonstrated by the number of computer models (codes) available and the complexity of problems being solved, attesting to its utility and value. Even now, however, relatively few available modeling packages offer the user substantial on-line assistance concerning verification and validation. CEM would be of even greater practical value were the verification and validation of the codes, and of the results they produce, more convenient. Verification means determining that a code conforms to the analytical foundation and numerical implementation on which it is based. Validation means determining the degree to which results produced by the code conform to physical reality. Validation is perhaps the most challenging aspect of code development, especially for codes intended for general-purpose application, where inexperienced users may employ them in unpredictable or inappropriate ways.
This presentation discusses some of the errors, both numerical (an example of which is shown below) and physical, that most commonly occur in modeling, the need for quantitative error measures, and various validation tests that can be used. A procedure, or protocol, is proposed for validating codes both internally, where necessary but not always sufficient checks of a valid computation can be made, and externally, where independent results are used for this purpose. Ideally, a computational package would include these capabilities as built-in modules for discretionary exercise by the user. Also discussed are ways of comparing different computer models, not only with respect to their efficiency and utility, but also to make more relevant intercode comparisons and thereby provide a basis for code selection by users having particular problems to model. The kinds of information that can be realistically expected from a computer model, and how and why the computed results might differ from physical reality, are considered. A procedure called “Feature Selective Validation” that has received increasing attention in the electromagnetic compatibility community as a means of comparing data sets will be summarized. The overall goal is to characterize, compare, and validate EM modeling codes in ways most relevant to the end user.
The magnitude of the finely sampled induced tangential electric field along the axis (the current is on the surface) of a 2.5-wavelength, 50-segment wire 10⁻³ wavelengths in radius modeled using NEC. For the antenna case (the solid line) the two 20-V/m source segments are obvious, as are the other 48 match points (the solid circles) whose values are generally on the order of 10⁻¹³ or less. For the scattering problem, the scattered E-field (the dashed line) is graphically indistinguishable from the incident 1-V/m excitation except near the wire ends. The IEMF and far-field powers for the antenna are 1.257×10⁻² W and 1.2547×10⁻² W, respectively. For the scattering problem, the corresponding powers are 5.35×10⁻⁴ W and 5.31×10⁻⁴ W.
Two Novel Approaches to Antenna-Pattern Synthesis
The design of linear arrays that produce a desired radiation pattern, i.e., the pattern-synthesis problem, continues to be of interest, as demonstrated by the number of articles still being published on this topic. A wide variety of approaches have been developed to deal with this problem, of which two are examined here. One of them, a matrix-based method, begins with a specified set of element currents for a chosen array geometry. A convenient choice, for example, is for all of the element currents to be of unit amplitude. Given its geometry and currents, an initial radiation pattern for the array can be computed. A matrix is then constructed whose coefficients are the contributions each element current makes to the various lobe maxima of this initial radiation pattern. Upon forming the product of the inverse of this matrix with a vector whose entries are the desired amplitudes of each maximum in the radiation pattern to be synthesized, a second set of element currents is obtained. The lobe maxima of the pattern that this second set of element currents generates usually shift somewhat in angle relative to those of the initial pattern, while their amplitudes will also not match those specified. The process is therefore repeated as an iterative sequence of element-current and pattern computations. When the locations of the lobe maxima no longer change in angle and their amplitudes converge to the values specified, the synthesis is complete. Results from this approach are demonstrated for several patterns, an example of which follows below.
The pattern of a 15-element array synthesized to have a pattern increasing monotonically in 5 dB steps from left to right.
The second approach is based on a pole-residue model for an array whose element locations (the poles) and currents (the residues) are developed from samples of the specified pattern. One way of solving for the poles and residues is provided by Prony’s method, and another is the matrix-pencil procedure. However found, the spacing between the array elements derived using such tools can in general be non-uniform, a potential advantage in reducing the problem of grating lobes. There are three parameters that need to be chosen for the pattern sampling: 1) the number of poles in the initial array model, for each of which two pattern samples are required; 2) the spacing of the pattern samples themselves, which are required to be in equal steps of u = cos θ, with θ the observation angle from the array axis; and 3) the total pattern window that is sampled. The pattern rank is an important parameter, as it establishes the minimum number of elements that are needed for the array, and can be determined from the singular-value spectrum of the matrix developed from the pattern samples. The pole-residue approach is summarized, and various examples of its use are demonstrated.
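The pole-residue extraction can be sketched in a few lines of Python. This is a hedged illustration, not the lecturer's code: the three-element array (positions `d`, currents `cur`) and the sampling step `du` are invented for the demo. Prony's linear-prediction step recovers the poles, whose angles give the (non-uniform) element positions, and a Vandermonde least-squares solve then recovers the residues (currents):

```python
import numpy as np

# Hypothetical 3-element array: positions d_k in wavelengths, currents I_k.
d = np.array([0.0, 0.45, 1.05])
cur = np.array([1.0, 0.8, 0.6])

# Pattern samples in equal steps du of u = cos(theta):
#   y_m = sum_k I_k * exp(j*2*pi*d_k * m*du), i.e. poles z_k = exp(j*2*pi*d_k*du)
du, M = 0.2, 8
m = np.arange(M)
y = (cur * np.exp(2j * np.pi * np.outer(m * du, d))).sum(axis=1)

# Prony's method: linear prediction y[n] = -(c1*y[n-1] + ... + cp*y[n-p]);
# the poles are then the roots of z^p + c1*z^(p-1) + ... + cp.
p = 3
A = np.column_stack([y[p - k : M - k] for k in range(1, p + 1)])
c, *_ = np.linalg.lstsq(A, -y[p:], rcond=None)
z = np.roots(np.concatenate(([1.0], c)))

idx = np.argsort(np.angle(z))                    # order poles by angle
d_est = np.angle(z[idx]) / (2 * np.pi * du)      # recovered element positions
V = np.vander(z[idx], M, increasing=True).T      # V[m, k] = z_k^m
cur_est = np.linalg.lstsq(V, y, rcond=None)[0]   # recovered currents (residues)
```

With noise-free samples the recovered positions and currents match the originals to machine precision; with measured or truncated pattern data, the matrix-pencil procedure mentioned above is typically the more robust choice.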
Some Computational “Tricks of the Trade”
Numerical computations have become ubiquitous in today’s world of science and engineering, not least of which is the area of what has come to be called computational electromagnetics. Students entering the electromagnetics discipline are expected to have developed a working acquaintance with a variety of numerical methods and models at least by the time they reach graduate studies, if not earlier in their undergraduate education. Most will have obtained some experience with the broader issues involved in numerically solving differential and integral equations. However, there are a variety of specialized numerical procedures that contribute to the implementation and use of numerical models that are less well known, and which form the basis for this lecture. Among such procedures are:
1) Acceleration techniques (e.g., Shanks' method, Richardson extrapolation, Kummer's method) that enable estimating the value of an infinite series or integral using many fewer terms;
2) Adaptive techniques such as one based on Romberg quadrature that permit more efficient numerical evaluation of integrals;
3) Model-based techniques that can reduce the number of samples needed to estimate a transfer function or radiation pattern;
4) “Backward” recursion that develops classical functions from noise together with an auxiliary condition; and
5) Ramanujan’s modular-function formula for Pi, whose accuracy increases quartically with each increase in the computation order.
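As one hedged illustration of item 2 (a generic textbook sketch, not code from the lecture), Romberg quadrature combines the trapezoidal rule with Richardson extrapolation, reusing all previously computed function samples at each refinement:

```python
import math

def romberg(f, a, b, kmax=5):
    # R[k][j]: trapezoid rule with 2^k panels, Richardson-extrapolated j times.
    R = [[0.0] * (kmax + 1) for _ in range(kmax + 1)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for k in range(1, kmax + 1):
        h *= 0.5
        # Refine the trapezoid estimate by sampling only the new midpoints.
        R[k][0] = 0.5 * R[k - 1][0] + h * sum(
            f(a + (2 * i - 1) * h) for i in range(1, 2 ** (k - 1) + 1))
        for j in range(1, k + 1):
            # Each extrapolation cancels the next term of the h^2, h^4, ... error series.
            R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1)
    return R[kmax][kmax]

val = romberg(math.sin, 0.0, math.pi)   # exact integral is 2
```

With only 33 function evaluations the extrapolated result is accurate to roughly ten digits, whereas the plain trapezoid rule at the same cost retains an error on the order of 10^-3.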
This lecture will survey some of these procedures from the perspective of their applicability to computational electromagnetics. A specific example of an acceleration technique is the Leibniz series for pi, which provides N-digit accuracy only after summing approximately 10^N terms. Shanks' method, on the other hand, provides N-digit accuracy after N terms of the Leibniz series, as shown in the triangle array below.
3.466666666 3.133333333 3.142105263
2.895238095 3.145238095 3.141450216 3.141599357
3.339682539 3.139682539 3.141643324 3.141590860 3.141592714
2.976046176 3.142712843 3.141571290 3.141593231 3.141592637 3.141592654
3.283738484 3.140881349 3.141602742 3.141592438 3.141592659
3.017071817 3.142071817 3.141587321 3.141592743
3.252365935 3.141254824 3.141595655
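The triangle above can be reproduced with a short script (a sketch written for this summary, not the lecturer's code): repeatedly applying the Shanks transform to the Leibniz partial sums converges to pi far faster than the sums themselves do.

```python
import math

def leibniz(n):
    # n-term partial sum of pi = 4*(1 - 1/3 + 1/5 - 1/7 + ...)
    return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(n))

def shanks(s):
    # e(S_n) = S_{n+1} - (S_{n+1} - S_n)^2 / ((S_{n+1} - S_n) - (S_n - S_{n-1}))
    return [s2 - (s2 - s1) ** 2 / ((s2 - s1) - (s1 - s0))
            for s0, s1, s2 in zip(s, s[1:], s[2:])]

S = [leibniz(n) for n in range(1, 12)]   # S_1 .. S_11: error decays only like 1/n
cols = [S]
while len(cols[-1]) >= 3:                # re-apply the transform until one value remains
    cols.append(shanks(cols[-1]))
# cols[5][0] is the five-times-transformed value 3.141592654 from the triangle,
# accurate to ~10 digits from only 11 terms of the series.
```

Each column of `cols` corresponds to one column of the triangle; the raw partial sum S_11 is still off by about 0.09, while the repeated transform of the same data agrees with pi to better than 10^-7.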
A Personal Retrospective on 50 Years of Involvement in Computational Electromagnetics
This presentation briefly reviews some selected aspects of the evolution and application of the digital computer to electromagnetic modeling and simulation from a personal perspective. First considered are some of the major historical developments in computers and computation, and projections for future progress. Described next are some of the author’s personal experiences in computer modeling and electromagnetics resulting from his research activities in industry, government laboratories and academia dating from introduction of the IBM 7094. Some aspects of the impact that computer modeling has had more generally on the discipline of computational electromagnetics (CEM) is then summarized. Issues of potential interest not only to CEM but also to scientific computing in general are then briefly considered. These include: 1) verification and validation of modeling codes and their outputs; 2) the importance of statistics and visualization as computer models become larger and more complex; and 3) some issues related to first-principles or micro-modeling and reduced-order or macro-modeling of physical observables and their connection with signal processing. Various illustrative examples are shown to demonstrate some of these issues with the talk concluding with a few personal remarks.
The growth of computer speed in FLOPs/sec since 1953. The cross marks the time of the author’s first involvement in computational electromagnetics.
Evolution of the Digital Computer and Computational Electromagnetics
The development and evolution of the digital computer is a fascinating and still-unfolding story. This presentation briefly surveys this fascinating topic, beginning with the origin of numbers and mathematics and concluding with present and anticipated future capabilities of computers in terms of their impact on computational electromagnetics (CEM). Numerous intellectual and technological breakthroughs over millennia have contributed to the current state of the art. The development of arithmetic and computing might be said to begin with an ability to count. The first “recorded” number that has been found, a slash mark on the fibula of a baboon apparently signifying the number 1, is about 20,000 years old. Calculating tools and number symbols eventually followed some 15,000 years later, when the Babylonians invented the abacus as the first calculating tool at about the same time that the Egyptians introduced the first known symbols for numbers using a base-10 system. The number zero appeared about 500 CE in India, with fractions and negative numbers coming a little later, as did the Arabic number system, also credited to India.
It was about a thousand years later that the appearance of the first computational device beyond the abacus occurred, the “Pascaline” invented by Pascal for addition and subtraction in 1642. Leibniz added the capability for multiplication and division in 1671 with the “Leibniz wheel” which is still used in electromechanical calculators. Related computational developments include the invention by de Vaucanson of the punched wooden card for controlling a special loom that was later perfected by Jacquard. Babbage proposed his “difference” engine for calculating tables in 1822, followed in 1833 by his “analytical” engine, the first programmable computer. Punched cards were first used in a numerical setting by Hollerith for the 1890 US census with his company becoming part of IBM in 1911. The light bulb and the “Edison effect” began the electronics revolution with Fleming’s and De Forest’s work leading to the first triode in 1906.
The computer revolution began in earnest in the 1930s with a series of electromechanical computers due to Vannevar Bush at MIT, Konrad Zuse in Germany, and Howard Aiken with the IBM Mark 1. The latter was used during World War II for computing artillery-firing tables. The first all-electric computer was built at Iowa State College by Atanasoff and Berry in 1939, and the first all-electronic computer, Colossus, was developed in 1943-1944 in Britain for code breaking at Bletchley Park. A series of “AC” or “automatic computers” followed, such as ENIAC, EDSAC, ILLIAC I, ORDVAC, EDVAC and UNIVAC. The first transistor-based system, the IBM 7090, was introduced in 1959, with the IBM 7094 using ferrite-core memory following a few years later.
The onset of CEM can probably be said to date to the 1940s, using electromechanical calculators. These were still being used as late as 1960 at the Radiation Laboratory at the University of Michigan for computing tables of special functions and related quantities. The IBM 704 and 7094 soon took over this role, which the author exploited in developing a computational thesis from 1963 to 1965. A special issue of the IEEE Proceedings in 1965 highlighted EM computations, with the moment method popularized by the Harrington book in 1968. What were once considered “big” problems in the 1960s, using an integral-equation model having 200 or so unknowns, have expanded to millions of unknowns now, even on personal computers, with an expanding set of tools and models. The growth rate in performance from the UNIVAC until now has been one of unparalleled progress, by a factor of 10 every 5 years. If this were to continue until 2053, computing speed will have grown to ~10²³ FLOPS, ensuring a commensurate impact on CEM.
From the abacus to the IBM 7094 and beyond.
Since earning his PhD in Electrical Engineering at the University of Michigan, E. K. Miller has held a variety of government, academic and industrial positions. These include 15 years at Lawrence Livermore National Laboratory where he spent 7 years as a Division Leader, and 4+ years at Los Alamos National Laboratory from which he retired as a Group Leader in 1993. His academic experience includes holding a position as Regents-Distinguished Professor at Kansas University and as Stocker Visiting Professor at Ohio University. Dr. Miller wrote the column “PCs for AP and Other EM Reflections” for the AP-S Magazine from 1984 to 2000. He received (with others) a Certificate of Achievement from the IEEE Electromagnetic Compatibility Society for Contributions to Development of NEC (Numerical Electromagnetics Code) and was a recipient (with others) in 1989 of the best paper award given by the Education Society for “Computer Movies for Education.”
He served as Editor or Associate Editor of IEEE Potentials Magazine from 1985 to 2005 for which he wrote a regular column “On the Job,” and in connection with which he was a member of the IEEE Technical Activities Advisory Committee of the Education Activities Board and a member of the IEEE Student Activities Committee. He was a member of the 1992 Technical Program Committee (TPC) for the MTT Symposium in Albuquerque, NM, and Guest Editor of the Special Symposium Issue of the IEEE MTT Society Transactions for that meeting. In 1994 he served as a Guest Associate Editor of the Optical Society of America Journal special issue “On 3 Dimensional Electromagnetic Scattering.” He was involved in the beginning of the IEEE Magazine "Computing in Science and Engineering" (originally called Computational Science and Engineering) for which he has served as Area Editor or Editor-at-Large. Dr. Miller has lectured at numerous short courses in various venues, such as Applied Computational Electromagnetics Society (ACES), AP-S, MTT-S and local IEEE chapter/section meetings, and at NATO Lecture Series and Advanced Study Institutes.
Dr. Miller edited the book "Time-Domain Measurements in Electromagnetics", Van Nostrand Reinhold, New York, NY, 1986 and was co-editor of the IEEE Press book Computational Electromagnetics: Frequency-Domain Moment Methods, 1991. He was organizer and first President of the Applied Computational Electromagnetics Society (ACES) for which he also served two terms on the Board of Directors. He served a term as Chairman of Commission A of US URSI and is or has been a member of Commissions B, C, and F, has been on the TPC for the URSI Electromagnetic Theory Symposia in 1992 and 2001, and was elected as a member of the US delegation to several URSI General Assemblies. He is a Life Fellow of IEEE from which he received the IEEE Third Millennium Medal in 2000 and is a Fellow of ACES. His research interests include scientific visualization, model-based parameter estimation, the physics of electromagnetic radiation, validation of computational software, and numerical modeling about which he has published more than 150 articles and book chapters. He is listed in Who's Who in the West, Who's Who in Technology, American Men and Women of Science and Who's Who in America.
Dr. Sudhakar Rao
Technical Fellow, Engineering & Global Products Division
Northrop Grumman Aerospace Systems
1 Space Park Drive, Mail Stop: ST70AA/R9
Redondo Beach, CA 90278, USA
Advanced Antenna Systems for Satellite Communication Payloads
Recent developments in the areas of antenna systems for FSS, BSS, PCS, & MSS satellite communications will be discussed. System requirements that drive the antenna designs will be presented initially. Advanced antenna system designs for contoured beams, multiple beams, and reconfigurable beams will be presented. Shaped reflector antenna designs, multi-aperture reflector antennas for multiple beams, multi-band reflector antennas, reconfigurable antennas, phased array systems, and lens antennas will be discussed in detail. Design examples of direct broadcast satellites (DBS) covering national and local channels will be given. Topics such as antenna designs for high capacity satellites, large deployable mesh reflector designs, low PIM designs, and power handling issues will be included. High power test methods for the satellite payloads will be addressed. Future trends in the satellite antennas will be discussed. At the end of this talk, engineers will be exposed to typical requirements, designs, hardware, and test methods for various satellite antenna designs.
Feed Elements and Feed Assemblies for Space Applications
This talk presents various types of feed elements and feed assemblies used for space applications. The first part of the talk discusses feed elements, including feeds for reflectors, radars, phased arrays, global horns, and TT&C omni-coverage feeds. Typical radiation patterns, scan performance, and design constraints will be presented along with hardware examples. The second part deals with the feed networks behind the radiating elements, including OMTs, polarizers, filters/diplexers, combiners/dividers, etc. Integrated design, analysis, and manufacturing methods will be discussed. High-power and PIM analysis and test methods will be covered during the talk. Recent advances in feed-assembly design for low-loss and low cross-polar applications will be presented with examples. Feed elements suitable for array antennas and phased array systems for space applications will be discussed, including practical examples.
Sudhakar K. Rao received the B.Tech degree in electronics & communications from Jawaharlal Nehru Technological University, Warangal, in 1974, the M.Tech in radar systems engineering from the Indian Institute of Technology, Kharagpur, in 1976, and the Ph.D. in electrical engineering from the Indian Institute of Technology, Madras, in 1980. During 1976-1977 he worked as a Technical Officer at Electronics Corporation of India Limited, Hyderabad, on large reflector antennas for LOS and TROPO microwave links, and during 1980-1981 he worked at the Electronics and Radar Development Establishment, Bangalore, as a Senior Scientist and developed phased array antennas for airborne applications. He was a post-doctoral fellow at the University of Trondheim, Norway, during 1981-1982 and a research associate at the University of Manitoba during 1982-1983. During 1983-1996 he worked at Spar Aerospace Limited, Montreal, Canada, as a Staff Scientist and developed advanced antennas for satellite communications. From 1996 to 2003 he worked as Chief Scientist/Technical Fellow at Boeing Satellite Systems and developed multiple beam antennas and reconfigurable beam payloads for commercial and military applications. During 2003-2010 he worked as a Corporate Senior Fellow at Lockheed Martin Space Systems and developed antenna payloads for fixed satellite, broadcast satellite, and personal communication satellite services. He is currently a Technical Fellow at Northrop Grumman Aerospace Systems, Redondo Beach, CA, working on advanced antenna systems for space and aircraft applications. He has authored over 160 technical papers and holds 41 U.S. patents. He co-edited the three-volume Handbook of Reflector Antennas and Feed Systems, published in June 2013 by Artech House.
Dr. Rao became an IEEE Fellow in 2006 and a Fellow of IETE in 2009. He has received several awards and recognitions, including Boeing's Special Invention Award in 2002 for a series of patents on satellite antenna payloads, Boeing's Technical Achievement Award in 2003, Lockheed Martin's Inventor of Technology Award in 2005 and 2007, the IEEE Benjamin Franklin Key Award in 2006, Delaware Valley Engineer of the Year in 2008, and Asian American Engineer of the Year in 2008. He received the IEEE Judith A. Resnik Award, a Technical Field Award, in 2009 for pioneering work in aerospace engineering.
Dr. Karl F. Warnick
Professor, Department of Electrical and Computer Engineering
Brigham Young University
Provo, UT 84602
New IEEE Standard Terms and Figures of Merit for Active Antenna Arrays
Active multi-antenna systems and antenna arrays are currently of great interest for applications such as high-sensitivity astronomical aperture phased arrays and phased array feeds, multiple input multiple output (MIMO) communications systems, digitally beamformed arrays, steered beam antennas for passive remote sensing, and arrays for mobile, airborne, and maritime satellite communications. The standard definitions for gain, radiation efficiency, antenna efficiency, and noise temperature are directly applicable only to receiving antennas that can be operated as transmitters. For active receiving arrays with complex receiver chains, nonreciprocal components in the beamforming network, or digitally sampled and processed output signals, existing transmit-based antenna terms such as gain and radiation efficiency cannot be directly applied. Using the reciprocity principle to obtain an equivalence between the total power radiated by a transmitting antenna and the noise power at the output of a receiving antenna, a new set of figures of merit has been developed for active array receivers. These figures of merit have been formulated into a set of new antenna terms, including isotropic noise response, active antenna available gain, active antenna available power, receiving efficiency, and noise matching efficiency, together with additions to the existing definitions for the noise temperature of an antenna and effective area. The terms were reviewed by the IEEE Antenna Definitions Working Group and the IEEE Standards Association and are included in the recently published IEEE Std 145-2013, Standard for Definitions of Terms for Antennas. The last version of the standard was published 20 years ago, so this represents a major milestone for the worldwide antenna community.
The presentation will explain the theoretical basis for the new antenna terms, show their equivalence to existing definitions in the passive case, and give example applications for which the figures of merit have impacted the development of new types of array antenna technologies.
Network Theory, Antenna Arrays, Noise, Mutual Coupling, and Array Signal Processing
Network theory provides a theoretical bridge between array antenna models and the techniques of array signal processing. The antenna community often takes a simplistic approach to array beamforming and processing algorithms that lags far behind the state of the art in the signal processing community. Similarly, the signal processing community usually employs simplistic physical models and assumptions that do not accurately represent key electrical effects that occur in realistic multiport antenna systems. To bridge the gap between the antenna and signal processing communities, we present a network theory treatment of phased arrays and multiantenna systems that brings concepts such as mutual coupling, impedance matching, electronics noise, thermal noise, and antenna losses into a unified theoretical framework. In particular, the network point of view demystifies antenna noise and mutual coupling effects and provides a simple way to understand and work with the interactions between nearby elements in an antenna array. This theoretical framework provides a powerful set of modeling tools that can be used to design, optimize, and characterize antenna systems for multiple input multiple output (MIMO) communications systems, digitally beamformed arrays, and steered beam antennas for remote sensing and satellite communications.
Ultra-high Efficiency Planar Phased Arrays for Satellite Communications
Aperture phased arrays and phased array feeds (PAFs) are a promising technology for sensing and communications applications requiring electronic beamsteering and large signal collecting area, but current technologies are too costly and inefficient for widespread use in satellite communications. To meet strict efficiency and sensitivity requirements, existing satellite communications terminals typically use reflector antennas with horn feeds. Because the microwave sky is quite cool, small improvements in antenna efficiency lead to large gains in the key figure of merit for a satellite receiver, signal to noise ratio. Horn antennas inherently have a high radiation efficiency, and off-the-shelf low noise block downconverter feeds (LNBFs) cost only a few dollars to manufacture, yet have been so carefully optimized that further improving signal quality would require cryogenic cooling. These considerations have motivated significant recent interest in research aimed at achieving low cost, high efficiency phased array feed receiver systems. To meet this combination of high performance requirements and low cost, we have used computational design optimization to develop efficient, low noise planar array feed antennas that can be fabricated using standard microwave PCB techniques. This presentation gives an overview of work on passive, fixed beam array feeds with linear and circular polarization, including the first demonstration of planar phased arrays with performance comparable to traditional horn antennas, and active beam steering feeds that adaptively track a signal source as the antenna moves. This research opens up new possibilities for phased arrays in terms of low cost, high efficiency, and performance for satellite communications applications.
Research Frontiers in Phased Array Antennas for Radio Astronomy
For nearly 75 years, the challenge of detecting extremely weak signals from deep space has been a driving force in antenna theory, receiver technology, and signal processing. The astronomical community is currently working to develop dense aperture phased arrays and phased array feeds, which offer a significantly larger field of view than conventional single-pixel telescopes and will enable new astronomical observations such as rapid sky surveys, radio transient searches, and tests of fundamental physics. Because sensitivity and stability requirements for radio telescopes far exceed those of other applications such as wireless communications, efforts to develop astronomical phased arrays have opened up new and exciting challenges for antenna design, microwave systems, and multichannel signal processing. Work at BYU and elsewhere over the last few years has uncovered many fundamental research questions. How should antenna gain and other figures of merit be defined for an active phased array? What impedance should array elements be designed for to maximize SNR? How does mutual coupling affect antenna performance? What is the best achievable efficiency with a phased array? Can phased arrays be as sensitive as a state-of-the-art horn antenna with liquid helium-cooled electronics? How can computational electromagnetics tools be combined with microwave network system models to optimize an entire system including phased array antenna elements, receiver electronics, and signal processing? This presentation will highlight recent progress in these areas, including phased array antenna figures of merit, high efficiency antenna element design, active impedance matching, noise minimization for wideband arrays, phased array receiver characterization, measurement techniques, design optimization methods, array calibration, beamforming algorithms, and polarimetric phased array antennas.
Experimental results and hardware development supported by these theoretical advances will also be highlighted, including a digitally beamformed cryogenic phased array feed for the world’s largest fully steerable antenna, the Green Bank Telescope.
Karl F. Warnick (SM’04, F’13) received the B.S. degree (magna cum laude) with University Honors and the Ph.D. degree from Brigham Young University (BYU),
Dr. Warnick is a Fellow of the IEEE and a recipient of a National Science Foundation Graduate Research Fellowship, the Outstanding Faculty Member Award for Electrical and Computer Engineering, the BYU Young Scholar Award, the Ira A. Fulton College of Engineering and Technology Excellence in Scholarship Award, and the BYU Karl G. Maeser Research and Creative Arts Award. He has served the Antennas and Propagation Society as a member and co-chair of the Education Committee and as Senior Associate Editor of the IEEE Transactions on Antennas and Propagation. Dr. Warnick has been a member of the Technical Program Committee for the International Symposium on Antennas and Propagation for several years and served as Technical Program Co-Chair for the Symposium in 2007.
Prof. Andrea Alu
The University of Texas at Austin
Department of Electrical and Computer Engineering
201 Speedway, ENS 431
Austin, TX 78712
Metamaterials and Plasmonics to Tailor and Enhance Wave-Matter Interactions
Metamaterials and plasmonics offer unprecedented opportunities to tailor and enhance the interaction of waves with matter. In this lecture, I will discuss our recent progress and research activity in these areas, showing how suitably tailored meta-atoms and combinations of them can open new avenues to manipulate and control electromagnetic waves. I will discuss our recent theoretical and experimental results involving metamaterial and/or plasmonic nanostructures, including the concepts of magnetic-based Fano resonances in nanoclusters, modularized optical nanocircuits, nanoantennas and metasurfaces to control light propagation and radiation, enhanced artificial magnetism and chirality in properly tailored metamaterials, parity-time symmetric metamaterials, and giant nonlinearities and nonreciprocity using suitably designed meta-atoms. Physical insights into these exotic phenomena and their impact on technology and new electromagnetic devices will be discussed during the talk.
Cloaking and Invisibility Using Metamaterials and Metasurfaces
In this lecture, I will discuss our recent progress and research activity in using metamaterial covers to suitably tailor the scattering of passive objects, drastically suppressing their overall detectability. I will focus on two approaches we have pioneered in the past years, the plasmonic cloaking and mantle cloaking techniques, respectively based on bulk plasmonic metamaterials and ultrathin metasurfaces. I will show the theoretical concepts at the basis of these approaches and our experimental results at radio frequencies, which represent the first experimental verification of cloaking for 3D free-standing objects. I will also discuss advanced concepts, such as the ultimate bounds on realizing ‘invisible sensors’, the general bounds and potentials of cloaking and invisibility in terms of bandwidth and overall scattering reduction, and ways to overcome these limitations using active, non-Foster cloaks.
Homogenization of Electromagnetic Metamaterials
The proper modeling and homogenization of metamaterials is a crucial task to be able to apply them in practical devices and technology. This lecture will provide an overview and introduction to the theoretical aspects and challenges in the homogenization of metamaterials. After outlining the popular approaches to this problem, I will discuss the challenges and difficulties that metamaterials introduce in their rigorous electromagnetic homogenization. I will then review the recent advances in ‘homogenization theory’ introduced in my group in the past few years, and highlight the advantages of this approach with numerical and practical examples. The relevant issues of causality and passivity of the effective parameters of metamaterials will be discussed in detail and applied to practical electromagnetic problems of general interest.
Giant Non-Reciprocity / Non-Linearity Using Metamaterials
In this lecture, I will describe our recent theoretical and experimental advances in boosting the nonreciprocal and nonlinear response of subwavelength meta-molecules and arrays of them, applied to radio waves, light, and/or sound. I will first introduce the general concept of angular-momentum-biased metamaterials, which support the analog of the Zeeman effect in ferromagnetic molecules, but without relying on any magnetic effect. I will show that it is possible to induce a large nonreciprocal response at the subwavelength scale by splitting the degenerate modes supported by a resonant meta-molecule through an angular-momentum bias, in the form of circulating media or azimuthally symmetric spatiotemporal modulation. In this way, I will discuss how large nonreciprocal effects may be obtained in fully integrated designs that do not require magnetic bias, experimentally demonstrated for sound and radio waves, and concepts to extend these effects to infrared and nanophotonic systems. Within the same thrust, I will also discuss our recent theoretical and experimental progress in boosting the naturally weak nonlinear response using metamaterials. We have recently pursued two promising avenues in this direction: the use of extreme-parameter metamaterials and the suitable engineering of combined electronic and photonic transitions in suitably designed metasurfaces. These concepts are shown to produce orders-of-magnitude enhancement in the efficiency of various nonlinear optical processes, including second-harmonic generation, phase conjugation, and frequency mixing, while also relaxing the need for phase matching.
Andrea Alù is an Associate Professor and David & Doris Lybarger Endowed Faculty Fellow in Engineering at the University of Texas at Austin. He received the Laurea, MS, and PhD degrees from the University of Roma Tre, Rome, Italy, in 2001, 2003, and 2007, respectively. From 2002 to 2008, he worked periodically at the University of Pennsylvania, Philadelphia, PA, where he also developed significant parts of his PhD and postgraduate research. After spending one year as a postdoctoral research fellow at UPenn, in 2009 he joined the faculty of the University of Texas at Austin. He is also a member of the Applied Research Laboratories and of the Wireless Networking and Communications Group at UT Austin.
He is the co-author of an edited book on optical antennas, over 230 journal papers, 400 conference papers, and over 20 book chapters. His current research interests span a broad range of areas, including metamaterials and plasmonics, electromagnetics, optics and photonics, scattering, cloaking and transparency, nanocircuit and nanostructure modeling, miniaturized antennas and nanoantennas, and RF antennas and circuits.
Dr. Alù is currently on the Editorial Board of Scientific Reports and Advanced Optical Materials, and serves as Associate Editor of five journals, including the IEEE Antennas and Wireless Propagation Letters and Optics Express. In the past few years he has guest edited special issues for the IEEE Journal of Selected Topics in Quantum Electronics, Optics Communications, Metamaterials, and Sensors on a variety of topics involving metamaterials, plasmonics, optics, and electromagnetic theory. He has received several awards for his research activity, including the OSA Adolph Lomb Medal (2013), the IUPAP Young Scientist Prize in Optics (2013), the IEEE MTT-S Outstanding Young Engineer Award (2014), the Franco Strazzabosco Award for Young Engineers (2013), the URSI Isaac Koga Gold Medal (2011), the SPIE Early Career Investigator Award (2012), an NSF CAREER Award (2010), the AFOSR and DTRA Young Investigator Awards (2010, 2011), and Young Scientist Awards from the URSI General Assembly (2005) and URSI Commission B (2004, 2007, and 2010). His students have also received several awards, including student paper awards at IEEE Antennas and Propagation Symposia and at the Metamaterials conference series. He was elected an APS Outstanding Referee in 2013, has served as an OSA Traveling Lecturer since 2010 and as chair of the IEEE joint AP-S and MTT-S chapter for Central Texas since 2011, and is a full member of URSI, a Fellow of the IEEE and of OSA, and a Senior Member of SPIE.
Prof. Jianming Jin
Y. T. Lo Chair Professor, University of Illinois at Urbana-Champaign, Urbana, IL, USA
Jian-Ming Jin is Y. T. Lo Chair Professor in Electrical and Computer Engineering and Director of the Electromagnetics Laboratory and Center for Computational Electromagnetics at the University of Illinois at Urbana-Champaign. He has authored and co-authored over 240 papers in refereed journals and over 22 book chapters. He has also authored The Finite Element Method in Electromagnetics (Wiley, 1st ed. 1993, 2nd ed. 2002, 3rd ed. 2014), Electromagnetic Analysis and Design in Magnetic Resonance Imaging (CRC, 1998), and Theory and Computation of Electromagnetic Fields (Wiley, 2010), and co-authored Computation of Special Functions (Wiley, 1996), Finite Element Analysis of Antennas and Arrays (Wiley, 2008), and Fast and Efficient Algorithms in Computational Electromagnetics (Artech, 2001). His name has appeared over 20 times in the University of Illinois’s List of Excellent Instructors. He was elected by ISI as one of the world’s most cited authors in 2002. Dr. Jin has been a Fellow of the IEEE since 2000, received the IEEE AP-S Chen-To Tai Distinguished Educator Award in 2015, and was a recipient of the 1994 NSF Young Investigator Award and the 1995 ONR Young Investigator Award. He also received the 1997 Xerox Junior and the 2000 Xerox Senior Research Awards from the University of Illinois, and was appointed as the first Henry Magnuski Outstanding Young Scholar in 1998 and later as a Sony Scholar in 2005. He was appointed as a Distinguished Visiting Professor at the Air Force Research Laboratory in 1999. He received the Valued Service Award and the Technical Achievement Award from the Applied Computational Electromagnetics Society in 1999 and 2014, respectively.
The Fascinating World of Computational Electromagnetics
As an art and science for solving Maxwell’s equations, computational electromagnetics is a fascinating area for research and engineering application. Over the past five decades, computational electromagnetics has evolved into one of the most important fields in the general area of electromagnetics. The importance of computational electromagnetics is due to the predictive power of Maxwell’s theory: it can predict design performance or experimental outcomes if Maxwell’s equations are solved correctly. Moreover, Maxwell’s theory, which governs the basic principles of electricity and magnetism, is extremely pertinent in many engineering and scientific technologies such as radar, microwave and RF engineering, remote sensing, geoelectromagnetics, bioelectromagnetics, antennas, wireless communication, optics, and high-frequency circuits. Furthermore, Maxwell’s theory is valid over a broad range of frequencies, from statics to optics, and over a wide range of length scales, from subatomic to intergalactic. Because of this, computational electromagnetics is a very important subject that has already impacted, and will continue to impact, many engineering and scientific technologies. In this presentation, we will review the past progress and current status of computational electromagnetics, and discuss its future challenges and research directions. We will first give an overview of computational electromagnetics methods and then use a variety of examples to demonstrate their applications.
Note: This talk is aimed at senior undergraduate and beginning graduate students.
Domain Decomposition for Finite Element Analysis of Large-Scale Electromagnetic Problems
Numerical discretization of large-scale electromagnetic problems often results in a large system of linear equations involving millions or even billions of unknowns, whose solution is very challenging even with the most powerful computers available today. In this presentation, we will discuss domain decomposition methods for finite element analysis of such large-scale electromagnetic problems. We will begin with a review of the basic ideas of the Schwarz and Schur complement domain decomposition methods, which include the alternating and additive overlapping Schwarz methods, the nonoverlapping optimized Schwarz method, and the primal, dual, and dual-primal Schur complement domain decomposition methods. We will then present three of the most robust and powerful nonoverlapping domain decomposition methods for solving Maxwell’s equations. The first is the dual-primal finite element tearing and interconnecting (FETI-DP) method based on one Lagrange multiplier for static, quasistatic, and low-frequency electromagnetic problems. The second is the FETI-DP method based on two Lagrange multipliers for more challenging high-frequency electromagnetic problems. The third is the optimized Schwarz method based on higher-order transmission conditions. We will discuss the relationships among the three methods and their advantages and disadvantages, and present many highly challenging problems to demonstrate the power and capabilities of domain decomposition methods.
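The subdomain-solve-and-correct structure shared by these methods can be sketched on a toy problem. The following is my own minimal illustration of the alternating overlapping Schwarz iteration applied to a 1D Poisson model problem (production Schwarz and FETI-DP solvers for Maxwell's equations are far more elaborate, but the loop structure is the same):

```python
import numpy as np

# Toy illustration (not from the talk) of the alternating overlapping Schwarz
# method on the 1D Poisson problem -u'' = f with u(0) = u(1) = 0, discretized
# by finite differences: repeatedly solve local Dirichlet problems on two
# overlapping subdomains and use each local solution to correct the global one.
n = 99                              # interior grid points
h = 1.0 / (n + 1)
f = np.ones(n)                      # constant source term
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

dom1 = np.arange(0, 60)             # two overlapping subdomains
dom2 = np.arange(40, n)             # (overlap region: indices 40-59)

u = np.zeros(n)
for sweep in range(30):
    for dom in (dom1, dom2):
        r = f - A @ u               # current global residual
        # local subdomain solve, then correct u on that subdomain
        u[dom] += np.linalg.solve(A[np.ix_(dom, dom)], r[dom])

err = np.max(np.abs(u - np.linalg.solve(A, f)))   # compare to direct solve
```

With this generous overlap the iteration contracts the error geometrically, so a few dozen sweeps reproduce the direct solution to machine-level accuracy; shrinking the overlap slows convergence, which is one motivation for the optimized transmission conditions mentioned above.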
From the Finite Element Method to the Discontinuous Galerkin Time-Domain Method for Computational Electromagnetics
The past two decades have witnessed rapid development of the finite element time-domain (FETD) method for electromagnetic analysis. Today, the method has become one of the most powerful numerical techniques for simulating electromagnetic transient phenomena, performing broadband RF and microwave characterization, and modeling nonlinear electromagnetic devices. In this presentation, we will review the progress in the development of the FETD method for solving Maxwell’s equations mostly during the past ten years. If time permits, we will discuss FETD formulations, FETD analysis at very low frequencies, modeling of electrically and magnetically dispersive media, mesh truncation using perfectly matched layers and time-domain boundary integral equations, time-domain simulation of periodic structures with the Floquet absorbing boundary condition, time-domain waveguide port boundary conditions, Huygens-based domain decomposition algorithm, explicit FETD algorithms, and hybrid field-circuit simulation based on the FETD method. The second half of the presentation will be devoted to the discontinuous Galerkin time-domain (DGTD) method, which includes the motivation for its development, its relation to the FETD and finite volume time-domain (FVTD) methods, its formulation based on central and upwind fluxes, and its performance comparison with the explicit FETD methods. Throughout the presentation, we will present a variety of numerical examples to illustrate the importance and application of the topics discussed.
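The explicit leapfrog time stepping at the heart of such schemes can be sketched on a 1D scalar wave equation (my own toy stand-in, not the talk's formulations; actual FETD for Maxwell's equations uses vector basis functions and full mass/stiffness matrices):

```python
import numpy as np

# Minimal sketch of an explicit time-domain leapfrog update: the 1D scalar
# wave equation u_tt = c^2 u_xx with homogeneous Dirichlet ("PEC-like") walls.
# On a uniform mesh, lumped-mass linear finite elements reduce to this
# familiar stencil; this is only meant to show the update structure.
c, n, T = 1.0, 200, 0.4
h = 1.0 / n
dt = 0.9 * h / c                       # stable (CFL-limited) time step
x = np.linspace(0.0, 1.0, n + 1)

u_prev = np.exp(-200.0 * (x - 0.5)**2) # initial Gaussian pulse
u = u_prev.copy()                      # zero initial velocity

def laplacian(v):
    lap = np.zeros_like(v)
    lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / h**2
    return lap

for _ in range(int(T / dt)):
    # central difference in time: u_next = 2u - u_prev + (c dt)^2 u_xx
    u_next = 2.0 * u - u_prev + (c * dt)**2 * laplacian(u)
    u_next[0] = u_next[-1] = 0.0       # Dirichlet boundary conditions
    u_prev, u = u, u_next
```

After this many steps the initial pulse has split into two half-amplitude pulses traveling toward the walls; an implicit (unconditionally stable) FETD scheme would instead solve a mass-matrix system at each step, trading per-step cost for freedom from the CFL limit.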
Multiphysics Modeling in Computational Electromagnetics: Challenges and Opportunities
As computational methods for solving Maxwell's equations become mature, the time has come to tackle much more challenging multiphysics problems, which have a great range of applications in science and technology. In this presentation, we will use five examples to illustrate the nature and modeling of multiphysics problems. The first example is related to electromagnetic hyperthermia, which requires solving the electromagnetic and bio-heat transfer equations for the planning and optimization of the treatment process. The second concerns the heating problem in integrated circuits due to dissipated electromagnetic power, which requires an electrical-thermal co-simulation. The third example considers the modeling of monolithic microwave integrated circuits, which consist of both distributed and lumped circuit components. The fourth is the simulation of vacuum electronic devices using the particle-in-cell method, which solves Maxwell's equations together with the particle kinetic equation, and the last example simulates air and dielectric breakdown in high-power microwave devices by coupling electromagnetic modeling with various plasma models. With these examples, we will discuss the methodologies and some of the challenges in multiphysics modeling.
Prof. Andrea Massa
ELEctromagnetic DIAgnostics Research Center
DISI ‐ Università di Trento
Digiteo Chair@Laboratoire des Signaux et Systèmes
UMR8506 (CNRS‐CENTRALE SUPELEC‐UNIV. PARIS SUD)
www.l2s.centralesupelec.fr (ELEDIA@L2S group)
Prof. Massa received the “laurea” degree in Electronic Engineering from the University of Genoa, Genoa, Italy, in 1992 and the Ph.D. degree in EECS from the same university in 1996. From 1997 to 1999, he was an Assistant Professor of Electromagnetic Fields at the Department of Biophysical and Electronic Engineering (University of Genoa). From 2001 to 2004, he was an Associate Professor at the University of Trento. Since 2005, he has been a Full Professor of Electromagnetic Fields at the University of Trento, where he currently teaches electromagnetic fields, inverse scattering techniques, antennas and wireless communications, wireless services and devices, and optimization techniques.
At present, Prof. Massa is the director of the ELEDIA Research Center, with a staff of more than 30 researchers located in the headquarters at the University of Trento and in the offshore labs (ELEDIA@L2S within the L2S‐CentraleSupélec (Paris), ELEDIA@UniNAGA at the University of Nagasaki). Moreover, he is Adjunct Professor at Penn State University (USA) and, since December 2014, holder of a Senior DIGITEO Chair developed in co‐operation between the Laboratoire des Signaux et Systèmes in Gif‐sur‐Yvette and the Department “Imagerie et Simulation pour le Contrôle” of CEA LIST in Saclay (France), and he has been Visiting Professor at the Missouri University of Science and Technology (USA), Nagasaki University (Japan), the University of Paris Sud (France), Kumamoto University (Japan), and the National University of Singapore (Singapore).
Prof. Massa serves as Associate Editor of the “IEEE Transactions on Antennas and Propagation” and of the “International Journal of Microwave and Wireless Technologies”, and he is a member of the Editorial Board of the “Journal of Electromagnetic Waves and Applications”, a permanent member of the “PIERS Technical Committee” and of the “EuMW Technical Committee”, and an ESoA member. He has been appointed to the Scientific Board of the “Società Italiana di Elettromagnetismo (SIEm)” and elected to the Scientific Board of the Interuniversity National Center for Telecommunications (CNIT). Recently, Prof. Massa was appointed by the National Agency for the Evaluation of the University System and National Research (ANVUR) as a member of the Recognized Expert Evaluation Group (Area 09, 'Industrial and Information Engineering') for the evaluation of research at Italian universities and research centers. Moreover, he has been appointed as the Italian member of the Management Committee of the COST Action TU1208 “Civil Engineering Applications of Ground Penetrating Radar”.
His research activities are mainly concerned with direct and inverse scattering problems, propagation in complex and random media, analysis/synthesis of antenna systems and large arrays, design/applications of WSNs, cross‐layer optimization and planning of wireless/RF systems, semantic wireless technologies, material‐by‐design (metamaterials and reconfigurable‐materials), and theory/applications of optimization techniques to engineering problems (telecommunications, medicine, and biology).
Prof. Massa has authored more than 500 scientific publications, including about 270 in international journals and more than 270 in international conference proceedings, where he has presented more than 50 invited contributions. He has organized 45 scientific sessions at international conferences and has participated in several technological projects within the European framework (20 EU projects) as well as at the national and local levels with national agencies (75 projects/grants).
Inverse Problems in Electromagnetics ‐ Challenges and New Frontiers
Inverse problems arise when formulating and addressing many synthesis and sensing applications in modern electromagnetic engineering. Indeed, the objective of antenna design, microwave imaging, and radar remote sensing can be seen as that of retrieving a physical quantity (the shape of the radiating system, the dielectric profile of a device under test, the reflectivity of an area) starting from (either measured or “desired”) electromagnetic field data. Nevertheless, because of well‐known theoretical features (including ill‐posedness, non‐uniqueness, ill‐conditioning, etc.), the solution of electromagnetic inverse problems still represents a major challenge from the practical viewpoint. Indeed, developing and implementing robust, fast, effective, and general‐purpose techniques able to solve arbitrary electromagnetic inverse problems still represents a holy grail from the academic and industrial viewpoints. Accordingly, several ad‐hoc solutions (i.e., effective only for specific application domains) have been developed in recent years.
In this framework, one of the most important research frontiers is the development of inversion techniques that exploit both the information coming from the electromagnetic data and that provided by prior knowledge of the scenario, application, or device of interest. Indeed, exploiting a‐priori information to regularize the problem formulation is known to be a key asset in reducing the drawbacks of inversion processes (i.e., their ill‐posedness). However, properly introducing prior knowledge within an inversion technique is an extremely complex task, and suitable solutions are available only for specific classes of scenarios (e.g., those admitting sparseness regularization terms). The aim of this talk is to provide a broad review of the current trends and objectives in the development of innovative inversion methodologies and algorithms. Towards this end, after a review of the literature on the topic, different classes of methodologies aimed at combining prior and acquired information (possibly in an iterative fashion) will be discussed, and guidelines on how to apply the arising strategies to different domains will be provided, along with numerical/experimental results. The open challenges and future trends of the research in this area will be discussed as well.
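As a minimal sketch of how prior knowledge regularizes an inversion (my own toy example, not drawn from the talk; the forward operator, noise level, and regularization weight are all assumed), an energy prior on the unknown profile can be folded into a linear retrieval as a Tikhonov penalty:

```python
import numpy as np

# Toy prior-aided inversion (illustrative only): an ill-conditioned linear
# retrieval stabilized by a Tikhonov (energy) penalty encoding the prior that
# the unknown profile has bounded norm:
#   x_reg = argmin_x ||A x - y||^2 + alpha ||x||^2
rng = np.random.default_rng(0)

n = 50
t = np.linspace(0.0, 1.0, n)
# Smoothing (hence severely ill-conditioned) forward operator
A = np.exp(-100.0 * (t[:, None] - t[None, :])**2)

x_true = np.sin(2 * np.pi * t)                      # profile to be retrieved
y = A @ x_true + 1e-3 * rng.standard_normal(n)      # noisy measured data

# Regularized normal equations: (A^T A + alpha I) x = A^T y
alpha = 1e-4
x_reg = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

rel_err = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
```

Without the `alpha` term the tiny singular values of `A` amplify the measurement noise catastrophically; the penalty caps that amplification at the price of a small bias, which is the basic trade-off behind all the prior-exploiting strategies discussed above.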
Evolutionary Optimization for Next Generation Electromagnetic Engineering
In recent decades, thanks to growing computational capabilities, optimization techniques based on evolutionary algorithms (EAs) have received great attention and have been successfully applied to a wide range of problems in engineering and science. As a matter of fact, EAs exhibit many attractive features for dealing with large, complex, and nonlinear problems. More specifically, they do not require differentiation of the cost function, which is a “must” for gradient‐based methods. Moreover, a‐priori information can be easily introduced, usually in terms of additional constraints on the solution, and they can deal directly with real-valued unknowns as well as with coded representations (e.g., binary coding). As regards implementation, EAs can be effectively hybridized with deterministic procedures and are well suited to parallel computing.
Despite these advantages, EAs are often used as “black‐box” tools without adequate knowledge of their peculiarities and functionality. Unfortunately, neglecting the features and properties of each EA can be extremely dangerous, as predicted theoretically by the “No Free Lunch” theorem. Indeed, this theorem states that, averaged over all optimization problems, any optimization methodology performs no better than random search. Accordingly, knowledge of the specific class of optimization problem at hand is mandatory in order to choose and configure the correct EA, and thus to avoid sub‐optimal solutions and performance.
In this talk, a review of EA‐based approaches for electromagnetic engineering is presented. Starting from the theoretical framework of EAs and the state‐of‐the‐art techniques, some meaningful examples of EA‐based approaches for electromagnetics are reported to show the capabilities, but also current limitations, of such techniques. Finally, some indications on future trends of EA‐based techniques are envisaged.
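The basic mutation-and-selection loop common to the EA family can be sketched as follows (my own generic illustration on a standard test function, not one of the talk's techniques; the population sizes and mutation strength are arbitrary choices):

```python
import numpy as np

# Minimal real-coded evolutionary algorithm sketch (illustrative only):
# (mu + lambda) elitist survivor selection with Gaussian mutation, minimizing
# the Rastrigin test function (highly multimodal, global minimum 0 at origin).
rng = np.random.default_rng(1)

def cost(x):
    return 10.0 * x.shape[-1] + np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x), axis=-1)

mu, lam, dim, sigma = 10, 40, 5, 0.3
pop = rng.uniform(-5.0, 5.0, size=(mu, dim))    # random initial parents
init_best = cost(pop).min()

for gen in range(200):
    parents = pop[rng.integers(0, mu, size=lam)]                   # parent selection
    offspring = parents + sigma * rng.standard_normal((lam, dim))  # Gaussian mutation
    union = np.vstack([pop, offspring])
    pop = union[np.argsort(cost(union))[:mu]]   # keep the mu best of parents + offspring

best_cost = cost(pop[0])
```

Because the parents compete with their own offspring, the best cost never worsens (elitism); note also that the fixed `sigma` is exactly the kind of configuration choice that, per the No Free Lunch argument above, should be matched to the problem class rather than used blindly.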
Unconventional Array Design ‐ Fundamentals and Advances
Antenna arrays are a key technology in several electromagnetics application scenarios, including satellite and ground wireless communications, MIMO systems, remote sensing, biomedical imaging, radar, and radio‐astronomy. Because of their wide range of applications, the large number of degrees of freedom at hand (e.g., the type, position, and excitation of each radiating element), the available architectures (fully populated, thinned, clustered, etc.), and the possible objectives (maximum directivity, minimum sidelobes, maximum beam efficiency, etc.), the synthesis of arrays turns out to be a complex task that cannot be tackled by a single methodology.
Despite this wide heterogeneity, most synthesis approaches share a common theoretical framework which is of paramount importance for all engineers and students interested in the topic. This is also true for innovative methodologies aimed at the design of "unconventional arrays" (i.e., those based on sparse, thinned, conformal, clustered, overlapped, or interleaved architectures, in both the frequency and the time domain), which are currently receiving great attention from the academic and industrial viewpoints.
The objective of the talk is therefore, first, to provide attendees with the fundamentals of antenna array synthesis, proceeding from intuitive explanations to rigorous mathematical and methodological insights into their behavior and design. Recent synthesis methodologies aimed at "unconventional architectures" (i.e., architectures close to real applications and subject to non‐ideal operating constraints/guidelines) will then be discussed in detail, with particular emphasis on innovative layouts for very large arrays.
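One of the most basic trade-offs in the fundamentals mentioned above can be shown numerically (my own illustration; the element count, spacing, taper, and main-beam exclusion width are all assumptions): tapering the excitations of a uniform linear array lowers the sidelobes at the cost of a wider main beam.

```python
import numpy as np

# Illustrative array-factor comparison for an N-element uniform linear array
# with half-wavelength spacing: uniform excitation versus a Hamming amplitude
# taper, which trades beamwidth/directivity for lower sidelobes.
N = 16
d_over_lambda = 0.5
theta = np.linspace(0.0, np.pi, 2001)           # angle from the array axis
n = np.arange(N)

def array_factor(w):
    # AF(theta) = | sum_n w_n exp(j 2 pi (d/lambda) n cos(theta)) |, normalized
    phase = 2j * np.pi * d_over_lambda * np.outer(np.cos(theta), n)
    af = np.abs(np.exp(phase) @ w)
    return af / af.max()

def peak_sidelobe_db(af, half_beam=0.3):
    # Strongest lobe outside an assumed main-beam region around broadside
    outside = np.abs(theta - np.pi / 2) >= half_beam
    return 20.0 * np.log10(af[outside].max())

psl_uniform = peak_sidelobe_db(array_factor(np.ones(N)))
psl_tapered = peak_sidelobe_db(array_factor(np.hamming(N)))
```

The tapered pattern's peak sidelobe comes out tens of dB below the uniform one; "unconventional" layouts (sparse, thinned, clustered) attack the same trade-off through element positions rather than amplitudes.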
Compressive Sensing – Basics, State of the Art, and Advances in Electromagnetic Engineering
The widely known Shannon/Nyquist theorem relates the number of samples required to reliably retrieve a "signal" to its (spatial and temporal) bandwidth. This fundamental criterion imposes both theoretical and practical constraints in several Electromagnetic Engineering applications. Indeed, there is a relation between the number of measurements/data (the complexity of the acquisition/processing), the degrees of freedom of the field/signal (temporal/spatial bandwidth), and the retrievable information regarding the phenomena at hand (e.g., the dielectric features of an unknown object, the presence/position of damage in an array, the location of an unknown incoming signal).
The new paradigm of Compressive Sensing (CS) makes it possible to completely revisit these concepts by distinguishing the "informative content" of signals from their bandwidth. Indeed, CS theory asserts that one can recover certain signals/phenomena exactly from far fewer measurements than the Nyquist sampling rate indicates. To achieve this goal, CS relies on the fact that many natural phenomena are sparse (i.e., they can be represented by few non‐zero coefficients in suitable expansion bases), and on the use of aperiodic sampling strategies which can guarantee, under suitable conditions, perfect recovery of the information content of the signal.
Despite its recent introduction, the application of CS methodologies to Electromagnetics has already enabled several innovative design/synthesis methodologies and retrieval/diagnosis methods to be developed.
In this framework, this talk is aimed at reviewing the fundamentals of the CS paradigm, specifically focusing on the applicability conditions, requirements, and guidelines for EM applications. Moreover, it is aimed at illustrating the state‐of‐the‐art and the most recent advances in Electromagnetic Engineering (including application of CS to antenna synthesis and diagnosis, direction‐of‐arrival estimation, inverse scattering, and radar imaging), as well as at envisaging possible future research trends and challenges within CS as applied to Electromagnetics.
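The sparse-recovery idea underlying the CS paradigm can be sketched numerically (my own toy example; the random sensing matrix, sparsity level, and solver are assumptions, not the talk's methods):

```python
import numpy as np

# Toy compressive sensing sketch (illustrative only): recover a k-sparse
# signal of length n from m << n random projections via iterative
# soft-thresholding (ISTA) applied to the l1-regularized least-squares problem
#   min_x 0.5 ||A x - y||^2 + lam ||x||_1
rng = np.random.default_rng(0)

n, m, k = 200, 60, 5                            # length, measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.uniform(1.0, 3.0, k) * rng.choice([-1.0, 1.0], size=k)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
y = A @ x_true                                  # far fewer samples than n

lam = 0.05
L = np.linalg.norm(A, 2)**2                     # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(3000):
    z = x - A.T @ (A @ x - y) / L               # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Although only 60 of the 200 Nyquist-rate samples are taken, the l1 penalty drives the reconstruction onto the sparse support and the signal is recovered to within the small bias introduced by `lam`, which is exactly the "fewer measurements than bandwidth suggests" behavior described above.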