
IEEE TRANSACTIONS ON ANTENNAS AND PROPAGATION, VOL. 55, NO. 3, MARCH 2007

Application of Artificial Neural Networks to Broadband Antenna Design Based on a Parametric Frequency Model

Youngwook Kim, Student Member, IEEE, Sean Keely, Joydeep Ghosh, Fellow, IEEE, and Hao Ling, Fellow, IEEE

Abstract—An artificial neural network (ANN) is proposed to predict the input impedance of a broadband antenna as a function of its geometric parameters. The input resistance of the antenna is first parameterized by a Gaussian model, and the ANN is constructed to approximate the nonlinear relationship between the antenna geometry and the model parameters. Introducing the model simplifies the ANN and decreases the training time. The reactance of the antenna is then constructed by the Hilbert transform from the resistance found by the neuromodel. A hybrid gradient descent and particle swarm optimization method is used to train the neural network. As an example, an ANN is constructed for a loop antenna with three tuning arms. The antenna structure is then optimized for broadband operation via a genetic algorithm that uses input impedance estimates provided by the trained ANN in place of brute-force electromagnetic computations. It is found that the required number of electromagnetic computations in training the ANN is ten times lower than that needed during the antenna optimization process, resulting in significant time savings.

Index Terms—Artificial neural network, broadband antenna, Gaussian model, genetic algorithm, Hilbert transform, particle swarm optimization.

Manuscript received February 28, 2006; revised October 13, 2006. This work was supported by the Texas Higher Education Coordinating Board under the Texas Advanced Technology Program and the National Science Foundation Major Research Instrumentation Program.
Y. Kim, J. Ghosh, and H. Ling are with the Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX 78712 USA (e-mail: [email protected]).
S. Keely is with the Department of Physics, University of Texas at Austin, Austin, TX 78712 USA (e-mail: [email protected]).
Digital Object Identifier 10.1109/TAP.2007.891564

I. INTRODUCTION

THE design of broadband antennas is a computationally intensive task, especially when a frequency-domain electromagnetic (EM) simulator is used. Moreover, when an optimization method such as a genetic algorithm [1] is used in the design process, the antenna characteristics must be computed for thousands of hypothetical antennas over a broad band of frequencies in order to evaluate the relative merit of each configuration. In order to substitute for the computationally intensive EM simulation, artificial neural networks (ANNs) [2], [3] have been suggested as attractive alternatives [4]. An ANN is well suited to modeling high-dimensional and highly nonlinear problems. When properly trained with reliable learning data, a neuromodel is computationally more efficient than an exact EM simulator, and more accurate than a model based on approximate physics. Thus, the neural network approach has been explored in the design of microwave components and circuits such as microstrip lines [5], spiral inductors [6], HEMTs [7], filters [8], and mixers [9]. In the antenna community, ANNs have been applied to beamforming [10] and direction finding [11] for arrays, as well as to microstrip antenna design [12]. However, the use of ANNs for very broadband antennas with multiple resonances has not been extensively researched yet.

Typically, when an ANN is used for antenna design, the antenna geometry parameters and the frequency are regarded as inputs to the ANN, while the output is the antenna input impedance. This approach has been very successful for narrow-band antenna design. However, when the ANN is used in this manner in the broadband case, the number of hidden units increases drastically as the number of oscillations in the impedance-versus-frequency curve increases. Increasing the number of hidden units requires longer training time. Furthermore, it can lead to a high chance of reaching a local minimum, resulting in unsuccessful training. Recently, Lebbar et al. reported an ANN implementation to predict the antenna gain, bandwidth, and polarization for a broadband patch antenna [13]. However, that method does not calculate the impedance variation over a wide frequency band, so it cannot provide quantities such as the number of resonances.

In this paper, we indirectly use a neural network for predicting the input impedance of a broadband antenna via a parametric frequency model. The input resistance of the antenna is first parameterized by a Gaussian model [14]. The Gaussian parameters are then estimated for the different training antennas, and a neural network is trained to describe the relationship between the antenna geometry and the Gaussian parameters, as shown in Fig. 1. By introducing the parametric model, the resulting ANN operates in a much less complex solution space. This leads to a smaller network size, faster training time, and more robust convergence of the training process. For the training method, a hybrid scheme combining the gradient descent method and particle swarm optimization [15] is utilized. Once the network for the input resistance is in place, the input reactance is generated by the Hilbert transform [16]. The proposed technique is valid when the band of interest is broad and the resonant frequencies of the antenna are distinct.

The resulting neural model is next exploited for antenna optimization. In this paper, we use the loop-based broadband antenna structure reported in [17] as an example.


The antenna has seven geometric parameters: the lengths and heights of its three rectangular tuning arms and the radius of the antenna wire. The antenna structure is optimized for broadband operation via a genetic algorithm (GA) that uses the input impedance predicted by the ANN over a broad frequency range and over the range of antenna geometries being considered by the GA. The performance of the ANN in terms of accuracy and computational savings is evaluated in this application against a brute-force electromagnetic computation.

This paper is organized as follows. Section II presents the Gaussian model and its parameter estimation. In Section III, the structure of the neural network is described, and the training method and its results for the example broadband antenna are discussed. Section IV presents the optimization of the antenna using the resulting neural network. Conclusions are given in Section V.

Fig. 1. Impedance prediction network.

II. GAUSSIAN-BASED FREQUENCY MODEL FOR INPUT RESISTANCE

The input impedance of a broadband antenna usually contains multiple resonances within the band of interest. A direct approximation of this characteristic by a neural network may lead to a large number of hidden units and is prone to failure. Furthermore, the drastic change in reactance at the resonant frequency can be difficult for the ANN to learn. In order to simplify the problem, we embed a suitable physical principle into the network so as to constrain the solution space.

We choose to model the resistance by a sum of Gaussians. The Gaussian model is simple and relatively insensitive to parameter errors. Furthermore, modeling only the resistance behavior leads to a reduced network size, improved training time, and a better chance of successful training. Once the broadband resistance is modeled, the reactance can be recovered via the Hilbert transform. A Gaussian model to approximate the frequency-dependent resistance envelope of a symmetric resonator can be represented as

    R(f) = \sum_{i=1}^{N} a_i \exp[ -(f - \mu_i)^2 / (2\sigma_i^2) ] + b        (1)

Here, R(f) is the resistance function; a_i, \mu_i, and \sigma_i are the coefficients (amplitude, mean, and width) of the i-th Gaussian; and b is a bias.

This Gaussian expansion is naturally encoded as a radial basis function (RBF) network with one input and one output [18]. The coefficients are searched by the gradient descent method, introducing one Gaussian at a time in a procedure similar to the resource allocation network of Platt [19]. It can be shown that, using this method, the Gaussian will, at every update, move into an approximation of the previous training step's Gaussian-target product. This is exploited to let each basis function settle into an approximation of a single resonance by ensuring that the initial width of each Gaussian is large and by subtracting from the target curve each already-placed Gaussian. This method consistently yields good results with a minimal number of Gaussians.

III. ARTIFICIAL NEURAL NET STRUCTURE

An artificial neural network is next constructed to model the complex relationship between the antenna geometry and the Gaussian model parameters. For modeling the antenna geometry, the multilayered perceptron (MLP) is utilized. The MLP is a known universal approximator and has been extensively used in microwave applications [20]. The suggested network system is illustrated in Fig. 1.

A broadband antenna for automobiles, reported earlier in [17], is considered as an example. It is a loop structure with three tuning arms, as presented in Fig. 2. The structure has seven geometric parameter variables: the lengths and heights of its three rectangular tuning arms and the radius of the antenna wire. The frequency range of interest is in the ultra-high-frequency (UHF) band from 170 to 650 MHz. The MLP takes the seven geometric parameters as inputs and produces all of the means, variances, and amplitudes of the Gaussian model as outputs. The number of modeled Gaussians is set to six, giving 19 free parameters to specify the frequency dependence, including the bias.

Fig. 2. Antenna shape and parameters.

The MLP consists of an input layer, a hidden layer, and an output layer. The hyperbolic tangent is employed as the activation function, and a linear output layer is used. Bias is added to the input and the hidden layer. Two hundred fifty hidden units and 19 output units are used, where the 19 outputs represent the mean, variance, and amplitude for each of the six Gaussians, plus a single number indicating the bias amount. The total number of weights in the net is 6769. The normalized range of inputs to the ANN is from 0 to 50, and that of the outputs is from 0 to 500.
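
To make the Gaussian parameterization of (1) concrete, the sketch below fits a bias plus a sum of Gaussians to a sampled resistance curve, placing one Gaussian at a time on the largest remaining peak, starting it wide, and subtracting each placed Gaussian from the target before fitting the next, in the spirit of the estimation procedure of Section II. The helper names and the use of SciPy's curve_fit (in place of the paper's RBF gradient-descent search) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(f, amp, mean, width):
    """One Gaussian basis function of frequency f."""
    return amp * np.exp(-((f - mean) ** 2) / (2.0 * width ** 2))

def fit_resistance(f, r_target, n_gaussians=6):
    """Greedily fit a bias plus a sum of Gaussians, Eq. (1), to a sampled
    resistance curve.  Illustrative only; the paper uses an RBF network
    trained by gradient descent rather than SciPy's curve_fit."""
    bias = float(np.min(r_target))
    residual = np.asarray(r_target, dtype=float) - bias
    params = []                                   # (amp, mean, width) triples
    for _ in range(n_gaussians):
        i_peak = int(np.argmax(residual))         # largest remaining resonance
        p0 = [residual[i_peak], f[i_peak], (f.max() - f.min()) / 4.0]
        popt, _ = curve_fit(gaussian, f, residual, p0=p0, maxfev=10000)
        params.append(tuple(popt))
        residual = residual - gaussian(f, *popt)  # remove the placed Gaussian
    return params, bias

def model_resistance(f, params, bias):
    """Evaluate Eq. (1): the sum of the fitted Gaussians plus the bias."""
    return bias + sum(gaussian(f, a, m, w) for a, m, w in params)
```

With f sampled over 170–650 MHz, six Gaussians plus the bias yield the 19 numbers per antenna that the MLP described above is trained to predict from the seven geometric parameters.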


The constructed MLP is trained by three strategies: i) gradient descent, ii) particle swarm optimization (PSO), and iii) hybrid gradient descent and PSO. For network training, a data set of 270 antenna configurations is generated and the corresponding Gaussian parameters are estimated. The numerical electromagnetic code (NEC) is used for the EM simulation. From the data set, 135 samples are selected as training data and the remaining 135 are used as validation data. All of the training data are input into the ANN one after the other, and the cumulative averaged root-mean-square (rms) error of the output is regarded as the cost function.

First, we apply gradient descent by error back-propagation (EBP) to train the ANN. EBP propagates error backwards through the network to allow the error derivatives for all weights to be efficiently computed [21]. When the training is performed, the rms errors of both the training and validation processes decrease with increasing iterations. In the parameter space, the averaged rms error of the training approaches 33.7 and that of the validation approaches 44.8 after 5000 epochs.
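
As a point of reference, the network just described (seven inputs, 250 tanh hidden units, 19 linear outputs) and a plain gradient-descent loop on the rms cost can be sketched as follows. PyTorch and the hyperparameter values shown are illustrative stand-ins for the EBP implementation of [21], not the authors' code.

```python
import torch
import torch.nn as nn

# Seven geometric inputs -> 250 tanh hidden units -> 19 linear outputs
# (6 means, 6 variances, 6 amplitudes, and 1 bias term).  The layer sizes
# give 7*250 + 250 + 250*19 + 19 = 6769 trainable values, matching the
# weight count quoted in the text.
model = nn.Sequential(
    nn.Linear(7, 250),
    nn.Tanh(),
    nn.Linear(250, 19),
)

def train(model, x_train, y_train, epochs=5000, lr=1e-4):
    """Plain gradient descent on the rms error of the Gaussian parameters,
    standing in for the EBP procedure of [21]."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        rms = torch.sqrt(torch.mean((model(x_train) - y_train) ** 2))
        rms.backward()
        optimizer.step()
    return model
```
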
One potential drawback of the gradient descent is that it is a local search method, and its performance can be strongly affected by the initial guess. The PSO algorithm has been tried for training neural networks, with good reported performance for simple networks [22], [23]. Here we implement a PSO to train the 6769 weights in the net. One hundred particles are introduced, and they are iterated 150 times. To limit the search space for the parameters to a physically possible range, a damping-wall boundary condition is employed [24]. The PSO is initialized with random numbers and training is performed. The averaged rms error of training approaches 132.1, and that of validation approaches 134.2 in the parameter space. Clearly, the PSO performs poorly in comparison to the gradient descent. We believe this is due to the very large parameter space (6769 weights) in our problem.

Fig. 3. The error from PSO with the initial guess from gradient descent: (a) the averaged rms error of the parameters and (b) the averaged %rms error of the resistance.

To improve the training with the PSO, we also try using the result of the gradient descent to initialize the PSO. Gradient descent already finds a relatively good solution, so the PSO is expected to find a better answer near the gradient descent solution in the complex cost surface, which may contain many local minima. The evaluated cost of the PSO with the gradient descent result as the initial guess starts at 33.7. However, in order to show how the particles move toward the given solution, the second-best cost is plotted in Fig. 3 until the PSO finds a better solution than the gradient descent; after the gradient descent result has been surpassed, the best cost is plotted. The final averaged rms error of training is 32.4, and that of validation is 43.5, both lower than the errors from the gradient descent. Shown in Fig. 3(b) is the %rms error of the input resistance as reconstructed from the Gaussian model. The final %rms error of training is 16.4%, and that of validation is 19.1%.

Fig. 4(a) and (b) show, respectively, a sample from the training data set and a sample from the validation data set. The dashed curves are predicted by the ANN, and the solid curves are the true resistance calculated by NEC. It can be observed that the resistance from the neural net matches the true value fairly well.
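
The hybrid step can be condensed into the following sketch: a standard global-best PSO over the flattened weight vector in which one particle is seeded at the gradient-descent solution and the remaining particles are initialized randomly. The inertia and acceleration constants, and the simple position clipping used in place of the damping-wall boundary condition of [24], are illustrative choices rather than the values used in the paper.

```python
import numpy as np

def pso_refine(cost, w_gd, n_particles=100, n_iter=150,
               bound=50.0, inertia=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO over a flattened weight vector, seeded with the
    gradient-descent solution w_gd.  `cost` maps a weight vector to the
    averaged rms error over the training set."""
    rng = np.random.default_rng(seed)
    dim = w_gd.size
    pos = rng.uniform(-bound, bound, (n_particles, dim))
    pos[0] = w_gd                       # one particle starts at the GD result
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    g_idx = int(np.argmin(pbest_cost))
    gbest, gbest_cost = pbest[g_idx].copy(), float(pbest_cost[g_idx])
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -bound, bound)   # crude stand-in for the damping wall
        for i in range(n_particles):
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i].copy(), c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i].copy(), float(c)
    return gbest, gbest_cost
```

In the paper's setting, cost would evaluate the averaged rms error of the 19 Gaussian parameters over the 135 training antennas for a given set of network weights.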
IV. BROADBAND ANTENNA OPTIMIZATION USING ANN

The performance of the trained ANN is evaluated through an antenna optimization process. A GA is used to optimize the considered broadband antenna structure. In the GA process, the antenna impedances are generated by the trained ANN rather than by an EM simulator, as depicted in Fig. 5. The resistance is calculated using the trained neural network, and the reactance is derived from the Hilbert transform. The three lengths and three widths of the tuning arms and the wire radius are optimized within a 50 by 50 cm area. The cost function of the GA is defined as the average voltage standing-wave ratio (VSWR) in the frequency ranges from 170 to 220 MHz and from 470 to 650 MHz, to cover UHF analog television and digital video broadcasting. Each generation of the GA consists of 100 chromosomes, and the replacement rate and the mutation rate are 70% and 5%, respectively [17].
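
A rough sketch of this cost evaluation is given below: the ANN-predicted resistance is converted to a reactance estimate with a discrete Hilbert transform, the complex impedance gives a reflection coefficient and VSWR with respect to a reference impedance, and the VSWR is averaged over the two bands. The sign convention and band-edge handling of the Hilbert step depend on the formulation of [16], the reference impedance z0 is a placeholder (its value is not stated in this excerpt), and the helper names are our own.

```python
import numpy as np
from scipy.signal import hilbert

def reactance_from_resistance(r):
    """Estimate the reactance from a sampled resistance curve using the
    discrete Hilbert transform (imaginary part of the analytic signal).
    Sign convention and band-edge handling follow [16] in the paper;
    this is only an illustrative stand-in."""
    r = np.asarray(r, dtype=float)
    return np.imag(hilbert(r - r.mean()))

def vswr(z, z0):
    """VSWR of a complex impedance z against a real reference impedance z0."""
    gamma = np.abs((z - z0) / (z + z0))
    gamma = np.minimum(gamma, 0.999)          # guard against a near-unity reflection
    return (1.0 + gamma) / (1.0 - gamma)

def ga_cost(freq_mhz, resistance, z0=50.0,
            bands=((170.0, 220.0), (470.0, 650.0))):
    """Average VSWR over the two design bands, used as the GA cost.
    z0 is a placeholder reference impedance, not a value stated in the text."""
    r = np.asarray(resistance, dtype=float)
    z = r + 1j * reactance_from_resistance(r)
    freq_mhz = np.asarray(freq_mhz, dtype=float)
    mask = np.zeros(freq_mhz.shape, dtype=bool)
    for lo, hi in bands:
        mask |= (freq_mhz >= lo) & (freq_mhz <= hi)
    return float(np.mean(vswr(z[mask], z0)))
```
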


The broadband antenna is optimized after 31 iterations. The best cost in the GA process using the trained ANN is 1.6. The heights and widths of the side arms of the optimized antenna are 35.4 by 12.0 cm, 28.4 by 5.6 cm, and 12.6 by 12.8 cm, and the wire radius is 0.49 mm. The impedance of the resulting antenna from the ANN is plotted against the exact impedance calculated by NEC for the same optimized geometry in Fig. 6. The ANN result agrees fairly well with the NEC calculation. Their corresponding VSWR curves are plotted in Fig. 7. The dashed curve is the “GA with ANN” result and the solid curve is the true VSWR of the optimized design as calculated by NEC. The averaged VSWR as computed by NEC is 1.63 in the band of interest (the unshaded regions in the plot).

Fig. 4. Prediction of resistance by the ANN: (a) %rms error = 13.14% and (b) %rms error = 24.16%.
Fig. 5. Genetic algorithm with the ANN.
Fig. 6. Resistance and reactance of the optimized antenna.
Fig. 7. VSWR of the optimized antenna.

In order to gauge the performance of the developed ANN, the considered antenna is optimized again by the GA, this time using brute-force calculations by NEC for all the cost function evaluations. The GA converges after 29 iterations. The best cost in this optimization process is 1.64. Due to the difference between the exact NEC calculation and the ANN prediction, the GA this time converges to a slightly higher optimized cost and a different optimized antenna configuration. In Fig. 7, we plot the VSWR of this optimized antenna configuration as the dotted curve. We observe that the performance of the “GA with NEC” antenna is comparable to that of the “GA with ANN” antenna.

Note that during the GA optimization using the brute-force approach, the NEC simulation must be carried out for all 2900 different antenna geometries. Using the developed neural network, however, NEC is employed only 270 times, for the generation of the training and validation data sets. This is a 10.7-fold reduction in the number of EM calculations as compared to the brute-force method.

As another example, we optimize the antenna again using the ANN in a different frequency band, from 320 to 650 MHz. The averaged VSWR of the final converged design is 1.66 as predicted by the ANN and 1.72 as calculated by NEC. The GA optimization is also done via brute force, using NEC for all the EM calculations. The averaged VSWR of the final converged design is 1.62. In this case, the reduction in the number of EM calculations is found to be a factor of 11.8.
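
For completeness, a skeletal GA driver in the spirit of Fig. 5 is sketched below, with the trained surrogate supplying the cost of each candidate geometry. The selection and crossover details and the parameter bounds are our own illustrative choices; the population size, replacement rate, and mutation rate follow the values quoted above from [17].

```python
import numpy as np

def optimize_geometry(cost_fn, bounds, n_pop=100, n_gen=31,
                      replace_rate=0.7, mutation_rate=0.05, seed=0):
    """Simple real-coded GA over the seven geometric parameters.
    cost_fn(geometry) should return the average VSWR predicted via the
    ANN surrogate (ANN -> Gaussian parameters -> R(f) -> Hilbert -> VSWR).
    Selection/crossover details are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T     # bounds: list of (min, max) pairs
    pop = rng.uniform(lo, hi, (n_pop, lo.size))
    for _ in range(n_gen):
        cost = np.array([cost_fn(p) for p in pop])
        pop = pop[np.argsort(cost)]                # best chromosomes first
        n_keep = int(round((1.0 - replace_rate) * n_pop))
        children = []
        while len(children) < n_pop - n_keep:
            pa, pb = pop[rng.integers(0, n_keep, 2)]   # parents from the survivors
            alpha = rng.random(lo.size)
            child = alpha * pa + (1.0 - alpha) * pb    # blend crossover
            mutate = rng.random(lo.size) < mutation_rate
            child[mutate] = rng.uniform(lo[mutate], hi[mutate])
            children.append(child)
        pop = np.vstack([pop[:n_keep], np.array(children)])
    cost = np.array([cost_fn(p) for p in pop])
    best = int(np.argmin(cost))
    return pop[best], float(cost[best])
```
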


V. CONCLUSION

In this paper, an ANN-based system has been proposed to predict the input impedance of a broadband antenna. The input resistance of the antenna was first parameterized by a Gaussian model over a broad band of frequencies, and the ANN was then constructed to approximate the nonlinear relationship between the antenna geometry and the model parameters. Introducing the model simplified the construction and training of the ANN, resulting in robust performance. The neural network was trained by using particle swarm optimization as a local search procedure seeded with an initial guess from the gradient descent learning. The reactance of the antenna was then constructed by the Hilbert transform. To test the performance of the resulting ANN, a loop antenna with multiple tuning arms was optimized by a GA, whereby the developed ANN system was used for the cost function evaluations. The performance of the ANN was compared with that of a direct approach, in which the cost function evaluation was done using the EM simulator. It was found that the ANN approach led to a tenfold reduction in the number of required EM simulations and was still able to maintain an acceptable level of accuracy. This indicates that a parametric frequency model used in conjunction with an ANN forms an effective framework for the design and evaluation of very broadband antennas. While the Gaussian model is found to perform adequately, other frequency models, such as the rational function model, may lead to even better performance. This topic is currently under investigation.

REFERENCES

[1] R. L. Haupt, "An introduction to genetic algorithms for electromagnetics," IEEE Antennas Propag. Mag., vol. 37, pp. 7–15, Apr. 1995.
[2] C. Bishop, Neural Networks for Pattern Recognition. Oxford, U.K.: Oxford Univ. Press, 1995.
[3] Q. Zhang, K. C. Gupta, and V. K. Devabhaktuni, "Artificial neural networks for RF and microwave design—From theory to practice," IEEE Trans. Microwave Theory Tech., vol. 51, pp. 1339–1350, Apr. 2003.
[4] A. Patnaik, D. E. Anagnostou, R. Mishra, C. G. Christodoulou, and J. C. Lyke, "Applications of neural networks in wireless communications," IEEE Antennas Propag. Mag., vol. 46, pp. 130–137, Jun. 2004.
[5] J. Li and Z. Bao, "Neural network models for the coupling microstrip line design," in Proc. IEEE Int. Conf. Syst., Oct. 2003, vol. 5, pp. 4916–4921.
[6] A. Ilumoka and Y. Park, "Neural network-based modeling and design of on-chip spiral inductors," in Proc. 36th Southeastern Symp. Syst. Theory, 2004, pp. 561–564.
[7] S. Lee, B. A. Cetiner, H. Torpi, S. J. Cai, J. Li, K. Alt, Y. L. Chen, C. P. Wen, K. L. Wang, and T. Itoh, "An X-band GaN HEMT power amplifier design using an artificial neural network modeling technique," IEEE Trans. Electron Devices, vol. 48, pp. 495–501, Mar. 2001.
[8] A. S. Ciminski, "Artificial neural networks modeling for computer-aided design of microwave filter," in Proc. 14th Int. Conf. Microw., Radar, Wireless Commun., May 2002, vol. 1, pp. 95–99.
[9] J. Xu, M. C. E. Yagoub, R. Ding, and Q. Zhang, "Neural-based dynamic modeling of nonlinear microwave circuits," IEEE Trans. Microwave Theory Tech., vol. 50, pp. 2769–2780, Dec. 2002.
[10] Z. He and Y. Chen, "Robust blind beamforming using neural network," Proc. Inst. Elect. Eng. Radar, Sonar, Navig., vol. 147, pp. 41–46, Feb. 2000.
[11] S. Jha and T. Durrani, "Direction of arrival estimation using artificial neural networks," IEEE Trans. Syst., Man, Cybern., vol. 21, pp. 1192–1201, Sep. 1991.
[12] R. K. Mishra and A. Patnaik, "Designing rectangular patch antenna using the neurospectral method," IEEE Trans. Antennas Propag., vol. 51, pp. 1914–1921, Aug. 2003.
[13] S. Lebbar, Z. Guennoun, M. Drissi, and F. Riouch, "Compact and broadband microstrip antenna design using a geometrical-methodology-based artificial neural network," IEEE Antennas Propag. Mag., vol. 48, pp. 146–154, Apr. 2007.
[14] D. Xu, L. Yang, and Z. He, "Overcomplete time delay estimation using multi-Gaussian fitting method," in Proc. IEEE Int. Workshop VLSI Design Video Tech., May 2005, pp. 248–251.
[15] J. Robinson and Y. Rahmat-Samii, "Particle swarm optimization in electromagnetics," IEEE Trans. Antennas Propag., vol. 52, pp. 397–407, Feb. 2004.
[16] A. E. Gera, "Linking resistance and reactance via Hilbert transforms," in Proc. 17th Conv. Electr. Electron. Eng. Israel, Mar. 1991, pp. 141–144.
[17] Y. Kim, Y. Noh, and H. Ling, "Design of ultra-broadband on-glass antenna with a 250 Ω system impedance for automobiles," Electron. Lett., vol. 40, pp. 1566–1568, Dec. 2004.
[18] P. J. Radonja, "Radial basis function neural networks: In tracking and extraction of stochastic process in forestry," in Proc. 5th Seminar Neural Network Appl. Electr. Eng. (NEUREL 2000), Sep. 2000, pp. 81–86.
[19] J. Platt, "A resource allocating network for function interpolation," Neural Computat., vol. 3, pp. 213–225, 1991.
[20] A. Patnaik and R. K. Mishra, "ANN techniques in microwave engineering," IEEE Micro, vol. 1, pp. 55–60, Mar. 2000.
[21] S. Makram-Ebeid, J.-A. Sirat, and J.-R. Viala, "A rationalized error back-propagation learning algorithm," in Proc. Int. Joint Conf. Neural Netw., Jun. 1989, vol. 2, pp. 373–380.
[22] V. G. Gudise and G. K. Venayagamoorthy, "Comparison of particle swarm optimization and backpropagation as training algorithms for neural networks," in Proc. IEEE Swarm Intell. Symp., Apr. 2003, pp. 110–117.
[23] E. A. Grimaldi, F. Grimacca, M. Mussetta, and R. E. Zich, "PSO as an effective learning algorithm for neural network applications," in Proc. Int. Conf. Computat. Electromagn. Appl., Nov. 2004, pp. 557–560.
[24] T. Huang and A. S. Mohan, "A hybrid boundary condition for robust particle swarm optimization," IEEE Antennas Wireless Propag. Lett., vol. 4, pp. 112–117, 2005.

Youngwook Kim (S’01) was born in Seoul, Korea, in 1976. He received the B.S. degree in electrical engineering from Seoul National University in 2003 and the M.S. degree in electrical and computer engineering from the University of Texas at Austin in 2005, where he is currently pursuing the Ph.D. degree.
His research interests include the development of fast optimization algorithms for antenna design, equivalent circuit modeling of broadband antennas, DOA estimation, and through-wall human tracking.

Sean Keely received the B.Sc. degree in physics from the University of Nevada, Reno, in 2003. He is currently pursuing the M.Sc. degree in physics at the University of Texas, Austin.
His interests include applications of short-pulse lasers, numerical simulation, and the use of approximate operators in simulations.

Joydeep Ghosh (S’87–M’88–SM’02–F’06) received the B.Tech. degree from IIT, Kanpur, in 1983 and the Ph.D. degree from the University of Southern California, Los Angeles, in 1988.
He is currently the Schlumberger Centennial Chair Professor of Electrical and Computer Engineering at the University of Texas (UT), Austin. He teaches graduate courses on data mining, artificial neural networks, and Web analytics. He joined the UT-Austin Faculty in 1988. He is the Founder-Director of the Intelligent Data Exploration and Analysis Lab. His research interests lie primarily in intelligent data analysis, data mining and web mining, adaptive multilearner systems, and their applications to a wide variety of complex engineering and artificial intelligence problems. He has published more than 200 refereed papers and 30 book chapters, and has coedited 18 books. His research has been supported by the NSF, Yahoo!, Google, ONR, ARO, AFOSR, Intel, IBM, Motorola, TRW, Schlumberger, and Dell, among others. He has been a plenary/keynote speaker on several occasions such as ANNIE’06, MCS 2002, and ANNIE’97. He also serves on the Program Committee of several top conferences on data mining, neural networks, pattern recognition, and Web analytics every year. He has widely lectured on intelligent analysis of large-scale data. He has been a Co-organizer of workshops on high-dimensional clustering (ICDM 2003; SDM 2005), Web analytics (SIAM International Conference on Data Mining—SDM 2002), Web mining (SDM 2001), and parallel and distributed knowledge discovery (KDD-2000). He has been a Consultant or Advisor to a variety of companies, from successful startups such as Neonyoyo and Knowledge Discovery One to large corporations such as IBM, Motorola, and Vinson & Elkins.
Dr. Ghosh received the 2005 Best Research Paper Award from the UT Co-op Society and the 1992 Darlington Award given by the IEEE Circuits and Systems Society for the best paper in the areas of CAS/CAD, as well as nine other best paper awards over the years. He was Conference Cochair of Computational Intelligence and Data Mining (CIDM’07), Program Cochair for the SIAM International Conference on Data Mining (SDM’06), and Conference Cochair for Artificial Neural Networks in Engineering (ANNIE) from 1993 to 1996 and 1999 to 2003. He is Founding Chair of the Data Mining Technical Committee of the IEEE CI Society. He was voted the Best Professor by the Software Engineering Executive Education Class of 2004.

Hao Ling (S’83–M’86–SM’92–F’99) was born in Taichung, Taiwan, R.O.C., on September 26, 1959. He received the B.S. degree in electrical engineering and physics from the Massachusetts Institute of Technology, Cambridge, in 1982 and the M.S. and Ph.D. degrees in electrical engineering from the University of Illinois at Urbana-Champaign in 1983 and 1986, respectively.
He joined the Faculty of the University of Texas at Austin in 1986, where he is currently a Professor of electrical and computer engineering and holder of the L. B. Meaders Professorship in Engineering. During 1982, he was with the IBM T. J. Watson Research Center, Yorktown Heights, NY, where he conducted low-temperature experiments in the Josephson Department. He participated in the Summer Visiting Faculty Program in 1987 at the Lawrence Livermore National Laboratory, Livermore, CA. In 1990, he was an Air Force Summer Fellow with Rome Air Development Center, Hanscom Air Force Base, MA. His principal area of research is in computational electromagnetics. During the past two decades, he has actively contributed to the development and validation of numerical and asymptotic methods for characterizing the radar cross section from complex targets. His recent research interests also include radar signal processing, antenna design, and propagation channel modeling.
Dr. Ling was a recipient of the National Science Foundation Presidential Young Investigator Award in 1987 and the NASA Certificate of Appreciation in 1991, as well as several teaching awards from the University of Texas.

