Aerodynamic Shape Optimization: Methods and Applications. Date Published: 1999-10-19. Paper Number: 1999-01-5500. DOI: 10.4271/1999-01-5500. Author(s): Oktay Baysal - Old Dominion Univ. Abstract: The motivation is to ...
Presented in this paper is a computational investigation of subsonic and transonic flows past three-dimensional deep and transitional cavities. Simulations of these self-induced oscillatory flows have been generated through time-accurate solutions of the Reynolds-averaged, full Navier-Stokes equations, using the explicit MacCormack scheme. The Reynolds stresses have been included through the Baldwin-Lomax algebraic turbulence model with certain modifications. The computational results include instantaneous and time-averaged flow properties. The results of an experimental investigation have been used not only to validate the time-averaged results, but also to investigate the effects of varying the Mach number and the incoming boundary-layer thickness. Time-series analyses have been performed for the instantaneous pressure values on the cavity floor and compared with the results obtained by a predictive formula. While most of the comparisons have been favorable, some discrepancies have been observed, particularly on the rear face. The present results help in understanding the three-dimensional and unsteady features of the separations, the vortices, and the shear layer, as well as some of the aeroacoustic phenomena of compressible cavity flows.
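For readers unfamiliar with the time-integration scheme named above, the following is a minimal sketch of the explicit MacCormack predictor-corrector idea, applied here to the 1D inviscid Burgers equation rather than the Reynolds-averaged Navier-Stokes equations of the paper; the grid size, CFL number, and initial condition are illustrative assumptions.

```python
# Minimal sketch: explicit MacCormack predictor-corrector for u_t + (u^2/2)_x = 0,
# as a stand-in for the time-accurate compressible solves described in the abstract.
import numpy as np

def maccormack_step(u, dx, dt):
    f = 0.5 * u**2                           # flux for the Burgers equation
    # Predictor: forward difference of the flux
    u_pred = u.copy()
    u_pred[:-1] = u[:-1] - dt / dx * (f[1:] - f[:-1])
    f_pred = 0.5 * u_pred**2
    # Corrector: backward difference of the predicted flux, then average
    u_new = u.copy()
    u_new[1:-1] = 0.5 * (u[1:-1] + u_pred[1:-1]
                         - dt / dx * (f_pred[1:-1] - f_pred[:-2]))
    return u_new

nx, cfl = 201, 0.5
dx = 1.0 / (nx - 1)
x = np.linspace(0.0, 1.0, nx)
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)        # smooth illustrative initial condition
for _ in range(100):
    dt = cfl * dx / np.max(np.abs(u))
    u = maccormack_step(u, dx, dt)
```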
Ankara: The Institute of Economics and Social Sciences of Bilkent University, 1997. Thesis (Master's) -- Bilkent University, 1997. Includes bibliographical references. Strategy training is suggested by many researchers as an effective way of promoting language learning. Among the various types of suggested strategy training models, narrow-focus and broad-focus strategy training are two models that maximize the learning potential of students. This is particularly true for reading comprehension, which is considered one of the most important skills in learning a language (Carrell, 1988). In this study, two hypotheses were put forth. The first hypothesis was that both narrow-focus and broad-focus strategy training are effective in promoting the reading comprehension of EFL students. The second hypothesis was that broad-focus strategy training is more effective than narrow-focus strategy training in promoting the reading comprehension of EFL students. Three intact groups were used in this study; thus, a quasi-experimental design was adopted. The groups were selected from the same language proficiency level, upper-intermediate. There were three groups: two experimental groups and one control group. All three groups had six 50-minute treatment sessions. The first experimental group was trained with a narrow-focus strategy training model. In the second experimental group, a broad-focus strategy training model was used. The control group did not receive any strategy training; in other words, they continued their regular reading classes. The researcher had no control over the choice of the groups in the experiment. A total of 48 EFL upper-intermediate level students at Osmangazi University participated in this study, with 19 subjects in the first experimental group, 17 in the second experimental group, and 12 in the control group. The data for this study were collected by means of pre- and post-tests and a reading strategy inventory. The reading strategies to be trained in the treatment sessions were selected through an analysis of the reading strategy inventory (SILL). In the first experimental group (narrow-focus group), the selected reading strategies were trained individually within each reading passage within a given time period. In the second experimental group (broad-focus group), the three reading strategies were trained in an integrative manner within a reading passage in a given time period. The subjects in the control group read the passage, found the meanings of new vocabulary, and answered the related comprehension questions. In the data analysis, the mean scores and standard deviations of both pre- and post-tests, for each group, were calculated. For the pre-test, a one-way ANOVA was used to determine whether the level of proficiency in reading comprehension among the groups was equal. After the pre-test, a reading strategy inventory was administered to elicit strategy use among the subjects. Frequency distributions and percentages for each item in the inventory were calculated. After the treatment, a post-test was administered to all groups. A t-test was used to determine whether there was a significant difference between pre- and post-tests within each group. Then, a one-way ANOVA was used among the three groups to determine whether there was a significant difference.
Later, a t-test was applied across groups to determine which of the three groups significantly improved their reading comprehension skills. Following the post-test, the reading strategy inventory was administered a second time to elicit responses regarding whether the subjects in each group made use of the strategies trained in the treatment sessions. Finally, the frequency distributions from the first and second administrations of the inventory were compared to note differences in the reported use of strategies. Data analysis showed that after the training, the improvement in the reading comprehension test scores of experimental group 1 (narrow-focus) was not significant. Thus, the hypothesis stating that narrow-focus strategy training is effective in promoting reading comprehension was rejected. On the other hand, there was a significant improvement in the reading comprehension scores of the second experimental group (broad-focus) at the level p<.001. Thus, the second hypothesis, stating that broad-focus strategy training is more effective than narrow-focus strategy training, was accepted. Baysal, Oktay. M.S.
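As a point of reference for the analysis pipeline described in this abstract (one-way ANOVA across groups, paired t-tests within groups), the sketch below shows how such comparisons are typically run; the score arrays are hypothetical placeholders, not data from the thesis.

```python
# Sketch of the statistical comparisons described in the abstract, on placeholder data.
from scipy import stats
import numpy as np

rng = np.random.default_rng(0)
narrow_pre, narrow_post = rng.normal(60, 8, 19), rng.normal(62, 8, 19)
broad_pre, broad_post = rng.normal(61, 8, 17), rng.normal(70, 8, 17)
control_pre, control_post = rng.normal(60, 8, 12), rng.normal(61, 8, 12)

# Were the groups comparable before the treatment?
f_stat, p_anova = stats.f_oneway(narrow_pre, broad_pre, control_pre)
print(f"pre-test ANOVA: F={f_stat:.2f}, p={p_anova:.3f}")

# Did each group improve significantly from pre- to post-test?
for name, pre, post in [("narrow-focus", narrow_pre, narrow_post),
                        ("broad-focus", broad_pre, broad_post),
                        ("control", control_pre, control_post)]:
    t_stat, p_val = stats.ttest_rel(pre, post)
    print(f"{name}: t={t_stat:.2f}, p={p_val:.4f}")
```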
This paper presents a framework for automated optimization of double-heater convective PCR (DH-cPCR) devices by developing a computational fluid dynamics (CFD) simulation database and an artificial neural network (ANN) model. The optimization parameter space, which includes the capillary tube geometries and the heater sizes of the DH-cPCR, is established, and a database consisting of nearly 10,000 CFD simulations is constructed. The database is then used to train a two-stage ANN model that selects practically relevant data for modeling and predicts PCR device performance. The trained ANN model is then combined with gradient-based and heuristic optimization approaches to search for the optimal device configuration that possesses the shortest DNA doubling time. The entire design process, including model meshing and configuration, parallel CFD computation, database organization, and ANN training and utilization, is fully automated. Case studies confirm that the proposed framework can successfully find the optimal device configuration with an error of less than 0.3 s and hence represents a cost-effective and rapid solution for DH-cPCR device design.
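The surrogate-plus-optimizer idea in this abstract can be illustrated with the minimal sketch below: an ANN regressor is trained on a (geometry -> doubling time) database and then queried inside a gradient-based search. The synthetic database, the two design variables, and their bounds are illustrative assumptions, not the paper's actual parameter space or CFD data.

```python
# Sketch: ANN surrogate trained on a design database, then minimized for doubling time.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

rng = np.random.default_rng(1)
# Hypothetical design variables: [capillary tube height (mm), heater size (mm)]
X = rng.uniform([5.0, 1.0], [15.0, 5.0], size=(2000, 2))
# Synthetic "DNA doubling time" in seconds, standing in for CFD results
y = 8.0 + 0.05 * (X[:, 0] - 10.0) ** 2 + 0.2 * (X[:, 1] - 3.0) ** 2

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
surrogate.fit(X, y)

# Gradient-based search over the surrogate for the shortest doubling time.
result = minimize(lambda p: surrogate.predict(p.reshape(1, -1))[0],
                  x0=np.array([8.0, 2.0]),
                  bounds=[(5.0, 15.0), (1.0, 5.0)])
print("predicted optimum:", result.x, "predicted doubling time (s):", result.fun)
```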
International Journal of Computational Fluid Dynamics, Jul 1, 2008
A synthetic jet is considered to control microflows, where the Knudsen number is between 0.001 and 0.1. The flow is modelled with the compressible, 2D Navier-Stokes equations. The wall boundary conditions are modified for the slip velocity and the temperature jump encountered in this Knudsen number range. The membrane motion is modelled as a moving boundary. After a validation using experimental results, available only for a macroflow over a hump, the present study focuses on developing a design optimisation methodology for micro-synthetic jets in micro-scale, laminar crossflow. First, single-variable optimisations are performed. As compared to the baseline case, the optimisations yield 2, 15, 15 and 200% increases in actuation efficiency for the cases varying the orifice width, the orifice height, the cavity height and the frequency, respectively. Then, multi-variable shape optimisation is performed. Compared to the baseline case, the optimisation using shape parameters results in a 7-fold increase in the actuation efficiency, while the optimisation with Bezier polynomials results in more than a 10-fold increase.
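The slip-regime wall treatment mentioned in this abstract is commonly written as first-order Maxwell slip-velocity and Smoluchowski temperature-jump conditions; the form below is the standard textbook version, given only as a reference point, since the paper's exact coefficients may differ:

$$
u_s - u_w = \frac{2-\sigma_v}{\sigma_v}\,\lambda \left.\frac{\partial u}{\partial n}\right|_w,
\qquad
T_s - T_w = \frac{2-\sigma_T}{\sigma_T}\,\frac{2\gamma}{\gamma+1}\,\frac{\lambda}{\mathrm{Pr}} \left.\frac{\partial T}{\partial n}\right|_w ,
$$

where $\lambda$ is the molecular mean free path, $\sigma_v$ and $\sigma_T$ are the momentum and thermal accommodation coefficients, $\gamma$ is the specific-heat ratio, and $n$ is the wall-normal direction.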
A recently developed three-dimensional aerodynamic shape optimization procedure, AeSOP3, is described. This procedure incorporates some of the most promising concepts from the area of computational aerodynamic analysis and design, specifically, discrete sensitivity analysis, a fully implicit 3D CFD methodology, and 3D Bezier-Bernstein surface parameterizations. The new procedure is demonstrated in the preliminary design of supersonic delta wings. Starting from a symmetric clipped delta wing geometry, a Mach 1.62 asymmetric delta wing and two Mach 1.5 cranked delta wings were designed subject to various aerodynamic and geometric constraints.
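The Bezier-Bernstein surface parameterization named above can be illustrated with the short sketch below: a surface patch is defined by a small grid of control points weighted by Bernstein polynomials, so the control-point coordinates become the design variables. The 4x4 control net is an illustrative placeholder, not a wing geometry from the paper.

```python
# Sketch: evaluating a Bezier surface from a control net using Bernstein polynomials.
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

def bezier_surface(ctrl, u, v):
    """Evaluate a Bezier surface at (u, v) in [0, 1]^2; ctrl has shape (n+1, m+1, 3)."""
    n, m = ctrl.shape[0] - 1, ctrl.shape[1] - 1
    point = np.zeros(3)
    for i in range(n + 1):
        for j in range(m + 1):
            point += bernstein(n, i, u) * bernstein(m, j, v) * ctrl[i, j]
    return point

# Illustrative 4x4 control net for a shallow surface patch.
u_grid, v_grid = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4), indexing="ij")
ctrl = np.stack([u_grid, v_grid, 0.05 * np.sin(np.pi * u_grid)], axis=-1)
print(bezier_surface(ctrl, 0.5, 0.5))
```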
See papers presented at International Truck & Bus Meeting & Exposition, December 2000, Portland, OR, USA, Session: Aerodynamics and Fluid Flow.
American Society of Mechanical Engineers eBooks, 1995
Presents papers from the November 1995 congress demonstrating the utilization of CFD in a design environment. Topics include pre- and post-optimization sensitivity analyses; discrete and variational sensitivity methods; stochastic and genetic algorithms; shape optimization; inverse methods; and trade-offs.
International Journal of Remote Sensing, Aug 2, 2018
To meet the high frame rate requirements of correct point correspondences with sub-pixel precision, this paper first proposes a Field Programmable Gate Array (FPGA) architecture that consists of corner detection, corner matching, outlier rejection, and sub-pixel precision localisation. In the architecture, a combined Features from Accelerated Segment Test (FAST) + Binary Robust Independent Elementary Features (BRIEF) algorithm is adopted for detection and matching with pixel precision, a combined algorithm of Slope-based Rejection (SR) and Correlation-Coefficient-based Rejection (CCR) is used to reject the outliers, and a gradient centroid-based algorithm is used for sub-pixel precision localisation. The whole FPGA architecture is implemented on a single FPGA platform (Xilinx XC72K325T). Five image datasets with different spatial resolutions, textures, lighting, rotation angles, and viewpoints are used to evaluate the performance of the FPGA-based implementation. The experimental results show that (1) the SR and CCR algorithms are effective for outlier rejection; (2) a higher correct matching rate is achieved for image pairs that cover artificial textures than for those that cover natural textures; (3) the proposed architecture is also suitable for image pairs with small changes in lighting, rotation angle, and viewpoint; (4) the speed of the FPGA-based implementation can reach 280 Frames Per Second (FPS), which is 35 times faster than the Personal Computer (PC)-based implementation; and (5) the usage of FPGA resources is acceptable for the selected FPGA platform. The speed and the usage of FPGA resources can be improved when the whole FPGA-based implementation is further optimised.
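The gradient-centroid idea behind the sub-pixel localisation stage described above can be sketched as follows: within a small window around an integer corner position, the gradient magnitude serves as a weight and the weighted centroid gives a sub-pixel offset. The window size and the use of simple central-difference gradients are assumptions for illustration, not details of the paper's FPGA implementation.

```python
# Sketch: gradient-weighted centroid refinement of an integer corner position.
import numpy as np

def subpixel_centroid(image, corner, half_win=3):
    """Refine an integer corner (row, col) to sub-pixel precision."""
    r, c = corner
    patch = image[r - half_win:r + half_win + 1,
                  c - half_win:c + half_win + 1].astype(float)
    gy, gx = np.gradient(patch)                    # central-difference gradients
    weight = np.hypot(gx, gy)                      # gradient magnitude as weight
    if weight.sum() == 0:
        return float(r), float(c)
    rows, cols = np.mgrid[-half_win:half_win + 1, -half_win:half_win + 1]
    dr = (weight * rows).sum() / weight.sum()
    dc = (weight * cols).sum() / weight.sum()
    return r + dr, c + dc

img = np.zeros((32, 32)); img[10:, 12:] = 1.0      # synthetic corner near (10, 12)
print(subpixel_centroid(img, (10, 12)))
```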
The failure rate of electronic equipment depends on the operating temperature. Although demand for more effective cooling of electronic devices has increased in the last decades because of the microminiaturization of device sizes accompanied by higher power dissipation levels, it remains a challenge for engineers to attain improved reliability of thermal management for intermediate- and low-heat-flux systems. In the present study, an innovative alternative method is proposed and a computational parametric study has been conducted. A single microchip is placed in a two-dimensional channel. Different synthetic jet configurations are designed as actuators in order to investigate their effectiveness for thermal management. The effect is that the actuator enhances mixing by imparting momentum to the channel flow, thus manipulating the temperature field in a positive manner. The best control is achieved when the actuator is placed midway along the chip length and the throat height is increased. Also, using a nozzle-like throat geometry increases the heat transfer rate from the microchip surface. Doubling the number of actuators, optimally placing them, and phasing their membrane oscillations all improve the cooling.
A new optimization algorithm called the multi-frequency vibrational genetic algorithm (mVGA) is significantly improved and tested for two different test cases: an inverse design of an airfoil in subsonic flow and a direct shape optimization of an airfoil in transonic flow. The algorithm emphasizes a new mutation application strategy and two kinds of diversity: global random diversity and local controlled diversity. The local controlled diversity is based on either a fuzzy logic controller or an artificial neural network, depending on the problem type. For both of the demonstration problems considered, remarkable reductions in the computational times have been accomplished.
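To make the "vibrational mutation" idea concrete, the sketch below shows a plain real-coded genetic algorithm in which the whole population is periodically perturbed by a wave-like term so the search can escape local optima. The perturbation form, amplitude, and period are illustrative assumptions and are not the paper's mVGA operator or its fuzzy/ANN-controlled diversity mechanism.

```python
# Sketch: a real-coded GA with a periodic, wave-like ("vibrational") mutation step.
import numpy as np

def vibrational_ga(fitness, dim=2, pop_size=40, gens=200, period=10, amp=0.5):
    rng = np.random.default_rng(0)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    for g in range(gens):
        scores = np.array([fitness(x) for x in pop])
        parents = pop[np.argsort(scores)[:pop_size // 2]]      # keep best half (minimization)
        # Crossover: blend random parent pairs.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        alpha = rng.random((pop_size, 1))
        pop = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
        # Ordinary small mutation.
        pop += rng.normal(0.0, 0.05, size=pop.shape)
        # Periodic vibrational mutation applied to every individual.
        if g % period == 0:
            wave = amp * np.sin(2 * np.pi * rng.random((pop_size, 1)))
            pop += wave * rng.normal(0.0, 1.0, size=pop.shape)
    best = min(pop, key=fitness)
    return best, fitness(best)

print(vibrational_ga(lambda x: np.sum(x**2)))       # toy objective: sphere function
```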