GP-based techniques for the design of virtual sensors
|Head||Prof. (FH) Priv.-Doz. DI Dr. Michael Affenzeller|
|Researchers||Prof. (FH) DI Dr. Stefan Wagner, Prof. (FH) DI Dr. Stephan Winkler|
|Duration||2006 - 2009|
|Research focus||Software technology and application|
|Research institutions||University of Applied Sciences Upper Austria, Hagenberg Campus; Research Center Hagenberg; Research Group Heuristic and Evolutionary Algorithms Laboratory|
|Project description|| |
Virtual sensors are a key element in many modern control and diagnosis systems, and their importance is continuously increasing. Designing virtual sensors requires models: if no first-principles model of the required precision is available, the design must be based on data. In most such cases, universal approximators (such as artificial neural networks) are used, but their limits are well known. These limits are intrinsically connected with the fact that universal approximators essentially organize the information but do not detect the underlying patterns. Detecting such patterns is a classical data mining task, and the success of data mining is known to depend strongly on the engineering context.
Against this background, we propose to investigate in this project a different approach, based on the self-adaptive genetic algorithms arising from the research of Dr. M. Affenzeller at the Johannes Kepler University of Linz and on the methodological framework for structure identification developed by Prof. del Re at the Institute of Design and Control of the same university. The target is to generate virtual sensors automatically by identifying structural patterns in the data and deriving an essentially analytical model, focusing on the case of engine emissions. This approach has two key elements. The first is the self-adaptation of the selection pressure, which allows a dynamic and robust formulation of a test hypothesis instead of the classical a priori formulation of a test list. The second is the preliminary extraction of information from the data using statistical methods and expert knowledge, which makes it possible to concentrate on the significant data and thereby reduce the curse of dimensionality. The first of these two steps is very general in nature, while the second must be tuned to the specific application.
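The first key element, self-adapting selection pressure, can be illustrated with a minimal sketch in the spirit of offspring selection: an offspring enters the next generation only if it improves on its better parent, and the ratio of generated to accepted offspring (the actual selection pressure) serves as a self-adaptive termination signal. The toy fitness function, real-valued encoding, and parameter values below are illustrative assumptions, not the project's actual implementation:

```python
import random

def fitness(x):
    # Hypothetical benchmark: minimise a simple quadratic.
    return (x - 3.1) ** 2

def offspring_selection_ga(pop_size=50, max_pressure=200.0,
                           max_gens=100, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for gen in range(max_gens):
        new_pop, trials = [], 0
        while len(new_pop) < pop_size:
            trials += 1
            # Actual selection pressure = generated / accepted offspring.
            # When it exceeds the limit, the population has converged
            # and the run stops itself (no a priori generation budget).
            if trials / pop_size > max_pressure:
                return pop, gen
            p1, p2 = rng.sample(pop, 2)
            # Intermediate crossover plus small Gaussian mutation.
            child = (p1 + p2) / 2 + rng.gauss(0, 0.1)
            # Offspring selection: accept the child only if it beats
            # its better (lower-fitness) parent.
            if fitness(child) < min(fitness(p1), fitness(p2)):
                new_pop.append(child)
        pop = new_pop
    return pop, max_gens
```

Because only improving offspring survive, the pressure needed to fill each generation rises automatically as the search stagnates, which is what makes the termination criterion self-adaptive rather than fixed in advance.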
First experiments have shown that the method can yield surprisingly good results (the benchmark was the NOx emissions of a production diesel engine), and work is now continuing on soot. However, more basic work is needed to improve the practical usefulness of the method, in particular regarding computational speed, precision, and the risk of premature convergence, before applied projects can be started on this basis.
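The second key element, the preliminary statistical extraction of significant data, could take the form of a simple correlation-based input screening, as sketched below. The channel names, data values, and threshold are purely hypothetical illustrations, not the project's actual pre-processing pipeline:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_inputs(channels, target, threshold=0.3):
    """Keep channels whose |correlation| with the target exceeds the
    threshold, reducing the dimensionality the model search must face."""
    return [name for name, values in channels.items()
            if abs(pearson(values, target)) >= threshold]

# Illustrative usage with made-up engine channels:
target = [1.0, 2.0, 3.0, 4.0, 5.0]                 # e.g. measured NOx
channels = {
    "engine_speed": [1.1, 2.0, 2.9, 4.2, 5.1],     # strongly correlated
    "noise_channel": [0.3, -0.2, 0.1, -0.4, 0.2],  # uncorrelated
}
print(select_inputs(channels, target))  # ['engine_speed']
```

In practice this screening would be combined with expert knowledge about the engine, as the text notes; a purely statistical filter is only the general-purpose part of the step.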
To achieve this goal, the project will be carried out by three groups:
1. The team of Dr. Affenzeller is the applicant and will concentrate on further development of the methods.
2. Prof. del Re and his team will be responsible for the "deployment context" (which also includes data pre-processing).
3. Dr. Steinmaurer and the LCM will be responsible for data provision and practical assessment.