Computer simulation
A computer simulation, a computer model, or a computational model is a computer program, or network of computers, that attempts to simulate an abstract model of a particular system. Computer simulations have become a useful part of the mathematical modeling of many natural systems in physics (computational physics), astrophysics, chemistry and biology, and of human systems in economics, psychology, social science, and engineering. Simulations can be used to explore and gain new insights into new technology, and to estimate the performance of systems too complex for analytical solutions. [1]
Computer simulations vary from computer programs that run a few minutes, to network-based groups of computers running for hours, to ongoing simulations that run for days. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling: over 10 years ago, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program; [2] a 1-billion-atom model of material deformation (2002); a 2.64-million-atom model of the ribosome, the complex molecular machine that makes proteins in all organisms, in 2005; [3] and the Blue Brain project at EPFL (Switzerland), begun in May 2005, to create the first computer simulation of the entire human brain, right down to the molecular level. [4]
Simulation versus modeling
Traditionally, forming large models of systems has been via a mathematical model, which attempts to find analytical solutions to problems and thereby enable the prediction of the behavior of the system from a set of parameters and initial conditions.
While computer simulations might use some algorithms from purely mathematical models, computers can combine simulations with reality or actual events, such as generating input responses, to simulate test subjects who are no longer present. While the missing test subjects are modeled/simulated, the system they use could be the actual equipment, revealing performance limits or defects in long-term use by these simulated users.
Note that the term computer simulation is broader than computer modeling, which implies that all aspects are being modeled in the computer representation. However, computer simulation also includes generating inputs from simulated users to run actual computer software or equipment, with only part of the system being modeled: an example would be flight simulators which can run machines as well as actual flight software.
Computer simulations are used in many fields, including science, technology, entertainment, and business planning and scheduling.
History
Computer simulation was developed hand-in-hand with the rapid growth of the computer, following its first large-scale deployment during the Manhattan Project in World War II to model the process of nuclear detonation. It was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed-form analytic solutions are not possible. There are many different types of computer simulation; the common feature they all share is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states of the model would be prohibitive or impossible. Computer models were initially used as a supplement for other arguments, but their use later became rather widespread.
Data preparation
The data input/output for the simulation can be either through formatted text files or through a pre- and postprocessor.
Data preparation is possibly the most important aspect of computer simulation. Since the simulation is digital, with the inherent necessity of rounding/truncation error, even small errors in the original data can accumulate into substantial error later in the simulation. While all computer analysis is subject to the "GIGO" (garbage in, garbage out) restriction, this is especially true of digital simulation. Indeed, it was the observation of this inherent, cumulative error in digital systems that gave rise to chaos theory.
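The effect is easy to reproduce. Below is a minimal sketch (not part of the original article) that iterates the chaotic logistic map in single and double precision; the tiny per-step rounding differences compound until the two trajectories bear no resemblance to each other:

```python
import numpy as np

# Iterate x -> r*x*(1-x) in float32 and float64 from the same start value.
# Rounding differs slightly between the two precisions, and the chaotic map
# amplifies those differences exponentially over the steps.
r = 3.9
x32 = np.float32(0.4)
x64 = np.float64(0.4)
for _ in range(60):
    x32 = np.float32(r) * x32 * (np.float32(1.0) - x32)
    x64 = r * x64 * (1.0 - x64)

print(f"float32: {x32:.6f}   float64: {x64:.6f}")  # typically wildly different
```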
Types
Computer models can be classified according to several independent pairs of attributes, including:
Stochastic or deterministic (and, as a special case of deterministic, chaotic)
Steady-state or dynamic
Continuous or discrete (and as an important special case of discrete, discrete event or DE models)
Local or distributed.
A steady-state simulation uses equations that define the relationships between elements of the modeled system and attempts to find a state in which the system is in equilibrium. Such models are often used in simulating physical systems, as a simpler modeling case before dynamic simulation is attempted.
Dynamic simulations model changes in a system in response to (usually changing) input signals.
Stochastic models use random number generators to model chance or random events.
A discrete event simulation (DES) manages events in time. Most computer, logic-test and fault-tree simulations are of this type. In this type of simulation, the simulator maintains a queue of events sorted by the simulated time at which they should occur. The simulator reads the queue and triggers new events as each event is processed. Executing the simulation in real time is usually unimportant; it is often more important to be able to access the data produced by the simulation and to discover logic defects in the design or in the sequence of events.
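As an illustration, here is a minimal sketch (not from the original text; all event names and timings are invented) of the event-queue mechanism just described, using a priority queue ordered by simulated time:

```python
import heapq

# The simulator holds (time, event) pairs in a heap ordered by simulated
# time; handling one event may schedule further events at later times.
events = []
heapq.heappush(events, (0.0, "job arrives"))
heapq.heappush(events, (4.0, "machine breakdown"))

while events:
    now, what = heapq.heappop(events)
    print(f"t={now:5.1f}  {what}")
    if what == "job arrives" and now < 10.0:
        heapq.heappush(events, (now + 2.5, "job arrives"))   # next arrival
        heapq.heappush(events, (now + 1.0, "job finished"))  # service ends
```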
A continuous dynamic simulation performs numerical solution of differential-algebraic equations or differential equations (either partial or ordinary). Periodically, the simulation program solves all the equations, and uses the numbers to change the state and output of the simulation. Applications include flight simulators, construction and management simulation games, chemical process modeling, and simulations of electrical circuits. Originally, these kinds of simulations were actually implemented on analog computers, where the differential equations could be represented directly by various electrical components such as op-amps. By the late 1980s, however, most "analog" simulations were run on conventional digital computers that emulate the behavior of an analog computer.
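A minimal sketch of such a simulation follows (an assumed example, not from the text): the differential equation dV/dt = (V_in - V)/(RC) for a resistor-capacitor circuit, stepped with the explicit Euler method, the simplest numerical scheme of the kind described above:

```python
import math

# An RC low-pass circuit driven by a 5 V step: dV/dt = (V_in - V) / (R*C).
# Explicit Euler stepping; component values below are illustrative.
R, C = 1e3, 1e-6          # 1 kOhm and 1 uF give a 1 ms time constant
V_in, V = 5.0, 0.0        # step input and initial capacitor voltage
dt = 1e-5                 # 10 us time step

for _ in range(500):      # simulate 5 ms
    V += dt * (V_in - V) / (R * C)

analytic = V_in * (1 - math.exp(-5e-3 / (R * C)))
print(f"Euler: {V:.3f} V   analytic: {analytic:.3f} V")
```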
A special type of discrete simulation which does not rely on a model with an underlying equation, but can nonetheless be represented formally, is agent-based simulation. In agent-based simulation, the individual entities (such as molecules, cells, trees or consumers) in the model are represented directly (rather than by their density or concentration) and possess an internal state and set of behaviors or rules which determine how the agent's state is updated from one time-step to the next.
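The sketch below (illustrative only, not from the article) shows the essence of the approach: each agent carries its own state and a rule that updates it every time step, rather than being summarized by a density or concentration:

```python
import random

# 100 foraging agents, each with an individual energy store. Every step an
# agent either gains or loses a unit of energy; agents at zero energy die.
random.seed(1)
agents = [{"energy": 10} for _ in range(100)]

for step in range(50):
    for agent in agents:
        agent["energy"] += 1 if random.random() < 0.5 else -1
    agents = [a for a in agents if a["energy"] > 0]

print(f"{len(agents)} agents survive after 50 steps")
```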
Distributed models run on a network of interconnected computers, possibly through the Internet. Simulations dispersed across multiple host computers like this are often referred to as "distributed simulations". There are several standards for distributed simulation, including the Aggregate Level Simulation Protocol (ALSP), Distributed Interactive Simulation (DIS), the High Level Architecture (HLA) and the Test and Training Enabling Architecture (TENA).
CGI computer simulation
Formerly, the output data from a computer simulation was sometimes presented in a table, or a matrix, showing how data was affected by numerous changes in the simulation parameters. The use of the matrix format was related to traditional use of the matrix concept in mathematical models; however, psychologists and others noted that humans could quickly perceive trends by looking at graphs or even moving-images or motion-pictures generated from the data, as displayed by computer-generated-imagery (CGI) animation. Although observers couldn't necessarily read out numbers, or spout math formulas, from observing a moving weather chart, they might be able to predict events (and "see that rain was headed their way"), much faster than scanning tables of rain-cloud coordinates. Such intense graphical displays, which transcended the world of numbers and formulae, sometimes also led to output that lacked a coordinate grid or omitted timestamps, as if straying too far from numeric data displays. Today, weather forecasting models tend to balance the view of moving rain/snow clouds against a map that uses numeric coordinates and numeric timestamps of events.
Similarly, CGI computer simulations of CAT scans can simulate how a tumor might shrink or change, during an extended period of medical treatment, presenting the passage of time as a spinning view of the visible human head, as the tumor changes.
Other applications of CGI computer simulations are being developed to graphically display large amounts of data, in motion, as changes occur during a simulation run.
Computer simulation in science
Generic examples of types of computer simulations in science, which are derived from an underlying mathematical description:
a numerical simulation of differential equations that cannot be solved analytically; theories involving continuous systems, such as phenomena in physical cosmology, fluid dynamics (e.g. climate models, roadway noise models, roadway air dispersion models), continuum mechanics and chemical kinetics, fall into this category.
a stochastic simulation, typically used for discrete systems where events occur probabilistically and which cannot be described directly with differential equations (this is a discrete simulation in the above sense). Phenomena in this category include genetic drift and biochemical or gene regulatory networks with small numbers of molecules (see also: Monte Carlo method).
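For instance, a decay reaction with only a handful of molecules can be simulated event by event, drawing an exponentially distributed waiting time for each reaction in the manner of the Gillespie algorithm; the sketch below is an assumed example, not from the text:

```python
import random

# Stochastic simulation of first-order decay: each of n molecules decays at
# rate k, so the time to the next decay event is exponential with rate k*n.
random.seed(0)
k, n, t = 0.1, 50, 0.0
while n > 0:
    t += random.expovariate(k * n)   # waiting time to the next decay
    n -= 1                           # one molecule decays

print(f"last molecule decayed at t = {t:.1f}")
```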
Specific examples of computer simulations follow:
statistical simulations based upon an agglomeration of a large number of input profiles, such as the forecasting of equilibrium temperature of receiving waters, allowing the gamut of meteorological data to be input for a specific locale. This technique was developed for thermal pollution forecasting.
agent based simulation has been used effectively in ecology, where it is often called individual based modeling and has been used in situations for which individual variability in the agents cannot be neglected, such as population dynamics of salmon and trout (most purely mathematical models assume all trout behave identically).
time-stepped dynamic models. In hydrology there are several such hydrologic transport models, such as the SWMM and DSSAM models developed by the U.S. Environmental Protection Agency for river water quality forecasting.
computer simulations have also been used to formally model theories of human cognition and performance, e.g. ACT-R
computer simulation using molecular modeling for drug discovery
Computational fluid dynamics simulations are used to simulate the behaviour of flowing air, water and other fluids. One-, two- and three-dimensional models are used. A one-dimensional model might simulate the effects of water hammer in a pipe. A two-dimensional model might be used to simulate the drag forces on the cross-section of an aeroplane wing. A three-dimensional simulation might estimate the heating and cooling requirements of a large building.
An understanding of statistical thermodynamic molecular theory is fundamental to the appreciation of molecular solutions. Development of the Potential Distribution Theorem (PDT) allows one to simplify this complex subject to down-to-earth presentations of molecular theory.
Notable, and sometimes controversial, computer simulations used in science include: Donella Meadows' World3 used in the Limits to Growth, James Lovelock's Daisyworld and Thomas Ray's Tierra.
Simulation environments for physics and engineering
Graphical environments to design simulations have been developed. Special care was taken to handle events (situations in which the simulation equations are not valid and have to be changed). The open project Open Source Physics was started to develop reusable libraries for simulations in Java, together with Easy Java Simulations, a complete graphical environment that generates code based on these libraries.
Computer simulation in practical contexts
Computer simulations are used in a wide variety of practical contexts, such as:
analysis of air pollutant dispersion using atmospheric dispersion modeling
design of complex systems such as aircraft and logistics systems
design of noise barriers to effect roadway noise mitigation
flight simulators to train pilots
weather forecasting
Simulation of other computers is emulation.
forecasting of prices on financial markets (for example Adaptive Modeler)
behavior of structures (such as buildings and industrial parts) under stress and other conditions
design of industrial processes, such as chemical processing plants
strategic management and organizational studies
reservoir simulation in petroleum engineering to model subsurface reservoirs
process engineering simulation tools
Robot simulators for the design of robots and robot control algorithms
urban simulation models that simulate dynamic patterns of urban development and responses to urban land use and transportation policies
traffic engineering to plan or redesign parts of the street network, from single junctions through entire cities to a national highway network, for transportation system planning, design and operations
modeling car crashes to test safety mechanisms in new vehicle models
The reliability and the trust people put in computer simulations depend on the validity of the simulation model; therefore, verification and validation are of crucial importance in the development of computer simulations. Another important aspect of computer simulations is the reproducibility of the results, meaning that a simulation model should not provide a different answer for each execution. Although this might seem obvious, it is a special point of attention in stochastic simulations, where the random numbers should actually be pseudo-random numbers drawn from a seeded generator so that runs can be repeated. An exception to reproducibility is human-in-the-loop simulation, such as flight simulations and computer games: here a human is part of the simulation and thus influences the outcome in a way that is hard, if not impossible, to reproduce exactly.
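In practice, reproducibility of a stochastic run comes down to seeding the pseudo-random number generator; a minimal sketch (illustrative, not from the text):

```python
import random

# Two runs with the same seed consume the identical pseudo-random stream
# and therefore give identical results; a different seed gives a new sample.
def run(seed):
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1000))

assert run(42) == run(42)   # reproducible
print(run(42), run(7))      # distinct samples from distinct seeds
```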
Vehicle manufacturers make use of computer simulation to test safety features in new designs. By building a copy of the car in a physics simulation environment, they can save the hundreds of thousands of dollars that would otherwise be required to build a unique prototype and test it. Engineers can step through the simulation milliseconds at a time to determine the exact stresses being put upon each section of the prototype.[5]
Computer graphics can be used to display the results of a computer simulation. Animations can be used to experience a simulation in real-time e.g. in training simulations. In some cases animations may also be useful in faster than real-time or even slower than real-time modes. For example, faster than real-time animations can be useful in visualizing the buildup of queues in the simulation of humans evacuating a building. Furthermore, simulation results are often aggregated into static images using various ways of scientific visualization.
In debugging, simulating a program execution under test (rather than executing natively) can detect far more errors than the hardware itself can detect and, at the same time, log useful debugging information such as instruction trace, memory alterations and instruction counts. This technique can also detect buffer overflow and similar "hard to detect" errors as well as produce performance information and tuning data.
Pitfalls
Although sometimes ignored in computer simulations, it is very important to perform sensitivity analysis to ensure that the accuracy of the results is properly understood. For example, the probabilistic risk analysis of factors determining the success of an oilfield exploration program involves combining samples from a variety of statistical distributions using the Monte Carlo method. If, for instance, one of the key parameters (e.g. the net ratio of oil-bearing strata) is known to only one significant figure, then the result of the simulation might not be more precise than one significant figure, although it might (misleadingly) be presented as having four significant figures.
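The sketch below (all distributions and figures are invented for illustration) shows the shape of such a Monte Carlo combination, and why quoting four figures from it would be misleading:

```python
import random
import statistics

# Combine samples from two input distributions. The net ratio is only known
# to one significant figure (about 0.3), so the spread of the result dwarfs
# any four-significant-figure point estimate.
random.seed(0)
samples = []
for _ in range(10_000):
    net_ratio = random.uniform(0.25, 0.35)   # "0.3" to one significant figure
    thickness = random.gauss(100.0, 10.0)    # metres of strata
    samples.append(net_ratio * thickness)    # metres of oil-bearing rock

mean = statistics.mean(samples)
sd = statistics.stdev(samples)
print(f"estimate {mean:.4f} +/- {sd:.1f}   (the four decimals are spurious)")
```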
Model Calibration Techniques
The following three steps should be used to produce accurate simulation models: calibration, verification, and validation. Computer simulations are good at portraying and comparing theoretical scenarios, but to accurately model actual case studies a model has to match what is actually happening today. A base model should be created and calibrated so that it matches the area being studied. The calibrated model should then be verified to ensure that it is operating as expected based on the inputs. Once the model has been verified, the final step is to validate the model by comparing its outputs to historical data from the study area. This can be done by using statistical techniques and ensuring an adequate R-squared value. Unless these techniques are employed, the simulation model created will produce inaccurate results and not be a useful prediction tool.
Model calibration is achieved by adjusting any available parameters in order to adjust how the model operates and simulates the process. For example, in traffic simulation, typical parameters include look-ahead distance, car-following sensitivity, discharge headway, and start-up lost time. These parameters influence driver behavior, such as when and how long it takes a driver to change lanes, how much distance a driver leaves between his or her car and the car in front of it, and how quickly a driver starts to accelerate through an intersection. Adjusting these parameters has a direct effect on the traffic volume that can traverse the modeled roadway network, by making the drivers more or less aggressive. These are examples of calibration parameters that can be fine-tuned to match characteristics observed in the field at the study location. Most traffic models have typical default values, but they may need to be adjusted to better match the driver behavior at the location being studied.
Model verification is achieved by obtaining output data from the model and comparing it to what is expected from the input data. For example in traffic simulation, traffic volume can be verified to ensure that actual volume throughput in the model is reasonably close to traffic volumes input into the model. Ten percent is a typical threshold used in traffic simulation to determine if output volumes are reasonably close to input volumes. Simulation models handle model inputs in different ways so traffic that enters the network, for example, may or may not reach its desired destination. Additionally, traffic that wants to enter the network may not be able to if any congestion exists. This is why model verification is a very important part of the modeling process.
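A check of this kind is simple to automate; the sketch below (link names, volumes and the threshold are illustrative) flags any link whose simulated throughput strays more than ten percent from the input demand:

```python
# Compare simulated throughput against input demand, link by link.
input_volumes  = {"link_A": 1200, "link_B": 800, "link_C": 450}
output_volumes = {"link_A": 1150, "link_B": 640, "link_C": 460}

for link, demand in input_volumes.items():
    error = abs(output_volumes[link] - demand) / demand
    verdict = "FAIL" if error > 0.10 else "ok"
    print(f"{link}: demand {demand}, simulated {output_volumes[link]}, "
          f"error {error:.1%} -> {verdict}")
```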
The final step is to validate the model by comparing the results with what is expected based on historical data from the study area. Ideally, the model should produce similar results to what has happened historically. This is typically verified by nothing more than quoting the R-squared statistic from the fit, which measures the fraction of variability that is accounted for by the model. However, a high R-squared value does not necessarily mean the model fits the data well; another tool used to validate models is graphical residual analysis. If model output values are drastically different from historical values, it probably means there is an error in the model. This is important to verify before using the model as a base to produce additional models for different scenarios, to ensure each one is accurate. If the outputs do not reasonably match historic values during the validation process, the model should be reviewed and updated to produce results more in line with expectations. It is an iterative process that helps to produce more realistic models.
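The two statistics mentioned above are easy to compute side by side; in the sketch below (counts are invented) the residuals are printed alongside R-squared precisely because a high R-squared alone can hide a systematic error:

```python
# R-squared plus raw residuals for a model-versus-history comparison.
observed  = [950, 1210, 780, 1430, 660]   # historical traffic counts
predicted = [900, 1180, 820, 1500, 700]   # model output for the same links

mean_obs = sum(observed) / len(observed)
ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
ss_tot = sum((o - mean_obs) ** 2 for o in observed)
r_squared = 1 - ss_res / ss_tot

residuals = [o - p for o, p in zip(observed, predicted)]
print(f"R-squared = {r_squared:.3f}, residuals = {residuals}")
```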
Validating traffic simulation models requires comparing traffic estimated by the model to observed traffic on the roadway and transit systems. Initial comparisons are for trip interchanges between quadrants, sectors, or other large areas of interest. The next step is to compare traffic estimated by the models to traffic counts, including transit ridership, crossing contrived barriers in the study area. These are typically called screenlines, cutlines, and cordon lines and may be imaginary or actual physical barriers. Cordon lines surround particular areas such as the central business district or other major activity centers. Transit ridership estimates are commonly validated by comparing them to actual patronage crossing cordon lines around the central business district.
Three sources of error can cause weak correlation during calibration: input error, model error, and parameter error. In general, input error and parameter error can be adjusted easily by the user. Model error, however, is caused by the methodology used in the model and may not be as easy to fix. Simulation models are typically built using several different modeling theories that can produce conflicting results. Some models are more generalized while others are more detailed. If model error occurs as a result of this, it may be necessary to adjust the model methodology to make the results more consistent.
These steps are necessary to ensure that simulation models function properly and produce realistic results. Simulation models can be used as a tool to verify engineering theories, but they are only valid if calibrated properly. Once satisfactory estimates of the parameters for all models have been obtained, the models must be checked to assure that they adequately perform the intended functions. The validation process establishes the credibility of the model by demonstrating its ability to replicate actual traffic patterns. The importance of model validation underscores the need for careful planning, thoroughness and accuracy in the input data collection program that serves this purpose. Efforts should be made to ensure collected data is consistent with expected values. For example, in traffic analysis it is common for a traffic engineer to perform a site visit to verify traffic counts and become familiar with traffic patterns in the area. The resulting models and forecasts will be no better than the data used for model estimation and validation.
http://en.wikipedia.org/wiki/Computer_simulation
Industrial Engineers
Nature of the Work
Industrial engineers determine the most effective ways for an organization to use the basic factors of production -- people, machines, materials, information, and energy. They bridge the gap between management and operations, and are more concerned with people and methods of business organization than are engineers in other specialties, who generally work more with products or processes.
To solve organizational, production, and related problems most efficiently, industrial engineers design data processing systems and apply mathematical analysis such as operations research. They also develop management control systems to aid in financial planning and cost analysis, design production planning control systems to coordinate activities and control product quality, and design or improve systems for the physical distribution of goods and services. Industrial engineers conduct surveys to find plant locations with the best combination of raw materials, transportation, and taxes. They also develop wage and salary administration systems and job evaluation programs. Many industrial engineers move into management positions because the work is closely related.
Employment
Industrial engineers held about 121,000 jobs in 1990; over 4 out of 5 jobs were in manufacturing industries. Because their skills can be used in almost any type of organization, industrial engineers are more widely distributed among industries than other engineers. For example, some even work for insurance companies, banks, hospitals, and retail organizations. Some work for government agencies or are independent consultants.
Job Outlook
Employment opportunities for industrial engineers are expected to be good; their employment is expected to grow faster than average for all occupations through the year 2000. Most job openings, however, will result from the need to replace industrial engineers who transfer to other occupations or leave the labor force.
Industrial growth, more complex business operations, and the greater use of automation both in factories and in offices underlie the projected employment growth. Jobs also will be created as firms seek to reduce costs and increase productivity through scientific management and safety engineering.
Sources of Additional Information
Institute of Industrial Engineers, Inc., 25 Technology Park/ Atlanta, Norcross, GA 30092.
Electronic Book Reader
Avid readers may have been hesitant about looking at the Kindle eBook Reader because they’re used to the feel and looks of paper books and can’t imagine reading a book that doesn’t fit those criteria. Thanks to the amazing new advances in the Electronic Book Reader, that excuse isn’t valid anymore.
The Kindle eBook Reader is now thinner, lighter and aesthetically pleasing – especially targeted to those who love the feel and mobility of paper-made books and have shunned the Electronic Book Reader thus far. The Kindle eBook Reader’s revolutionary design and capabilities will make it a “must-have” for anyone who loves to read books or for those who enjoy scanning the latest newspapers and magazines.
A high-resolution screen that mimics real paper provides a clear and sharply defined page. The device is completely wireless and requires no cables or other attachments that would make this Electronic Book Reader bulky or difficult to read. You can even shop directly from the Kindle eBook Reader, downloading books, magazines and blogs in less than a minute.
Have you ever bought a high-priced book only to find that it’s not really what you thought it would be? Now you’re stuck with it, either catching dust on a shelf or passing it along in a garage sale for less than half of what you paid for it. The Kindle eBook Reader allows you to download book samples for free – and then decide if you want to buy it.
Best of all, when you use the Kindle eBook Reader you won’t have to find a “hot spot” as you do with WiFi setups. This Electronic Book Reader connects over the same kind of cellular network as technically advanced cell phones, so your reading material is always available to you. It’s completely mobile – but with no contracts or monthly bills to deal with.
The Kindle eBook Reader Lets You Carry a Library
An entire library is just a click away on your Kindle eBook Reader. Books, magazines, newspapers, blogs, and your own documents and photos can be carried around in a device the size of one book – and it’s lighter in weight. And, no longer do you have to search several places to find a best-selling book or magazine that might be out of stock. It’s readily available to you on your Electronic Book Reader.
The Kindle eBook Reader even lets you carry around a dictionary – The New Oxford American Dictionary – containing over 250,000 definitions. Imagine – an Electronic Book Reader with all these features.
http://www.wireless-reading-device-reviews.com/electronic-book-reader/
Operations Research Analysts
Nature of the Work
Organizations develop their own ways of making and carrying out plans. Unfortunately, these processes are not always the best way in light of the organization’s overall goals. Operations research analysts help organizations plan and operate in the most efficient and effective manner. They accomplish this by applying the scientific method and mathematical principles to organizational problems so that managers can evaluate alternatives and choose the course of action that best suits the organization.
Operations research analysts are problem solvers. The problems they tackle are for the most part those encountered in large business organizations: Business strategy, forecasting, resource allocation, facilities layout, inventory control, personnel schedules, and distribution systems.
The method they use generally revolves around a mathematical model or set of equations that explains how things happen within the organization. Models are simplified representations that enable the analyst to break down systems into their component parts, assign numerical values to each component, and examine the mathematical relationships between them. These values can be altered to determine what will happen to the system under different sets of circumstances. Different types of models include simulation, linear programming, and game theory models. Because many of these techniques have been computerized, analysts need to be able to write computer programs or use existing ones.
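As one concrete case, a linear programming model can be stated and solved in a few lines; the sketch below (products, coefficients and resource limits are all invented) uses scipy.optimize.linprog, which minimizes, so profits are negated to obtain a maximization:

```python
from scipy.optimize import linprog

# Choose production quantities of two products to maximize profit
# 3*x1 + 5*x2 subject to machine-hour and labour-hour limits.
c = [-3, -5]                       # negated profit per unit of each product
A_ub = [[1, 2],                    # machine-hours used per unit
        [3, 1]]                    # labour-hours used per unit
b_ub = [14, 18]                    # hours available this week

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)             # optimal production plan and total profit
```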
The type of problem they usually handle varies by industry. For example, an analyst in a bank might deal with branch location, check processing, and personnel schedules, while an analyst employed by a hospital would concentrate on a different set of problems: scheduling admissions, managing patient flow, assigning shifts, monitoring use of pharmacy and laboratory services, or forecasting demand for new hospital services.
The role of the operations research analyst varies according to the structure and management philosophy of the firm. Some firms centralize operations research in one department; others disperse operations research personnel throughout all divisions of the firm. Moreover, some operations research analysts specialize in one type of application; others are generalists.
The degree of supervision also varies by organizational structure. In some organizations, analysts have a great deal of professional autonomy; in others, analysts are more closely supervised. Operations research analysts work closely with managers, who have a wide variety of support needs. Analysts must adapt their work to reflect these requirements.
Regardless of the industry or structure of the organization, operations research entails a similar set of procedures. Managers begin the process by describing the symptoms of a problem to the analyst. The analyst then defines the problem, which sometimes is general in nature and at other times specific. For example, an operations research analyst for an auto manufacturer may want to determine the best inventory level for each of the materials for a new production process or, more specifically, to determine just how much steel should be stocked.
After analysts define the problem, they learn everything they can about it. They research the problem, then break it into its component parts. Then they gather information about each of these parts. Usually this involves consulting a wide variety of personnel. To determine the most efficient amount of steel to be kept on hand, for example, operations research analysts might talk with engineers about production levels; discuss purchasing arrangements with industrial buyers; and examine data on storage costs provided by the accounting department.
With this information in hand, the operations research analyst is ready to select the most appropriate analytical technique. There may be several techniques that could be used, or there may be one standard model or technique that is used in all instances. In a few cases, the analyst must construct an original model to examine and explain the system. In almost all cases, the selected model must be modified to reflect the specific circumstances of the situation.
A model for the inventory of steel, for example, might take into account the amount of steel required to produce a unit of output, several projected levels of output, varying costs of steel, and storage costs. The analyst chooses the values for these variables, enters them into the computer, which has already been programmed to make the calculations required, and runs the program to produce the best inventory level consistent with several sets of assumptions. The analyst would probably design a model that would take into account wide variations in the different variables.
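To make the example concrete, here is a deliberately simplified Python sketch of such a scenario analysis. The steel-per-unit figure, projected output levels, steel prices, and storage and shortage costs are all assumed values, and the cost formula is a stand-in for the more detailed model an analyst would actually build.

# A simplified, hypothetical steel-inventory scenario analysis.
# All figures are invented; a real model would be far more detailed.
import itertools

STEEL_PER_UNIT = 0.5              # tons of steel per unit of output (assumed)
OUTPUT_LEVELS = [800, 1000, 1200] # projected units produced per month
STEEL_PRICES = [600.0, 700.0]     # $/ton under two price scenarios
STORAGE_COST = 4.0                # $/ton-month to hold unused steel in stock
SHORTAGE_COST = 50.0              # $/ton penalty when stock runs short

def monthly_cost(stock_tons, output, price):
    """Purchase + storage + shortage cost for one month under one scenario."""
    demand = output * STEEL_PER_UNIT
    shortage = max(0.0, demand - stock_tons)
    held = max(0.0, stock_tons - demand)
    return demand * price + held * STORAGE_COST + shortage * SHORTAGE_COST

# Evaluate candidate stock levels against every output/price combination
# and report the level with the lowest average cost across scenarios.
candidates = range(300, 701, 50)  # tons of steel kept on hand
scenarios = list(itertools.product(OUTPUT_LEVELS, STEEL_PRICES))
best = min(
    candidates,
    key=lambda s: sum(monthly_cost(s, o, p) for o, p in scenarios) / len(scenarios),
)
print("Best stock level across scenarios: %d tons" % best)

The run reports the candidate stock level with the lowest average cost over all output and price combinations; an analyst would typically also weight the scenarios by their likelihood before recommending a level.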
At this point, the operations research analyst presents the final work to management along with recommendations based on the results of the analysis. The manager, who is the decisionmaker, may request additional runs based on different assumptions to help in making the final decision. Managers assume responsibility for the final decision, but once a decision has been reached, the analyst works with the staff to ensure its successful implementation.
Working Conditions
Operations research analysts generally work regular hours in an office environment. Usually they work on projects that are of immediate interest to management. In these circumstances, analysts often are under pressure to meet deadlines and may work more than a 40-hour week. The work is sedentary in nature, and very little physical strength or stamina is required.
Employment
Operations research analysts held about 57,000 jobs in 1990. They are employed in most industries. Major employers include manufacturers of chemicals, machinery, and transportation equipment; firms providing transportation and telecommunications services; public utilities; banks; insurance agencies; and government agencies at all levels. Some analysts work for management consulting agencies that develop operations research applications for firms that do not have an in-house operations research staff.
Most analysts in the Federal government work for the Armed Forces.
Training, Other Qualifications, and Advancement
Employers look for college graduates who have a strong background in quantitative methods with exposure to computer programming. Employers prefer applicants with a graduate degree in operations research or management science, mathematics, statistics, business administration, computer science, or other quantitative disciplines.
Regardless of educational background or prior work experience, the employer usually plays a large role in the training process. New workers typically participate in on-the-job training programs, working closely with experienced workers until they become proficient. Generally, they help senior analysts gather information and run computer programs. The organization also sponsors skill-improvement training for experienced workers, helping them keep up with new developments in operations research techniques as well as advances in computer science. Some analysts attend college and university classes on these subjects. Operations research analysts must be able to think logically and work well with people. Thus, employers prefer workers with good oral and written communication skills. The computer is an increasingly important tool for quantitative analysis, and programming experience is a must.
Beginning analysts usually do routine work under the close supervision of experienced analysts. As they gain knowledge and experience, they are assigned more complex tasks, with greater autonomy to design models and solve problems. Operations research analysts advance by assuming positions as technical specialists or supervisors. The skills acquired by operations research analysts are useful for upper-level jobs in an organization, and experienced analysts with leadership potential often leave the field altogether to assume nontechnical managerial or administrative positions.
Job Outlook
Employment of operations research analysts is expected to grow much faster than the average for all occupations through the year 2000 due to the increasing importance of quantitative analysis in decisionmaking. In addition to jobs arising from the increased demand for these workers, many openings will occur each year as workers transfer to other occupations or leave the labor force altogether.
More and more organizations are using operations research techniques to improve productivity and reduce costs. This reflects growing acceptance of a systematic approach to decisionmaking as well as more affordable computers, which give even small firms access to operations research applications. The interplay of these two trends should greatly stimulate demand for these workers in the years ahead.
Much of the job growth is expected to occur in the trade and services sectors. Firms in these sectors recognize that quantitative analysis can achieve dramatic improvements in operating efficiency and profitability. More retailers, for example, are using operations research to design store layouts, select the best store location, analyze customer characteristics, and control inventory, among other things. Motel chains are beginning to use operations research to improve their efficiency. For example, they analyze automobile traffic patterns and customer attitudes to determine the location, size, and style of new motels. Like other management support functions, operations research is spread by its own success. When one firm in an industry increases productivity by adopting a new procedure, its competitors usually follow. This competitive pressure will contribute to demand for operations research analysts.
Demand also should be strong in the manufacturing sector as firms expand existing operations research staffs in the face of growing foreign competition. More and more manufacturers are using mathematical models to study parts of the organization for the first time. For example, analysts will be needed to determine the best way to distribute finished products and to find out where sales offices should be based. In addition, increasing factory automation will require more operations research analysts to alter existing models or develop new ones for production layout, robotics installation, work schedules, and inventory control.
Little change is expected in the number of operations research analysts working for the Federal Government.
Earnings
Median annual earnings for operations research analysts were about $35,000 in 1990; the middle 50 percent earned between $26,000 and $43,600 annually. The top 10 percent earned over $53,000; the bottom 10 percent earned less than $20,800 a year.
In the Federal Government, the starting annual salary for operations research analysts was about $16,600 in 1990. Candidates with a superior academic record could begin at $19,700. Operations research analysts employed by the Federal Government averaged about $46,800 a year in 1990.
Related Occupations
Operations research analysts apply mathematical principles to organizational problems. Workers in other occupations that stress quantitative analysis include computer scientists, applied mathematicians, statisticians, and economists.
Sources of Additional Information
Information on career opportunities for operations research analysts is available from:
The Operations Research Society of America, 428 East Preston St., Baltimore, MD 21202.
The Institute for Management Science, 290 Westminster St., Providence, RI 02903.
For information on careers in the Armed Forces and Department of Defense, contact:
Military Operations Research Society, 101 South Whiting St., Suite 202, Alexandria, VA 22304.