Synthetic Scenario Generation for the Military Obstacle Warning System
A Master's Thesis at the Hochschule Ravensburg-Weingarten, performed at the European Aeronautic Defence and Space Company (EADS) in Immenstaad, Germany
Robbie Russell
Advisor: Prof. Dr.-Ing. Holger Voos
Supervisor: Jürgen Strauss
Summer Semester 2005
Hochschule Ravensburg-Weingarten, University of Applied Sciences
Abstract
Synthetic Scenario Generation for the Military Obstacle Warning System
Whether in a civilian or military context, helicopters are subject to a unique set of obstacles when operating at low altitudes. Power lines and poles can be particularly difficult to detect visually and are extremely dangerous to come into contact with during flight.
The Military Obstacle Warning System (MilOWS) uses an eye-safe laser to recognize thin wires, as well as poles and trees, at significant distances and with high precision. Early visual and acoustic warning of obstacles allows a pilot to make avoidance maneuvers.
The goal of this project is to design and implement a method for simulating the MilOWS sensor in a synthetic environment. This simulation will allow the generation of data analogous to what a real MilOWS sensor would create in the same environment. By simulating the sensor and the objects that it interacts with – such as power lines and poles – data can be generated to test the MilOWS obstacle detection algorithms well ahead of actual production and real-life testing. This synthetically generated data will aid in the development of the project and conserve resources.
Table of Contents

List of Figures
List of Tables
List of Charts
Glossary
Chapter 1: Introduction
    Overview of HELLAS and MilOWS
    LIDAR
    LADAR vs. RADAR
    Obstacle Detection
Chapter 2: Purpose and Goals
    Goals of Synthetic Scenario Creation
    Synthetic Object Modeling
    Frame Intersection with Synthetic Objects
    Importing Trajectory Data
    Translation of the Sensor Along the Trajectory
    Synthetic Data Creation
    Importing an Existing Scene
Chapter 3: Components
    LASER and Receiver
    Fiber Array and Oscillating Mirror
    General Operation and Layout
    Basics of Range Information Acquisition
Chapter 4: Modeling of Sensor Geometry
    Mirror and Fibers
    The Fiber α and β Angles
    Reference Fiber
    Motor and Transmission
    Uncertainty in the β Start Position
Chapter 5: Coordinate Systems
    Coordinate Transformations
    Array Referenced Coordinates
    Sensor Referenced Coordinates
    Platform Referenced Coordinates
    Transformation Using Euler Angles
Chapter 6: Modeling Synthetic Objects in Three Dimensions
    The Basics: Ray Tracing Symmetric Objects
    Catenary Models
Chapter 7: Intersection Routines
    Line Intersections in Three Dimensions
    Geometric Considerations of the Energy of the Return
    Effects of Atmospheric Attenuation and Refraction
Chapter 8: Utilizing Existing Data
    Data Types
    Smoothing
Chapter 9: Sensor Translation Along the Trajectory
    Trajectory Segment Selection
Chapter 10: Synthetic Data Creation
    Native HELLAS Data Format
    MilOWS Binary Data Format
    Navigation Data Format
    Interpolation Routine for Navigation Data
Chapter 11: Synthetic Objects in a Scene
    Scene Coordinate Systems
    Map Projections
    Conversion of Latitude/Longitude to UTM
    Trajectory Search Algorithm
Chapter 12: Software Implementation
    Overview
Chapter 13: Conclusion
Bibliography
Appendix A
Appendix B
Illustrations

Wire and pole objects
Tree objects
Safety line
MilOWS components
Motor vs. oscillating mirror angle plot
Linear approximation of motor vs. oscillating mirror angle
Sphere intersection with frame
Differing parameters of the curve
Hanging wire plot
Catenary in 3D
Wire plot in 3D with UTM coordinates
Reflectivity vs. angle of incidence
Cubic vs. fourth-degree polynomial
Mie scattering simulation
Gaussian distribution of heading readings
Gaussian distribution of pitch readings
Pitch, roll, heading scatter plot
Smoothed navigation data
Interpolation of points
UTM coordinate system
Trajectory plot
HELLAS data display
Synthetic wire in HellSim
Synthetic set of wires in HellSim
Drawings

Distance measurement
Range walk
Linear array and oscillating mirror
Principles of MilOWS
Frame generalization
Sweep
Sweep and scan pattern
Sawtooth in fibers
ARC coordinates
HELLAS sensor installation
MilOWS sensor installation
Ray intersection with circle
Ray intersection with cylinder
Side view of hanging wire
Segmentation of a hanging wire
Shortest segment determination
Laser footprint intersection 1
Laser footprint intersection 2
Two adjacent catenary segments
Two intersections of a single ray
Wire pylon support
Translation and rotation of sensor along trajectory
Atmospheric anomalies
Search algorithm
Shortening of laser vector
Tables

1. Fiber number and corresponding alpha angle
2. Performance of IRS used with HELLAS
3. HELLAS data word layout
4. MilOWS data word layout
5. ARINC429 data word
6. Data words of interest for the HELLAS
7. Example output from HellSim
8. Relevant classes in the owsScenarios application

Charts

1. Example of program flow of owsScenarios
Glossary

LIDAR – Light Detection and Ranging, or Laser Imaging Detection and Ranging
LADAR – Laser Detection and Ranging
IRS – Inertial Reference System
GPS – Global Positioning System
MilOWS – Military Obstacle Warning System
HELLAS – Helicopter Laser Obstacle Warning System
HellSim – HELLAS Simulator
owsSc – Obstacle Warning System Scenarios
ARC – Array Referenced Coordinates
SRC – Sensor Referenced Coordinates
PRC – Platform Referenced Coordinates
Ray Tracing – A general technique from geometrical optics of studying the path taken by light by following rays of light as they interact with optical surfaces.[WKT05]
Catenary – The shape of a flexible chain or cable when supported at its ends and acted upon by a uniform gravitational force. [WKC05]
The “Military Obstacle Warning System” (“MilOWS”) sensor currently in development by EADS Deutschland, and its predecessor, the “Helicopter Laser Obstacle Warning System” (“HELLAS”), are both specialized LIDAR (“light detection and ranging” or “laser imaging detection and ranging”) systems developed for commercial and military applications. The primary purpose of both systems is to safely identify dangerous obstacles during flight in near real-time, with high accuracy and a low rate of false alarms. The obstacles of particular interest during flight are hanging wires, poles, and trees.
Obstacle collision avoidance is of high importance in any context for rotorcraft. In the United States between 1963 and 1997, 8,436 rotorcraft accidents were reported, and of that number, 1,294 were due to in-flight collisions with foreign objects [NA01]. Of those, collisions with wires and poles accounted for 720 accidents, with trees adding another 205. In-flight collision with foreign objects was the second leading cause of accidents in commercially manufactured rotorcraft, surpassed only by loss of engine power.
The HELLAS system, which is in production, is currently used for obstacle detection in several helicopter applications by manufacturers such as Eurocopter, Bell, and Sikorsky Aircraft. MilOWS will supersede HELLAS when it is completed.
The concept behind LIDAR is straightforward. In this context, the quantity of interest is the distance between two objects, measured using a pulse of laser light. The time between the propagation of the laser pulse from its source and the return of the reflected energy from the target is measured using a high-speed timer, which allows the distance between the two points to be inferred.
Drawing 1: Simple case of distance measurement.
In turn, with accurate knowledge of the position, orientation, and direction of the laser source, the location of the reflecting object can be determined with relative accuracy and even geo-referenced. With this position knowledge, the object can be assessed and determined to be a threat to the aircraft or not.
LIDARs have been in use for many years in different areas of the earth sciences. Atmospheric research, geology, seismology, topographic surveying, and even simple speed measurement have all relied on LIDAR principles. In a military context, the term “LADAR” (Laser Detection and Ranging) or “laser radar” is often used instead; it should not be confused with radar, since laser light is used rather than radio waves.
Using a LADAR system for obstacle detection in this context has several advantages over RADAR. A laser pulse has much greater energy density and coherence than a radar pulse, and can therefore offer much higher angular resolution.
Any object that is inside the flight path of an aircraft can be considered an obstacle. Scenarios created with obstacles and an aircraft can take on an infinite number of permutations due to the wide variation of obstacle form and position in a scene as well as differing possibilities of approach.
MilOWS must be able to detect and distinguish between three types of objects in near real-time: WIRES, POLES, and TREES.
Illustration 1: WIRE and POLE objects
A WIRE obstacle must satisfy the following criteria:
- The length-to-width ratio exceeds 20.
- The visible horizontal distance to other objects is at least 5 m.
- The wire is at least 5 m above the visible ground or other objects in the foreground.
POLE obstacles include tall, thin obstacles (such as power poles, windmills, cranes, etc.) and must have a visible height exceeding 5 m. POLE objects must also have a width-to-height ratio below 1:5 and must be less than 10 m in diameter.
Illustration 2: Tree Objects
The classification of obstacles into these three classes takes place in the 3D space of range image data and is mapped into the real world. Obstacle indications are generated visually and audibly by MilOWS based on these three classes and are presented to the user.
Illustration 3: Safety line overlay
Obstacles that do not fall into one of these three categories will not generate an obstacle indication. In addition to identifying and displaying these obstacles, a “safety line” is generated that indicates the safest minimum distance from the detected obstacles given the current aircraft trajectory at any given moment.
The ultimate goal of this project is to simulate MilOWS as closely as possible, so that the simulated sensor can be placed into a synthetic environment. Once the synthetic environment is created, objects must be placed in it and the simulated sensor “turned on”. The simulated sensor must, in essence, “see” what a real sensor would see if it encountered the same type of object in reality.
There are several benefits to this type of data simulation. The creation of synthetic scenario data allows some types of early testing without the actual hardware. Synthetically generated data can also be used to provide evidence of detection probability without actually having to fly the hardware on an aerial platform; furthermore, synthetic data is much less expensive to generate than real data acquired during a flight.
Once the sensor is modeled and a synthetic environment is created, it should be possible to use an existing flight trajectory and place synthetic objects along it. The sensor would then be “flown” along this path to create a set of data resembling that of a real-world scenario. It should also be possible to take an existing set of data and merge the synthetic objects into it, while preserving the quality of the original data set.
There are multiple challenges associated with this type of simulation. For example, the following questions must be answered:
What are the geometric and statistical parameters of the sensor that need to be simulated?
What are the limitations of the sensor?
Where do the sources of noise and error originate?
How can the identified noise sources be simulated?
How should the synthetic objects be modeled in an environment?
Also, the platform in which the sensor is installed influences the performance of the sensor – particularly through the quality of the navigation data supplied by the platform to MilOWS.
There are three separate entities that need to be modeled to accomplish the goal set out: the MilOWS sensor itself, the ancillary data sources (navigation and location data), and the synthetic world and objects.
The objects selected to be modeled in this experiment are hanging electric cables, their support poles or pylons, and trees. The existing detection algorithms will be tested against the synthetic objects and must show performance similar to that achieved on the corresponding real-world objects. Aside from single hanging cables, combinations of several wires resembling more complex pylon structures should also be modeled.
In order to create synthetic data for analysis, the intersections between each range measurement from the modeled sensor and the synthetic objects in the scene must be calculated. This means that the geometry and coordinate system of each object must be well known, and routines must be designed for intersection detection with each type of object. The surface properties of each synthetic object must resemble those of its real-world counterpart, with differing energy returns based on reflectivity, angle of incidence, beam footprint on the object, and atmospheric attenuation all taken into consideration. Each synthetic object will be treated as a separate case with unique detection routines and properties.
It is important that a realistic trajectory is used which reflects the characteristics of a real-world platform. Therefore, for initial usage, trajectories recorded during previous experiments with the target platform will be used as the path of the sensor when generating synthetic data. This matters because modeling a real-world platform, such as a helicopter, is complex and requires that all of the performance variables be known. Using an existing trajectory automatically incorporates the performance characteristics of the target platform, albeit with some added noise in the data.
Some subset of the data will have to be smoothed and an interpolation routine used to extract more meaningful location information, as will be shown later.
After importing the trajectory, the sensor must be translated and rotated along this path using the corresponding attitude and location information for each range measurement taken. Because range measurements are made at a much higher rate than the real trajectory data was recorded, each of the values for roll, pitch, yaw, latitude, longitude, and altitude must be interpolated using a predictive interpolation routine.
Once the sensor has been translated and rotated along the imported trajectory, and all intersections with manually placed synthetic objects are calculated, the data must be exported into a format that exactly duplicates that of the native HELLAS or MilOWS binary file formats. Because of some differences between the HELLAS and MilOWS internals, some workarounds must be implemented to produce a data set that can be read by the existing HellSim program (an EADS utility used to replay sensor data) when simulating a MilOWS sensor rather than a HELLAS one. The differences between the two sensors will be illustrated later.
Lastly, it is desired to have the ability to place synthetic objects into an existing realworld scene and produce a merged output that can then be read by existing tools.
The most fundamental parts of the MilOWS LADAR are the laser and the timing device that is used for measuring ranges.
The laser itself operates in the infrared region of the spectrum at an eye-safe level, which is one of the requirements of the sensor. It operates at a pulse repetition rate near 61 kHz and produces laser pulses about 5 ns wide. Infrared lasers have the added benefit of not being visible to the naked eye, and are therefore more stealthy in military applications.
The timing electronics operate at a frequency of 250 MHz, which leads to a resolution $T_{res}$ of

$T_{res} = \frac{1}{250\ \mathrm{MHz}} = 4\ \mathrm{ns}$    (3.1)
Thus, the uncertainty $\sigma_{rms}$ associated with each range measurement corresponds to an uncertainty in an object's horizontal location of

$\sigma_{rms} = \frac{c \, T_{res}}{2n}$    (3.2)

$\sigma_{rms} \approx 0.60\ \mathrm{m}$    (3.3)

using the values $c = 2.998 \times 10^{8}\ \mathrm{m/s}$ and $n \approx 1.0003$ for the speed of light in a vacuum and the refractive index of air, respectively.
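As a concrete illustration of equations 3.1–3.3, the following minimal sketch converts a raw counter value into a one-way range in meters; the refractive-index value is an assumption for air at sea level:

```python
# Minimal sketch: converting a raw time-of-flight counter value into a
# range in meters, using the 250 MHz timing clock described above.
C_VACUUM = 299_792_458.0   # speed of light in vacuum [m/s]
N_AIR = 1.00027            # assumed refractive index of air
F_CLOCK = 250e6            # timing clock frequency [Hz]
T_RES = 1.0 / F_CLOCK      # 4 ns per clock tick

def counter_to_range(ticks: int) -> float:
    """Convert a time-of-flight counter value into a one-way range [m]."""
    time_of_flight = ticks * T_RES
    # Divide by 2 for the round trip, and by n for propagation in air.
    return C_VACUUM * time_of_flight / (2.0 * N_AIR)

print(counter_to_range(1))   # one clock tick is roughly 0.60 m of range
```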
Without some form of automatic gain control on the receiving photodiode, the system would also be subject to another type of error called “range walk”. This error occurs whenever a threshold level is used to trigger on an event. Range walk is the manifestation of a simple concept: varying return energies reflected from the detected object shift the measured range. Varying energies occur regularly outside of a laboratory environment; for example, a black asphalt surface or a dark field will return considerably fewer photons to the receiver than a regularly shaped reflective building face or a sandy area on the ground.
The following diagram illustrates this effect.
Drawing 2: Comparison of returns with differing energy levels
Aside from the laser, photodiode, and timing hardware, the components of next interest are the fiber array and the oscillating mirror.
The fiber array is an arrangement by which a linear scan of the target area can be made. The fibers are spaced evenly from each other and sweep through an angle determined by the system type. For MilOWS, this value is 42 degrees.
Drawing 3: Linear Fiber Array and Oscillating Mirror
While the fiber angle α is being swept through, the oscillating mirror is also rotating back and forth through an angle β. This angle β (on MilOWS) has a value of 36 degrees. For each range measurement, it is imperative that the angles α and β be recorded as well in order to reconstruct the frame.
Illustration 4: Components of MilOWS System, EADS Deutschland.
The preceding image illustrates the compact nature of MilOWS and the relative locations of each of the components. The components that are visible and of interest are:
Fiber scanner – a group of fibers arranged in a circle is scanned by a laser reflected from a revolving nutating mirror.
Scanner array – the circular group of fibers is rearranged into a linear array.
Reference fiber – this fiber is measured during each frame acquisition and is used to determine the operational characteristics of the photodiode during operation and to apply a permanent range adjustment.
Lenses – the laser pulses propagate from the linear fiber array through the lenses and towards the oscillating mirror.
Drawing 4: Principles of MilOWS system, EADS Deutschland.
Oscillating mirror – this mirror reflects the laser beams towards the target while oscillating back and forth. The reflected energy from the target is then directed via this mirror back through the lenses, where it is detected by the photodiode.

The operation of HELLAS and MilOWS can be described by the previous drawing and characterized by the following sequence:
1. A laser pulse a few nanoseconds in duration is emitted and the “start time” is recorded.
2. The pulse is routed via a fiber to the nutating mirror, which scans across the circular array of fibers. One of the 128 fibers in the array continues the light path, which is then translated into the linear array.
3. The short pulse of laser light passes through the lenses and is reflected by the oscillating mirror onto the target.
4. The photons reflected from the target are directed back by the oscillating mirror through an identical set of linear fibers, onwards to another nutating mirror, and finally through an optical filter to the receiver diode.
5. The timer is stopped and the “stop time” is noted.
The calculated distance measurement is combined with information about the position of the oscillating mirror, the fiber number that the pulse propagated through, the orientation and location of the aircraft, as well as several other neighboring data points. This combination of data allows a processing algorithm to determine the existence and location of an obstacle relative to the aircraft.
Each time a distance, or “range measurement”, is recorded it is added to a set of points known as a frame. Each frame consists of several range measurements acquired at high speed, which must then be processed with a specialized algorithm that determines if an object of interest is in the field of view of the sensor.
Drawing 5: A generalization of a frame, with each laser range being subject to the changing fiber number and oscillating mirror angle
Of particular interest in this paper are the inner workings of this system – especially the statistics associated with the receiver diode (optical background, receiver diode dark current, and electronic noise all contribute), the uncertainties associated with the timing electronics and with the knowledge of aircraft attitude and location, and the geometry of synthetic frames and the synthetic objects placed within those frames.
For clarity, it must be noted that unless otherwise stated, all references to components and features will be referring to MilOWS rather than HELLAS.
Identification of a potential target begins with the acquisition of sensory information from the surrounding environment by the MilOWS system. This sensory information includes:
Laser range information
Attitude of the sensor
Sensor position
Sensor attitude and position are received from an Inertial Reference System (IRS) and a Global Positioning System (GPS), respectively.
Information from the pulse time-of-flight, the position of the nutating mirror (and thus the active fiber), and the position of the oscillating mirror is combined with the sensor attitude and position information to create a set of data that can be evaluated by the obstacle-processing algorithm to determine the existence and specifics of obstacles in the environment.
The combination of the scanning of the fibers by the nutating mirror and the movement of the oscillating mirror creates the frame. Frames are acquired and processed by MilOWS at a target rate of 3 Hz.
As was already mentioned, specific geometric considerations exist that must be known in order to model MilOWS as it scans a scene. The relationship between the α and β angles as a frame is acquired will now be clarified, as well as the relationship between the motor and transmission driving the oscillating mirror.
Three different coordinate systems can be referenced when describing the position of each laser ray: Array Referenced Cartesian coordinate system (ARC), Sensor Referenced Cartesian coordinate system (SRC), and Platform Referenced Cartesian coordinate system (PRC). These coordinate systems will be more fully described in Chapter 5.
When referencing the PRC coordinate system in its simplest representation (meaning PRC(x,y,z) = SRC(x,y,z) = ARC(c,a,b) – no rotations or translations occur between the three coordinate systems), the linear fiber array has an associated angle α that is responsible for the left-right nature of the produced scan, while the angle β produces the up-down nature of the scan^{1}.
Drawing 6: Laser pulses (circles) along a sweep as the mirror angle and fiber numbers change
The fiber scan ordering described below arises due to a phenomenon called “optical crosstalk”. In an internal EADS document^{2} it was shown that when the beam section impacts vertically onto the laser beam outlet window, radiation couples over onto the receiver through optical crosstalk in the fiber scanner. The optical output can consequently reach the destruction threshold of the avalanche diode.
As a countermeasure, the decision was taken to resort the fibers in the linear array in order to interrupt the coupling path.
Drawing 7: Order in which the fibers are scanned as alpha changes.
MilOWS has a total of 128 fibers that are scanned in a sequential order, with fiber 128 being the reference fiber. As a result of this hardware modification, the fibers in the linear array are not scanned through in one direction; rather, channels 1 to 64 are scanned from the left-hand edge to the image center, then channels 65 to 127 from the right-hand edge to the image center, and finally the reference fiber. MilOWS uses this fiber reordering by default.
Fiber Number   α Angle
1              −21.00°
2              −20.67°
...            ...
64             0.00°
65             +21.00°
66             +20.67°
...            ...
127            0.00°

Table 1: Fiber number and corresponding alpha angle
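The fiber-to-α mapping of Table 1 can be captured in a small helper. This is a sketch based on the table and the resorted scan order described above; the sign convention (left half negative) is an assumption, since the original table lists magnitudes only:

```python
# Sketch of the fiber-to-alpha mapping implied by Table 1.
def fiber_to_alpha(fiber: int) -> float:
    """Alpha angle [deg] for fibers 1..127; fiber 128 is the reference fiber."""
    if 1 <= fiber <= 64:            # left-hand edge towards the image center
        return -21.0 + (fiber - 1) * (21.0 / 63.0)
    if 65 <= fiber <= 127:          # right-hand edge towards the image center
        return 21.0 - (fiber - 65) * (21.0 / 62.0)
    raise ValueError("no alpha angle for this fiber number")
```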
Drawing 8: Orientation of fibers causing the sawtooth in the beta angle
It must also be noted that the fibers are arranged in such a way that a “sawtooth” pattern is introduced into the angle β.
The preceding illustration shows a small portion of the 128 fibers of the linear array, viewed from the end (edge) of the array. This sawtooth pattern changes the β value by an amount that alternates between −0.134° and +0.134° for each successive range measurement.
The reference value is formed from the set of reference fiber measurements captured during a frame acquisition. After a frame is acquired and the reference values are sorted by counter value (range), the median element is used as the reference; the median is used rather than the mean of the set to eliminate the influence of outliers. The reference fiber information is used to determine the optical efficiency.
MilOWS utilizes a motor encoder angle which is encoded into a value from 0 to 65535 (16 bits). It accompanies every range measurement in the binary data stream.
The motor encoder angle φ must be translated into an angle that corresponds to the oscillating mirror angle β, as there is a mechanical transmission between the motor and the oscillating mirror. This can be done using the helper value e in these equations:
(4.1)
(4.2)
with the following defined variables:
φ – motor encoder angle
φ_{0} – angle offset
a – motor lever length
b – length of the rod
c – length of the lever at the mirror
d – axle distance between motor and mirror
γ_{0} – angle offset of the mirror
The angle β is then
(4.3)
where the value k is an integer representing the pixel index value in the frame, and M_{AF } is a constant called the “Mirror Angle Factor” and is specified for MilOWS elsewhere. “β_{FC}” is the aforementioned fine correction value to be applied to each beta value due to the sawtooth effect of the linear fiber array.
Using the specified values of these variables for MilOWS, the mirror angle versus the motor angle can be plotted.
Illustration 5: Motor (gamma) versus Beta angle of the oscillating mirror
The areas of interest along this curve lie only in the range {−18°...+18°}, where the relationship between the motor encoder angle and the mirror β value is almost linear. When simulating the correct motor angle for each value of β, the same equations must be used but solved for φ to determine the motor encoder angle at a specified β angle.
A small excursion into solving equation 4.1 for φ reveals that the solution will contain four rather complex roots to compute. These will not be shown here; suffice it to say that a simpler method lends itself to our use. To simplify computational time an approximation will be introduced – a polynomial of n^{th} degree to approximate this almost linear section.
An initial investigation shows that a linear approximation is not precise enough.
Illustration 6: Linear approximation of the region −18°...+18°
The error introduced by a linear approximation leads us to try polynomial approximations of higher degree. Because the computations will not be done by hand, an arbitrarily high-order polynomial can be chosen freely.

$\varphi(\beta) \approx \sum_{i=0}^{n} c_i \, \beta^i$    (4.4)
The proper choice of coefficients gives a polynomial with negligible error. Using the solved coefficients^{3} of a high-degree approximation essentially eliminates the error.
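A sketch of this approximation step, assuming the forward linkage equations 4.1–4.3 are available as a function beta_of_phi (a placeholder name); the polynomial degree of 7 is an arbitrary illustrative choice:

```python
import numpy as np

def fit_phi_of_beta(beta_of_phi, phi_samples, degree=7):
    """Fit the inverse linkage phi(beta) over the near-linear region.

    beta_of_phi: forward model from equations 4.1-4.3 (placeholder).
    Returns polynomial coefficients for use with np.polyval.
    """
    phi = np.asarray(phi_samples, dtype=float)
    beta = np.array([beta_of_phi(p) for p in phi])
    keep = np.abs(beta) <= 18.0          # restrict to the -18..+18 deg region
    return np.polyfit(beta[keep], phi[keep], degree)

# Hypothetical usage:
# coeffs = fit_phi_of_beta(beta_of_phi, np.linspace(-40.0, 40.0, 2001))
# phi_at_beta = np.polyval(coeffs, 12.5)   # motor encoder angle at beta = 12.5 deg
```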
The precise value for β at the beginning of each frame is well known, but does not necessarily coincide with the extremity values of the oscillating mirror sweep. With a target scan rate of 3 Hz, a time uncertainty is introduced to the beginning angle of β for each frame of

$\Delta t = \frac{\mathrm{rand}}{\mathrm{freqLineGate}}$    (4.5)

which translates to an angular uncertainty of

(4.6)
where the variable rand is a random number between 0.0 and 1.0 and freqLineGate is 477.43 Hz, giving a time uncertainty of up to about 2.09 ms. This uncertainty is applied to the start β angle of each frame and is much larger in magnitude than the error introduced by the limited resolution of the motor angle encoder (eq. 4.7), so it must be considered.
(4.7)
MilOWS has the added capability of “looking into a turn” during flight to help identify obstacles in the flight path while turning, a feature that HELLAS did not have. With the sensor installed rotated 90° relative to a HELLAS installation, the β angle is adjusted based on the aircraft attitude and velocity to scan the environment in the direction of the turn. Because of this, the β angle will at times exceed the {−18°, +18°} limits; in fact, the limits of the β angle are expanded to {−30°, +30°}.
To reconstruct the location of a range measurement in space, several transformations are necessary to get from the ranged point in space to the coordinate system of the rotorcraft. This can be done using Euler angles. First, a more detailed description will be given of the coordinate systems mentioned in the previous chapter.
ARC coordinates are relative to the array and oscillating mirror, as shown in the following illustration.
Drawing 9: Array referenced coordinates
SRC coordinates take into consideration the mounting parameters of MilOWS inside the flight platform. This set of coordinates is essentially a rotation matrix applied to the ARC coordinates already mentioned. In the case that no rotation is necessary (i.e., ARC coordinates coincide with SRC coordinates), the X axis of SRC coincides with the C axis of ARC, the Z axis of SRC corresponds to the B axis of ARC, and the Y axis of SRC corresponds to the A axis of ARC.
PRC coordinates involve an additional rotation that depends on the coordinate system used by the platform in which the sensor is mounted. The following illustration shows the orientation of an imaginary sensor in an aircraft with PRC coordinates corresponding to SRC coordinates (meaning that there is no additional rotation between PRC and SRC coordinates) for a HELLAS sensor installation.
Drawing 10: A HELLAS sensor installation with PRC(X,Y,Z) = SRC(X,Y,Z)
An installation of the MilOWS sensor is rotated by 90° relative to a HELLAS mount. This effectively swaps the scan directions of the α and β angles, which is necessary to implement the MilOWS feature that allows the sensor to exceed the normal β angle maximum and minimum values for a “look ahead” when the platform turns. This allows obstacle detection in the direction the aerial platform is heading rather than simply straight ahead.
Drawing 11: A MilOWS sensor installation with PRC(X,Y,Z) = SRC(X,Y,Z)
Applying the standard aircraft attitude and orientation measurements of pitch, yaw, and roll to our coordinate systems, each coordinate system can be transformed to another using Euler angles.
Using the aforementioned example, rotations around the axes would have the following correspondences:
- Rotation about the C axis – roll
- Rotation about the A axis – pitch
- Rotation about the B axis – yaw (heading)
A rotation using Euler angles can be stated as follows:

$v_{PRC} = R \cdot v_{SRC}$    (5.1)

Since the yaw angle $\psi$ corresponds to the third fundamental rotation, the rotation matrix that follows is

$R_3(\psi) = \begin{pmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{pmatrix}$    (5.2)

Likewise, the pitch angle $\theta$ corresponds to the second fundamental rotation:

$R_2(\theta) = \begin{pmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{pmatrix}$    (5.3)

And finally the roll angle $\phi$ corresponds to the first fundamental rotation:

$R_1(\phi) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{pmatrix}$    (5.4)

The resulting R matrix is the product of all three matrices:

$R = R_1(\phi) \, R_2(\theta) \, R_3(\psi)$    (5.5)
To find the corresponding pitch, roll, and yaw angles, the above R is simplified and the elements of the matrix are referred to as follows:

$R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix}$    (5.6)

Using the above R, the angles of interest can be found with

$\theta = \arcsin(-r_{13}) + 2k\pi$    (5.7)

$\phi = \arctan\!\left(\frac{r_{23}}{r_{33}}\right) + k\pi$    (5.8)

$\psi = \arctan\!\left(\frac{r_{12}}{r_{11}}\right) + k\pi$    (5.9)

with k being an integer value.
These fundamental equations will be used to transform from one coordinate system to another when generating synthetic scenario data.
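As a minimal sketch of these transformations (assuming the 3-2-1 axis assignments reconstructed above), the rotation matrix and the inverse angle extraction can be written as:

```python
import numpy as np

def rotation_matrix(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """R = R1(roll) @ R2(pitch) @ R3(yaw), cf. equations 5.2-5.5 (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    r1 = np.array([[1, 0, 0], [0, cr, sr], [0, -sr, cr]])     # roll (first axis)
    r2 = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])     # pitch (second axis)
    r3 = np.array([[cy, sy, 0], [-sy, cy, 0], [0, 0, 1]])     # yaw (third axis)
    return r1 @ r2 @ r3

def angles_from_matrix(r: np.ndarray):
    """Recover (roll, pitch, yaw) from R, cf. equations 5.7-5.9."""
    pitch = np.arcsin(-r[0, 2])
    roll = np.arctan2(r[1, 2], r[2, 2])
    yaw = np.arctan2(r[0, 1], r[0, 0])
    return roll, pitch, yaw
```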
Aside from the geometric characteristics of objects in three dimensions, one of the most important aspects is the method for calculating intersections between each laser pulse and the surface of an object. Because each synthetic object may be placed arbitrarily in a scene, a well-defined formulaic method must be derived to calculate the intersection locations, in a 3D coordinate system, between each placed object and the vectors representing the laser pulses from the sensor.
Although it has already been mentioned that the sensor will in fact be moved along a given trajectory in space while each frame is being generated, for now this fact will be ignored and the assumption is made that the sensor is stationary so as to keep the explanation clear.
The method of ray tracing is a naturally fitting solution to the problem of calculating intersections between light rays and three-dimensional objects. Ray tracing works by following the path of a ray of light in a scene and calculating the values of reflection, refraction, and absorption whenever it intersects an object. Although only a simple version of ray tracing will be illustrated here using a sphere, the method is fundamentally the same for more complex shapes such as quartic surfaces.
First, the equation of a ray is used:

$P(t) = O + t D$    (6.1)

where D is the direction of the ray, O is the origin of the ray, and t is a scalar representing the distance from the origin at which the intersection will lie [Dra00].
Drawing 12: Ray intersection with a circle
A sphere with center C and radius r consists of the points P satisfying

$|P - C|^2 = r^2$    (6.2)

With the introduction of $X = O - C$, the following can be solved:

$|O + tD - C|^2 = r^2$    (6.3)

$(X + tD) \cdot (X + tD) = r^2$    (6.4)

$t^2 (D \cdot D) + 2t (X \cdot D) + (X \cdot X - r^2) = 0$    (6.5)

The result is a quadratic equation with the coefficients

$a = D \cdot D, \quad b = 2 (X \cdot D), \quad c = X \cdot X - r^2$    (6.6)

which can be solved using the standard quadratic formula. Both roots are compared, and if they are real and positive, then one or more intersections with the sphere exist. The larger of the two roots can be eliminated, since only the first value is of interest: our sphere object is solid. When inserting a sphere into a synthetically generated frame, the result might resemble the following picture.
Illustration 7: Sphere intersection in a frame with non-moving origin
This analytical method works well for simple shapes, such as poles (cylinders), cones, toruses, and so on.
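The derivation above translates directly into code. This is a minimal sketch of the ray-sphere test of equations 6.2–6.6, returning the distance to the nearest positive intersection:

```python
import numpy as np

def ray_sphere_intersection(origin, direction, center, radius):
    """Return the smallest positive t with O + t*D on the sphere, or None."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    x = o - np.asarray(center, dtype=float)    # X = O - C
    a = d @ d
    b = 2.0 * (x @ d)
    c = x @ x - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                             # no real roots: ray misses
        return None
    sqrt_disc = np.sqrt(disc)
    t_near = (-b - sqrt_disc) / (2.0 * a)      # smaller root: entry point
    t_far = (-b + sqrt_disc) / (2.0 * a)
    if t_near > 0.0:
        return t_near                          # sphere is solid, keep first hit
    return t_far if t_far > 0.0 else None
```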
Cylinders are also of importance because they will be used to represent the supports at the ends of hanging wires and will also be used to simulate trees. The algorithm used in the near real-time classification of obstacles uses a particular rule to distinguish between “pole” objects and “tree” objects, based simply on the width-to-height ratio, separation from other objects, diameter, and height.
Therefore, cylinders will serve as both pole objects and tree objects when used in our synthetic scenarios.
Drawing 13: Raycylinder intersection
To find the intersection with the surface of a pole, the following must be defined:
C – the start-cap point of the cylinder
O – the origin of the vector emitted from the sensor
D – the direction of the vector
V – a unit-length vector along the cylinder's axis
r – the cylinder's radius
L – the cylinder's end-cap point
X = O − C
For detection of an intersection with the surface of the cylinder, it is noticed that [Dra00]:

$A = C + V m$    (6.7)

$m = (D \cdot V)\, t + (X \cdot V)$    (6.8)

$|P - A| = r$    (6.9)

where m is a scalar that determines the closest point A on the axis to the hit point P. The vector P − A is perpendicular to V, which guarantees the closest distance to the axis; its magnitude equals the cylinder's radius.

After some analysis (shown in more detail for the sphere earlier in this chapter), the coefficients of the resulting quadratic are:

$a = D \cdot D - (D \cdot V)^2, \quad b = 2\left[ X \cdot D - (X \cdot V)(D \cdot V) \right], \quad c = X \cdot X - (X \cdot V)^2 - r^2$    (6.10)
which can be solved for the roots in the normal fashion to determine if an intersection occurs.
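A matching sketch for the ray-cylinder case; the quadratic coefficients follow the standard infinite-cylinder form assumed in equation 6.10, with the finite extent handled by checking the axis parameter m:

```python
import numpy as np

def ray_cylinder_intersection(origin, direction, cap, axis, radius, length):
    """First positive hit of ray O + t*D with a finite cylinder, or None.

    cap is the start-cap point C, axis the unit axis vector V.
    End-cap disks themselves are not tested in this sketch.
    """
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    x = o - np.asarray(cap, float)              # X = O - C
    v = np.asarray(axis, float)
    dv, xv = d @ v, x @ v
    a = d @ d - dv * dv
    b = 2.0 * (x @ d - xv * dv)
    c = x @ x - xv * xv - radius * radius
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:                  # ray parallel to axis, or miss
        return None
    roots = sorted([(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)])
    for t in roots:
        m = dv * t + xv                         # position of hit along the axis
        if t > 0.0 and 0.0 <= m <= length:
            return t
    return None
```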
Now that two simple shapes have been examined, this idea must be expanded to cover a more complicated shape, with consideration given to the incidence angle with the object and the footprint the laser beam casts on its surface. Ideally, one would take a parameterized equation for a wire in three dimensions, solve it analytically as was done with the sphere and cylinder, and calculate ray-wire intersections similarly.
A wire hanging under its own weight takes the shape of a curve called a catenary. The intrinsic equation of the shape of the catenary is the hyperbolic function

$y = a \cosh\!\left(\frac{x}{a}\right)$    (6.11)
where the variable a is called the parameter of the curve.
Illustration 8: Catenaries with different values of the parameter of the curve
However, in reality a hanging wire can assume a different shape. If the supports at either end of the wire are not at equal heights, or a minimum distance between the wire and the ground must be maintained (as is often the case in the real world), a modified version of this equation must be utilized.
The parameters that are used as input to determine the shape of the catenary for our purposes are:
Separation of wire supports (Distance D)
Height of each wire support (H_{1}, H_{2})
Height from ground of the lowest point of the wire (S)
Drawing 14: Side view of hanging wire
Using “D” for the distance and “S” for the sag (the distance from the minimum of the curve to the ground), the following equation must be solved for the parameter a:
(6.12)
Because this equation lacks a closed-form solution, the variable a must be resolved numerically using the values of H_{1}, H_{2}, D, and S. A simple root-finding algorithm called the “bisection method” [WKR05] is used to converge on a root value – essentially, finding the value of a at which the function crosses zero (i.e., y = 0).
(6.13)
Starting with two points a and b which bracket a root, at every iteration another subinterval [a, c] or [c, b] is chosen, where

$c = \frac{a + b}{2}$    (6.14)

is the midpoint between a and b. This iteration proceeds until a value of a is found that satisfies the solution. This value determines the shape of our catenary and thus of our synthetic wire. Once this value is found, the results can be plotted.
Illustration 9: Hanging wire plot. H1=100m, H2=57m, S=45m, D=112m, and a solved for using the bisection method.
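A sketch of this root search; f stands in for the (elided) equation 6.13 with the known H_{1}, H_{2}, D, and S values bound in:

```python
def bisect(f, a_lo, a_hi, tol=1e-9, max_iter=200):
    """Bisection root finder; f(a_lo) and f(a_hi) must bracket a root."""
    f_lo = f(a_lo)
    if f_lo * f(a_hi) > 0.0:
        raise ValueError("interval does not bracket a root")
    for _ in range(max_iter):
        mid = 0.5 * (a_lo + a_hi)               # equation 6.14
        f_mid = f(mid)
        if abs(f_mid) < tol or (a_hi - a_lo) < tol:
            return mid
        if f_lo * f_mid < 0.0:                  # root lies in [a_lo, mid]
            a_hi = mid
        else:                                   # root lies in [mid, a_hi]
            a_lo, f_lo = mid, f_mid
    return 0.5 * (a_lo + a_hi)
```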
A parameterized catenary in three dimensions can also take the form of the following matrix:
(6.15)
The variable p is the parameter of the catenary, r is the radius of the wire, and a is varied between 0 and 2π to generate the wire surface.
Illustration 10: 1/2 of a Catenary Shape in 3D
Using the line-segmentation approximation of the catenary, the curve is segmented into small sections along the x-axis using a constant segmentation step.
Drawing 15: Segmentation of a hanging wire
The segments must be sufficiently small so that they each approximate a straight line along the catenary, but resemble the wire on the macro scale.
Illustration 11: Wire plot in 3D using UTM coordinates
After this segmentation, the whole set of segment vectors is rotated and translated into our coordinate system using our transformation matrices.
Next, each laser ray from MilOWS must be checked for an intersection with each segment of the catenary. This involves iterating over each segment of the catenary while checking for intersections with all of the laser vectors of a specified frame.
Drawing 16: Determining the shortest segment joining two segments in 3D
Each segment along the catenary curve must be tested for an intersection with each laser vector from MilOWS.
Let the line P_{1}P_{2} be a segment on the catenary and the line P_{3}P_{4} the segment representing the laser beam, with P_{3} being the origin of the sensor and P_{4} the point corresponding to the timeout value of the range. The location of the shortest segment connecting the two lines at points P_{a} and P_{b} can then be determined. Using the magnitude of the segment P_{a}P_{b}, compared against the diameter of the wire segment and the divergence of the laser beam (hence the laser footprint on the wire segment), it can be determined whether an actual intersection between the laser beam and the wire occurs [Bou98].
The equations for the locations of P_{a} and P_{b} are:

$P_a = P_1 + u_a (P_2 - P_1)$    (7.1)

$P_b = P_3 + u_b (P_4 - P_3)$    (7.2)

The shortest line segment between the two lines will be perpendicular to both lines. Using the relationships

$(P_a - P_b) \cdot (P_2 - P_1) = 0$    (7.3)

$(P_a - P_b) \cdot (P_4 - P_3) = 0$    (7.4)

and expanding them with the given line equations into Cartesian x, y, z coordinates gives the following equations:

$u_a = \frac{d_{1343}\, d_{4321} - d_{1321}\, d_{4343}}{d_{2121}\, d_{4343} - d_{4321}\, d_{4321}}$    (7.5)

$u_b = \frac{d_{1343} + u_a\, d_{4321}}{d_{4343}}$    (7.6)

where

$d_{mnop} = (x_m - x_n)(x_o - x_p) + (y_m - y_n)(y_o - y_p) + (z_m - z_n)(z_o - z_p)$    (7.7)
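A direct transcription of equations 7.1–7.7, after [Bou98]; clamping the parameters to [0, 1] is an addition so the result stays on the finite wire and laser segments:

```python
import numpy as np

def shortest_segment(p1, p2, p3, p4, eps=1e-12):
    """Closest points Pa (on P1P2) and Pb (on P3P4) and their distance."""
    p1, p2, p3, p4 = (np.asarray(p, float) for p in (p1, p2, p3, p4))
    d21, d43, d13 = p2 - p1, p4 - p3, p1 - p3
    d1343 = d13 @ d43
    d4321 = d43 @ d21
    d1321 = d13 @ d21
    d4343 = d43 @ d43
    d2121 = d21 @ d21
    denom = d2121 * d4343 - d4321 * d4321
    ua = 0.0 if abs(denom) < eps else (d1343 * d4321 - d1321 * d4343) / denom
    ua = min(max(ua, 0.0), 1.0)                 # clamp onto the finite segment
    ub = min(max((d1343 + ua * d4321) / d4343, 0.0), 1.0)
    pa = p1 + ua * d21                          # equation 7.1
    pb = p3 + ub * d43                          # equation 7.2
    return pa, pb, float(np.linalg.norm(pa - pb))
```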
Once P_{a}P_{b} has been determined to fall within the combined radii of the divergence of the laser beam and the wire itself, the next step is the calculation of the amount of energy reflected back to the sensor from the wire.
Without considering atmospheric attenuation of the transmitted laser energy, the amount of energy returned to the sensor is a function of the output energy of the laser itself, the incident angle with the catenary segment, the albedo of the wire, and the area of the wire that intersects the laser footprint. The return-energy calculation is important because the photodiode has a minimum threshold level that must be exceeded for a return pulse to register as a valid range measurement.
It is assumed that the footprint of the laser is larger than the radius of the wire at the point of intersection. Another assumption is made regarding the shape of the area of intersection between the laser and the wire – namely, that one dimension (the width of the chord of intersection, for example) is much larger than the other. This means the footprint on the wire need not be modeled in any more detail than as the intersection area of a circle with a rectangular wire cross-section.
This reduces our footprint calculation to two cases.
Case 1
Drawing 17: Intersection of laser footprint with a section of the wire
(7.8)
This type of intersection occurs when the laser footprint overlaps only part of the wire's width.
Case 2
This type of intersection occurs when the wire is completely intersected by the diverging laser beam. The footprint area is then computed as a rectangle with length c and width 2R.
(7.9)
(7.10)
A normal laser beam and wire intersection occurs when a single catenary segment comes within reach of the beam – that is, the magnitude of the line P_{a}P_{b} is smaller than the combined radius of the wire and of the beam divergence at the detected distance of the wire. However, two additional exceptions to this normal type of wire intersection exist.
The first case occurs when two catenary segments are close enough to be intersected with the diverging laser beam. In this case, the catenary segment that results in the higher energy calculation value will be considered the “real” wire return. This situation is illustrated in the following diagram.
Drawing 19: Laser ray passing close to two adjacent segments of the catenary
An alternative approach to this would be to consider the energy returned by each wire segment based only on the percentage of overlap of the diverging beam footprint per segment. This will not be necessary, however.
For now, the quantities that give differing energy values between the two segments can be considered as related to the cosine of the angle between the segment normal and the laser beam (from the “Lambertian shading” technique shown later) and to the beam divergence. These differences can be assumed negligible, since they are very small over the distances involved (for example, a 1 cm wire at 1 km that is perfectly bisected by a laser beam will produce an overlap of at most 1.15 cm of footprint on each catenary segment), and the higher of the two energy values will be used for this type of intersection.
The second exception occurs when multiple catenary segments are intersected but are not adjacent to one another. Such a possibility could occur when the laser vectors from the sensor are at very small angles with the catenary segments.
Drawing 20: Two intersections of a single ray
Three outcomes are possible:
- The 1st return energy pulse is above the minimum threshold of the photodiode, but the second is not.
- The 2nd return energy pulse is above the minimum threshold of the photodiode, but the first is not.
- Both are above the minimum detection threshold of the photodiode.
In order to calculate the energy return of the second intersection, one must know the energy of the first (closer) intersection and subtract it from the total energy available to the second segment intersection. Some calculation must then be done to project the shadow of the first wire onto the second segment of intersection (which may or may not partially cover it). Because this type of intersection is rare compared to the other cases, it will be ignored for now, and only the laser return of higher energy will be used. Moreover, the MilOWS sensor itself would have to be in a very unusual orientation for such an event to occur.
After the area of the footprint is calculated, the incident angle between the laser and the catenary segment can be calculated using

$\cos\theta = \frac{D \cdot N}{|D|\,|N|}$    (7.11)

where D is the direction of the laser beam and N the normal of the catenary segment.
A common method for calculating the light scattered back towards the source is the “Lambertian shading” technique. This simple method uses the cosine of the angle between the normal of the catenary segment and the laser to determine the percentage of laser power reflected back to the sensor. Thus, a 0° incident angle would produce a value of 100%, a 90° angle would produce 0%, with intermediate values following the cosine.
Illustration 12: Reflectivity vs Angle of Incidence
Using the values shown in the previous plot, a polynomial describing the curve between the angles of 0° and 45° can be created, followed by a simple linear interpolation for values between 45° and 90°.
Illustration 13: Cubic vs. 4th degree polynomial, with “data1”
It should be noted that a 4th-degree polynomial describing the curve shows residuals similar to those of a cubic polynomial, but exhibits more desirable behavior near an incidence angle of 0° (the cubic polynomial overshoots the 40% value). Thus, the (non-unique) equation that will be used to describe the reflectivity value as a function of the angle of incidence with the normal of the catenary segment being analyzed is
(7.12)
The total energy returned to the sensor as a function of the output energy, without correcting for atmospheric attenuation of the signal, is:

$\xi_{rx} = \xi_{tx} \cdot A \cdot \frac{a_{fp}}{a_{ls}}$    (7.13)

with $a_{fp}$ being the footprint area, $a_{ls}$ the total cross-sectional area of the diverging laser beam at the location of intersection with the wire, $A$ the reflectivity, $\xi_{tx}$ the transmit energy, and $\xi_{rx}$ the receive energy.
A nominal value of 40% is used for wire reflectivity when simulating singlestrand hanging wires.
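As a hedged sketch of the return-energy bookkeeping reconstructed in equation 7.13 (the threshold and transmission values are placeholders; the transmission factor anticipates the attenuation discussion below):

```python
def return_energy(e_tx, reflectivity, a_fp, a_ls, one_way_transmission=1.0):
    """Received energy per the reconstruction of equation 7.13.

    reflectivity: angle-dependent value from the polynomial fit (7.12);
    a_fp / a_ls: fraction of the beam cross-section hitting the wire;
    one_way_transmission: macroscopic atmospheric factor, applied twice
    (outbound and return paths).
    """
    return e_tx * reflectivity * (a_fp / a_ls) * one_way_transmission ** 2

def registers_return(e_rx, threshold):
    # The photodiode reports a range only above its minimum threshold.
    return e_rx >= threshold
```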
Drawing 21: Wire support pylon
A typical “transmission tower”, or “pylon support”, with multiple wires is pictured above, with dimensions in meters. The wires supported by this construction typically have a diameter between 16 mm and 21 mm. This type of construction consists of seven separate catenaries, with pole objects as the supports at each end of the group of cables. This is just one example of a typical pylon; other configurations with differing dimensions exist.
The propagation of light through the atmosphere is affected by essentially two processes: absorption and scattering. Absorption occurs when photons are absorbed by constituents of the atmosphere and re-emitted at different wavelengths. Scattering occurs when light has its direction altered through diffusion by small particles in the atmosphere.
When aerosol particles approach the size of the laser wavelength (1.5 microns for MilOWS), as is the case in thick fog, the optical transmission can be heavily impacted. Aside from direct absorption by water vapor in the air, a large amount of scattering also occurs. This is a result of the aerosol particles being similar in size to the wavelength of the laser light and thus being subject to a type of scattering called “Mie scattering”. The average size of mist droplets is 10–15 microns, but can range anywhere between 1 and 100 microns in diameter.
Illustration 14: Mie scattering simulation using a 12 μm Gaussian distribution with a white (sunlight) point source, generated with IRIS [Cow05]
The net effect in a fog of uniform density with a Gaussian particle size distribution is that the energy of the laser pulse is reduced by some constant factor while propagating towards the target and again on the return trip to the receiver. For our purposes, the microscopic effects of scattering and absorption need not be modeled; it is sufficient to use a total percentage value for signal attenuation as a macroscopic approximation.
As of the writing of this paper, no data has been made available from MilOWS. However, existing data sets have been recorded using the HELLAS system and will be useful in generating the synthetic data sets for the MilOWS sensor.
Most importantly, recorded trajectory data can be used as the trajectory of a simulated MilOWS sensor, since the characteristics of the platform are accurately captured therein. Using an existing trajectory removes the need to accurately model the platform in which the sensor is to be installed.
The actual sensor data can also be used in conjunction with the trajectory information to merge an existing data set with a synthetic scene. This will be discussed more later in Chapter 10.
Before the trajectory data can be used, it is imperative to understand the shortcomings associated with using the recorded attitude and location values therein. The first subject to be investigated will be the performance of IRS systems in general.
The characteristics of IRS systems vary by manufacturer and are not always well known. For one of the IRS systems used with HELLAS, the associated performance statistics are shown in the following table [ESI04].

IRS used in HELLAS

Variable     Sigma      Bias
heading      0.20°      0.02°
roll/pitch   0.16°      0.05°
velocity     0.23 m/s   0.08 m/s

Table 2: Performance of the IRS used with HELLAS
The errors associated with the heading, pitch, roll, and velocity are assumed to be Gaussian in nature.
Illustration 15: 10000 Heading measurements with noise added
The pitch and roll use the same sigma value, so only one plot is shown (below).
Illustration 16: 10000 Pitch measurements with noise added
While the errors are small, they must be kept in mind when existing trajectory data is to be used as a basis for synthetic data creation.
Illustration 17: 10000 Measurements of pitch, roll, and heading with noise added
The smoothing of this data set was done with a so-called Savitzky-Golay filter, also called a digital smoothing polynomial filter. The filter coefficients are derived by performing an unweighted linear least-squares fit using a polynomial of a given degree over a specified span of data points. The Savitzky-Golay filter preserves the high-frequency portion of a signal, whereas a normal moving-average filter tends to remove a significant portion of a signal's high-frequency content.
Illustration 18: Plot of smoothed heading data (in blue) over original data (in red)
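A sketch of this smoothing step using SciPy's Savitzky-Golay implementation; the file name, window length, and polynomial order are illustrative choices, not the values used for the recorded data:

```python
import numpy as np
from scipy.signal import savgol_filter

heading = np.loadtxt("heading.txt")     # hypothetical recorded heading channel
smoothed = savgol_filter(heading, window_length=31, polyorder=3)
```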
The events that must occur before synthetic frames can be generated, and in turn intersected with synthetic objects, are briefly as follows:

1. An arbitrary segment of an existing trajectory is selected. The time at the beginning of the selected segment is called the “start time” and is assigned to the first pixel of the first frame.

2. The vectors of the frame are created considering only the timeout value of the counter (the end-of-range value), resulting in a “blank frame”. Because the sensor is moving during frame acquisition, the origin of each laser vector is displaced from its neighbors based on the trajectory of the aircraft and the alpha and beta angles. The pitch, roll, and heading values also differ between neighboring vectors, altering the direction of each sensor-target laser vector.

3. At any given time along the trajectory, the trajectory data is further interpolated using a cubic-spline interpolation routine for the values of position (latitude and longitude) as well as orientation (pitch, roll, and heading) for each laser vector. An interpolated time is also created for each vector (a code sketch follows Drawing 22 below).

4. After the completion of a frame, the vectors are saved for later use with the intersection routines of the synthetic objects.

5. The sensor is displaced along the trajectory by the proper amount of time (and thus has a new orientation at the start of the next frame).
Drawing 22: Translation and rotation of sensor along a given trajectory
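The interpolation of step 3 can be sketched with SciPy's cubic-spline routine; heading should be unwrapped first to avoid artifacts at the 0°/360° seam:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_channel(nav_times, nav_values, pixel_times):
    """Cubic-spline interpolation of one navigation channel (e.g. pitch)
    onto the per-pixel timestamps of a frame."""
    spline = CubicSpline(np.asarray(nav_times), np.asarray(nav_values))
    return spline(np.asarray(pixel_times))

# Hypothetical usage; heading is unwrapped to avoid the 360 deg seam:
# heading_rad = np.unwrap(np.radians(heading_nav))
# heading_per_pixel = np.degrees(interpolate_channel(t_nav, heading_rad, t_pixels))
```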
As a single frame is being generated, the location of the endpoint of each vector (P_{1}, P_{2}, P_{3}, ..., P_{n}) is

$P_x = O_x + \mathrm{maxDist} \cdot D_x$    (9.1)

$P_y = O_y + \mathrm{maxDist} \cdot D_y$    (9.2)

$P_z = O_z + \mathrm{maxDist} \cdot D_z$    (9.3)

with the value maxDist being the maximum distance corresponding to the timeout value of the distance counter. For MilOWS, this distance is

$\mathrm{maxDist} = \frac{c \, N_{max} \, T_{res}}{2n} \approx 2456\ \mathrm{m}$    (9.4)

with $N_{max} = 4096$ being the full range of the 12-bit counter (one count per 4 ns clock tick).
When the empty frame is created, each vector will have a magnitude of this value.
The HELLAS and MilOWS systems each have a native data format, described here to illustrate the differences between the two.
The HELLAS system uses data words that are 4 bytes (32 bits) in length. The byte order is little-endian on both the HELLAS and MilOWS sensors. The table below illustrates the bit fields of the HELLAS system.
Bit:     31       30    29–16
Field:   Select   ZT    Time tag (14-bit length)

Bit:     15    14    13        12    11–00
Field:   Ain   NI    MovDir    BT    Counting result (12-bit length)

Table 3: HELLAS data word layout
Each data word recorded by the HELLAS system has the following fields, which are described briefly.
Counter
Bits 0–11 contain a 12-bit counter value. This counter value provides the time-of-flight (range) information in the form of a number of clock ticks, with a weight per clock tick associated with it.
BT
The flag bit “BT” is called the “frame gate flag” and signals the end of the acquisition of a frame. Its value is normally set to one, but is set to zero for the last pixel of a given frame.
MovDir
The 'MovDir' flag indicates the direction of movement of the oscillating mirror. If MovDir is set to zero, then the variation of angle β is positive. If set to one, then the variation of the angle β is negative.
NI
This is a reference signal that is made by the encoder of the oscillating mirror. It is used to determine the pixel at which the angle β =0.
Ain
This is a periodic signal that is used to determine the angle β for each data word.
Timetag
This 14bit value contains a cyclic running counter value that gives a time in units of 0.5 milliseconds associated with the data word.
ZT
The ZT flag is called the “line gate” flag and is set from one to zero when the end of a line is reached (i.e. the last pixel in the linear fiber array has been scanned by the laser).
Select
When processing a set of recorded HELLAS data, the first data word is treated differently. This so-called “header word” can contain information on the angle setting of the oscillating mirror immediately (20 µs) before the first optical transmit pulse is emitted. The Select bit being set to one indicates that an angle is being read instead of a counting value.
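A sketch of unpacking one HELLAS data word, with bit positions taken from the reconstruction of Table 3 above:

```python
def decode_hellas_word(word: int) -> dict:
    """Unpack one 32-bit HELLAS data word into its fields (cf. Table 3)."""
    return {
        "counter": word & 0xFFF,            # bits 0-11: range counter
        "BT":      (word >> 12) & 0x1,      # frame gate flag
        "MovDir":  (word >> 13) & 0x1,      # mirror movement direction
        "NI":      (word >> 14) & 0x1,      # encoder reference signal
        "Ain":     (word >> 15) & 0x1,      # periodic beta-angle signal
        "timetag": (word >> 16) & 0x3FFF,   # bits 16-29: 14-bit cyclic time
        "ZT":      (word >> 30) & 0x1,      # line gate flag
        "Select":  (word >> 31) & 0x1,      # header-word indicator
    }
```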
The MilOWS binary data words differ from the HELLAS data format in several ways, most notably in the longer data word of 8 bytes (64 bits).
Bits 63–48:   Timetag (16 bits, unsigned integer, cyclic)
Bits 47–32:   Motor angle in encoder values (16 bits, signed integer)

Bit:     31    30    29–28        27–16
Field:   FG    LG    Crosstalk    Counter value of the second counter (12 bits, unsigned integer)

Bit:     15–14                 13–12                      11–00
Field:   (reserved for LUT)    (reserved for Trigger)     Counter value of the first counter (12 bits, unsigned integer)

Table 4: MilOWS data word layout
Aside from the increased number of bits for the timetag, note that there are now two counter values in the data word, as well as a new “motor angle” field in which the angle of the motor driving the oscillating mirror is encoded directly.
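A corresponding sketch for one MilOWS data word, with bit positions per the reconstruction of Table 4 (fields described below that are not part of this 64-bit layout, such as the reference amplitude, are not unpacked here):

```python
import struct

def decode_milows_word(word: int) -> dict:
    """Unpack one 64-bit MilOWS data word (bit positions per Table 4)."""
    motor_raw = (word >> 32) & 0xFFFF
    # Reinterpret the 16-bit field as a signed integer.
    motor_angle, = struct.unpack("<h", struct.pack("<H", motor_raw))
    return {
        "counter1":    word & 0xFFF,          # first return
        "counter2":    (word >> 16) & 0xFFF,  # second return
        "crosstalk":   (word >> 28) & 0x3,
        "LG":          (word >> 30) & 0x1,    # line gate
        "FG":          (word >> 31) & 0x1,    # frame gate
        "motor_angle": motor_angle,           # encoder value, signed
        "timetag":     (word >> 48) & 0xFFFF, # cyclic, unsigned
    }
```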
Counter 1 & 2
Drawing 23: Multiple returns when an atmospheric anomaly and target are present
Because LIDARs are generally sensitive to aerosols and cloud particles, unwanted behavior can occur when operating under nonoptimal conditions.
MilOWS is able to capture both the first and second returns during a single range measurement. Currently, only the first return is being used until further study is done into the characterization of second returns.
Keeping in mind the aforementioned scenario, a firstreturn based sensor (like the HELLAS) is limited to detecting only the atmospheric anomaly and not the desired target. With the ability to detect multiple returns, the desired result is that MilOWS sensor will be able to “see through” some kinds of thin clouds (fog) and other kinds of interference.
Trigger
This field remains empty during synthetic data creation and is reserved for placing various flags during data analysis.
LUT
This field is used to store alternative values associated with the color look-up table used when data analysis is being done. This field is empty during synthetic data generation.
Crosstalk
If optical crosstalk is detected by the MilOWS electronics, then this flag will be set. When being simulated, this flag is always set to zero.
FG and LG
The flags “FG” and “LG” are similar to the flags “BT” and “ZT” on the HELLAS system – they mark the end of both a frame (by identifying the last pixel in a frame), and the end of each scan line (by marking the last pixel of each scanned line).
Reference Amplitude
The amplitude of the reference fiber pulse, which is used to check the optical efficiency.
Motor Encoder Angle
Unlike the HELLAS system, in which the position of the oscillating mirror must be inferred, the position of the motor is encoded directly into a 16-bit value that must then be used in the calculation of the β angle of the oscillating mirror. This is done using equations 4.1, 4.2, and 4.3 from Chapter 4.
Timetag
The timetag field is a 16-bit value that is used to store the cyclic counter value associated with each data word. Each clock tick of the counter has a conversion weight w_tick that is used to translate a count n of clock ticks into time in seconds:

t = n · w_tick    (10.1)
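Putting the layout of Table 4 together, a 64-bit data word could be unpacked as sketched below. The timetag and motor angle positions follow the table; the exact bit positions assumed here for the two counters and the FG/LG/Crosstalk flags are illustrative assumptions, not taken from the MilOWS specification.

#include <cstdint>

// Unpack selected fields of a 64-bit MilOWS data word.
// Bits 63-48: timetag; bits 47-32: motor angle (per Table 4).
// Counter and flag positions below are assumptions for illustration.
struct MilowsWord {
    uint16_t timetag;     // cyclic, unsigned
    int16_t  motorAngle;  // signed encoder value
    uint16_t counter2;    // 12-bit, assumed at bits 27-16
    uint16_t counter1;    // 12-bit, assumed at bits 11-00
    bool     fg, lg, crosstalk;
};

MilowsWord unpackMilowsWord(uint64_t w)
{
    MilowsWord d;
    d.timetag    = static_cast<uint16_t>((w >> 48) & 0xFFFFu);
    d.motorAngle = static_cast<int16_t>((w >> 32) & 0xFFFFu);
    d.counter2   = static_cast<uint16_t>((w >> 16) & 0x0FFFu);
    d.counter1   = static_cast<uint16_t>(w & 0x0FFFu);
    d.fg         = ((w >> 31) & 1u) != 0;   // assumed position
    d.lg         = ((w >> 30) & 1u) != 0;   // assumed position
    d.crosstalk  = ((w >> 29) & 1u) != 0;   // assumed position
    return d;
}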
ARINC429 (Aeronautical Radio, Incorporated) is a specification which defines how avionics equipment and systems should communicate with each other. Equipment utilizing ARINC429 is connected by twisted-pair wires, and data is transmitted at either 12.5 or 100 kilobits per second. Transmission and reception are on separate ports, increasing the reliability of the signal at the cost of added wire weight [Con00].
ARINC429 data words are always 32 bits in length and are encoded as Binary Coded Decimal (BCD), two's complement binary notation (BNR), Discrete Data, Maintenance Data and Acknowledgement, or ISO Alphabet #5 Character Data. Each data word typically consists of Parity, SSM, Data, SDI, and Label fields.
Bits     Field
32       P
31–30    SSM
29–11    Data
10–9     SDI
8–1      Label

Table 5: ARINC429 data word
Parity – Odd parity is normally used at the MSB location of the data word.
SSM – Sign/Status Matrix which indicates hardware equipment condition, operational mode, or validity of data.
Data – The actual data which can be in a number of formats.
SDI – Source/Destination Identifier which is used to identify the receiver of the data word, or the source of transmission.
Label – These bits identify the data type and the parameters associated with it.
The label of the data word is always transmitted on the bus first, so that receivers can identify the data word as being relevant or not and translate the data contained in the word.
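As a minimal sketch (with illustrative names), a received 32-bit word can be split into its fields as follows, taking bit 1 of Table 5 as the least significant bit:

#include <cstdint>

// Split an ARINC429 word into its fields. ARINC429 numbers its bits
// 1..32, so "bit 1" of Table 5 is the least significant bit here.
struct Arinc429Word {
    uint8_t  label;   // bits 1-8
    uint8_t  sdi;     // bits 9-10
    uint32_t data;    // bits 11-29
    uint8_t  ssm;     // bits 30-31
    uint8_t  parity;  // bit 32 (odd parity over the whole word)
};

Arinc429Word decodeArinc(uint32_t w)
{
    Arinc429Word a;
    a.label  = w & 0xFFu;
    a.sdi    = (w >> 8) & 0x03u;
    a.data   = (w >> 10) & 0x7FFFFu;   // 19 data bits
    a.ssm    = (w >> 29) & 0x03u;
    a.parity = (w >> 31) & 0x01u;
    return a;
}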
Using this type of messaging, the navigation system on board the aircraft interfaces with MilOWS and provides the labels of interest shown in the following table [ESA03].
Label   Description             Average Rate   HELLAS ARINC429 Input #   SSM Definition
320     Magnetic Heading        16.7 msec      2                         ARINC
324     Pitch                   16.7 msec      2                         ARINC
325     Roll                    16.7 msec      2                         ARINC
050     North Speed             50 msec        1                         Private
051     East Speed              50 msec        1                         Private
052     Vertical Speed (up)     50 msec        1                         Private
124     Magnetic Variation      500 msec       1                         ARINC
040     Height (WGS84 Height)   500 msec       1                         ARINC
057     Latitude                500 msec       1                         Private
054     Fine Latitude           500 msec       1                         Private
167     Longitude               500 msec       1                         Private
055     Fine Longitude          500 msec       1                         Private
216     GPS Time hhmm           500 msec       1                         ARINC
214     GPS Time sshs           500 msec       1                         ARINC

Table 6: Data words of interest on the ARINC429 bus for HELLAS [ESA03]. MilOWS data words may differ.
The binary format of the navigation data recorded by the sensor while operating is quite complicated^{4} to dissect, as it is embedded within several other records that are not of interest in this context. Instead of describing the native format of the navigation data, a simpler subset of the data was used after being exported from the program “HellSim”. The data exported from HellSim is in ASCII format and is organized as follows:
Field      Example value
Pic #      3
TimeTag    5529
Heading    103.2374
Pitch      0.45212
Roll       1.09571
XVel       46.08434
YVel       2.06451
ZVel       1.41685
NVel       12.605
EVel       44.37
DVel       1.76
Veloc      46.16
Latitude   48.68809
Long.      10.84468
Altitude   1042.5143

Table 7: Example output from HellSim (a single record)
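A minimal sketch of reading such records is shown below; the struct and function names are illustrative, while the actual import in owsSc is handled by navData::loadAscFile.

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// One line of HellSim ASCII output, in the column order of Table 7.
struct NavLine {
    int    pic;
    long   timeTag;
    double heading, pitch, roll;
    double xVel, yVel, zVel, nVel, eVel, dVel, veloc;
    double latitude, longitude, altitude;
};

std::vector<NavLine> readHellSimAscii(const std::string& filename)
{
    std::vector<NavLine> records;
    std::ifstream in(filename);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        NavLine r;
        // Lines that do not parse (e.g. a header row) are skipped.
        if (ss >> r.pic >> r.timeTag >> r.heading >> r.pitch >> r.roll
               >> r.xVel >> r.yVel >> r.zVel >> r.nVel >> r.eVel >> r.dVel
               >> r.veloc >> r.latitude >> r.longitude >> r.altitude)
            records.push_back(r);
    }
    return records;
}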
Each navigation record has an associated timetag that can be correlated with the records in an accompanying sensor data set. However, it must be noted that the timetags of the navigation records and of the pixel words in the sensor data do not necessarily align (and in fact, they will not align for the majority of the records).
It should already be clear that, to find the location in space of any given pixel, the orientation of the platform must be known at the time that pixel was acquired. Because the navigation data rate is much lower than the pixel acquisition rate, some form of interpolation must be used to extract values of pitch, roll, heading, latitude, longitude, and altitude to assign to a given pixel.
This becomes important when an existing trajectory is to be used for synthetic data creation. After either 1) a specific path or 2) a time period along the trajectory is specified, a time value must first be generated for each synthetic pixel and frame. Using this time value, the interpolated values for the attitude and location of the platform at that moment can be determined.
The attitude and location of the platform for each pixel along an existing trajectory are interpolated using cubic spline interpolation. The goal of a cubic spline interpolation is an interpolation formula that is smooth in the first derivative and continuous in the second derivative, both within each interval and at its boundaries. Cubic spline interpolation uses low-degree polynomials in each interval and chooses the polynomial pieces such that they fit smoothly together. Spline interpolation incurs a smaller error than linear interpolation, yields a smoother interpolant, and remains relatively easy to evaluate.
Illustration 19: Interpolation of points
Interpolation with a cubic spline can be defined as

y = A·y_j + B·y_{j+1} + C·y″_j + D·y″_{j+1}    (10.2)

with

A = (x_{j+1} − x)/(x_{j+1} − x_j),  B = 1 − A,
C = (1/6)(A³ − A)(x_{j+1} − x_j)²,  D = (1/6)(B³ − B)(x_{j+1} − x_j)²    (10.3)
The implementation used in the generation of synthetic scenarios is an adaptation from [NRC02].
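A sketch of this scheme follows, assuming natural boundary conditions (second derivative zero at the end points); the names are illustrative, and the actual implementation is the adaptation from [NRC02] mentioned above. The second derivatives are precomputed once, after which equation 10.2 can be evaluated cheaply for any point:

#include <vector>

// Natural cubic spline: precompute second derivatives y2 at the knots
// (x, y), then evaluate equation 10.2 inside any interval.
std::vector<double> splineSecondDerivs(const std::vector<double>& x,
                                       const std::vector<double>& y)
{
    const int n = static_cast<int>(x.size());
    std::vector<double> y2(n, 0.0), u(n, 0.0);
    for (int i = 1; i < n - 1; ++i) {            // forward sweep
        double sig = (x[i] - x[i-1]) / (x[i+1] - x[i-1]);
        double p   = sig * y2[i-1] + 2.0;
        y2[i] = (sig - 1.0) / p;
        u[i]  = (y[i+1] - y[i]) / (x[i+1] - x[i])
              - (y[i] - y[i-1]) / (x[i] - x[i-1]);
        u[i]  = (6.0 * u[i] / (x[i+1] - x[i-1]) - sig * u[i-1]) / p;
    }
    for (int k = n - 2; k >= 0; --k)             // back substitution
        y2[k] = y2[k] * y2[k+1] + u[k];
    return y2;
}

double splineEval(const std::vector<double>& x, const std::vector<double>& y,
                  const std::vector<double>& y2, double xi)
{
    // Binary search for the interval [x[lo], x[hi]] containing xi.
    int lo = 0, hi = static_cast<int>(x.size()) - 1;
    while (hi - lo > 1) {
        int mid = (lo + hi) / 2;
        if (x[mid] > xi) hi = mid; else lo = mid;
    }
    double h = x[hi] - x[lo];
    double A = (x[hi] - xi) / h, B = 1.0 - A;    // equation 10.3
    return A * y[lo] + B * y[hi]                 // equation 10.2
         + ((A*A*A - A) * y2[lo] + (B*B*B - B) * y2[hi]) * h * h / 6.0;
}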
Before considering the placement of objects in a synthetic scenario, the coordinate systems that will be used for object placement must briefly be examined.
While the imported trajectory data relates position in coordinates of latitude and longitude, it is desirable to use a map projection in which the coordinates have a 1:1 relationship. This becomes apparent when a complicated object – such as a catenary shape consisting of many small segments – must be placed in a coordinate system and intersections must be calculated with the generated laser vectors within a frame. When using latitude and longitude, the length of a degree of longitude varies with latitude: it corresponds to about 111 km times the cosine of the latitude. So in this context, when calculating the intersection of a laser vector with a single segment of a catenary, this variation would have to be taken into account along the length of the segment (each end of the segment occupies a slightly different longitude), making the computations considerably more involved.
A transverse Mercator projection is a map projection of Earth on a tangent cylinder by rays radial with respect to the cylinder. In a transverse Mercator projection, the cylinder is tangent at some meridian rather than at the equator, as is the case with a normal “Mercator Projection”.
Illustration 20: UTM projection.
The Universal Transverse Mercator (UTM) projection is a geographic coordinate system that serves as an alternative to longitude and latitude, with the advantage that the quantities are in meters, as opposed to degrees/minutes/seconds of latitude and longitude [WKU05].
The Earth is divided into 60 zones limited by meridians, each spanning six degrees of longitude. Each zone is then projected using the transverse Mercator with the central meridian of the zone as the tangent meridian.
The Universal Transverse Mercator is convenient because no point is far from the central meridian of its zone, so distortions inside each zone are small. However, this is achieved at the cost of discontinuities when coordinates spanning multiple zones are used.
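For example, the zone number and central meridian follow directly from the longitude (a minimal sketch with illustrative names):

#include <cmath>

// Zone number (1..60) for a given longitude in degrees.
int utmZone(double lonDeg)
{
    return static_cast<int>(std::floor((lonDeg + 180.0) / 6.0)) + 1;
}

// Central meridian of a zone, in degrees (zone 1 spans 180W-174W).
double centralMeridian(int zone)
{
    return 6.0 * zone - 183.0;
}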
Some math is required to convert lat/lon coordinates to UTM, as well as the proper selection of the appropriate datum. The code required to do so was largely derived from a set of C routines [Nak00]. The following variables are used in the conversion:

lat – the latitude of the coordinate
lon – the longitude of the coordinate
long_0 – the central meridian of the zone
k_0 – the scale along long_0 = 0.9996
a, b – the equatorial and polar radii of the chosen datum
e – the eccentricity of the earth's elliptical cross-section, approximately 0.08
e′² = e²/(1 − e²) = 0.007 approximately. The quantity e′ occurs only in even powers, so only e′² need be calculated.
ρ = a(1 − e²)/(1 − e²·sin²(lat))^{3/2} – the radius of curvature of the earth in the meridian plane
ν = a/(1 − e²·sin²(lat))^{1/2} – the radius of curvature of the earth perpendicular to the meridian plane. It is also the distance from the point in question to the polar axis, measured perpendicular to the earth's surface.
sin 1″ – the sine of one second of arc
S – the meridional arc through the point in question (the distance along the earth's surface from the equator). All angles are in radians.
S = A′·lat − B′·sin(2·lat) + C′·sin(4·lat) − D′·sin(6·lat) + E′·sin(8·lat)    (11.1)

where lat is in radians and, with n = (a − b)/(a + b),

A′ = a·[1 − n + (5/4)(n² − n³) + (81/64)(n⁴ − n⁵)]    (11.2)
B′ = (3/2)·a·[n − n² + (7/8)(n³ − n⁴) + (55/64)·n⁵]    (11.3)
C′ = (15/16)·a·[n² − n³ + (3/4)(n⁴ − n⁵)]    (11.4)
D′ = (35/48)·a·[n³ − n⁴ + (11/16)·n⁵]    (11.5)
E′ = (315/512)·a·[n⁴ − n⁵]    (11.6)
To now get from latitude/longitude to UTM coordinates, the following must be used (all angles in radians, with p = lon − long_0). The northing y is

y = K1 + K2·p² + K3·p⁴    (11.7)

where

K1 = S·k_0    (11.8)
K2 = k_0·ν·sin(lat)·cos(lat)/2    (11.9)
K3 = [k_0·ν·sin(lat)·cos³(lat)/24]·[5 − tan²(lat) + 9·e′²·cos²(lat) + 4·e′⁴·cos⁴(lat)]    (11.10)

and the easting x is

x = K4·p + K5·p³    (11.11)

with

K4 = k_0·ν·cos(lat)    (11.12)
K5 = [k_0·ν·cos³(lat)/6]·[1 − tan²(lat) + e′²·cos²(lat)]    (11.13)
Easting x is relative to the central meridian (for conventional UTM easting add 500,000 meters to x) [Dut05].
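A sketch of equations 11.1–11.13 in code is given below, assuming WGS-84 ellipsoid parameters; the function and variable names are illustrative (in owsSc itself, conversions are handled by the Ellipsoid class).

#include <cmath>

// Latitude/longitude (degrees) to UTM easting/northing (meters),
// following equations 11.1-11.13 with WGS-84 parameters.
void latLonToUtm(double latDeg, double lonDeg, double& easting, double& northing)
{
    const double pi  = 3.14159265358979323846;
    const double a   = 6378137.0;                 // WGS-84 equatorial radius
    const double b   = 6356752.3142;              // WGS-84 polar radius
    const double k0  = 0.9996;                    // scale along the central meridian
    const double e2  = 1.0 - (b * b) / (a * a);   // eccentricity squared
    const double ep2 = e2 / (1.0 - e2);           // e'^2, roughly 0.007

    double lat  = latDeg * pi / 180.0;
    int    zone = static_cast<int>(std::floor((lonDeg + 180.0) / 6.0)) + 1;
    double p    = (lonDeg - (6.0 * zone - 183.0)) * pi / 180.0;   // lon - long_0

    // Meridional arc S (equations 11.1-11.6)
    double n  = (a - b) / (a + b);
    double n2 = n * n, n3 = n2 * n, n4 = n3 * n, n5 = n4 * n;
    double A = a * (1 - n + 1.25 * (n2 - n3) + (81.0 / 64.0) * (n4 - n5));
    double B = 1.5 * a * (n - n2 + (7.0 / 8.0) * (n3 - n4) + (55.0 / 64.0) * n5);
    double C = (15.0 / 16.0) * a * (n2 - n3 + 0.75 * (n4 - n5));
    double D = (35.0 / 48.0) * a * (n3 - n4 + (11.0 / 16.0) * n5);
    double E = (315.0 / 512.0) * a * (n4 - n5);
    double S = A * lat - B * std::sin(2 * lat) + C * std::sin(4 * lat)
             - D * std::sin(6 * lat) + E * std::sin(8 * lat);

    double sl = std::sin(lat), cl = std::cos(lat), tl = std::tan(lat);
    double nu = a / std::sqrt(1.0 - e2 * sl * sl);   // radius of curvature ν

    // Equations 11.7-11.13
    double K1 = S * k0;
    double K2 = k0 * nu * sl * cl / 2.0;
    double K3 = (k0 * nu * sl * cl * cl * cl / 24.0)
              * (5.0 - tl * tl + 9.0 * ep2 * cl * cl + 4.0 * ep2 * ep2 * cl * cl * cl * cl);
    double K4 = k0 * nu * cl;
    double K5 = (k0 * nu * cl * cl * cl / 6.0) * (1.0 - tl * tl + ep2 * cl * cl);

    northing = K1 + K2 * p * p + K3 * p * p * p * p;
    easting  = K4 * p + K5 * p * p * p + 500000.0;   // add the conventional false easting
}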
Drawing 24: Sensor located along the trajectory
The preceding figure illustrates the sensor as it scans out a single frame at a given UTM coordinate along the trajectory. When a particular range of the trajectory is specified for data generation, each frame is first checked using simple geometry to determine whether the synthetic object lies within the view of the sensor. This reduces the computation time by eliminating the need to check every laser vector of every frame, restricting the checks to the vectors contained within frames that are within range of the particular object. Note that this can easily be done now that the trajectory and the position of the synthetic object have been converted to UTM coordinates. If the value of R_s is less than the maximum detection distance of the MilOWS sensor, then that frame is flagged for further processing.
R_s = √(Δx² + Δy² + Δz²)    (11.14)

with

Δx = x_obj − x_sensor    (11.15)
Δy = y_obj − y_sensor    (11.16)
Δz = z_obj − z_sensor    (11.17)
R_s < R_max    (11.18)
all in UTM coordinates. After this step, each frame and the corresponding pixels it contains are further checked for intersections with the objects in the scene.
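A minimal sketch of this pre-check follows, with illustrative names (in owsSc the check is carried out within the frame-handling classes):

#include <cmath>

// Flag a frame for detailed intersection tests when the synthetic object
// lies within the sensor's maximum detection distance.
// All positions are UTM coordinates in meters.
bool frameInRange(double xs, double ys, double zs,    // sensor position
                  double xo, double yo, double zo,    // object position
                  double maxRange)
{
    double dx = xo - xs, dy = yo - ys, dz = zo - zs;
    double Rs = std::sqrt(dx * dx + dy * dy + dz * dz);   // equation 11.14
    return Rs < maxRange;
}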
In order to simulate the MilOWS sensor characteristics and create data that emulates what MilOWS would generate in a given scenario, the goal was set to write software to generate these so-called “Synthetic Scenarios”. The software, named “owsScenarios” (owsSc), must be able to simulate the sensor as it scans a synthetic object while moving through a three-dimensional coordinate system along an imported trajectory. The frames generated by owsSc must be the same as those generated by the actual hardware. This data will be used for algorithm testing and will help with the development of MilOWS.
The implementation was done in the C++ language on a standard PC platform using a Microsoft Windows operating system. The QT Toolkit^{5} is used for the user interface, while the underlying classes that handle the sensor-related data and algorithms are not platform dependent and should therefore work on any major platform supporting QT.
The class diagrams can be found in Appendix A, with the source code documentation following in Appendix B and the source code itself available in electronic form with the accompanying digital media.
The classes relevant to the generation of synthetic scenarios are listed with a brief description of each.
Class Name          Description
point               A point object located in a 3D coordinate system.
framePoint          Container for a set of points located within a single frame. Includes related sensor angles, fiber index, counter value, time, and motor position.
frameCoords         A container for a set of framePoints within a single frame instance.
frame               A set of frame coordinates that comprise a single scan by a sensor; includes spline routines and empty-frame generation.
frameSet            A set of frames along a given trajectory between times t1 and t2. A frameSet is aware of the sensor geometry and settings, as well as the imported trajectory data.
navRecord           A single record of navigation data: attitude, position, time, and related velocities.
navData             A container for all navRecords in an imported navigation data file.
vector              A mathematical vector class with associated operations and an intersection test function.
catCoords           A container for a set of vectors that form a catenary wire shape.
catenary            A synthetic object class that contains the parameters of a catenary, the set of catCoords vectors that create the actual shape, an intersection routine with a given frameSet, and translation and rotation functions.
septWire            A wire object composed of seven catenaries.
triWire             A wire object composed of three wires.
cylinder            A synthetic cylinder object including the parameters of the cylinder and its intersection routine with a frameSet.
synthObjContainer   A container class capable of holding any number and combination of synthetic objects.
hellasData          A class that contains data imported from existing HELLAS scenarios for merging with a synthetic scenario.
Ellipsoid           A class that handles conversion from UTM to the standard Geographic Coordinate System and back.

Table 8: Relevant classes in the owsScenarios application
The classes mentioned above are the most important classes used for the generation of a synthetic scenario.
Flowchart 1: Example program flow with a single synthetic object (catenary)
Some example code will illustrate the usage of the classes as would be done in a typical application.
To generate a synthetic scene, the first step is to import an existing trajectory. Without a trajectory it is only possible to create a static scene with no sensor platform movement.
navData *pNav = new navData;
pNav->setTickWeight( NAV_TIMECTR_WEIGHT );
pNav->loadAscFile( "filename.txt" );
The navigation class must know the time per tick of the counter associated with it; this is set with setTickWeight. Data can be imported from HELLAS-formatted data, which uses a different counter for timestamping navigation messages than MilOWS does. The pNav class does not smooth the data; this must be done externally by the user.
After the trajectory data is extracted from an existing scenario (a recorded HELLAS flight, for example) and smoothed, it can be imported by the pNav class. In the owsSc program this data is plotted in the user interface.
Illustration 21: Trajectory plot with longitude along the X axis and latitude along the Y axis.
When the trajectory is imported, the first record is assigned the time value of “0” and each additional record is assigned a time value based on its own timestamp relative to this first record.
Next, the user must select a segment of the trajectory that is of interest. Starting from the time assigned to the navigation record at the beginning of the selected area, “empty” frames are generated using the sensor configuration and orientation. These empty frames consist of sets of vectors that have an origin at the position of the sensor at the time of the laser firing, and an end point at a location in space corresponding to the timeout value of the counter of the system. The origin of each vector must be assigned a time based on linear interpolation, and then its position and orientation are interpolated using the cubic spline interpolation algorithm.
frameSet *pFs = new frameSet;
pFs->addFrame( t1, t2, pNav, betaDir );
The variables t1 and t2 are the begin and end times of a frame acquisition. The difference between the two determines the time between each pixel acquisition, and therefore the rate at which the angles α and β change. The pNav pointer gives the frameSet class access to the navigation records so that the sensor position can be interpolated properly along a moving trajectory. betaDir is a boolean flag that determines the direction of the oscillating mirror.
After the positions of all of the sensor vectors are calculated, they are saved in memory as part of a set of frame class instances kept in a frameSet. All of the coordinates are converted to UTM and are used as such for calculations. To create several frames in a frameSet, one must only loop over the addFrame function as many times as required, as sketched below.
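For example, a series of consecutive frames between two trajectory times could be generated as follows; the frame duration and the alternation of the mirror direction from frame to frame are assumptions made for illustration:

// Generate consecutive frames between tStart and tEnd.
double frameTime = 0.5;     // seconds per frame, illustrative value
bool   betaDir   = true;
for (double t = tStart; t + frameTime <= tEnd; t += frameTime) {
    pFs->addFrame( t, t + frameTime, pNav, betaDir );
    betaDir = !betaDir;     // oscillating mirror reverses each frame (assumed)
}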
Now that the empty frames have been created, objects can be inserted along the trajectory. When an object is created, its UTM coordinates and height above the ellipsoid are recorded, as well as its rotation parameters.
synthObjContainer *sContainer = new synthObjContainer;
catenary *pCat = new catenary;
wireParams *pWp = new wireParams;
// Set the wire parameters here
pCat->getParams( pWp );
point *pOrigin = new point(0,0,0);
point *pTarget = new point(0,0,0);
pTarget->x = x;
pTarget->y = y;
pTarget->z = z;
pCat->segment( pOrigin );
pCat->translate( pTarget, convFlag );
sContainer->addObject( pCat );
The synthObjContainer class is used to contain synthetic objects. The synthetic object classes currently implemented are catenary, septWire, triWire, and cylinder. After the container is created, a catenary instance pCat is made along with an instance pWp of the class wireParams. The parameters of the catenary must be specified in pWp and passed to pCat before the creation of the catenary is finished.
Two points, pOrigin and pTarget, are created to represent the origin of the synthetic object and the target location. Normally, the origin will be (0,0,0) – the zero reference location. The next step is to take the mathematical catenary shape, the hyperbolic cosine, and create segments that approximate it with the catenary function segment. The size of these segments is one of the parameters in the pWp instance. Each segment has its own pair of UTM coordinates, with the lowest part of the sag of the catenary touching the point pOrigin.
Next, the catenary is translated to a target UTM coordinate, with the conversion flag specifying whether the target is given in latitude/longitude or UTM coordinates. The object can also optionally be rotated about its X, Y, and Z axes to realize any particular orientation. Finally, the synthetic object is placed into the container sContainer.
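To illustrate the segmentation step, the sketch below samples the hyperbolic cosine at a fixed horizontal step and returns the segment end points; the names are illustrative, and the real routine is catenary::segment.

#include <cmath>
#include <vector>

struct Pt { double x, y, z; };

// Approximate the catenary z = c*cosh(x/c) - c between -halfSpan and
// +halfSpan with straight segments of horizontal step dx. The lowest
// point of the sag lies at the origin, matching pOrigin = (0,0,0).
std::vector<Pt> segmentCatenary(double c, double halfSpan, double dx)
{
    std::vector<Pt> pts;
    for (double x = -halfSpan; x <= halfSpan; x += dx)
        pts.push_back({ x, 0.0, c * std::cosh(x / c) - c });
    return pts;   // consecutive points are the segment end points
}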
To generate the scenario data, each vector of each frame along the trajectory (in the frameSet instance) must be checked for intersection with each synthetic object. Each synthetic object does its own intersection calculations with the frameSet. When an intersection is found, that particular vector in the frame being checked is shortened to the location of intersection with the surface of the object.
Drawing 25: Shortening of a laser vector when an intersection is detected
In our catenary example, this implies that the intersection routine must check every segment that the catenary is composed of against every vector in every frame instance. Keeping the previous example code snippets in mind, these calculations would be done with the following code:
sContainer->getAllIntersections( pFs );
These calculations are computationally intensive and consume many CPU cycles. Each object inside sContainer is checked for intersections with all of the frames within the set of frames in pFs. The results are exported to a binary file format that is identical to the format described in the MilOWS specifications:
pFs->outputD64Milows( filename, 1 );
For more information on the catenary, synthObjContainer, frameSet, navData, point and wireParams classes, please see Appendix B.
This binary data can then be displayed by a second tool, HellSim2, developed by EADS. The predecessor, HellSim, was used to display data from the HELLAS sensor as recorded during testing.
Illustration 22: A HELLAS scene displayed in HellSim
The previous graphic shows a scene recorded by a HELLAS sensor and replayed with HellSim. Likewise, a set of synthetic data generated with owsSc can be viewed with HellSim2 using the native MilOWS data format.
Illustration 23: A synthetic wire displayed by HellSim2
Illustration 24: A pylontype configuration with seven wires displayed by HellSim2
The aim of this project was to analyze MilOWS and generate synthetic scenarios based on the operation and characteristics of the sensor. As a sensor, MilOWS is a very complex system combining optics, mechanics, electronics, interfacing, real-time data processing, and techniques drawn from many different fields of science.
A solution was presented for the generation of synthetic data for MilOWS, while considering the characteristics of the sensor and the possibility of utilizing existing data.
Because the two systems share a common heritage, the first step was an analysis of HELLAS, the predecessor to MilOWS. This was advantageous because more documentation and a longer operational history exist for HELLAS. The analysis covered the mechanics of the sensor in detail, as well as some of its electronic properties.
The following step applied this acquired knowledge of HELLAS to MilOWS, while accounting for the differences in hardware and operation. The MilOWS native data output was also simulated, in the same manner as that of HELLAS.
Based on experience with the two systems, a plan was drawn up to simulate the sensor in software while preserving the characteristics of a real flight by utilizing existing recorded data from a HELLAS sensor. This method allowed the generation of accurate scenarios for use in testing the detection algorithms and the new version of the raw data display software, HellSim2.
It is hoped that the creation of the owsSc program will aid in future testing of MilOWS. Future expansion should also be possible with some changes in the external configuration files^{6} accompanying owsSc when it becomes necessary to modify MilOWS.
All in all, this project introduced the possibility of saving resources by avoiding data acquisition on a real flight platform, with the additional advantage of earlier testing of different aspects of the project. These results could be useful in aiding the development of the MilOWS and perhaps even future iterations of the sensor.
[NRC02] W. H. Press et al. Numerical Recipes in C: The Art of Scientific Computing. Cambridge: Cambridge University Press, 2002.
[NA00] F. D. Harris et al. U.S. Civil Rotorcraft Accidents, 1963 Through 1997. NASA/TM-2000-209597, 2000.
[Nak00] K. Nakamura. Latitude Longitude to UTM Conversions. 22 June 2000. GPSy Macintosh GPS Communications. 13 March 2005 <http://www.gpsy.com/gpsinfo/geotoutm/>.
[WKU05] Universal Transverse Mercator coordinate system. Wikipedia. May 2005. Wikimedia Foundation, Inc. 12 May 2005 <http://en.wikipedia.org/wiki/Universal_Transverse_Mercator_coordinate_system>.
[Dut05] S. Dutch. Converting UTM to Latitude and Longitude (or Vice Versa). 19 May 2005. University of Wisconsin Green Bay. 20 May 2005 <http://www.uwgb.edu/dutchs/UsefulData/UTMFormulas.HTM>.
[WKR05] Root-finding Algorithm. Wikipedia. April 2005. Wikimedia Foundation, Inc. 22 April 2005 <http://en.wikipedia.org/wiki/Root-finding_algorithm>.
[WKT05] Ray Tracing. Wikipedia. April 2005. Wikimedia Foundation, Inc. 22 April 2005 <http://en.wikipedia.org/wiki/Ray_tracing>.
[WKC05] Catenary. Wikipedia. April 2005. Wikimedia Foundation, Inc. 22 April 2005 <http://en.wikipedia.org/wiki/Catenary>.
[Bou98] P. Bourke. The Shortest Line Between Two Lines in 3D. April 1998. Swinburne University Center For Astrophysics and Supercomputing. 11 April 2005 <http://astronomy.swin.edu.au/~pbourke/geometry/lineline3d/>.
[Dra00] K. Dragan. Metody śledzenia promieni w syntezie realistycznych obrazów cyfrowych [Ray-tracing methods in the synthesis of realistic digital images]. Politechnika Wrocławska, Wydział Elektroniki, pages 12–21, 2000.
[Cla05] J. Claeys. Curves in a Plane. 18 April 2004. Johan Claeys. 11 April 2005 <http://www.ping.be/~ping1339/curves.htm#Catenary>.
[Gol02] M. Gold. Ray Tracing in C# and .NET. 26 Nov 2002. iEntry, Inc. 14 April 2005 <http://www.devnewz.com/devnewz320021126RayTracinginCandNET.html>.
[Con00] Condor Engineering Corp. ARINC Protocol Tutorial 16001000027. 7 June 2000.
[Cow05] L. Cowley. Iris Software. Feb. 2005. Atmospheric Optics. Jun 8 2005 <http://www.sundog.clara.co.uk/droplets/iris.htm>.
[ESA03] EADS Dornier. ARINC429 Interface Control Document (ICD). EADS IS5097000000A/02, 10 April 2003.
[ESF01] EADS Dornier. File Format of the Sensor Description File. EADS TN 5097 520000A/07, 30 August 2001.
[ESM05] EADS. MilOWS Parameter für das interne STTE [MilOWS parameters for the internal STTE]. EADS TN5341 610000A/01, 10 January 2005.
[ESM00] EADS Dornier. Definition of the Messages between HELLAS and Test Unit. EADS TN 5097520000A/08, 28 March 2000.
[ESI04] EADS Dornier. Influence of IRS Errors on Detection Performance. EADS TN5341000000A/16, 10 March 2003.
[EST00] EADS Dornier. Transformation of the Sensor Raw Data in Cartesian Coordinates. EADS TN 5097520000A/09, 12 April 2000.
[ESV05] EADS. Verarbeitung der Sensorrohdaten im STTE [Processing of the sensor raw data in the STTE]. EADS TN5341 610000A/02, 11 Jan. 2005.
Table 4.1
The coefficients of the polynomial used to get motor position as a function of the mirror β angle.
Coeff.   Value             Coeff.   Value
b_c1     4.6809*10^{-17}   b_c7     1.2932*10^{-7}
b_c2     7.6174*10^{-15}   b_c8     0.00013677
b_c3     2.806*10^{-14}    b_c9     0.0017148
b_c4     1.1676*10^{-11}   b_c10    1.5399
b_c5     3.2275*10^{-11}   b_c11    115.91
b_c6     3.6118*10^{-8}
For more detailed information about the classes used in owsScenarios, please see Appendix B.
1 Note that this is a simplified example; other orientations exist and can be used.
2 TN 5097100000A/01 “Optical Crosstalk HELLAS” of 26 February 2001.
3 See table 4.1 in Appendix A.
4 See reference [ESM00].
5 See http://www.trolltech.com for further information.
6 See reference [ESM05] for more information on the external configuration parameters available to owsSc.