The Generalized Information Network Analysis Methodology for Distributed Satellite Systems

by Graeme B. Shaw

Submitted to the Department of Aeronautics and Astronautics on October 16, 1998, in partial fulfillment of the requirements for the degree of Doctor of Science

A systematic analysis methodology for distributed satellite systems is developed that is generalizable and can be applied to any satellite mission in communications, sensing or navigation. The primary enabler is that almost all satellite applications involve the collection and dissemination of information and can thus be treated as modular information processing networks. This generalization allows the adoption of the mathematics for information network flow, leading to a logical classification scheme for satellite systems. The benefits and issues that are characteristic of each system class are identified, in terms of their capability, performance and cost. The quantitative analysis methodology specifies measurable, unambiguous metrics for the cost, capability, performance and adaptability. The capabilities are characterized by four quality-of-service parameters that relate to the isolation, rate, integrity and availability of the information transferred between origin-destination pairs within a market. Performance is the probability of satisfying the user's requirements for these parameters. The Cost per Function metric is the average cost incurred to provide satisfactory service to a single user, and the Adaptability metrics are sensitivity indicators. Validation of the methodology is provided by a comprehensive quantitative analysis of the NAVSTAR Global Positioning System, in which the calculated capabilities agree with measured data to within 3%. The utility of the methodology for comparative analysis is highlighted in a rigorous competitive assessment of three proposed broadband communication satellite systems. Finally, detailed architectural trades for a distributed space based radar are presented to demonstrate the effectiveness of the methodology for conceptual design.

The generalized information network analysis methodology is thus identified as a valuable tool for space systems engineering, allowing qualitative and quantitative assessment of the impacts of system architecture, deployment strategy, schedule slip, market demographics and technical risk.
Thesis Supervisor: David W. Miller, Assistant Professor of Aeronautics and Astronautics
Acknowledgments

Two or so days ago, I defended my thesis. I passed. And now, I sit here all alone in my office, at just gone 1am (in the morning), and I have to try to voice my thanks to all the people who assisted me over the past five years. It is appropriate, I think, that I write these acknowledgments during these early hours, since practically the whole thesis was conceived and written during the quiet hours between midnight and five. Before discussing specific people, I should say straight away that the actual concepts introduced in this thesis, and the motivation to pursue them, were the products of continuous collaboration with my colleagues and friends. I guess this is very fitting, given the subject matter.

It is conventional to begin the acknowledgments with a paragraph thanking the thesis advisor, and afterwards go on to credit personal friends for their help. This unwritten rule is being bent a little here, for very good reasons. First of all, I have actually had two advisors, and secondly, and much more importantly, they are also my friends. I will introduce them as they were to me, chronologically.

Professor Daniel Hastings became my advisor the day I arrived in the United States, and he has been looking out for me ever since. His genuine care for his students is absolutely remarkable. Even after becoming the Chief Scientist of the Air Force, Dan would always be willing to find the time to talk or meet, answer my questions, and quite unintentionally, cheer me up. I believe he taught me everything I know about the "big picture". The trust he showed in my abilities was as important to me as anything else I experienced over my five years at MIT. It was an honor to be Dan's student. In August 1997, with at least a whole year left before I could finish my ScD, Dan left MIT to take the post of Chief Scientist of the Air Force. At that time I was adopted by Professor David Miller, someone who would quickly become one of my closest friends.
Dave is just a great guy and an exceptional advisor. His empathy for his graduate students is surpassed only by his boundless enthusiasm and admirable capacity to contribute to their work. The different perspective that Dave brought to this research was sometimes frustrating, almost always enlightening. Dave respected me as both a friend and a peer, and in doing so, gained my respect as my boss. I have so much enjoyed the last year as Dave's student, and my thesis is better for it.

My office mate and closest friend, Raymond Sedwick, deserves a medal for tolerating me for the last three or four years. Everyone knows my quirkiness, but only Ray has had to deal with it day in and day out. His qualities as an engineer, a scientist, a teacher, a fix-it man, and a friend are immeasurable. He is Abbott to my Costello, Lennon to my McCartney. He is my brother, my buddy, and I love him.

Just thinking about Joe Sinfield brings a smile to my face. He has become a very good friend over the past few years, sharing many common interests, namely, breakfast, brunch, lunch, afternoon tea, dinner and supper. We supplement the meals with long philosophical discussions about work, women, and the joys of all-you-can-eat seafood, and sometimes we even do stuff (lifting, softball, golf). Oh yeah . . . he's a pretty good engineer as well.

Jen Rochlis' selfless friendliness, good looks and uncanny penchant for remembering movie lines mean she is a lot of fun to be around. Without even trying too much, she makes everyone around her feel good about themselves. I also have to thank Jen for proofreading this thesis. She thinks I need more commas and longer sentences. Maybe.

Nicole Casey is a little ray of sunshine. Her bubbly personality and happy smile are addictive, and I am forever touched by her thoughtfulness. She apparently knows some Chemistry, but the impressive thing is her swing, and she plays a mean second base!

Karen Willcox is like my sister. We have been close from the day she arrived at MIT and I can talk to her about anything. Karen and I bonded very well because of our nationalities; New Zealand and England are surprisingly similar in culture. Unfortunately, Karen is the wrong size. She's too small to avoid getting hurt when she plays rugby, and apparently too tall to date short guys like me!

Jim Soldi and Guy Benson, two members of the "old guard", are also very special people to me. The memories of the many laughs we have had define my MIT experience. I always knew that hanging with General Zod and the Lion ("stick 'em up, stick 'em up") would hurt tomorrow, but, boy, would it be fun tonight. I was so pleased, and very grateful, that each of them could be at my defense.

I shared an apartment with Carlin Vieri throughout my whole time at MIT. Between the work, the parties, the wine and the women, we somehow managed to forge a solid friendship.
Carlin tolerated my bad habits and messy lifestyle, and I tolerated his excellent wine selection, good cooking, and impeccable tidiness. What a guy!

Salma Qarnain brightened my days up as soon as she walked into the lab. She also helped me out with the GPS section of this thesis. I owe Salma big-time, but since I am going to be working (and living) with her, I am sure she will have plenty of opportunities to recover payment.

Douglas Wickert is the smartest guy I ever met, and one of the nicest. Doug's Masters thesis laid down all the groundwork for the studies of distributed space based radar in this thesis. Similarly, Greg Yashko, Edmund Kong, Cyrus Jilla and John Enright all contributed significantly to the thesis through collaboration or discussion. The coolest thing about these guys is that they were defining the state of the art, were just down the hall, and were always good for a laugh. They made for a very exciting and fun place to work.

Sharon-Leah Brown is officially the fiscal manager of the lab, but unofficially she is everyone's mother. I hope she realizes how important she is to all the students (and staff) around here. I will certainly miss Sharon when I leave.

Other people I should mention are Mike Fife, Greg Gin, Greg Dare, Marika, Ed Peikos, Angie Kelic, Lihini, Kirsten, Lee, Jake, Kurt and the members of the softball teams. All you people helped make the time I spent at MIT the best years of my life.

I want to thank Dana for supporting me emotionally through so many years of hard and self-absorbed work; I doubt that I would have been able to do this ScD without her. Dana is my best friend in the whole world, and I can never repay her for the unconditional love she has given me.

My uncle John and Aunty Beryl have helped me cope with the financial burdens of almost nine years of university, and for that I am eternally grateful. My brother Christopher has also supported me throughout, giving me money, encouragement, and most importantly, love.

Finally, I thank my mother. Her selfless devotion to my well-being is unbelievable. I have always been able to rely on my Mum, and the love and help she has given me is the largest factor behind my success. I hope that when I have children, I am able to give them even a fraction of what she gave me. I love you Mummy.

I dedicate this thesis to my grandfather, who I am sure is watching all of this with a great deal of pride. I did it, Grandad, I did it.
Contents

1 Introduction
   1.1 The Bottom Line
   1.2 Satellite Systems in the New Millennium
   1.3 Background: Analyses in Systems Engineering
       1.3.1 The Systems Engineering Process
       1.3.2 Modeling and Simulation
   1.4 Previous Work
   1.5 Content of the Document

I Development of the Generalized Analysis Methodology

2 Generalized Characteristics of Satellite Systems
   2.1 Distributed satellite systems
   2.2 Abstraction to Information Networks
   2.3 Satellite System Classifications
       2.3.1 Distribution
       2.3.2 Architectural Homogeneity
       2.3.3 Operational

3 Distributed Satellite Systems
   3.1 To Distribute or not to Distribute?
       3.1.1 Signal Isolation Improvements
       3.1.2 Rate and Integrity Improvements
       3.1.3 Availability Improvements
       3.1.4 Reducing the Baseline Cost
       3.1.5 Reducing the Failure Compensation Cost
   3.2 Issues and Problems
       3.2.1 Modularity Versus Complexity
       3.2.2 Clusters and Constellation Management
       3.2.3 Spacecraft Arrays and Coherence
   3.3 Summary

4 Development of the Quantitative Generalized Information Network Analysis (GINA) Methodology
   4.1 Motivation
   4.2 Satellite Systems as Information Transfer Networks
       4.2.1 Definition of the Market
       4.2.2 Functional Decomposition and Hierarchical Modeling
   4.3 The Capability Characteristics
       4.3.1 Signal Isolation
       4.3.2 Generalized Signal Isolation and Interference
       4.3.3 Information Rate
       4.3.4 Information Integrity
       4.3.5 Information Availability
   4.4 Calculating the Capability Characteristics
       4.4.1 Example Capability Calculation for a Ka-Band Communication Satellite
   4.5 Generalized Performance
       4.5.1 Time Variability of Performance
   4.6 Calculation of the Generalized Performance
       4.6.1 Example Performance Calculation for a Ka-Band Communication Satellite
   4.7 The Cost per Function Metric
   4.8 Calculating the Cost per Function Metric
       4.8.1 The System Lifetime Cost
       4.8.2 The Failure Compensation Cost
       4.8.3 The System Capture
       4.8.4 Example CPF Calculation for a Ka-Band Communication Satellite
   4.9 Utility of the Cost per Function Metric
   4.10 The Adaptability Metrics
       4.10.1 Type 1 Adaptability: Elasticities
       4.10.2 Type 2 Adaptability: Flexibility
   4.11 Truncated GINA for Qualitative Analysis
   4.12 The GINA Procedure - Step-by-Step
   4.13 Summary

II Case Studies and Results

5 The NAVSTAR Global Positioning System
   5.1 System Overview
       5.1.1 The GPS Space Segment
       5.1.2 The GPS Ranging Signal
       5.1.3 System Requirements
       5.1.4 Measured navigation performance
   5.2 Fundamental Error Analysis for GPS
   5.3 GINA Modeling of GPS
       5.3.1 GPS Network Architecture
       5.3.2 The Constellation Module - Visibility and PDOP
       5.3.3 Signal structure
       5.3.4 Ephemeris and Satellite clock errors
       5.3.5 Space loss
       5.3.6 Ionospheric and Tropospheric errors
       5.3.7 Interferers
       5.3.8 GPS receiver model
   5.4 GINA Capabilities of GPS for the Navigation Mission
   5.5 GINA Performance of GPS
   5.6 The CPF Metric for GPS
   5.7 Improvements by Augmenting GPS
   5.8 Summary

6 Comparative Analysis of Proposed Ka-Band Satellite Systems
   6.1 The Modeled Systems
       6.1.1 Proposed System Specifications: Cyberstar
       6.1.2 Proposed System Specifications: Spaceway
       6.1.3 Proposed System Specifications: Celestri
       6.1.4 Information network representations: Cyberstar and Spaceway
       6.1.5 Information network representations: Celestri
       6.1.6 The Capability Characteristics
   6.2 Generalized Performance
   6.3 The CPF Metric: The Cost per Billable T1-Minute
       6.3.1 Modeling the Broadband Market
       6.3.2 Calculating the market capture
       6.3.3 System cost
       6.3.4 Cost per Billable T1-Minute Results
   6.4 Type 1 Adaptability Metrics
       6.4.1 The Requirement Elasticities
       6.4.2 The Technology Elasticities
   6.5 Summary

7 Techsat21: A Distributed Space Based Radar for Ground Moving Target Indication
   7.1 Space Based Radar
   7.2 Detecting Moving Targets in Strong Clutter Backgrounds
       7.2.1 Locating the Target
       7.2.2 The Radar Range Equation
       7.2.3 Detecting the Target
       7.2.4 Noise-Limited Detection
       7.2.5 Clutter-Limited Detection
       7.2.6 Pulse-Doppler Radar
       7.2.7 The Potential of a Symbiotic Distributed Architecture
   7.3 The Techsat21 Concept
       7.3.1 Signal Processing Arrays
       7.3.2 Overall System Architecture
   7.4 Using GINA in Design Trades for Techsat21
       7.4.1 Goals of the Study
       7.4.2 Transformation of the GMTI mission into the GINA framework
       7.4.3 Modeling Techsat21
       7.4.4 Capability Results
   7.5 The Performance, CPF and Adaptability for Techsat21 Candidate Architectures
       7.5.1 Performance
       7.5.2 The CPF Metric and the System Lifetime Cost
       7.5.3 Lifetime Cost Results
       7.5.4 Adaptability
       7.5.5 Conclusions of Design Trades
   7.6 Summary

8 Conclusions and Recommendations
       8.0.1 Recommendations

A Capability Characteristics for Techsat Candidate Architectures
List of Figures 1-1 The System Engineering Process Overview . . . . . . . . . . . . . . . . . 1-2 Quality Function Deployment (QFD) . . . . . . . . . . . . . . . . . . . . .
31 33
2-1 Network representation of a simple communication system . . . . . . . . 2-2 Classes of distribution for satellite systems . . . . . . . . . . . . . . . . .
44 47
3-1 The coverage improvements oered by distribution leading to increased availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2 System mass trades for a separated spacecraft interferometer . . . . . . 3-3 The propellant mass fraction for the satellites of a separated spacecraft interferometer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4 The USCM Cost Estimating Relationship for IR Payloads . . . . . . . . 3-5 Recurring hardware cost versus constellation size for a distributed infrared imaging system with a 25 minute revisit time . . . . . . . . . . . . 3-6 Recurring hardware cost versus constellation size for a distributed infrared imaging system with a 1 hour revisit time . . . . . . . . . . . . . . 3-7 Satellite and Sensor Con gurations . . . . . . . . . . . . . . . . . . . . . . 3-8 Total system costs over the 10 year mission life of a polar orbiting weather satellite system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9 Data storage and communication data rates for a distributed imager with 25 minute revisit time, 5 minute interval between downloads . . . . . . . 4-1 4-2 4-3 4-4 4-5 4-6 4-7
Top-level network representation of a single communication satellite Detailed network representation of a communication satellite . . . . Simple linear-time-invariant system . . . . . . . . . . . . . . . . . . . . A square low-pass lter and its time-domain response . . . . . . . . . Basic antenna model . . . . . . . . . . . . . . . . . . . . . . . . . . . . A rectangular aperture distribution and its radiation pattern . . . . The basic channel model for a simple system . . . . . . . . . . . . . . 13
. . . . . . .
. . . . . . .
60 63 64 66 72 73 75 77 81 92 93 94 95 95 96 98
4-8 The probability of error is the integral under the noise probability density function from [d=2; ] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 4-9 The signal space representation of QPSK. The four information symbols dier in phase, while their amplitude is constant. . . . . . . . . . . . . . 103 4-10 A simple system with input signals X and Y , and an output signal Z . . 105 4-11 Capability characteristics for a modeled Ka-band communication satellite 109 4-12 Failure state probabilities for a modeled Ka-band communication satellite payload: R = 1:544Mbits/s, BER = 10,9, Av = 98%. . . . . . . . . . . . . . 113 4-13 Failure state probabilities for a modeled Ka-band communication satellite 115 4-14 Market capture pro le for a modeled Ka-band communication satellite . 121 1
5-1 5-2 5-3 5-4 5-5 5-6 5-7 5-8
The NAVSTAR GPS architecture . . . . . . . . . . . . . . . . . . . . . . . A typical Block II/IIA GPS satellite . . . . . . . . . . . . . . . . . . . . . Characteristics of the L1 and L2 . . . . . . . . . . . . . . . . . . . . . . . . PPS and SPS speci ed accuracies . . . . . . . . . . . . . . . . . . . . . . . The network representation of GPS used in GINA . . . . . . . . . . . . . A snapshot of the visibility of the GPS-24 constellation . . . . . . . . . . A snapshot of the PDOP for the GPS-24 constellation . . . . . . . . . . The probability distribution function for the visibility of the GPS constellation between 60o latitude . . . . . . . . . . . . . . . . . . . . . . . . The probability distribution function for the PDOP of the GPS constellation between 60o latitude . . . . . . . . . . . . . . . . . . . . . . . . . . The probability distribution function for the average elevation angle of GPS satellites in view of ground locations between 60o latitude . . . . Comparison of the GPS broadcast ephemeris with the precise orbital solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . The probability distribution function for range errors attributable to ephemeris errors and unmodeled satellite clock errors . . . . . . . . . . . The Capability Characteristics of GPS-24; PPS . . . . . . . . . . . . . . . The Capability Characteristics of GPS-24; SPS; SA o . . . . . . . . . . The Capability Characteristics of the PPS with 2, 4 or 6 satellite failures The Capability Characteristics of the SPS with 2, 4 or 6 satellite failures The Performance of GPS-24 SPS in satisfying 2drms (90%) navigation accuracy; Satellite failure rate=0.0035 per year . . . . . . . . . . . . . . . The Capability Characteristics of the PPS service for GPS augmented with 3 GEO satellites, after zero, two, four, or six satellite failures . . .
5-9
5-10
5-11 5-12 5-13 5-14 5-15 5-16 5-17 5-18
14
136 138 139 140 147 149 150 151 152 153 154 155 159 159 161 161 162 164
6-1 Information network for Ka-band communications through Cyberstar or Spaceway satellites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2 Information network for Ka-band communications through the Celestri system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3 The probability distribution function for the elevation angle to a Cyberstar satellite from the ground locations served by the system . . . . . . . 6-4 The probability distribution function for the elevation angle to a Spaceway satellite from the ground locations served by the system . . . . . . . 6-5 The probability distribution function for the elevation angle of the highest celestri satellite in view of each ground location between 60o latitude . 6-6 The Capability Characteristics of Cyberstar1 in addressing the broadband communications market in Western Europe . . . . . . . . . . . . . . 6-7 The Capability Characteristics of Cyberstar2 in addressing the broadband communications market in North America . . . . . . . . . . . . . . 6-8 The Capability Characteristics of Cyberstar3 in addressing the broadband communications market in the Paci c Rim . . . . . . . . . . . . . . 6-9 The Capability Characteristics of Spaceway1 in addressing the broadband communications market in North America . . . . . . . . . . . . . . . . . . 6-10 The Capability Characteristics of Spaceway2 in addressing the broadband communications market in Western Europe . . . . . . . . . . . . . . . . . 6-11 The Capability Characteristics of Spaceway3 in addressing the broadband communications market in South America . . . . . . . . . . . . . . . . . . 6-12 The Capability Characteristics of Spaceway4 in addressing the broadband communications market in the Paci c Rim . . . . . . . . . . . . . . . . . 6-13 The Capability Characteristics of the Celestri network in addressing the global broadband communications market . . . . . . . . . . . . . . . . . . 
6-14 Failure state probabilities for a typical (modeled) Ka-band GEO communication satellite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15 The Capability Characteristics of the degraded Celestri network after losing all seven spares and any other satellite . . . . . . . . . . . . . . . . 6-16 Failure probability for the Celestri constellation, relative to a 95% availability requirement for T1 connections, 10,9 BER. . . . . . . . . . . . . . 6-17 Broadband market growth models . . . . . . . . . . . . . . . . . . . . . . 6-18 The last-mile market in 2005, GDP distribution . . . . . . . . . . . . . . 6-19 Cyberstar's market capture map; exponential market model in 2005, GDP distribution:2400GMT . . . . . . . . . . . . . . . . . . . . . . . . . .
15
173 175 176 176 177 179 180 181 183 184 185 186 187 190 191 192 193 194 195
6-20 Celestri's market capture map; exponential market model in 2005, GDP distribution; 1200GMT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196 6-21 The market capture pro le for the Cyberstar system . . . . . . . . . . . 197 6-22 The market capture pro le for the Spaceway system . . . . . . . . . . . . 198 6-23 The market capture pro le for the Celestri system; both market models; GDP distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198 6-24 The market capture pro les of the Cyberstar satellites; exponential market; GDP distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199 6-25 The market capture pro le for the Spaceway satellites; exponential market; GDP distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200 6-26 The market capture pro le for a typical Celestri satellite; exponential market; GDP distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . 201 6-27 The Cost per billable T1-minute metric for Cyberstar, Spaceway and Celestri . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204 6-28 The rate elasticity of the CPF for Cyberstar, Spaceway and Celestri . . 206 6-29 The manufacture cost elasticity of the CPF for Cyberstar, Spaceway and Celestri . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208 6-30 The launch cost elasticity of the CPF for Cyberstar, Spaceway and Celestri 209 6-31 The failure rate elasticity of the CPF for Cyberstar, Spaceway and Celestri 210 7-1 Space-based radar geometry . . . . . . . . . . . . . . . . . . . . . . . . . . 217 7-2 The Neuvy approximation for the probability of detection of a Swerling 2 target . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222 7-3 Frequency spectrum of a sequence of square radar pulses; PRF=3000Hz, 7-4 7-5 7-6 7-7 7-8 7-9 7-10 7-11 7-12
pulse length = 1=12000 seconds, and dwell time Td = 1=300 seconds (10 pulses) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Simpli ed block-diagram for pulse-doppler radar processing . . . . . . . Artist's impression of the operational Techsat21 system . . . . . . . . . . The relationship between the aperture distribution, the far- eld amplitude response, the spatial frequency and the power response. . . . . . . Simpli ed Techsat21 Radar Architecture . . . . . . . . . . . . . . . . . . Network diagram for Techsat21 with ns = 4 satellites . . . . . . . . . . . Network diagram for Techsat21 with ns = 8 satellites . . . . . . . . . . . Network diagram for Techsat21 with ns = 11 satellites . . . . . . . . . . . Grazing angle probability distribution function . . . . . . . . . . . . . . . Clutter re ectivity, o , as a function of grazing angle, for several terrains environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
227 228 232 234 239 245 245 246 247 248
7-13 Far field power response for an unrestricted minimum redundancy array; ns = 4, Dc = 100m, Ds = 2m
7-14 Far field power response for an unrestricted minimum redundancy array; ns = 8, Dc = 100m, Ds = 2m
7-15 Far field power response for an unrestricted minimum redundancy array; ns = 11, Dc = 100m, Ds = 2m
7-16 Capability Characteristics for candidate Techsat21 architecture: ns = 8, Dc = 100m, Generalized Array, P = 400W, Ds = 1m, PRF=1500Hz
7-17 Capability Characteristics for candidate Techsat21 architecture: ns = 11, Dc = 100m, Generalized Array, P = 200W, Ds = 1m, PRF=1500Hz
7-18 Far field power response for candidate Techsat21 architecture: ns = 8, Dc = 100m, Generalized Array, P = 400W, Ds = 1m, PRF=1500Hz
7-19 Far field power response for a candidate Techsat21 architecture: ns = 11, Dc = 100m, Generalized Array, P = 200W, Ds = 1m, PRF=1500Hz
7-20 Capability Characteristics for candidate Techsat21 architecture: ns = 11, Dc = 100m, Generalized Array, P = 400W, Ds = 2m, PRF=3000Hz
7-21 Capability Characteristics for candidate Techsat21 architectures at a 1 minute update of a 10^5 km^2 theater; requirements are PD = 0.75, Availability = 0.9
7-22 The state probabilities for different numbers of satellite failures in the 8 satellite cluster; s = 0.026
7-23 The generalized performance of the different architectures subject to requirements for a 1 minute update of a 10^5 km^2 theater with PD = 0.75 and Availability = 0.9
7-24 The system lifetime cost of different Techsat21 architectures subject to requirements for a 1 minute update of a 10^5 km^2 theater with PD = 0.75 and Availability = 0.9
7-25 Elasticities of the lifetime cost for the 11 satellite cluster; PD: 0.75 → 0.9, Availability: 0.9 → 0.95, Pt: 200W → 100W
A-1 Techsat: ns = 8, Generalized array, 100m baseline, D = 2m, Pav = 200W, PRF=1500Hz
A-2 Techsat: ns = 8, Generalized array, 100m baseline, D = 2m, Pav = 200W, PRF=3000Hz
A-3 Techsat: ns = 8, Generalized array, 100m baseline, D = 2m, Pav = 400W, PRF=1500Hz
A-4 Techsat: ns = 8, Generalized array, 100m baseline, D = 2m, Pav = 400W, PRF=3000Hz
A-5 Techsat: ns = 8, Generalized array, 100m baseline, D = 2m, Pav = 100W, PRF=1500Hz
A-6 Techsat: ns = 8, Generalized array, 100m baseline, D = 2m, Pav = 100W, PRF=3000Hz
A-7 Techsat: ns = 8, Generalized array, 100m baseline, D = 4m, Pav = 200W, PRF=1500Hz
A-8 Techsat: ns = 8, Generalized array, 100m baseline, D = 4m, Pav = 200W, PRF=3000Hz
A-9 Techsat: ns = 8, Generalized array, 100m baseline, D = 1m, Pav = 200W, PRF=1500Hz
A-10 Techsat: ns = 8, Generalized array, 100m baseline, D = 1m, Pav = 200W, PRF=3000Hz
A-11 Techsat: ns = 8, Generalized array, 100m baseline, D = 4m, Pav = 400W, PRF=1500Hz
A-12 Techsat: ns = 8, Generalized array, 100m baseline, D = 4m, Pav = 400W, PRF=3000Hz
A-13 Techsat: ns = 8, Generalized array, 100m baseline, D = 1m, Pav = 400W, PRF=1500Hz
A-14 Techsat: ns = 8, Generalized array, 100m baseline, D = 1m, Pav = 400W, PRF=3000Hz
A-15 Techsat: ns = 8, Generalized array, 100m baseline, D = 4m, Pav = 100W, PRF=1500Hz
A-16 Techsat: ns = 8, Generalized array, 100m baseline, D = 4m, Pav = 100W, PRF=3000Hz
A-17 Techsat: ns = 8, Generalized array, 100m baseline, D = 1m, Pav = 100W, PRF=1500Hz
A-18 Techsat: ns = 8, Generalized array, 100m baseline, D = 1m, Pav = 100W, PRF=3000Hz
A-19 Techsat: ns = 8, Restricted array, 100m baseline, D = 2m, Pav = 200W, PRF=1500Hz
A-20 Techsat: ns = 8, Restricted array, 100m baseline, D = 2m, Pav = 200W, PRF=3000Hz
A-21 Techsat: ns = 8, Restricted array, 100m baseline, D = 2m, Pav = 400W, PRF=1500Hz
A-22 Techsat: ns = 8, Restricted array, 100m baseline, D = 2m, Pav = 400W, PRF=3000Hz
A-23 Techsat: ns = 8, Restricted array, 100m baseline, D = 2m, Pav = 100W, PRF=1500Hz
A-24 Techsat: ns = 8, Restricted array, 100m baseline, D = 2m, Pav = 100W, PRF=3000Hz
A-25 Techsat: ns = 8, Restricted array, 100m baseline, D = 4m, Pav = 200W, PRF=1500Hz
A-26 Techsat: ns = 8, Restricted array, 100m baseline, D = 4m, Pav = 200W, PRF=3000Hz
A-27 Techsat: ns = 8, Restricted array, 100m baseline, D = 1m, Pav = 200W, PRF=1500Hz
A-28 Techsat: ns = 8, Restricted array, 100m baseline, D = 1m, Pav = 200W, PRF=3000Hz
A-29 Techsat: ns = 8, Restricted array, 100m baseline, D = 4m, Pav = 400W, PRF=1500Hz
A-30 Techsat: ns = 8, Restricted array, 100m baseline, D = 4m, Pav = 400W, PRF=3000Hz
A-31 Techsat: ns = 8, Restricted array, 100m baseline, D = 1m, Pav = 400W, PRF=1500Hz
A-32 Techsat: ns = 8, Restricted array, 100m baseline, D = 1m, Pav = 400W, PRF=3000Hz
A-33 Techsat: ns = 8, Restricted array, 100m baseline, D = 4m, Pav = 100W, PRF=1500Hz
A-34 Techsat: ns = 8, Restricted array, 100m baseline, D = 4m, Pav = 100W, PRF=3000Hz
A-35 Techsat: ns = 8, Restricted array, 100m baseline, D = 1m, Pav = 100W, PRF=1500Hz
A-36 Techsat: ns = 8, Restricted array, 100m baseline, D = 1m, Pav = 100W, PRF=3000Hz
A-37 Techsat: ns = 11, Generalized array, 100m baseline, D = 2m, Pav = 200W, PRF=1500Hz
A-38 Techsat: ns = 11, Generalized array, 100m baseline, D = 2m, Pav = 200W, PRF=3000Hz
A-39 Techsat: ns = 11, Generalized array, 100m baseline, D = 2m, Pav = 400W, PRF=1500Hz
A-40 Techsat: ns = 11, Generalized array, 100m baseline, D = 2m, Pav = 400W, PRF=3000Hz
A-41 Techsat: ns = 11, Generalized array, 100m baseline, D = 2m, Pav = 100W, PRF=1500Hz
A-42 Techsat: ns = 11, Generalized array, 100m baseline, D = 2m, Pav = 100W, PRF=3000Hz
A-43 Techsat: ns = 11, Generalized array, 100m baseline, D = 1m, Pav = 200W, PRF=1500Hz
A-44 Techsat: ns = 11, Generalized array, 100m baseline, D = 1m, Pav = 200W, PRF=3000Hz
A-45 Techsat: ns = 11, Generalized array, 100m baseline, D = 1m, Pav = 400W, PRF=1500Hz
A-46 Techsat: ns = 11, Generalized array, 100m baseline, D = 1m, Pav = 400W, PRF=3000Hz
A-47 Techsat: ns = 11, Generalized array, 100m baseline, D = 1m, Pav = 100W, PRF=1500Hz
A-48 Techsat: ns = 11, Generalized array, 100m baseline, D = 1m, Pav = 100W, PRF=3000Hz
A-49 Techsat: ns = 11, Restricted array, 100m baseline, D = 2m, Pav = 200W, PRF=1500Hz
A-50 Techsat: ns = 11, Restricted array, 100m baseline, D = 2m, Pav = 200W, PRF=3000Hz
A-51 Techsat: ns = 11, Restricted array, 100m baseline, D = 2m, Pav = 400W, PRF=1500Hz
A-52 Techsat: ns = 11, Restricted array, 100m baseline, D = 2m, Pav = 400W, PRF=3000Hz
A-53 Techsat: ns = 11, Restricted array, 100m baseline, D = 2m, Pav = 100W, PRF=1500Hz
A-54 Techsat: ns = 11, Restricted array, 100m baseline, D = 2m, Pav = 100W, PRF=3000Hz
A-55 Techsat: ns = 11, Restricted array, 100m baseline, D = 4m, Pav = 200W, PRF=1500Hz
A-56 Techsat: ns = 11, Restricted array, 100m baseline, D = 4m, Pav = 200W, PRF=3000Hz
A-57 Techsat: ns = 11, Restricted array, 100m baseline, D = 1m, Pav = 200W, PRF=1500Hz
A-58 Techsat: ns = 11, Restricted array, 100m baseline, D = 1m, Pav = 200W, PRF=3000Hz
A-59 Techsat: ns = 11, Restricted array, 100m baseline, D = 4m, Pav = 400W, PRF=1500Hz
A-60 Techsat: ns = 11, Restricted array, 100m baseline, D = 4m, Pav = 400W, PRF=3000Hz
A-61 Techsat: ns = 11, Restricted array, 100m baseline, D = 1m, Pav = 400W, PRF=1500Hz
A-62 Techsat: ns = 11, Restricted array, 100m baseline, D = 1m, Pav = 400W, PRF=3000Hz
A-63 Techsat: ns = 11, Restricted array, 100m baseline, D = 4m, Pav = 100W, PRF=1500Hz
A-64 Techsat: ns = 11, Restricted array, 100m baseline, D = 4m, Pav = 100W, PRF=3000Hz
A-65 Techsat: ns = 11, Restricted array, 100m baseline, D = 1m, Pav = 100W, PRF=1500Hz
A-66 Techsat: ns = 11, Restricted array, 100m baseline, D = 1m, Pav = 100W, PRF=3000Hz
A-67 Techsat: ns = 11, Generalized array, 200m baseline, D = 2m, Pav = 200W, PRF=1500Hz
A-68 Techsat: ns = 11, Generalized array, 200m baseline, D = 2m, Pav = 200W, PRF=3000Hz
A-69 Techsat: ns = 11, Generalized array, 200m baseline, D = 2m, Pav = 400W, PRF=1500Hz
A-70 Techsat: ns = 11, Generalized array, 200m baseline, D = 2m, Pav = 400W, PRF=3000Hz
A-71 Techsat: ns = 11, Generalized array, 200m baseline, D = 2m, Pav = 100W, PRF=1500Hz
A-72 Techsat: ns = 11, Generalized array, 200m baseline, D = 2m, Pav = 100W, PRF=3000Hz
A-73 Techsat: ns = 11, Generalized array, 200m baseline, D = 4m, Pav = 200W, PRF=1500Hz
A-74 Techsat: ns = 11, Generalized array, 200m baseline, D = 4m, Pav = 200W, PRF=3000Hz
A-75 Techsat: ns = 11, Generalized array, 200m baseline, D = 1m, Pav = 200W, PRF=1500Hz
A-76 Techsat: ns = 11, Generalized array, 200m baseline, D = 1m, Pav = 200W, PRF=3000Hz
A-77 Techsat: ns = 11, Generalized array, 200m baseline, D = 4m, Pav = 400W, PRF=1500Hz
A-78 Techsat: ns = 11, Generalized array, 200m baseline, D = 4m, Pav = 400W, PRF=3000Hz
A-79 Techsat: ns = 11, Generalized array, 200m baseline, D = 1m, Pav = 400W, PRF=1500Hz
A-80 Techsat: ns = 11, Generalized array, 200m baseline, D = 1m, Pav = 400W, PRF=3000Hz
A-81 Techsat: ns = 11, Generalized array, 200m baseline, D = 4m, Pav = 100W, PRF=1500Hz
A-82 Techsat: ns = 11, Generalized array, 200m baseline, D = 4m, Pav = 100W, PRF=3000Hz
A-83 Techsat: ns = 11, Generalized array, 200m baseline, D = 1m, Pav = 100W, PRF=1500Hz
A-84 Techsat: ns = 11, Generalized array, 200m baseline, D = 1m, Pav = 100W, PRF=3000Hz
A-85 Techsat: ns = 11, Restricted array, 200m baseline, D = 2m, Pav = 200W, PRF=1500Hz
A-86 Techsat: ns = 11, Restricted array, 200m baseline, D = 2m, Pav = 200W, PRF=3000Hz
A-87 Techsat: ns = 11, Restricted array, 200m baseline, D = 2m, Pav = 400W, PRF=1500Hz
A-88 Techsat: ns = 11, Restricted array, 200m baseline, D = 2m, Pav = 400W, PRF=3000Hz
A-89 Techsat: ns = 11, Restricted array, 200m baseline, D = 2m, Pav = 100W, PRF=1500Hz
A-90 Techsat: ns = 11, Restricted array, 200m baseline, D = 2m, Pav = 100W, PRF=3000Hz
A-91 Techsat: ns = 11, Restricted array, 200m baseline, D = 4m, Pav = 200W, PRF=1500Hz
A-92 Techsat: ns = 11, Restricted array, 200m baseline, D = 4m, Pav = 200W, PRF=3000Hz
A-93 Techsat: ns = 11, Restricted array, 200m baseline, D = 1m, Pav = 200W, PRF=1500Hz
A-94 Techsat: ns = 11, Restricted array, 200m baseline, D = 1m, Pav = 200W, PRF=3000Hz
A-95 Techsat: ns = 11, Restricted array, 200m baseline, D = 4m, Pav = 400W, PRF=1500Hz
A-96 Techsat: ns = 11, Restricted array, 200m baseline, D = 4m, Pav = 400W, PRF=3000Hz
A-97 Techsat: ns = 11, Restricted array, 200m baseline, D = 1m, Pav = 400W, PRF=1500Hz
A-98 Techsat: ns = 11, Restricted array, 200m baseline, D = 1m, Pav = 400W, PRF=3000Hz
A-99 Techsat: ns = 11, Restricted array, 200m baseline, D = 4m, Pav = 100W, PRF=1500Hz
A-100 Techsat: ns = 11, Restricted array, 200m baseline, D = 4m, Pav = 100W, PRF=3000Hz
A-101 Techsat: ns = 11, Restricted array, 200m baseline, D = 1m, Pav = 100W, PRF=1500Hz
A-102 Techsat: ns = 11, Restricted array, 200m baseline, D = 1m, Pav = 100W, PRF=3000Hz
List of Tables

1.1 Example system attributes and measurables for the requirements definition of satellite systems
2.1 Satellite system classifications
3.1 Factor of improvement in the energy per symbol to noise density ratio for distributed clusters compared to singular deployments
3.2 Distributed infrared imaging system parameters
4.1 System parameters for a modeled Ka-band communication satellite
4.2 Cost per Function metrics for example applications
4.3 System cost profile for a single Ka-band communication satellite
4.4 Qualitative comparison between Techsat21 and Discoverer-II space based radar concepts using truncated GINA
5.1 PPS measured accuracies in terms of SEP and CEP navigation errors
5.2 Calculated PPS and SPS accuracies in terms of SEP (50%) and 2drms (90%) navigation errors
5.3 System cost profile for GPS
6.1 System parameters for Cyberstar
6.2 System parameters for Spaceway
6.3 System parameters for Celestri
6.4 System cost profile for Cyberstar; constant year FY96$
6.5 System cost profile for Spaceway; constant year FY96$
6.6 System cost profile for Celestri; constant year FY96$
6.7 Lifetime costs CL for the modeled systems (net present value in FY96$)
7.1 Minimum redundancy arrays, up to N = 11 elements; the number sequence indicates relative spacings
7.2 Test Matrix for Analysis of Techsat21
7.3 Modeled Techsat21 system parameters held constant across all cases
7.4 System lifetime costs for Architecture 1 (8 sats)
7.5 System lifetime costs for Architecture 2 (11 sats)
7.6 System lifetime costs for Architecture 3 (8 sats, Centralized Processor)
7.7 System lifetime costs for Architecture 4 (11 sats, Centralized Processor)
Chapter 1
Introduction

1.1 The Bottom Line

It might seem strange to begin a document with a statement of the final conclusions, right upfront in the first few paragraphs. This document is, however, an engineering thesis and not prose, so there is no need for suspense. In fact, knowing the eventual destination adds meaning and context to each page. Thus it is stated immediately:

Almost all envisioned satellite systems are information disseminators that can be represented as information transfer networks. These systems are characterized by a set of standardized and measurable parameters for the quality of service they provide. Using these parameters to define quantifiable cost-effectiveness and sensitivity metrics, a generalized system analysis methodology for satellite systems can be formulated. This is useful for Systems Engineering (SE) of satellite systems as well as for competitive analysis and investment decision making.

The next two hundred pages or so go on to explain and qualify these statements, using fundamental science and principles of satellite systems engineering. The development of this formal methodology, which is fully compatible with conventional SE practices, is the main contribution of this work to the state of the art. The interested engineer who wants to implement this methodology for real analyses should continue through the whole document. The higher-level decision-maker who needs only a working appreciation of the concepts can finish this chapter, then flip straight to the conclusions.
1.2 Satellite Systems in the New Millennium

We are entering a new era in the utilization of satellite systems. In the next ten to twenty years, the commercial world will see the development of four types of space-based systems
that will be available to both friendly and unfriendly nations, corporations, and individuals on a worldwide basis:
Global positioning and navigation services While the DoD already has GPS, other countries are developing equivalent systems or augmenting the existing one; similar capabilities will be available through the development of personal communication systems. They will enable navigation with an accuracy of less than one meter.
Global communication services Several systems are already in production or on-orbit, such as Iridium, Globalstar and ICO. These systems will provide universal communications services for mobile individuals almost anyplace on the surface of the Earth. They will work transparently with local cellular systems and will enable rapid telecommunications development in underdeveloped parts of the world.
Information transfer services These services will enable data transfer between any two points on the surface of the Earth at rates ranging from a few bits per second for paging, to mega- and gigabits per second for multimedia applications. Proposed systems include Orbcomm, Spaceway, Cyberstar, Astrolink and Celestri/Teledesic. Individual users will be able to access large amounts of data on demand.
Global reconnaissance services These services will provide commercial users with multispectral data from almost any point on the surface of the Earth with meter-scale resolution. This data will span the range from the radio frequencies (RF) to the infrared (IR), through the visible, into the ultraviolet (UV). This information will be available within hours of a viewing opportunity and on the order of a day from the time of a request. Proposed and existing systems include SPOT, Orbimage, World View, Earthwatch and RadarSat.
It will therefore be possible for persons of means to locate themselves at any point on the Earth, communicate both by voice and computer with other points on the Earth, and have a good picture of the local environment. Both the services and the technologies that enable them will be commercially available all over the world. The commercial potential of these services will fuel their continued development, and companies will be forced toward even more advanced and ambitious concepts to gain competitive advantage. Within the military, increased political pressure to move American troops out of harm's way, while still being able to project global superiority, is leading to an increased reliance on space assets. Indeed, the Air Force's recent doctrine of Global Engagement has started the transition from an "air and space force" to a "space and air force". This too will drive the development of increasingly sophisticated satellite systems for communications, remote
sensing, navigation and even weapons delivery. Unfortunately, with the Cold War over and no identifiable single adversary to spur funding, DoD budget controls have become more restrictive. Any new military satellite systems must therefore not only improve the capabilities to wage a modern war, but also provide utility during peacetime operations, and do so at a lower cost. These factors will drive a move towards a new way of doing business in space. No longer will it be acceptable to rely on proven technologies, processes and practices to minimize risk. Ventures with higher levels of risk in performance, schedule and cost will be undertaken in all sectors. To support this shift, higher levels of technology in spacecraft components and improved manufacturing processes will be needed. However, the largest and most immediate benefit will likely come from improved systems engineering practices. The existing paradigm for satellite system architectures is based on years of experience, but reflects outdated technology and budget climates that were very different from today's. By removing the preconceived notions about how to design effective space systems, and instead starting from a clean sheet, enormous benefits may be possible in capabilities, performance and cost. The potential offered by improved systems engineering is made clear by the following excerpt:

The need for a well-integrated approach to system design and development can be better appreciated when it is realized that approximately eighty to ninety percent of the development cost of a large system is predetermined by the time only five to ten percent of the development effort has been completed [1].
As a result there is a definite need for sophisticated analysis techniques that reflect the newer architectures and can reduce risk by accurate predictions of capabilities and cost. To be useful, any new analysis methodology must be compatible with the formal SE process.
1.3 Background: Analyses in Systems Engineering

The International Council on Systems Engineering (INCOSE) Handbook [1] gives the following definitions:
System: An integrated set of elements to accomplish a defined objective. These include hardware, software, firmware, people, information, techniques, facilities, services, and other support elements.
Systems Engineering: An interdisciplinary approach and means to enable the realization of successful systems.
Systems Engineering Process: A logical, systematic process devised to accomplish system engineering tasks.
The basic tenet behind SE is to consider the system and its functionality as a whole, rather than as a collection of independent components. The individual parts of a system do not have to be optimal for the system to perform optimally, or at least satisfactorily. The SE process is an iterative procedure for deriving or defining requirements at each hierarchical level of the system, beginning at the top, and flowing down these requirements in a series of steps that eventually leads to a preferred system concept. Further iteration and design refinement leads successively to preliminary design, detailed design and, finally, approved design. In fact, SE activities are carried out in almost all phases of a project's lifecycle, from the system analysis, requirements definition and conceptual design at the program's inception, through production, operations, maintenance, replacement, and eventual disposal at the end of life (EOL). For this thesis, it is the role played by systems analyses that is primarily of interest, and this is usually most important during the design phase. To clarify how system analysis is used within the SE process, the actual process must be explained.
1.3.1 The Systems Engineering Process

The SE process is good engineering practice and should be applied by all engineers, just as the scientific method is applied by scientists. The basic steps in the systems engineering process, as defined by INCOSE [1], are: (1) Define the system objectives (User's Needs); (2) Establish performance requirements (Requirements Analysis); (3) Establish the functionality (Functional Analysis); (4) Evolve design and operations concepts (Architecture Synthesis); (5) Select a baseline (through Cost/Benefit Trades); (6) Verify the baseline meets requirements (User's Needs); and (7) Iterate the process through lower level trades (Decomposition). This process is shown in Figure 1-1. The requirements loop establishes the quantifiable performance requirements that represent the user's needs, and from them derives the functional requirements for each functional component in the architecture. The design loop translates those requirements into design and operations concepts. The verification loop checks the capability of the solutions to determine if they match the original requirements. A control loop ensures that only the most cost-effective solutions are selected. It should be emphasized that there is a big difference between defining what needs to be done versus how well it must be done. If there is to be no ambiguity about knowing when a job is completed or when a product is acceptable, requirements must be expressed in measurable terms. Also, a requirement is not a requirement unless it can be verified [1].
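As a rough illustration of steps (5) and (6), the verification and control loops amount to checking each candidate architecture against the measurable requirements and keeping the cheapest one that passes. The sketch below is not from the thesis; the candidate names, capability numbers and costs are invented for the example.

```python
# Toy sketch of SE steps (5) and (6): verify candidate architectures against
# measurable requirements, then let the control loop keep the cheapest
# verified design. All names and numbers are illustrative.

requirements = {"availability": 0.90, "prob_detection": 0.75}

candidates = [
    {"name": "8-sat cluster",
     "capability": {"availability": 0.92, "prob_detection": 0.80}, "cost": 950},
    {"name": "11-sat cluster",
     "capability": {"availability": 0.95, "prob_detection": 0.78}, "cost": 1200},
]

def verify(design, reqs):
    # Step (6): every measurable requirement must be met or exceeded
    return all(design["capability"][k] >= v for k, v in reqs.items())

def select_baseline(candidates, reqs):
    # Step (5): among verified designs, select the most cost-effective
    verified = [c for c in candidates if verify(c, reqs)]
    return min(verified, key=lambda c: c["cost"]) if verified else None

baseline = select_baseline(candidates, requirements)
```

If no candidate verifies, step (7) would decompose further or revisit the user's needs and iterate.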
[Figure 1-1: The System Engineering Process Overview [1] — a flow from inputs through Requirements Analysis, Functional Analysis and Synthesis to outputs, with the requirements, design and verification loops closed by System Analysis & Control]

Requirements Analysis

The objective of requirements analysis is to translate the users' needs into a quantifiable set of performance requirements that can be used to derive design requirements. The users' needs can often be characterized in measurable categories such as Quantity, Quality, Coverage, Timeliness, and Availability. INCOSE [1] gives examples of these categories for two different types of satellite systems, reproduced in Table 1.1. At this point the reader must make a mental note to revisit this table after completing the rest of the thesis. The Generalized Analysis methodology is based on quality of service attributes for information transfer systems, and although developed independently of (and concurrently with) INCOSE's formalized documentation of SE practices, arrives at almost identical categories for the measurable requirements of these satellite systems. One of the contributions of this work has been to standardize this categorization such that there is no subjectivity in its definition. Modeling and analysis are used to convert these performance requirements (availability, etc.) into suitable requirements that the hardware and software designers can relate to more easily (power, aperture, etc.). Functional decomposition tools such as functional block diagrams, functional flow diagrams and timelines are useful in developing requirements. Quality Function Deployment (QFD) [2] is a tool for requirements flowdown that is rapidly gaining popularity in America, after being developed in Japan during the 1970s.
Table 1.1: Example system attributes and measurables for the requirements definition of satellite systems, from INCOSE [1]

Measurable Attribute   Surveillance Satellite                    Communication Satellite
Quantity               Frames/day, Sq. Miles/day                 Throughput (bits/s)
Quality                Resolution (ft)                           Bit error rate (BER)
Coverage               Latitude, Longitude                       Latitude, Longitude
Timeliness             Revisit time (hrs), Delivery time (sec)   Channel availability on demand (min)
Availability           Launch preparation time (days)            Bandwidth under stressed conditions (Hz)
This modeling is important for sizing and design of the functional components, for the requirement verification process, and for the requirement sensitivity analysis.
[Figure 1-2 appears here: a QFD "House of Quality" diagram showing the Requirements Correlation Matrix across the top, the Relationship Matrix linking requirements to design features, the chosen values benchmarked from best to worst along the bottom, and the flowdown of requirements to successive levels.]

Figure 1-2: Quality Function Deployment (QFD) [1]
Functional Analysis

Again, drawing from INCOSE's definitions, ". . . the objective of the Functional Analysis task is to create a functional architecture that can provide the foundation for defining the system architecture through allocation of functions and subfunctions to hardware/software and operations. It should be clearly understood that the term 'functional architecture' only describes the hierarchy of decomposed functions and the allocation of performance requirements to the functions within that hierarchy. It does not describe either the hardware architecture or the software architecture of the system. Those architectures are developed during the System Synthesis phase of the systems engineering process. . . [Functional Analysis] describes what the system will do, not how it will do it" [1]. One of the best tools for defining the functional architecture is the functional flow diagram (FFD). These are multi-tier, step-by-step decompositions of the system functional flow, with blocks representing each separate function. FFDs are useful to define the detailed operational sequences for the system, and might include functional representations of hardware, software, personnel, facilities and procedural actions. Modeling and simulation can be used to verify the interpretation, definition or viability of the functional decomposition. The modeling and simulation allow the capabilities of the functional architecture to be compared with the system requirements derived from the user needs, so that the architecture can be made to satisfy the mission objectives. The output of the Functional Analysis phase is therefore an FFD hierarchy with each function at the lowest possible level uniquely described, and verified by detailed modeling and simulation.
System Analysis: Trade Studies

Trade studies provide an objective basis for deciding between alternative approaches to the solution of an engineering problem. Clearly, the mechanism for performing trade studies should be based on objectively quantifying the impact of the decision on the system's ability to carry out the mission objectives that represent the user needs. Unfortunately, trade studies will often be carried out using selection criteria that are not directly related to the mission objectives, but rather to the immediate engineering problem at hand. Unless the engineer properly understands the interaction between all the functions in the architecture, this approach may not capture the real impact of the trade on the overall mission. Selection criteria must be chosen very carefully to properly represent the impact of any decision made. Furthermore, most published methods for trade studies involve biasing the analysis with subjective weighting factors for each selection criterion [1]. This may be acceptable if the engineer is experienced and the functional architecture is well understood, but can lead to incorrect analysis in other cases. System analysis that quantifies the impact of each decision on the overall system operations is more accurate, but usually takes more time and resources [3].
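The sensitivity of a weighted trade study to its subjective weights can be made concrete with a small sketch. All scores and weights below are hypothetical; the point is only that two engineers with different weightings can select different architectures from identical criterion scores.

```python
# Illustrative weighted-sum trade study (hypothetical data), showing how the
# subjective weighting factors can change which alternative "wins".

def weighted_score(scores, weights):
    """Weighted sum of criterion scores for one alternative."""
    return sum(weights[c] * s for c, s in scores.items())

# Criterion scores on a 0-10 scale; a higher "cost" score means cheaper.
alternatives = {
    "monolithic":  {"cost": 9, "coverage": 8, "survivability": 3},
    "distributed": {"cost": 5, "coverage": 6, "survivability": 9},
}

weights_a = {"cost": 0.6, "coverage": 0.3, "survivability": 0.1}  # engineer A
weights_b = {"cost": 0.2, "coverage": 0.2, "survivability": 0.6}  # engineer B

for name, weights in [("A", weights_a), ("B", weights_b)]:
    best = max(alternatives,
               key=lambda a: weighted_score(alternatives[a], weights))
    print(f"engineer {name} selects: {best}")
```

Under A's cost-dominated weights the monolithic option wins; under B's survivability-dominated weights the distributed option wins, illustrating why the text argues for selection criteria tied to mission objectives rather than to subjective weighting.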
System Architecture Synthesis

The main objective of the System Architecture Synthesis phase is to create a system architecture that: (1) satisfies the overall mission requirements; (2) implements the functional architecture; and, (3) is acceptably close to the true optimum within the constraints of time, budget, available knowledge and skills, and other resources [1]. The process of Architecture Synthesis is essentially a giant trade study. The best alternative is chosen from a set of candidate system architectures for which there is reasonable certainty that at least one of them is acceptably close to the true optimum. Defining the set of alternatives involves flowing requirements down from the functional architecture, to define a set of element options for each component of the system. System elements are the physical entities that make up the system. By selecting a range of elements for each component, a set of system architectures can be defined. Of course, modeling and analysis must be used to verify that all the considered system architectures satisfy the system requirements. Measuring the "best" then involves functional flow analysis or other such modeling techniques, using selection criteria that represent the ability of the system to fulfill mission requirements at the lowest costs, within resource constraints, and with acceptable risk. In the interest of efficient analysis, a minimal set of criteria should be used, including only the most significant ones that are sufficient to distinguish the optimum from the other contenders, and no more [1].
1.3.2 Modeling and Simulation

Summarizing then, the most important uses of modeling and system analysis in the design phases of the SE process are:
Requirements Analysis: to determine and measure impacts of candidate requirements
Functional Analysis: to assess capabilities of functionally decomposed architectures
Trade studies: to accurately determine the impacts of design decisions
System Synthesis: to evaluate candidate options against selection criteria

Note that a single modeling and analysis methodology based on the hierarchical functional architecture of the system could be used in all four phases of the SE process. By simply adding or reducing the level of detail, or by moving up or down the hierarchy, a single consistent model could be used and refined throughout the process. Further, if the modeled parameters are direct representations of the measurable categories of the mission requirements, then the models have a clear and meaningful interpretation. Now, although there are many commercially available software tools that could in theory perform this kind of analysis,¹ there is no well-publicized generalization of the procedural logic that should be followed in order to obtain objective, relevant, and quantitative results. Rather, the tools mostly provide the computing environment for engineers to develop custom, application-specific analyses. Essentially, system engineering requires a lot of book-keeping, and this is where the commercial tools are most useful. They do not, however, instruct or guide the engineer as to what variables, parameters or requirements are important. The reason for this is to provide maximum flexibility across the enormous range of potential engineering applications for which these tools can be used. However, for satellite systems, there are some basic similarities across nearly all cases that suggest that such a generalization is possible. Specifically, the functional goal of most satellite systems is to transfer information-bearing signals between remote locations.
This common link permits the adoption of generalized measures for capability, requirements, performance, cost and sensitivity. These generalized metrics would impose additional structure on satellite systems analyses, removing a lot of subjectivity, standardizing the procedure and allowing analyses to be performed quickly and efficiently. The methodology would not replace the existing tools, but instead guide the engineer on how best to use them. In short, a generalized methodology would simply organize the thought-process needed to perform system analysis.

¹ It is inappropriate to discuss the specific commercial tools in an archival document, especially since they change every few months. However, INCOSE maintains an updated database of the available software tools for SE on their World Wide Web site at URL=http://www.incose.org/.
There is thus both the motivation and the opportunity to develop a generalized, quantitative modeling and analysis methodology for satellite systems.
1.4 Previous Work

The generalized analysis methodology described in this thesis builds upon some standard texts on space systems design, and also on recent application-specific studies that adopted a similar approach to the analysis. First of all, the classic reference Space Mission Analysis and Design, edited by Wertz and Larson, and henceforth referred to as SMAD [3], is perhaps the most comprehensive presentation of the concepts, the science and engineering principles, and design techniques associated with unmanned space missions. Each chapter of this book is written by leading specialists, and discusses a different aspect of either the design process or the satellite system itself. The usefulness of this text is unquestioned, and the fundamental engineering concepts it presents are the underpinnings of a great deal of this research. In particular, the concept of a lifetime system cost that includes expected compensation for failures, as described by Hecht in Chapter 19 of SMAD [3], is a key feature of the generalized analysis. The only shortcomings of SMAD are that it lacks a detailed treatment of systematic analysis methodologies and, at present, does not address the features specific to distributed satellite systems. The research presented in this thesis attempts to fill these needs. The definition of a measurable metric for the lifetime cost, normalized by the functional performance, is a principal contribution of this research. This metric is equivalent to amortizing the lifetime system cost over all the satisfied users of the system, a concept used very effectively by Gumbert et al [4] in a comparative study of the proposed mobile communication satellite systems. In that work, several proposed satellite communication systems (Iridium, Globalstar, ICO, etc.) were compared on the basis of the smallest cost per billable voice-circuit minute that the company could support, while still achieving an acceptable rate of return on their investment.
The calculation of this metric involved detailed simulation of the different systems in realistic market scenarios, to determine the maximum number of users (voice-circuits) that could be addressed, and estimation of the lifetime system cost, accounting for development, construction, launch and operations. The results suggested that market penetration and not system architecture was the dominant factor in achieving low values for the cost per billable voice-circuit minute. In a follow-up study, Kelic et al [5] applied a similar technique to the broadband data communications market, evaluating the cost per billable T1-minute for various proposed satellite systems.²

² T1 is a 1.544 Mbit/s data rate.
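The amortization behind a cost-per-billable-minute metric is simple arithmetic, which can be sketched as follows. Every number here is hypothetical, chosen only to show the shape of the calculation rather than to reproduce the figures of Gumbert or Kelic.

```python
# Illustrative arithmetic (hypothetical numbers): amortizing a lifetime
# system cost over the billable T1-minutes the system can support, in the
# spirit of the cost-per-billable-minute metrics discussed in the text.

lifetime_cost = 4.0e9          # dollars: development + construction + launch + ops
system_life_years = 10
avg_busy_t1_circuits = 20_000  # time-averaged number of billable T1 circuits

minutes_per_year = 365 * 24 * 60
billable_t1_minutes = avg_busy_t1_circuits * system_life_years * minutes_per_year

cost_per_t1_minute = lifetime_cost / billable_t1_minutes
print(f"${cost_per_t1_minute:.3f} per billable T1-minute")  # about $0.038 here
```

The same division generalizes directly: replace billable minutes with satisfied users of any information service, and the quotient becomes the Cost per Function metric developed later in the thesis.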
The conclusions of that study were that market uncertainty had a larger impact than system architecture. A major goal of this research was to generalize and extend the concept of the cost per billable minute metric, such that it could be applied to more general satellite applications than just communications. Also, the broadband case has been revisited, with additional consideration for the effects of reliability, a feature that was missing from both the previous studies. The notion of designing a system to optimize a cost-effectiveness metric was used recently in two studies by Jilla et al [6] and Wickert et al [7]. Jilla et al [6] applied Markov techniques to modular decompositions of separated spacecraft interferometers in order to predict system reliability and degraded operability. The reliability predictions were used to determine the cost-effectiveness of several alternate architectures. The key result was that modular, multifunctional designs improved the reliability, and supported graceful degradation, thus realizing higher cost-effectiveness than dedicated single-function designs. The work is significant not only for its conclusions, but also for the systematic application of functional flow models, Markov models and quantifiable cost-effectiveness metrics. The techniques presented in this thesis are complementary generalizations of the methods used by Jilla. In a feasibility assessment of performing the next-generation Airborne Warning and Control System (AWACS) mission from a space based radar platform, Wickert et al [7] showed that a distributed architecture offered significant cost-savings, improved capabilities and increased overall reliability compared to monolithic designs. The cost metric used was the cost to initial operating capability (IOC), and included contributions from development, construction, launch, and reliability expenditures.
The design process minimized the IOC cost with respect to system architecture variables, while ensuring compliance with established functional requirements. The overlap between Wickert's work and this research is threefold: (1) the form of the requirements definition, which could be restated in the same terms as used consistently in this thesis; (2) the adoption of a quantifiable metric for the cost to provide a constant level of performance; and, (3) the investigation of distributed concepts of operations. Indeed, the results of Wickert's work were motivating in finding the general characteristics of distributed systems that lead to improved capabilities and lower costs for a wide variety of missions.
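The graceful-degradation result reported by Jilla can be illustrated with a reliability sketch far simpler than his actual Markov models: if a cluster of n identical, independent satellites delivers acceptable capability with at least k survivors, the probability of acceptable operation follows a binomial survival function. The failure probabilities below are hypothetical.

```python
# Simplified sketch of the graceful-degradation idea (not Jilla's model):
# binomial probability that at least k of n independent satellites survive.

from math import comb

def prob_at_least_k(n, k, p_survive):
    """P(at least k of n independent satellites still operating)."""
    return sum(
        comb(n, m) * p_survive**m * (1 - p_survive)**(n - m)
        for m in range(k, n + 1)
    )

p = 0.9   # hypothetical per-satellite survival probability at end of life
full     = prob_at_least_k(8, 8, p)   # dedicated design: all 8 must work
graceful = prob_at_least_k(8, 6, p)   # modular design: tolerate two failures

print(f"all 8 required : {full:.3f}")
print(f"6 of 8 required: {graceful:.3f}")
```

Even with these made-up numbers, tolerating two failures raises the probability of acceptable operation from roughly 0.43 to roughly 0.96, which is the qualitative mechanism by which modular designs realize higher cost-effectiveness.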
1.5 Content of the Document

The thesis is divided into two parts; Part 1, comprising Chapters 2-4, contains the development of the generalized analysis methodology. Chapter 2 classifies the generalizable characteristics of satellite systems, and qualitatively introduces the concepts that will be
needed in later chapters. The classifications and generalizations developed in Chapter 2 are used qualitatively in Chapter 3 for a detailed discussion of distributed satellite systems. Chapter 4 is the crux of the work, and succinctly describes the quantitative Generalized Information Network Analysis (GINA) for satellite systems. Part 2 begins at Chapter 5 and includes the case studies, representing detailed, quantitative applications of the GINA methodology. The Global Positioning System is analyzed in Chapter 5, and is used primarily as a validation of the technique for an existing system. A comparative analysis of proposed broadband communication satellite systems is presented in Chapter 6. The last case study is in Chapter 7, where GINA is applied for the (real) design of a proposed military distributed space-based radar. Finally, Chapter 8 states the conclusions and recommendations for future work. Please note that each chapter is intended to be somewhat stand-alone, to obviate endless page flipping. Unfortunately, this means that there is a little repetition of content across chapters. This is a small price to pay for clarity.
Part I
Development of the Generalized Analysis Methodology
Chapter 2
Generalized Characteristics of Satellite Systems

The primary goal of this research is to develop a consistent methodology for quantifiable analysis of all satellite systems, spanning all likely applications. The emphasis of this chapter is to introduce the concepts that are needed to construct this generalized analysis. This involves the identification of the characteristics that are general to all satellite systems, regardless of application, and also the definition of a framework for classifying space system architectures.
2.1 Distributed satellite systems

Recently, increases in the available processing power, improvements in navigation, and advances in the manufacturing process have all made the concept of a distributed satellite system feasible. The term "distributed satellite system" is used to refer to a system of many satellites designed to operate in a coordinated way in order to perform some specific function. This definition encompasses a wide range of possible applications in the commercial, civilian and military sectors. The advantages offered by such systems can mean improvements in performance, cost and survivability compared to traditional single-satellite deployments. This makes their implementation attractive and inevitable. The term "distributed satellite system" can have two different meanings:

1. A system of many satellites that are distributed in space to satisfy a global (nonlocal) demand. Broad coverage requirements necessitate a separation of the satellite resources. At any time, the system supports only single-fold coverage of a target region. The local demand of each region is served by the single satellite in view. Here, the term "distribution" refers to the fact that the system is made up of many satellites that work together to satisfy a global demand.

2. A system of satellites that gives multifold coverage of target regions. The system therefore has more satellites than the minimum necessary to satisfy coverage requirements. A subset of satellites that are instantaneously in view of a common target can be grouped as a cluster. The satellites in the cluster operate together to satisfy the local demand within their field of view. Note that the cluster may be formed by a group of formation-flying satellites, or from any subset of satellites that instantaneously share a common field of regard. The cluster size and orientation may change in time, as a result of orbital dynamics or commanded actions. In any case, the number of satellites in the cluster is equal to the level of multifold coverage. In this context, "distribution" refers to the fact that several satellites work together to satisfy a local demand. The entire system satisfies the global demand.

The most important characteristic of all distributed systems, common to both of the above concepts, is that more than one satellite is used to satisfy the overall (global) demand. This is the basic distinction between a distributed and a singularly-deployed system. Within the classification of a distributed system, the main difference between the two concepts described above lies in the way that the local demand is served. Specifically, the distinction is the number of satellites used to satisfy this local demand: the cluster size ns thus characterizes the level of distribution, with larger cluster sizes corresponding to higher levels of distribution. The lowest level of distribution, with a cluster size of one, corresponds to the first meaning of distribution described above.
2.2 Abstraction to Information Networks

All current satellite applications provide some kind of service in communications, sensing, or navigation. The common thread linking these applications is that the satellite system must essentially perform the task of collection and dissemination of information. Data that contains pertinent information is gathered by the satellite, either from other components of the system (on the ground, in the air or in space) or from the environment (local or remote). Some interpretation of the data may be performed, and then the satellite disseminates the information to other system components. The generalization made is that all satellite systems are basically information transfer systems, and that ensuring information flow through the system is the overall mission objective. This is easily understood for communication and remote sensing systems. Perhaps more surprising is that navigation systems such as GPS are also information disseminators. The GPS satellites use the information uploaded from the control segment to construct a signal which is transmitted to the ground. GPS receivers can use the information in the signal, including not only the navigation message contained therein, but also the phase of the signal itself, to determine a navigation solution. As with communications and remote sensing, the performance of the system relies on the flow of information through the satellite network. While the format and routing of the information being transferred may be different for different applications, the physics characteristic of information transfer systems are, of course, invariant. This common thread linking all systems (navigation, surveillance, communications, and imaging) establishes a context for a generalized analysis, and is particularly useful in the study of distributed systems. To generalize, satellite systems can be represented as information processing networks, with nodes for each satellite, subsystem or ground station. The satellite network connects a set of source nodes to a remote set of sink nodes, and in doing so, addresses a demand for the transfer of information between them. Figure 2-1 graphically represents a simplified version of such a network for a communication system consisting of three satellites and two gateway ground stations. The system transfers data between users distributed throughout its coverage area, using several spot beams, which are the input and output interfaces for the satellite nodes. The satellites can also route information through ground stations. Even in this simple example, there are many possible routes for information to travel through the network. Some paths involve only satellite nodes, while others involve both satellites and ground stations. Accepting the abstraction of satellite systems to information networks allows satellite system analysis to be treated with the well-developed mathematics of network flow and information theory. The principles of network flow apply to the overall routing and flow of information, while the transmission of information over each individual link is governed by the rules of information theory. The relevant concepts are discussed in detail in Chapter 4 within the development of the quantitative generalized analysis framework. For now, it suffices to be aware of only the most important consequences of the abstraction.
The information symbol is the atomic piece of information demanded by the end-users. For communication systems the symbol is either a single bit or a collection of bits. For imaging systems, the symbol is an image of a scene. For a navigation system, the symbol is a user navigation solution. The NAVSTAR GPS system is an interesting example because it addresses this demand without transferring user navigation solutions through its satellites; they only relay their position and time to the users. With this information from at least four satellites, the user terminal can calculate the navigation solution, assembling the information symbol from several constituent parts.

[Figure 2-1 appears here: three satellite nodes (Sat 1, Sat 2, Sat 3), each connected to sources and sinks, with two gateway ground stations providing additional routing paths between the satellites.]

Figure 2-1: Network representation of a simple communication system

To be a contributing element to the system, each satellite node must receive information from some other node, be it a source, a ground station or another satellite. Once this information has been received, the satellite may perform some processing and reduction before relaying the information to the next node in the network. This destination node may likewise be an end-user, a ground station or another satellite in the system. Although some data reduction may be done, information must flow through the satellites. Because of this continuity constraint, every satellite must be able to communicate with at least one other node in the network. For all satellites, the energy conversion system (e.g. solar arrays) must provide the energy for the transmission of this information. For "active" systems, the satellite must also provide the energy needed to receive the information in the first place. These are systems such as radar and lidar that illuminate a target and detect the return. The satellites must transmit a signal with enough energy to make the round-trip journey to the source and back. The source adds the information to the signal, but returns only a fraction of the incident energy, depending on its cross-section. Note that under this definition, and contrary to intuition, communications satellites are "passive" since they only relay received information to a destination node.
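The routing richness of even a small network like that of Figure 2-1 can be made concrete with a short sketch. The graph below is only illustrative: the node names follow the figure, but the particular link choices are assumptions, not a transcription of the thesis diagram.

```python
# Illustrative sketch of the information-network abstraction: a small
# communication system as a directed graph, with a recursive enumeration of
# all cycle-free source-to-sink routes. Link choices are hypothetical.

adjacency = {
    "source":   ["sat1", "sat2", "sat3"],
    "sat1":     ["gateway1", "sink"],
    "sat2":     ["gateway1", "gateway2", "sink"],
    "sat3":     ["gateway2", "sink"],
    "gateway1": ["sat1", "sat2"],
    "gateway2": ["sat2", "sat3"],
    "sink":     [],
}

def all_routes(node, goal, visited=()):
    """Enumerate all cycle-free paths from node to goal."""
    if node == goal:
        yield visited + (node,)
        return
    for nxt in adjacency[node]:
        if nxt not in visited:          # never revisit a node (no cycles)
            yield from all_routes(nxt, goal, visited + (node,))

routes = list(all_routes("source", "sink"))
print(len(routes), "distinct routes, e.g.", routes[0])
```

Some of the enumerated routes touch only satellites, while others pass through the gateways, mirroring the observation in the text that paths may involve satellites alone or satellites and ground stations together.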
The volume of demand served by the system is limited by the market (demographics, capture and exhaustion), and by the system capabilities. For information networks, the quantity, quality and availability of the information arriving at the sinks are fair measures of the system's capabilities, and represent the quality of service delivered to the users. Four quality-of-service parameters can be defined to measure system capabilities:¹
Isolation characterizes the system's ability to isolate and identify the information signals from different sources within the field of regard. The isolation capabilities of a system determine the level of cross-source interference that is admitted. Multiple access schemes for communication systems are methods of signal isolation. Analogously, the resolution of an imaging system allows spatially separated sources to be isolated.
Information Rate measures the rate at which the system transfers information between the sources and the sinks. This is most familiarly associated with the data rate for communication systems. The revisit rate is the corresponding parameter for imaging systems. Information must be sampled at a rate that matches the dynamics of the source or end-user. For example, a high speed cruise missile must be tracked with a high sampling rate. Similarly, a GPS receiver on a high-dynamic aircraft must receive information from the satellites at a rate that is sufficient to allow navigation solutions to be updated very quickly.
Integrity characterizes the probability of making an error in the interpretation of an information symbol based on noisy observations. For communications, the integrity is measured by the bit error rate. The integrity of a surveillance radar system is a combination of the probability of a missed detection and the probability of a false alarm, since each constitutes an error.
Availability is the instantaneous probability that information symbols are being transferred through the network between known and identified O-D pairs at a given rate and integrity. It is a measure of the mean and variance of the other capability parameters. It is not a statement about component reliabilities. At any instant, the network is defined only by its operational components, and so all networks are assumed to be instantaneously failure-free. Should a component fail, the network changes by the removal of that component. Generally, the capabilities of the new network will be different from those of the previous network.
Basically, the rate and integrity correspond to the quantity and quality of the information exchanged between a single O-D pair, the isolation measures the ability to serve multiple O-D pairs without interference, and the availability measures how well the system does all this, at any particular instant. These quality-of-service parameters measure the capabilities of satellite systems over all likely operating conditions. The actual operating point is set to match the demands of the market that the system is to serve. This demand is represented by a set of functional requirements, specific to an individual information transfer. The requirements specify minimum acceptable values for each of the quality of service variables. Since the availability implicitly includes a reference to the other characteristics, the requirements simply enforce that, for a specified level of isolation, rate and integrity, the availability of service exceeds some minimum acceptable value. Architectures that support capabilities exceeding the requirements of the market are viable candidates for the mission. The degree to which a system is able to satisfy the demands of a market is a critical consideration for system analysis. In fact, the probability of satisfying the system requirements that correspond to the market is the correct measure of system performance. This is sensitive to component reliabilities, since failures can degrade the system such that the new capabilities violate requirements. Architectures that can tolerate component failures without significant degradations in the capabilities are good candidates for the mission.

¹ Chapter 4 formally defines these parameters, generalizes the concepts, and describes how to quantify the values.
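The requirement structure described above can be sketched as a simple check: for a specified isolation, rate and integrity, the delivered availability must exceed the minimum acceptable value. This is an assumed, minimal rendering for illustration only; the formal definitions belong to Chapter 4, and the capability model here is a toy.

```python
# Minimal sketch (assumed structure, not the formal GINA definitions) of
# testing a candidate architecture against a market's functional requirements.

from dataclasses import dataclass

@dataclass
class Requirement:
    isolation: float        # e.g. number of separable channels demanded
    rate: float             # information rate demanded, bits/s
    integrity: float        # maximum acceptable symbol error probability
    min_availability: float # availability must exceed this value

def satisfies(req, availability_model):
    """availability_model(isolation, rate, integrity) -> delivered availability."""
    delivered = availability_model(req.isolation, req.rate, req.integrity)
    return delivered >= req.min_availability

def toy_model(isolation, rate, integrity):
    # Hypothetical capability envelope: inside it, the system delivers 0.98
    # availability; demands beyond the envelope collapse availability to 0.80.
    if isolation <= 200 and rate <= 10e6 and integrity >= 1e-7:
        return 0.98
    return 0.80

req = Requirement(isolation=100, rate=1.544e6, integrity=1e-6,
                  min_availability=0.95)
print("viable candidate" if satisfies(req, toy_model) else "requirements violated")
```

The performance measure of the text would then be the probability, over component failures and operating conditions, that such a check passes; here only the single-instant check is sketched.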
2.3 Satellite System Classifications

The information network representation of satellite systems and the definition of the four capability parameters support a generalized classification of all satellite systems, distributed or singularly deployed. Categorizing the different architectures and identifying those issues and problems characteristic of each class allows immediate architectural decisions to be made for any given mission. Classifications are therefore necessary that allow system identification and highlight the most important system characteristics.
2.3.1 Distribution

In Section 2.1, it was pointed out that the level of distribution exhibited by a system is defined by the cluster size. Although the cluster size is the primary form of system categorization for distributed systems, additional classifications beyond this are necessary. Specifying a cluster size of 10, for example, says nothing about the way that the satellites coordinate to satisfy the local demand. The first type of classification is therefore based on the level of coordination exhibited by the system elements, and is related to the network architecture. Referring to Figure 2-2:
[Figure 2-2 appears here: collaborative systems (comsats such as Iridium, Orbcomm and Cyberstar, and remote sensing systems such as KH-11 and SPOT) drawn as parallel, uncoupled source-to-sink paths within a field of view, contrasted with symbiotic systems (remote sensing systems such as SSI and TechSat21, and navigation systems such as GPS and Glonass) in which paths from separate satellites meet at a junction where the information symbol is assembled; an example cluster size of ns = 2 is annotated.]

Figure 2-2: Classes of distribution for satellite systems
Collaborative
Each separate satellite operates independently and is able to isolate signals satisfactorily. Although an individual satellite addresses a given source (or sink), other satellites (or sensors) may be needed for connectivity across the network. The cluster size can be as low as unity, but may be more if multiple satellites are needed to satisfy rate, integrity or availability requirements for the size of the market that is addressed. The defining characteristic is that the network architecture consists of uncoupled parallel paths from the set of sources to the set of sinks. Most communication satellite systems are collaborative, because each satellite can support local point-to-point communications, although in some cases they rely on the constellation for connectivity across the network. Examples of collaborative remote sensing systems are the commercial distributed imaging systems such as SPOT, OrbImage and Resource 21 [8]. These systems feature constellations of several satellites, each capable of recording images with about 10 m resolution. The size of the constellation determines the coverage and revisit time of the system. Traditional singular deployments are by definition collaborative.
Symbiotic
The separate satellites cannot operate alone, exhibiting a symbiotic relationship with the others in the system. No single satellite can sufficiently isolate the signals or transfer information symbols from the sources to the sinks. Only by the coordinated operation of several elements can the system perform the design function. The cluster size of symbiotic systems must be greater than unity. The defining characteristic is that the network architecture features junctions of paths from separate satellites where the information symbols are assembled before delivery to the sinks. An example of a symbiotic system is the proposed separated spacecraft interferometer (SSI) [6]. Here, the signals from two small apertures are combined and interfered to obtain very high resolution images. GPS is also symbiotic since the signals from several satellites are used to assemble a navigation solution within the user receiver.
2.3.2 Architectural Homogeneity

The second level of classification specifies the level of homogeneity exhibited by the system architecture:
Local Cluster
Some proposed systems involve a local grouping of satellites that are in close proximity. The clusters can be made up of formation-flying satellites or can even involve the physical tethering of satellites. If there is only a single cluster in the system, such as with the SSI, the architecture is simply termed a local cluster.
Constellations
These are systems that feature a large number of similar satellites in inertial orbits, each with its own unique set of orbital parameters. Walker Delta patterns or Molniya orbits support these types of constellations. Systems such as GPS and Iridium are characterized as constellations. Cluster sizes greater than unity can be formed if the constellation supports multiple coverage of target regions.
Clustellations
A system may involve more than one local cluster. Each cluster orbits as a group, and several clusters can be placed in separate orbits. An architecture that utilizes several local clusters is classified as a clustellation, since it features a constellation of clusters. Essentially, the cluster is used to satisfy the isolation requirement, while the constellation provides availability by improving coverage. An example of a clustellation is the proposed TechSat21 space based radar (see Chapter 7).
Augmentations
An augmented system has a hybrid architecture featuring dissimilar primary and adjunct components that perform different subsets of the mission. The system is designed such that the combined capabilities of the different components satisfy the overall mission objective. An example of an augmented system would be the combined use of different platforms or sensors to perform active and passive surveillance. Within this analysis framework, the Space Based Infra-Red Systems, SBIRS Low and SBIRS High, are collectively classified as augmented. Another example of an augmented system is the proposed concept of using both unmanned aerial vehicles (UAVs) and space assets for tactical reconnaissance of a battlefield.
2.3.3 Operational
A third level of classification groups systems according to their operational characteristics. This type of classification is the most abstract. The list shown here is by no means exhaustive and covers only some examples of the operational classifications.
Active or Passive
Remote sensing may be active or passive, with marked differences in capability and cost. This is primarily due to the additional power required to overcome the two-way attenuation losses associated with active systems.
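The cost gap between the two modes follows directly from range scaling: a passive receiver sees power falling as 1/R², while an active system pays the round-trip penalty of 1/R⁴. A minimal sketch of that scaling (illustrative values only, not drawn from the text):

```python
# Why active sensing is power-hungry: one-way (~1/R^2) versus
# two-way (~1/R^4) range attenuation.  Constant factors (gains,
# cross-sections) are omitted; only the range scaling matters here.

def passive_power(p_src: float, r: float) -> float:
    """Received power from a radiating source at range r (one-way)."""
    return p_src / r**2

def active_power(p_tx: float, r: float) -> float:
    """Echo power for a transmit-and-receive system (two-way)."""
    return p_tx / r**4

# Doubling the range costs a factor of 4 for a passive system,
# but a factor of 16 for an active one.
print(passive_power(1.0, 1.0) / passive_power(1.0, 2.0))  # 4.0
print(active_power(1.0, 1.0) / active_power(1.0, 2.0))    # 16.0
```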
Track, Search, or Imaging
Tracking targets using staring sensors involves different scaling parameters than searching for targets with scanning sensors. The detailed imaging of a static scene differs from either tracking or searching. These differences are all related to the extent over which the ground must be illuminated or viewed.
Distributed or Concentrated market
The market addressed may involve multiple sources or sinks distributed over a wide area, or could involve small numbers of sources or sinks concentrated in specific locations. Conventional communication satellites (Intelsat) serve concentrated sources and sinks. Weather satellites are examples of systems that address distributed sources and concentrated sinks, while DirecTV broadcast satellites serve concentrated sources and distributed sinks. The proposed mobile communication systems (Iridium, Globalstar, ICO) are characterized by a distributed market of both sources and sinks.
As with track versus search, the difference between a concentrated and distributed market lies in the amount of ground that must be illuminated or viewed. Table 2.1 gives some examples of existing or proposed systems for each class that has been introduced.

Table 2.1: Satellite system classifications

                 Local Cluster   Constellation               Clustellation   Augmentation
  Collaborative  --              Comsats (Iridium, etc.),    --              SBIRS High & Low
                                 SPOT, OrbImage, etc.
  Symbiotic      SSI             GPS                         TechSat21       Sat+UAV bistatic radar
All satellite systems for missions in communications, sensing or navigation can be similarly classified using these different categories. If any trends in the capabilities, performance and cost can be found within and between classes, quick decisions can be made in choosing an architecture for a particular mission. This is the subject of the next chapter, which identifies the characteristics of distributed satellite systems.
Chapter 3
Distributed Satellite Systems
The development of small, low cost satellites offers new horizons for space applications when several satellites operate cooperatively. The vision of what can be achieved from space is no longer bound by what an individual satellite can accomplish. Rather, the functionality can be spread over a number of cooperating satellites. Further, the modular nature of these distributed systems allows the possibility of selective upgrading as new capabilities become available in satellite technology. The goal of this chapter is to highlight the important concepts and issues specific to distributed satellite systems. This is achieved through a systematic discussion of the benefits offered by distribution, illustrated with extensive examples of real and proposed systems. This is followed by a description of the problems that are most pertinent to distributed satellite systems, together with suggestions for their resolution. Most of the arguments presented in this chapter are qualitative, based on the generalized characteristics and classifications introduced in Chapter 2. They are presented to show clearly and fundamentally why distributed satellite systems are worthy of further attention. Chapter 4 will take this process one step further by developing the quantitative tools needed to perform useful system analysis, based on measurable capability, performance and cost. This allows comparative analysis between distributed systems and traditional deployments. Only in this way, with quantitative analysis, can the benefits of distribution be properly appreciated.
3.1 To Distribute or not to Distribute?
There are many reasons why a distributed architecture is well suited to some space applications. Unfortunately, the arguments for or against distribution are fraught with subjectivity and firmly entrenched opinions. It currently seems that most of the satellite design houses in the country are internally split between the proponents and opponents of distribution. Each camp supports one side of the debate vehemently and can find a seemingly endless stream of supporting arguments to back its claims. The "radicals" claim that the development of large constellations of small satellites leads to economies of scale in manufacture and launch, reducing the initial operating costs. They also expound that the system becomes inherently more survivable due to the in-built redundancy. Conversely, the "traditionalists" debunk these arguments, reminding everyone that you cannot escape the need for power and aperture on orbit, and that building even 100 satellites does not imply significant bulk-manufacturing savings. They assert that the lifetime operating costs for large constellations will far outweigh the savings incurred during construction and launch. In fact, most of the statements made by both sides are true, but only when taken in context. Clearly a distributed architecture is not the panacea for all space applications. It is tempting to get carried away with the wave of support that the proponents of distributed systems currently enjoy. Care must be taken to curb this blind faith. Also best avoided is the naive, but commonplace, application of largely irrelevant metaphors supporting the adoption of distributed systems; the unerring truth that ants achieve remarkable success as a collective is really not an issue in satellite system engineering! This section summarizes the real reasons supporting the use of distributed satellite systems, and should hint toward the type of applications for which they are best suited. The shortlist given here is probably not complete; there are likely many other reasons one could think of that support or oppose the use of a distributed architecture for some particular application. Rather, this highlights only the most important and fundamental factors that are both relevant to this debate and play a common role in system architecture studies.
Stated very simply, in order for a distributed architecture to make sense, it must offer either reduced lifetime cost or improved capabilities compared to traditional singular deployments. As discussed in Chapter 2, the four parameters of isolation, information rate, integrity, and availability are good measures of the capabilities of a satellite system. A system architecture that offers improvements in any of these parameters should be given serious consideration during the system design. The system lifetime cost accounts for the total resource expenditure required to build, launch and operate the satellite system over the expected lifetime. This includes the baseline cost of developing, constructing, launching and operating the components of the system, and also the expected costs of failure. These additional costs arise from the finite probability of failures occurring that could compromise the mission. Should such failures occur, economic resources must be expended to compensate for the failure. One example would be the cost to build and launch a replacement, while another is the lost revenue associated with a reduced capability. The options to lower the expected cost of failure are to reduce the impact of any failures that do occur, or to lower the component failure rates such that these failures are less likely. As a result, all of the reasons supporting the use of distribution relate in some way to improving the capability characteristics or to reducing the baseline or failure compensation costs. The following sections detail these relationships, and highlight the general trends observed within and between the different classes of systems. Chapter 4 takes this process one step further by introducing quantitative metrics based on measurable capability, performance and cost, allowing comparative analysis between many different system architectures.
3.1.1 Signal Isolation Improvements
The system's ability to isolate and identify signals from different sources within the field of view is a critical mission driver for many applications. In general, different signals can be isolated by exploiting differences in their frequency content, by sampling at times that match the source characteristics, or by isolating spatially separated sources using a high resolution detector. By definition, each satellite in a collaborative system independently satisfies the isolation requirements of the mission. Distribution therefore makes no difference to the isolation capabilities of a collaborative system. Isolation capabilities can be improved with a symbiotic architecture. The reason is straightforward; by separating resources spatially over a large area, the geometry of the signal collection is different for each detector. Combining the received signals can assist isolation of the different sources due to field-of-view changes, different times-of-flight, or different frequencies or phases of the received signals. Larger spatial separation of the apertures means that the phase difference between signals arriving at different detectors is increased, further separating the sources. This is best demonstrated with an example:
Example 3.1 Isolation Improvements and Spacecraft Arrays
The advent of economical, fast integrated-circuit technology has recently surpassed the previously restrictive data processing requirements of forming large sparse and synthetic apertures in space. Many people have now started to claim that their use offers potential benefits by reducing the mass and cost of remote sensing systems for high resolution imaging. The angular resolution θ of any aperture scales with the overall aperture dimension, expressed in wavelengths (λ). That is,

    θ ≈ λ/D

where D is the size of the aperture. An array is an aperture excited only at discrete points or localized areas. The array consists of small radiators or collectors called elements. An array with regularly spaced elements is called a periodic array. To avoid grating lobes in the far-field radiation pattern, the elemental spacing of a periodic array should be less than one-half of the wavelength. A random array is a thinned array with random positions of the array elements. The spacing of the elements is usually much larger than one-half of the wavelength, leading to fewer elements for a given overall aperture dimension. Grating lobes are avoided because there are no periodicities in the elemental locations. The concept of the spacecraft array involves forming a large thinned aperture from a set of satellites, each acting as a single radiator element. Since the spacing between satellites is very much greater than characteristic wavelengths λ, grating lobes can be avoided only by positioning the satellites to avoid periodicities. This can be done by a random placement of satellites, or by arranging them such that their relative separations are prime [9]. The resolution of sparse arrays can be very much better than that of an equivalent filled aperture. This arises from the enlarged overall array dimension resulting from splitting and separating the aperture into elements. Consider the case of an imaging system capable of 1 m resolution at a wavelength of 0.5 µm (green-visible). A geostationary satellite would require diffraction-limited optics 18 m across. Similar resolution at lower frequencies (X-band etc.) requires even greater aperture sizes. This is clearly impractical for filled aperture systems. A filled aperture must be supported over its entire extent, leading to heavy structures. Even if mass can be kept low through the use of advanced materials, impressive deployment techniques would be required to stow such an antenna within the launch shroud. The question arises as to how big a filled aperture can be built and launched. A sparse aperture can be made very large indeed, the only requirement being that the signal at each aperture be known, with measured and preserved phase.
Widely separated elements connected through light tethers or booms could easily extend over length scales of 10-100m [10]. For even larger baselines, a sparse array of separated spacecraft allows resolutions in the sub-milliarcsec range.
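The 18 m figure quoted in the example above follows directly from the θ ≈ λ/D scaling. A small sketch of that sizing calculation, assuming a geostationary altitude of about 35,786 km and the wavelengths quoted in the text:

```python
# Diffraction-limited aperture sizing for the GEO imaging example above.
# Ground resolution x ~ (lambda/D) * h, so D = lambda * h / x.
# Constant factors of order unity (e.g. the 1.22 Rayleigh factor) are omitted,
# as in the text's simple scaling argument.

def required_aperture(wavelength_m: float, altitude_m: float,
                      resolution_m: float) -> float:
    """Filled-aperture diameter D giving the requested ground resolution."""
    return wavelength_m * altitude_m / resolution_m

GEO_ALT = 35_786e3  # geostationary altitude [m]

d_visible = required_aperture(0.5e-6, GEO_ALT, 1.0)   # green-visible, 1 m resolution
d_xband = required_aperture(0.03, GEO_ALT, 1.0)       # ~10 GHz X-band, 1 m resolution

print(f"Visible (0.5 um): D = {d_visible:.1f} m")     # ~18 m, matching the text
print(f"X-band  (3 cm)  : D = {d_xband / 1e3:.0f} km")
```

The X-band result, over a thousand kilometers, is why only a sparse array of separated spacecraft can reach such resolutions at radio wavelengths.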
3.1.2 Rate and Integrity Improvements
For many applications, the requirement for a high information rate or high integrity drives the designer toward very large apertures and high power payloads. The probability of correctly detecting information symbols in the presence of noise is a function of the energy in each information symbol. Collecting or transmitting symbols at a high rate with a low probability of error therefore requires high power signals. This in turn leads to high power transmitters or large apertures to collect more power or to concentrate the power radiated. Distributed systems can offer large improvements in both rate and integrity compared to singular deployments by reducing the impacts of noise and interference. The interfering noise can arise from several sources:

- Thermal noise from resistive heating of electrical components in the receiver
- Noisy radiation sources in the field of view (FOV) of the instrument
- Jamming from unfriendly systems
- Interaction with the transmission medium (rain, bulk scatterers)
- Background clutter

The ergodic property of thermal noise means that integrating over multiple detectors gives the same processing gain as integrating the signal-plus-noise from a single detector over time. The advantage is that there is no penalty in rate. Also, the interference from noisy radiating bodies in the FOV of one satellite may not be an issue for a second satellite due to the differing viewing angle of the scene. Jamming interference may also be satellite specific; an enemy can easily disrupt a single satellite but would struggle to jam an entire group of satellites that may be spatially separated. In general, the level of improvement in rate and integrity that is offered by distributed architectures varies across the different system classes.
Rate and Integrity Improvements for Collaborative Systems
For a given level of integrity, a collaborative system can achieve higher rates by summing the capabilities of several satellites that individually operate at modest rates. This is equivalent to division of the top-level task into smaller, more manageable tasks that can be allocated among the elemental components of the architecture. The responsibilities of each satellite in the collaborative system reduce linearly with the number of satellites in the cluster. Each satellite can allocate more of its resources to each source, satisfying higher rate requirements. Increasing the number of satellites in the cluster yields linear increases in the achievable rate of information flow from each source. The limit is reached when each satellite is dedicated to a single user. The maximum rate for that user would then be the maximum supportable rate of the satellite. Equivalently, at a given rate, the level of integrity increases with the number of satellites in a collaborative cluster. The energy per symbol Es increases with the number of satellites, as a result of the increased dwell time allowed by the task division. If each satellite coherently integrates the received signal, linear increases in the dwell time result in linear increases in the energy per symbol to noise density ratio (Es/N0). The integrity will then improve almost exponentially, since the error probability can be approximated as an exponential function of Es/N0.
In actuality, the integrity improvements will not be quite this good, since this result assumes the detection is limited only by stationary white noise. Linear integration gains are achieved only with coherent integration of a signal plus random noise. This is the case if the dominant interference source is thermal receiver noise. Unfortunately, any correlated interference experiences the same linear gains during the integration process. This is a critical consideration for active systems, where clutter returns from the ground are not at all suppressed by time integration. For this reason, collaborative distribution cannot improve the clutter-limited performance of radar or lidar systems over that achievable with singular deployments.
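The "almost exponential" integrity gain can be made concrete with a standard detection-theory relation. For coherent binary signalling in white noise the symbol error probability is Pe = Q(√(2 Es/N0)) = ½ erfc(√(Es/N0)); this particular modulation model is an illustrative assumption, not something specified in the text, but it shows how a linear Es/N0 increase from a collaborative cluster drives the error rate down roughly exponentially:

```python
# Sketch: error probability versus cluster size for a collaborative system.
# A cluster of n_s satellites multiplies the dwell time, and hence Es/N0,
# by n_s; the error probability then falls roughly exponentially.
from math import erfc, sqrt

def symbol_error_prob(es_n0: float) -> float:
    """P_e = Q(sqrt(2*Es/N0)) = 0.5*erfc(sqrt(Es/N0)) for coherent binary signalling."""
    return 0.5 * erfc(sqrt(es_n0))

ES_N0_SINGLE = 4.0  # illustrative single-satellite Es/N0 (about 6 dB)

for n_s in (1, 2, 4):
    pe = symbol_error_prob(n_s * ES_N0_SINGLE)
    print(f"n_s = {n_s}: Es/N0 = {n_s * ES_N0_SINGLE:.0f}, P_e = {pe:.2e}")
```

Each doubling of the cluster size cuts the error probability by several orders of magnitude, which is the integrity gain the paragraph above describes.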
Rate and Integrity Improvements for Symbiotic Systems
Unlike collaborative systems, a symbiotic architecture does not, in general, give a simple linear improvement in rate capabilities with increases in the number of satellites. In fact, the relationships between the number of satellites and the achievable rate and integrity are different for passive systems and active systems. Consider first passive symbiotic clusters that form sparse receiver arrays. The SNR behavior of sparse arrays is identical to that of a filled aperture of the same physical collecting area. To show this, consider a cluster of ns satellites, each with aperture A. Assume the array is illuminated by a distant source. Each satellite measures the radiation field and the signals from the different satellites are then combined to deliver a single waveform to a detector. In the most general case the input signal power varies across the array. Assuming unit impedance throughout, the average signal power is

    E[Si] = E[ (1/ns) Σj sij² ]    (3.1)
where E[·] denotes the expected value, the subscript i refers to the input side of the array, and sij is proportional to the envelope of the RF signal voltage for the jth satellite (the sums here run over the j = 1 ... ns satellites). Unless the array is so large that the signal strength varies across the array due to path length differences, the signal strength across the array will be constant. In this case, sij = sik = si, and the signal power per satellite is Si = si². Assuming that all signals are cophased by a bank of phase shifters, the output signal voltage after integration is ns·si and the output signal power of the array is S0 = ns²·Si, since all the signals add in phase. If the dominant noise source is thermal noise, then the noise input at each of the satellite apertures will be independent, with zero mean. The average input noise power will be

    E[Ni] = E[ (1/ns) Σj nij² ]    (3.2)
The noise does not add in phase, since it is uncorrelated, and so the output noise voltage after combining is given by

    n0 = Σj nij    (3.3)

and its square is the output noise power:

    N0 = ( Σj nij )²    (3.4)
       = Σj nij² + Σj Σk≠j nij·nik    (3.5)

Since the noise sources are independent with zero mean, the second term is zero, leaving the average noise power to be E[N0] = ns·E[Ni]. The output SNR is therefore equal to

    SNR = S0 / E[N0] = ns·Si / E[Ni] = ns·(SNR)1    (3.6)
where (SNR)1 is the signal to noise power ratio for a single satellite of the cluster. The improvement in SNR compared to a single satellite is therefore ns, the number of satellites in the array. The same SNR is achievable with a filled aperture of area ns·A receiving the same signal and with the same average thermal noise temperature. This of course makes sense, since the same amount of energy is collected over the same collection area in both cases. Even larger benefits in SNR can be obtained with active symbiotic systems. Recall that active systems are defined to be systems that have to provide the power for the signal to make the round-trip journey to the information source. The active system may have several transmitting satellites that illuminate the source. If the transmitters radiate coherently, the power incident upon the information source increases quadratically, since the signal amplitudes add. Alternatively, if the transmitters radiate independently, the power at the source sums linearly. The incident power is then reflected back to be collected by the cluster of satellites, with the receive characteristics described above. The resulting SNR improvement for the symbiotic system compared to a single satellite is given by

    (SNR)sym = nt²·nr·(SNR)1    (3.7)

where nt is the number of coherent transmitters and nr is the number of receive channels. Note that nr can be greater than ns, the number of satellites. If each of the ns satellites transmits incoherent but uniquely identifiable signals, and each satellite receives all ns transmissions, a total of nr = ns² different signals can be coherently integrated. This is the operating principle behind TechSat21, the Air Force's most recently proposed space based radar concept.
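The factor-of-ns result of Eq. (3.6) can be checked numerically: cophased signals add in amplitude (S0 = ns²·Si) while independent zero-mean noise adds only in power (E[N0] = ns·E[Ni]). A Monte Carlo sketch with a toy Gaussian noise model (no claim about any real receiver):

```python
# Numerical check of Eq. (3.6): the output SNR of a cophased receive
# array is ns times the single-satellite SNR.
import random

def array_snr(ns: int, sig_amp: float, noise_std: float,
              trials: int = 100_000) -> float:
    """Monte-Carlo estimate of output SNR for ns cophased receive channels."""
    sig_power = (ns * sig_amp) ** 2  # amplitudes add coherently
    noise_power = 0.0
    for _ in range(trials):
        n0 = sum(random.gauss(0.0, noise_std) for _ in range(ns))
        noise_power += n0 * n0
    noise_power /= trials            # converges to ns * noise_std^2
    return sig_power / noise_power

random.seed(1)
snr1 = array_snr(1, 1.0, 1.0)
snr4 = array_snr(4, 1.0, 1.0)
print(snr4 / snr1)  # ~4, i.e. the factor ns of Eq. (3.6)
```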
Chapter 7 features a complete quantitative analysis of the TechSat21 system. The integrity is a function of Es/N0, given by multiplying the SNR by a dwell time corresponding to the duration over which the signal is integrated. For a tracking mission, the symbiotic cluster must cycle through all of the targets one at a time, so there is no difference in the dwell time compared to the single satellite case. For a search mission, however, there is a penalty paid for coherence. The beamwidth scales with the overall synthetic aperture dimension as opposed to the physical aperture size of each satellite. For a given area coverage rate, the symbiotic cluster must scan its smaller beam more quickly,

    tsym = (Ds/Dc)·t1    (3.8)

Here tsym is the dwell time for a linear cluster of extent Dc, comprised of satellite apertures of size Ds, and t1 is the maximum dwell time for a singularly deployed satellite. For coherence only on receive, multiple receive beams can be formed simultaneously to fill the satellite FOV. The dwell time then scales the same as that of a single satellite. The resulting Es/N0 relationships for both the search and track missions are summarized in Table 3.1. To simplify the results it has been assumed that the symbiotic cluster has ns satellites, and can operate in three different modes: a passive mode in which all ns satellites are used to form a coherent receive array; an active mode in which each satellite independently transmits (incoherent) and all the satellites coherently receive all the signals (nr = ns²); and a coherent transmit and receive mode (nt = nr = ns).

Table 3.1: Factor of improvement in the energy per symbol to noise density ratio for distributed clusters compared to singular deployments

            Collaborative        Symbiotic
            Passive   Active     Passive   Active (Coherent Rx)   Active (Coherent Tx/Rx)
  Search    ns        ns         ns        ns²                    (Ds/Dc)·ns³
  Track     ns        ns         ns        ns²                    ns³
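The entries of Table 3.1 can be written out as a small function; the mode names and the (Ds/Dc) search penalty follow the table and Eq. (3.8), so this is a restatement of the table rather than an independent derivation:

```python
# The Es/N0 improvement factors of Table 3.1, relative to a singular deployment.

def esn0_improvement(mode: str, mission: str, ns: int,
                     ds_over_dc: float = 1.0) -> float:
    """Improvement factor for a cluster of ns satellites (Table 3.1)."""
    if mode in ("collaborative-passive", "collaborative-active",
                "symbiotic-passive"):
        return ns
    if mode == "symbiotic-coherent-rx":    # nr = ns^2 receive channels
        return ns ** 2
    if mode == "symbiotic-coherent-txrx":  # nt = nr = ns
        if mission == "search":            # smaller beam must scan faster, Eq. (3.8)
            return ds_over_dc * ns ** 3
        return ns ** 3                     # track: no dwell-time penalty
    raise ValueError(f"unknown mode: {mode}")

# Coherent Tx/Rx helps the search mission only when (Ds/Dc)*ns^3 > 1,
# i.e. ns^3 > Dc/Ds, as noted in the text.
print(esn0_improvement("symbiotic-coherent-rx", "search", ns=8))        # 64
print(esn0_improvement("symbiotic-coherent-txrx", "search", 8, 1 / 16)) # 32.0
```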
Interestingly, collaborative and symbiotic clusters both achieve linear improvements for passive missions, but for quite different reasons. The collaborative system gains benefit from task division, increasing the allowable dwell time, while the symbiotic cluster achieves the same linear improvement by increasing the SNR. Notice that the symbiotic system with coherence on transmit and receive is not well suited to the search mission unless many satellites can be deployed over a reasonably short extent, such that ns³ > (Dc/Ds). The largest and most realizable benefit from distribution for the search mission can be gained with several independent transmitters and coherent integration of all the received signals. This is the reasoning supporting the development of the TechSat21 space based radar program. Of course, in the presence of a heavy clutter background, the detection is not noise-limited and the results change somewhat. However, this is where symbiotic clusters can really help. As a direct result of the smaller beamwidths that are characteristic of symbiotic systems, the clutter rejection of the system is greatly improved compared to single satellites or collaborative systems. Consequently, the improvements in Es/N0 seen in Table 3.1 are conservative estimates of the benefits offered by symbiotic clusters. The tempting conclusion to draw from this is that symbiotic clusters are beneficial for missions requiring high rates and integrity. Unfortunately, there is a crucial factor that has been omitted so far. The data processing requirements placed on symbiotic systems are extremely restrictive, and are on the frontier of what can be achieved with today's technology. This issue will be discussed in a later section.
3.1.3 Availability Improvements
If carefully designed, a distributed architecture can often lead to improved capabilities by increasing the availability of system operations. Losses of availability can arise due to increased numbers of users accessing the limited resources of the system, signal attenuation from blockage or weather, or from statistical fluctuations due to noise and clutter. More commonly, a loss of availability can also be attributed to poor viewing geometries or poor coverage statistics. For example, reconnaissance satellites may have to image scenes over two or more continents, relaying the data to multiple downlink stations across the world. There will be times during the orbits of these satellites when they are not passing over important targets. The system is unavailable at these times, since images of the targets cannot be recorded. The revisit time of the satellites effectively specifies the maximum availability that is built in to the system. Of course, very high availabilities can only be achieved by constellations giving continuous coverage over the target regions. Note that the availability of a system is related to the variance of the supportable isolation, rate and integrity, and as such is sensitive to worst-case scenarios. Since a loss of availability represents an inoperative state, any measures that can be taken to improve the availability of a system are desirable. As illustrated in Figure 3-1, distributed architectures can lead to increased availability through:

- better coverage of the demand, or
- reducing the variability of the capabilities
The methods by which distribution can lead to these improvements are described in detail in the following sections.

[Figure 3-1: The coverage improvements offered by distribution leading to increased availability. The figure plots coverage against time, with a coverage requirement derived from the minimum acceptable capabilities; availability is the percentage of time that the requirement is exceeded, and it increases with an improved mean value or reduced variability of the coverage.]
Matching a Distributed Demand
Some applications require the reception of signals at many different locations. Such applications are characterized as having a distributed demand. A worldwide communications consumer base, or sampling locations for a global mapping of the geomagnetic field, are examples of a distributed demand. The architectural options for these applications are to place sensors everywhere there is a demand, to have a single sensor that maneuvers to the demand locations, or to adopt some strategy somewhere in between these extremes. The trade is between the cost of additional hardware resources and the cost of additional expendables, such as fuel and time. A system with a few satellites that can maneuver to different sampling locations (either by thrusting or by utilizing orbital mechanics) requires less dry mass on orbit, possibly leading to lower costs. However, the additional cost of fuel, or the opportunity cost associated with the loss of availability due to sequential sampling, may sway the balance in the other direction. A question presents itself: how should spacecraft resources be distributed to best match a distributed demand? The answer to this question is, unfortunately, neither simple nor general. The best option for one application may be unsuitable for another. There are, however, some general trends. Clearly, a distributed collaborative architecture is the only option for applications requiring simultaneous sampling at all demand locations. This is equivalent to a continuous coverage requirement. Consider, for example, a global mobile communication system. A single satellite cannot serve the entire globe, forcing the designer toward a constellation of satellites that can guarantee continuous coverage. Some applications involve a coupling between measurements at different sample locations, especially during processing of the information. An example of this coupling is the combining of signals collected by the apertures of an interferometer, an essential operation in the construction of an image. This sharing of information necessitates interfaces between the satellites of the system that are expensive and add complexity. Furthermore, the transmission of information between satellites requires energy expenditure. Electrical energy, like propellant, is a valuable expendable resource. This is especially true for satellite systems relying on non-renewable energy sources (batteries, fuel cells, etc.). For these applications, if sequential sampling can be tolerated, the savings in hardware and the reductions in complexity associated with fewer satellites can offset the opportunity costs associated with losses of availability. Some tasks involve no coupling between the different sampling locations. In this case, the processing of signals at the different locations can be performed independently. Separate, independent sensors can satisfy the demand without having to interface among themselves. Without any of the energy costs or complexity of intersatellite links, a very distributed architecture may be favorable for these applications, the improvements in availability outweighing the costs of extra on-orbit hardware. This can be seen in the following example.
Example 3.2 Matching a Distributed Demand: The Separated Spacecraft Interferometer [11]
Optical interferometers collect light at widely separated apertures and direct this light to a central combining location where the two light beams are interfered. Fringes produced by the interference provide magnitude and phase information from which a synthesized image can be generated. Space-based optical interferometers can be implemented as single spacecraft, featuring collecting apertures separated by tens of meters, or as separated spacecraft where baselines of hundreds or thousands of meters enable measurement with sub-milliarcsec angular resolution. The collector spacecraft sample the distant starlight at several dierent baselines (separation and orientation) in order to construct the image. The locations of the sampling points de ne a distributed demand. Clearly then, a possible modi cation to the basic con guration that could oer improved availability is a system with an increased number of collectors. By distributing the collectors at the desired sampling locations, many dierent baselines can be made from the numerous combinations of collector pairs. In this way, many baselines can be measured simultaneously (or at least without additional maneuvers) and the image can be lled out more quickly. 61
The m-point Cornwell distribution [12] describes m sampling locations that are well known to give high quality snap-shot interferometric images of interstellar objects. If ns apertures are used to sample these locations, such that ns < m, a compound image can be constructed from different snap-shot images formed by moving the ns apertures to all combinations of the various Cornwell imaging locations. Obviously, sampling from a distribution with a larger number of Cornwell points results in a higher quality image. Unfortunately, sampling at more locations requires either more collectors or more maneuvers. A system with more collectors requires fewer maneuvers to sample at all pairs of Cornwell points. In order to sample all pairs of points from an m-point Cornwell distribution, a system consisting of ns collectors must make C(m, ns) - 1 separate maneuvers. In choosing the system size, a trade is therefore made between the cost of additional collectors and the cost of propellant for maneuvers. For this calculation, the system cost can be represented as total system mass; this is a reasonable approximation to first order, and allows the important trends to be seen. The total mass is the sum of the total dry mass and the total propellant mass. Kong [11] used a Monte Carlo simulation to determine the optimal maneuver sequence for ns satellites sampling an m-point Cornwell configuration, minimizing the propellant required to construct 500 images within a 15 year lifetime. The resulting system masses for different-sized clusters and different numbers of imaging locations are shown in Figure 3-2. For low quality images with a small number of Cornwell points, it is more efficient to have only two collectors that maneuver frequently. However, for greater than 10 imaging locations, increasing the number of satellites reduces the propellant enough to outweigh the dry-mass penalty.
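The maneuver count follows directly from the combinatorial expression given in the text, and can be tabulated in a short sketch (Python). The mass totals themselves come from Kong's Monte Carlo simulation and are not reproduced here; only the maneuver counts are computed.

```python
from math import comb

def maneuvers(m: int, ns: int) -> int:
    """Separate maneuvers for ns collectors to visit all combinations of
    placements over an m-point Cornwell distribution: C(m, ns) - 1,
    the expression given in the text."""
    return comb(m, ns) - 1

# Tabulate part of the trade space considered in Figure 3-2
# (m = 9..12 Cornwell points, clusters of 2-4 collector spacecraft):
for m in range(9, 13):
    print(m, {ns: maneuvers(m, ns) for ns in (2, 3, 4)})
```

Note how quickly the count grows with the number of imaging locations; this is the combinatorial pressure behind the propellant-versus-hardware trade discussed above.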
The Cornwell-11 has a minimum system mass with three collector satellites, while the Cornwell-12 is best implemented with four collectors. Unfortunately, the results of Figure 3-2 do not reflect realistically achievable propellant mass-fractions. Figure 3-3 shows the average propellant mass-fraction that would be required of satellites designed to implement the systems considered in Figure 3-2. Restricting the maximum propellant mass-fraction to 25%, it can be seen that the two-satellite cluster cannot image Cornwell configurations with more than nine points. The realistically achievable optimum number of spacecraft for the 10-point Cornwell configuration is therefore three satellites. Similar trends, with constrained optima between increased hardware and more expendables, are seen in a wide range of applications involving a distributed demand.
[Figure 3-2 is a surface plot of system mass (500-2000 kg) against the number of Cornwell points (4-12) and the number of collector spacecraft (2-10).]
Figure 3-2: System mass of a separated spacecraft interferometer required to form 500 images over a 15 year lifetime using different Cornwell configurations and different cluster sizes [11]
Improved Visibility and Coverage Geometry

There are some instances when distribution and multi-fold coverage improve the availability by reducing the variability of the system capabilities. By making the behavior of the system more predictable, the probability of operating within acceptable bounds is increased. The capabilities of the system are particularly sensitive to coverage variations, and it is here that distribution can lead to improvements. The multi-fold coverage that is characteristic of distributed architectures supports consistent capability in two ways:
Reducing the variance of the visibility, defined as the number of satellites in view from a ground station. Generally, the visibility is a function of both space and time. The number of satellites in view from a location changes in time, and is usually different at other locations. The capabilities of a satellite system are frequently dependent on the visibility. Large variations in visibility can therefore cause large fluctuations in the isolation, rate or integrity. The designer faces the choice of sizing the system for the worst-case coverage, or accepting losses of availability at times when the visibility is below average. Increasing the number of satellites in the constellation not only increases the visibility, but also reduces the variance. According to the Central Limit
[Figure 3-3 plots the propellant mass-fraction per spacecraft (0-150%) against the number of collector spacecraft (2-10), with curves for 5-12 Cornwell points and the 25% limit marked.]
Figure 3-3: The propellant mass fraction for the satellites of a separated spacecraft
interferometer required to form 500 images over a 15 year lifetime [11].
Theorem, as the number of satellites is increased, the minimum visibility converges toward the average value. This assists the designer, improving the availability of systems based on average coverage characteristics.
Reducing the impact of the variability in the capabilities of individual satellites in collaborative systems. The geometry of the coverage over target regions can have a large impact on the sensitivity of the system. Frequently, the isolation, rate or integrity that can be supported by a single satellite can be spatially and temporally varying, depending on the viewing angle, the transmission path, and the detector characteristics. Favorable coverage geometries minimize the impact of these variations, ensuring that the combined operation of the collaborative configuration achieves consistent levels of capability.
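The first of these effects can be illustrated with a toy model in which each of n satellites is independently in view of a ground station with some probability p; the visibility is then binomially distributed, and its relative spread shrinks as 1/sqrt(n). The probability value used below is purely illustrative, not drawn from any constellation considered in this chapter.

```python
import math

def visibility_spread(n_sats: int, p_view: float) -> float:
    """Relative spread (std/mean) of the visibility when each of n_sats
    satellites is independently in view with probability p_view --
    a toy binomial model, not a real coverage analysis."""
    mean = n_sats * p_view
    std = math.sqrt(n_sats * p_view * (1.0 - p_view))
    return std / mean

# Quadrupling the constellation halves the relative spread, so the
# minimum visibility crowds in toward the average value:
for n in (8, 16, 32, 64):
    print(n, round(visibility_spread(n, 0.1), 3))
```

This is the same 1/sqrt(n) concentration that the Central Limit Theorem argument above appeals to: a designer sizing the system for average coverage loses less availability as the constellation grows.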
These two concepts are most easily understood with the help of a simple example.
Example 3.3 Visibility and Geometry: Distributed Space Based Radar [7]
Consider a collaborative space based radar system consisting of a cluster of satellites in common view of a theater of interest. The system satisfies requirements on the rate (target update) and integrity (probability of detection or false alarm) by summing the capabilities
of several small radars that independently search the same target area. The cumulative rate is therefore directly proportional to the number of satellites in view of the target area, as indicated in Table 3.1. Variations in the visibility translate directly into variations in the achievable rate or integrity. This can result in a loss of availability if the visibility drops below that necessary to support the requirements. The availability can be improved if the system is designed to use an even greater number of smaller satellites to satisfy the detection requirement. As the number of satellites increases, the spatial and temporal variations in the visibility are reduced. The minimum visibility approaches the average value, and the achievable detection rate changes over a much smaller range. Furthermore, larger configurations of satellites result in more favorable coverage geometries. The multi-fold coverage leads to a wide distribution of viewing angles surrounding the target. This is particularly important for slow moving targets. The radar return from slow moving targets is difficult to distinguish from the ground clutter. Normally the different velocities of the target and the ground relative to the radar give rise to different Doppler shifts that separate the target and clutter in frequency, allowing detection. The return from slow moving targets is often buried in the clutter because of the low relative velocities. A viewing angle parallel to the target's velocity maximizes the Doppler shift between the target and the ground in the frequency spectrum, increasing the signal isolation and improving the probability of detection. Since the target's velocity vector is unknown a priori, receivers must be placed at all possible viewing angles to ensure detection. With receivers located at all angles around the target, the distributed space based radar concept increases the probability of detecting slow moving targets.
This makes the system less sensitive to the target velocity vector, effectively increasing the availability by reducing the probability of failing the detection requirement.
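The viewing-angle argument can be made concrete with the standard monostatic Doppler relation f_d = 2 v_r / lambda, where v_r is the component of the target's velocity along the radar line of sight. The target speed and wavelength below are illustrative values, not parameters of the system in Example 3.3.

```python
import math

def doppler_shift(v_target: float, angle_deg: float, wavelength: float) -> float:
    """Monostatic Doppler shift (Hz) for a target moving at v_target (m/s),
    where angle_deg is the angle between the radar line of sight and the
    target's velocity vector: f_d = 2 * v_r / wavelength."""
    v_radial = v_target * math.cos(math.radians(angle_deg))
    return 2.0 * v_radial / wavelength

# A 5 m/s target observed at a 3 cm wavelength (illustrative values).
# The shift collapses as the line of sight becomes perpendicular to the
# target's velocity, which is why a spread of viewing angles matters:
for angle in (0, 45, 85):
    print(angle, round(doppler_shift(5.0, angle, 0.03), 1))
```

With only one viewing angle the shift can fall inside the clutter spectrum for an unlucky target heading; receivers spread around the target guarantee that at least one line of sight is near-parallel to the velocity vector.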
3.1.4 Reducing the Baseline Cost

Initial deployment costs for a given satellite constellation include costs associated with development, production, and launch of the system's original complement of satellites. Additional expenditures beyond the initial deployment costs are necessary to maintain the constellation over a given time period. These costs include the production, storage, and launch costs associated with the on-orbit or on-ground spares, and also all of the operations costs. The baseline system cost is the sum of the initial deployment costs and the maintenance costs. Baseline costs are typically very high. For distributed satellite systems to be considered viable, they must be at least competitive in cost, as compared to traditional systems. Conventionally, system cost estimates can be made using basic parametric models such
as the USAF Unmanned Spacecraft Cost Model (USCM), or the Small Satellite Cost Model (SSCM) [13]. These models consist of a set of cost estimating relationships (CERs) for each subsystem. The total cost of the system is the sum of the subsystem costs. The CERs allow cost to be estimated as a function of the important characteristics, such as power and aperture. Frequently they are expressed as a power law, regressed from historical data. For example, the USCM estimate for the theoretical first unit (TFU) cost of an infrared imaging payload is based on aperture and is shown in Figure 3-4.
[Figure 3-4 plots payload TFU cost (FY92$K, 40,000-140,000) against aperture (0.2-1.2 m^2).]
Figure 3-4: The USCM Cost Estimating Relationship for IR Payloads

Care must be taken in applying the SSCM to distributed systems. Although each satellite in a distributed system may be small, the SSCM was derived assuming single-string designs and modest program budgets. This is clearly unsuitable for a distributed system of perhaps 1000 satellites, with a total system cost of several billion dollars. Unfortunately, the use of USCM generally leads to high costs for distributed systems. This is due to two factors:
In partitioning the mission and allocating tasks among separate components, the total hardware resources required on-orbit are often increased. Among other things, this is a result of having to add redundancy to overcome serial reliability problems. Consider the case of a single satellite satisfying a demand with a reliability of 0.9 over the mission lifetime. To achieve the same overall reliability with two collaborative satellites of half the size, an additional redundant satellite is also necessary. In this example, the total resources on-orbit for the distributed system are 50% more than for the single deployment. Since the CERs base cost on characteristic resource, the result of this
increase of total hardware is an increase in cost.
Typically, the USCM power laws in the CERs are nonlinear, with an exponent less than unity. There is a higher marginal price per kg of mass, or per m^2 of aperture, for smaller systems. Figure 3-4 demonstrates this trend. As a result, it is more expensive to divide a large system (especially aperture or power) into smaller components.
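Both factors can be made concrete in a short sketch (Python). The per-satellite reliability of 0.9 assumed for the half-size satellites, and the CER exponent of 0.7, are illustrative assumptions for this sketch, not USCM values.

```python
from math import comb

def k_of_n_reliability(k: int, n: int, p: float) -> float:
    """Probability that at least k of n independent satellites,
    each with reliability p, survive the mission."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Factor 1: redundancy against serial failure modes.  Assuming each
# half-size satellite matches the full-size satellite's 0.9 reliability,
# two satellites in series fall short of the target, and a third,
# redundant satellite (150% of the original on-orbit resources) is
# needed to restore the overall reliability:
p = 0.9
print(p * p)                        # two collaborative sats, both required
print(k_of_n_reliability(2, 3, p))  # with one redundant satellite added

# Factor 2: sublinear CERs.  With cost = a * size**beta and beta < 1,
# n units of size A/n cost n**(1 - beta) times as much as one unit of
# size A.  With an illustrative exponent of 0.7, a four-way split of an
# aperture costs roughly 52% more than the single large unit:
beta, n = 0.7, 4
print(n ** (1 - beta))
```

The two effects compound: the distributed design carries more total hardware, and each unit of that hardware is priced at a steeper marginal rate.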
It would appear then that distributed satellite systems are characteristically more expensive than singular deployments. However, there are additional factors that can sway the balance in favor of distribution. First of all, there is a question as to the validity of using the USCM for estimating the cost of modern distributed satellite systems. The basic problem here is that the model is based on regression from historical data of past military satellite programs. As such, the CERs of the USCM may not reflect modern trends or practices. The programs from which the model was derived were not subject to the same budget constraints as modern systems. Stated simply, past military satellite programs were expensive because they were allowed to be. An additional point is that conventional cost models, being based on historical data, reflect an industry that was crippled by conservatism and a reliance on risk avoidance. The high baseline cost of space systems was perhaps the largest reason for the conservatism. The enormous initial expenditure, added to the characteristically high risk, led to a reliance on tried and tested practices and established technologies. Unfortunately this doctrine was self-supporting, being usually more costly than modern alternatives, and thus serving only to refuel the conservatism. There are, however, some indications that things are changing. The advent of small satellite technology has heralded a new era of satellite engineering that minimizes costs by risk management rather than risk avoidance [14][15]. A willingness to accept some risk can lower the cost of satellite programs, enabling more missions to be flown and allowing new technology and innovative techniques to be implemented [15][13][16]. The use of commercial off-the-shelf (COTS) technology can lead to substantial cost savings in development and operations (legacy systems often require specially trained operators).
By accepting high risk and implementing strategies to manage failures, small satellites have been successfully designed, built and operated at a fraction of the cost of traditional systems [17]. Should distributed satellite systems really proliferate in the market, they will achieve low costs by lowering the requirements on individual satellite reliability, taking advantage of the redundancy built into the architecture. The changes in the space industry have not been restricted to the small satellite arena. The commercial satellite industry is just now beginning to realize the benefits of modernized design practices. Moving away from the concept of the "hand-crafted" satellite,
Hughes Space and Communications and Lockheed Martin are enjoying enormous savings from adopting the "production-line" approach to satellite design and construction. Standardized bus designs with modular interfaces to many different payloads reduce the development time and simplify assembly and test. Recent developments in commercial distributed satellite systems (Iridium, ICO, Orbcomm, etc.) reflect this production-line approach to satellite manufacture, and are reporting cost reductions that were previously unheard of in the satellite industry. Whereas the CERs of the USCM assume a single-string design, favorable economies of scale can result from bulk manufacture. The production of a larger number of small units allows quicker movement down the learning curve. Lockheed Martin was apparently observing a 15% discount rate in the production of the 66 satellite Iridium system. This is made possible by economies of scale in manufacture and by modifying the way that satellites are built and assembled. An example here is that Lockheed requires that the subcontractor (Raytheon Inc.) for the main mission antennas of the Iridium satellites perform full subsystem testing prior to delivery of each unit. No further testing is done until full integration. Components that fail the integration test are returned to the manufacturer, and a new antenna is taken from the storeroom. This has greatly reduced costs and assembly time. Such practices are poorly represented by existing cost models. The cost of launching a satellite system can make up a significant portion of the baseline costs. This is especially true of distributed satellite systems featuring many small satellites. Typically launch costs do not scale linearly with mass. The price per kg is higher for lower mass payloads. Unless bulk-rate contractual agreements can be made with launch providers, learning curve discounts do not apply to launch costs.
This would suggest that the launch costs of distributed systems are greatly increased compared to traditional singular deployments. However, although each satellite in a distributed system may be small, when considered as a whole, the entire system can be huge. Economies of scale support the larger launch vehicles, and so, subject to volume and orbit constraints, it is cheaper to deploy the initial constellation using large launch vehicles. An entire orbital plane of satellites could be deployed on a single launch, giving the added benefit of distinct performance plateaus. The initial launch cost of a distributed system therefore scales more like that of large satellites, and should be priced based on the total constellation mass rather than on the individual satellite mass. Note that replacement satellites (for system augmentation or for compensation of failures) can be launched on dedicated small vehicles, such as Pegasus or Scout, or as secondary payloads, utilizing the spare launch capacity on larger boosters. The cost associated with this replacement scales more like that of small satellite launches. Collaborative distributed systems also offer the possibility of being able to ramp up the investment gradually, in order to match the development of the market. Only those satellites needed to satisfy the early market are initially deployed. If and when additional demand
develops, the constellation can be augmented. The cost of constructing and launching these additional satellites is incurred later in the system lifetime. Due to the time value of money, the delayed expenditure can result in significant cost savings. Each of these factors helps to offset the apparently high costs suggested by conventional parametric cost modeling. Consequently, the baseline cost associated with a distributed satellite system may actually be smaller than for a comparable large-satellite design. This is not always true, being extremely sensitive to the application. Some missions are more suited to distribution than others. An example of a mission that is well suited to distribution is passive infrared imaging of the Earth, as shown in the following example.
Example 3.4 Baseline Costs for a Distributed Infrared Imaging System
For mid-wavelength infrared (IR) payloads on low altitude satellites, the payload costs scale with the resolution and the swath width of the instrument. Small swaths require less expensive satellites, but require more of them. The effect of these scalings can be quantified. The payload cost for a single mid-wavelength infrared satellite is the sum of the costs of the optics, the focal plane array of electro-optic detectors, and the computational resources needed to process the image. Canavan [18] suggests that the cost of the optics for instruments of this type scales with volume rather than area. The volume of an optic scales as D^2 f, where D is the aperture diameter and f is the focal length. To achieve a resolution of x meters, the aperture size for diffraction limited optics is,

D = λ r_max / x

(λ being the operating wavelength)
where r_max is the maximum range to the target. For a satellite at a low orbital altitude h, covering a swath of half-width W, the slant range is given by,

r_max = √(W^2 + h^2)

and may be dominated by the cross range component. For a constellation of satellites, the swath width of the instrument is dependent on the number of satellites in the constellation and the revisit time required of the system. Small revisit times require more satellites and larger swaths. The revisit time T for a constellation of N satellites is given by,

T = 4π R_e^2 / (2 z V W N)
where R_e is the Earth's radius, V is the along-track velocity of the satellite, and z is a constant (≈ 3) that depends on the extent and uniformity of coverage in latitude [18]. Inverting this relationship gives the swath half-width in terms of revisit time and the constellation
size. The focal length of the optics is related to the resolution requirements and to the size d of the IR detectors that are available,

f = h (d / x)
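At this point the sizing chain from revisit time to aperture and focal length is complete and can be checked numerically. The sketch below (Python) uses the parameter values quoted later for the candidate architecture (h = 400 km, N = 50, T = 25 min, x = 30 m, λ = 4 µm, d = 40 µm, z ≈ 3), and reads the revisit-time expression as T = 4πR_e^2/(2zVWN); the π factor is a reconstruction from context and should be checked against the original source.

```python
import math

# Sizing relations for the distributed IR imaging example, with the
# revisit-time expression read as T = 4*pi*Re^2 / (2*z*V*W*N).
Re = 6.378e6    # Earth radius, m
mu = 3.986e14   # Earth's gravitational parameter, m^3/s^2
h = 400e3       # orbital altitude, m
N = 50          # satellites in the constellation
T = 25 * 60     # revisit time, s
z = 3           # latitude-coverage constant (~3)
lam = 4e-6      # operating wavelength, m (mid-wavelength IR)
x = 30.0        # ground resolution, m
d = 40e-6       # detector size, m

V = math.sqrt(mu / (Re + h))                   # along-track velocity
W = 4 * math.pi * Re**2 / (2 * z * V * N * T)  # swath half-width, m
r_max = math.sqrt(W**2 + h**2)                 # slant range to swath edge
D = lam * r_max / x                            # diffraction-limited aperture
f = h * d / x                                  # focal length

print(f"W = {W/1e3:.0f} km, D = {D*100:.1f} cm, f = {f*100:.0f} cm")
```

The resulting aperture (about 5.7 cm) and focal length (about 53 cm) are consistent with the 6 cm and 55 cm quoted in Table 3.2, which lends some confidence to the reconstructed form of the revisit-time relation.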
Smaller detectors lead to smaller focal lengths, and a great deal of effort has been expended in trying to shrink IR detectors. Currently, several commercial detectors are available in the mid-wavelength band with sizes ranging from 17-100 µm. This gives the cost of the primary optics as,
Optics Cost = a λ^2 h^3 d (1 + W^2/h^2) / x^3

where a is the cost density ($/m^3). Canavan [18] suggests that $10M/m^3 is a reasonable cost density for modern optics. To be conservative, let us assume the optics cost an order of magnitude more than this, a = $100M/m^3. The cost of the focal plane array (FPA) scales directly with the number of detectors in the focal plane. This is dependent on the swath width, and on the dwell time requirements of the detectors. Long dwell times mean that the detectors cannot be scanned as quickly, and more detector elements are needed. The dwell time t_d can be calculated from the required sensitivity of the IR device. This is measured by the noise equivalent temperature difference, NEΔT, which quantifies the minimum detectable change in apparent scene temperature from one pixel to the next during a scan [19],

NEΔT = C(λ) / (2 √t_d)

where C(λ) is a function of wavelength for each detector. For HgCdTe detectors with d = 40 µm, C(3 µm) = 1.3 × 10^-12. An NEΔT of approximately 0.5 K is considered a good IR system. By inverting this relationship, the dwell time can be calculated. This then allows an estimation of the number of detector pixels in the instantaneous field of view,
N_FPA = N_x N_y t_d / P
where N_y is the number of pixels scanned in the along-track direction over an orbit, N_y = 2π R_e / x, N_x is the number of pixels scanned across-track, N_x = 2W / x, and P is the period of the orbit, in seconds. The cost of the FPA is then calculated assuming a cost of $1 per pixel, based on current levels of technology [18]. The computation costs scale with the number of instructions that must be carried out
each second. This is equal to the product of the number of pixels across the swath width of the instrument (N_x), and the rate at which they are crossed (V/x). A 100 MIPS computer can now be flown for about $100K, and so the computation cost density is approximately $0.001 per instruction per second [18]. The total payload cost is the sum of the costs of the optics, the FPA and the computation costs. The bus costs can be estimated by assuming a 20% payload mass fraction and a constant $77K per kg of mass. This payload mass fraction represents a compromise between that of a typical large satellite (30%) and of a small satellite (10%) [20]. The payload mass is needed for this calculation and is estimated by assuming an average mass density of the optics of one gram per cm^3, with an additional multiplicative factor of 2 to account for some extra margin [19]. The total constellation cost can then be estimated by summing the costs for optics, FPAs, computers and dry mass for each satellite, and multiplying by the number of satellites in the constellation. A discount factor to account for an expected learning curve must be applied, depending on the number of satellites produced. The discount factor is assumed to be 5% for fewer than 10 satellites, 10% for between 10 and 50 satellites, and 15% for more than 50 satellites [3]. Launch costs do not have to be calculated because, as discussed earlier, they should scale only with total mass on orbit. Since we already account for total dry mass, adding launch costs only alters the total system cost by a constant amount, without altering the trends. Incorporating these equations in a spreadsheet allows us to examine the effect of constellation size on just the recurring hardware cost, for various different orbital altitudes. Figure 3-5 shows this relationship for a system with the following parameters:
Revisit time, T = 25 minutes; ground resolution, x = 30 m; NEΔT = 2 K; HgCdTe detectors, tuned to 4 µm, d = 40 µm. The hardware cost curves exhibit a minimum at a given amount of distribution. Increased constellation sizes reflect a separation of the overall task among more components, reducing the swath that each satellite is responsible for imaging. There appears to be an optimum swath corresponding to the level of distribution at the minimum point in the curves for each altitude. The existence of the optimum swath is a direct result of the quadratic nature of the optics cost with swath, and the hyperbolic relationship between swath and the number of satellites in the constellation. Neglecting learning curve effects, the total optics
costs over the system therefore scale as (N + 1/N). Constellations with fewer satellites than optimum feature larger swaths, and consequently larger costs for optics, FPA and computation. Systems with more satellites than the optimum have increased costs because the swath does not decrease fast enough to offset the increasing cost of more satellites. This is a good example of when distribution can lower the baseline costs. However, if the revisit time is increased to 60 minutes, the benefits of distribution begin to diminish. This is shown in Figure 3-6. For revisits of longer than an hour, distribution incurs a cost penalty. This is because the swath for long revisit times does not need to be very big for a constellation of any size, and a large distributed system has too much wasted resource. A candidate architecture can be chosen from these curves. The system parameters for a viable, low cost architecture are shown in Table 3.2.

[Figures 3-5 and 3-6 plot total constellation cost ($M, 0-200) against the number of satellites (0-100), with curves for altitudes of 200 km, 400 km, 600 km and 800 km.]

Figure 3-5: Recurring hardware cost versus constellation size for a distributed infrared imaging system with a 25 minute revisit time

Figure 3-6: Recurring hardware cost versus constellation size for a distributed infrared imaging system with a 1 hour revisit time

    Parameter                Value      Notes
    Satellites               50
    Orbital altitude         400 km     200 km too low due to drag
    Revisit time             25 mins    Requirement
    Resolution               30 m       Requirement
    Aperture diameter        6 cm       D = λ r_max / x
    Focal length             55 cm      f = h (d / x)
    Payload mass (per sat)   4 kg       1 gram/cm^3 with 100% margin
    Dry mass (per sat)       20 kg      20% payload mass fraction

Table 3.2: Distributed infrared imaging system parameters

If the proposed microsatellite systems become a reality, the current costing paradigm will change completely. Cost models that scale with unit cost, modified only by a learning curve, are not really applicable to microfabrication or batch processing techniques. The microfabrication of solid-state components involves huge production runs, and so the cost is reasonably insensitive to the actual number produced, being dominated by start-up costs. An interesting caveat to be considered here is the increased component reliability resulting from mass-manufacture. As a result of the manufacturing process, mass-manufactured products have a very low variability in production standards and therefore have a characteristically high reliability.

3.1.5 Reducing the Failure Compensation Cost

In addition to the baseline costs, expenditure is necessary to compensate for any failures that cause a violation of requirements during the lifetime of the system. Expected failure compensation costs can be minimized by lowering the probability of failures, by reducing the impact of failures so that compensatory action is not needed, or by reducing the cost of that action. Clearly the expected failure compensation costs are closely related to the overall reliability of the system. System reliability can be improved by deploying redundancy, or by improving the quality of the components. Both of these options add to the cost of the system. Generally, a larger initial expenditure in improving the system reliability leads to smaller compensation costs. Note that distribution can improve reliability only if there is redundancy in the design. A distributed architecture with total resources that can only just satisfy the demand is a
serial system and is subject to serial failure modes. Under these conditions a failure in any component will lead to a failure of the system. The system reliability would be the product of the reliabilities of the components, decreasing geometrically with the number of serial components. Only by adding redundancy can a distributed architecture take advantage of parallel reliability. System failure of a redundant architecture occurs only if all parallel paths fail. In general, most architectures will require some redundancy to satisfy reliability requirements throughout the expected lifetime. Frequently the cost associated with this redundancy is less for a distributed architecture than it is for traditional systems. This reliability cost accounts for the production, storage and launch of on-orbit or on-the-ground spares necessary to maintain operations. For a distributed system these spares often represent only small fractions of the initial deployment. For collaborative systems, the system degradation is linear with the number of satellite failures. When the number of satellites drops below that needed to satisfy the size of the market, either replacements must be launched, or the system will incur opportunity costs corresponding to the part of the market that is not served. Deployed redundancy simply provides initial capability over and above that necessary to satisfy the market. In the absence of any other compensatory action, the system capabilities will continuously degrade toward the minimum acceptable levels. If enough redundancy is deployed, this point will not be reached within the system's designed lifetime. Redundancy in symbiotic systems has a different role. Individual satellite failures do not have a linear relationship with the degradation of the overall system. In fact, a small number of satellite failures may have no noticeable impact on the system capabilities.
If however, the number of satellites in the cluster falls below some safe limit, the cluster will simply not operate at all. For example, the users of GPS can obtain navigation solutions provided at least four satellites are in view. Usually, many more than this minimum number are visible from any ground location, but should failures occur such that this is not the case, a navigation solution cannot be obtained at all. This can happen in some ground locations if as few as two satellites fail from the existing constellation of 24 satellites. The consequences of failure, in terms of the opportunity cost, are therefore very much greater for symbiotic systems. For certain satellite missions, a distributed architecture may also lower lifetime costs by reducing the cost of any failure compensation that is necessary. A recent design study at MIT [21] showed that distributed systems appear to yield the greatest cost savings under two conditions:
When the components being distributed make up a large fraction of the system cost. It is prudent to distribute the highest cost components among many satellites. Do not put all your eggs in one basket!
When the component being distributed drives the replacement schedule of the spacecraft.
These savings manifest themselves in a number of ways. First of all, for the distributed architecture, the replacements represent only a small fraction of the initial deployment whereas in a traditional design, the entire space segment must be replaced after a failure. Also, the replacements, on average, occur later, thus realizing larger savings from discounting to constant year dollars. The potential savings over traditional singular deployments are demonstrated very well in the following example, taken from the MIT design study.
Example 3.5 Replacement Costs: Polar Orbiting Weather Satellites [21]
Instruments aboard polar orbiting weather satellites, such as the proposed NPOESS system, are classified as either primary or secondary. Because the primary instruments provide critical environmental data records, failure of a primary instrument necessitates replacement. A secondary instrument is one whose failure may be tolerated without replacement. If an orbital plane's complement of sensors is all located on a single satellite, failure of any primary sensor will require redeployment of all the plane's sensors. By distributing the primary instruments intelligently across a cluster of several smaller spacecraft, it may be possible to reduce the cost of the system over its lifetime, because the plane's entire complement of sensors is not redeployed after every failure. Consider the three configurations illustrated in Figure 3-7. The blocks labeled A, B, and C represent three primary instruments required in a given orbital plane.
Figure 3-7: Satellite and Sensor Configurations. Panel 1, {1 sat/1 set}: one satellite per plane, one set of critical instruments (A, B, C) per plane. Panel 2, {1 sat/2 sets}: one satellite per plane, two sets of critical instruments per plane. Panel 3, {3 sats/1 set}: three satellites per plane, one set of critical instruments per plane.
The total costs over a 10 year mission life were calculated for each of the three cluster configurations. As shown in Figure 3-8, the costs over the 10 year period are broken down into three categories: initial deployment, required spares, and expected replacements. Initial deployment includes the development, production, and launch costs for each orbital plane's original complement of spacecraft. The number of required bus, payload, and launch vehicle spares was derived from a Monte Carlo simulation of the mission, assuming reasonable component reliabilities. Figure 3-8 shows that the initial deployment cost is lowest for the {1 sat/1 set} configuration. Adding a redundant sensor to the single satellite configuration greatly increases initial deployment cost in terms of larger bus size, additional instruments, and more expensive launch vehicles. The {3 sats/1 set} configuration, although launched on a less expensive vehicle, is slightly more expensive than the {1 sat/1 set} configuration due to the duplication of bus subsystems and some sensors on each of the three smaller satellites. The figure also shows that adding a redundant sensor increases the cost compared to configurations with a single set of primary instruments. The slight decrease in the failure densities as a result of redundancy does not make up for the expense of additional sensors. Distributing the primary instruments among three satellites significantly increases the reliability of each individual satellite. Higher satellite reliability and lower replacement launch costs result in the {3 sats/1 set} configuration having the lowest expected replacement cost. Once again, the slight increase in reliability gained from adding redundant primary instruments in the {1 sat/2 sets} configuration is outweighed by the higher bus, payload, and launch costs. To summarize, distribution within a satellite mission may reduce the replacement costs over the lifetime of a mission.
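A Monte Carlo replacement-cost comparison of the kind described above can be sketched in a few lines. The failure probability, per-satellite costs, and the simple "replace the failed satellite each year" model below are illustrative assumptions only, not the study's actual inputs.

```python
# Toy Monte Carlo comparison of expected replacement cost for the
# {1 sat/1 set} and {3 sats/1 set} configurations. All inputs are
# illustrative assumptions, not values from the MIT study.
import random

random.seed(1)
TRIALS, YEARS = 5000, 10
P_FAIL = 0.08  # assumed annual failure probability per primary sensor

def expected_replacement_cost(n_sats, sensors_per_sat, cost_per_sat):
    """Mean replacement cost over the mission: a sensor failure forces
    redeploying only the satellite that carries it (replacement assumed
    immediate, so the fleet is always whole at the start of each year)."""
    total = 0.0
    for _ in range(TRIALS):
        for _ in range(YEARS):
            for _ in range(n_sats):
                if any(random.random() < P_FAIL for _ in range(sensors_per_sat)):
                    total += cost_per_sat
    return total / TRIALS

one_sat = expected_replacement_cost(1, 3, 300.0)     # one big bus, 3 sensors
three_sats = expected_replacement_cost(3, 1, 120.0)  # three small buses
print(f"{{1 sat/1 set}}: ${one_sat:.0f}M   {{3 sats/1 set}}: ${three_sats:.0f}M")
```

Even with three times as many launches, the smaller, cheaper replacements give the distributed configuration the lower expected replacement cost, mirroring the trend in Figure 3-8.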
A modular system benefits not only because a smaller replacement component has to be constructed, but also because there are huge savings in the deployment of the replacement. These savings are greatest when the component(s) being distributed make up a large fraction of the system cost and drive the replacement schedule.
3.2 Issues and Problems

There are some factors that are critical to the design of a distributed architecture that were irrelevant to the design of traditional systems. Depending on the application, these issues may be minor hurdles, or could be so prohibitive that the adoption of a distributed architecture is unsuitable or impossible. Some of the important considerations, characteristic of all distributed architectures, and particular to small- and micro-satellite designs, are presented here.
Figure 3-8: Total system costs over the 10 year mission life of a polar orbiting weather satellite system. The chart shows 10 yr non-operating cost (FY97$M), on a scale from 0 to 3000, broken into RDT&E, Initial Deployment, Required Spares, Expected Replacement, and Totals for the {1 sat/1 set}, {1 sat/2 set}, and {3 sats/1 set} configurations, assuming 0.86 sensor reliability over 7 years and a 10 yr bus design life.

3.2.1 Modularity Versus Complexity

The potential benefits from distribution of satellite resources were stressed in the prior sections. It was shown that improvements in cost and performance can result if architectures are carefully designed to utilize a segregation of resources into smaller, more modular components. By allocating individual system functions to separate satellites, and adding redundancy, it was suggested that significant cost savings arise from an increased availability and a reduced failure compensation cost. Furthermore, the enhanced capabilities offered by distributed architectures greatly expand the useful applicability of small- and micro-satellites. For these reasons, an important issue to be addressed is the level to which a system should be distributed. How much can the system be divided into smaller components and still offer the benefits discussed earlier? The central issue here is the trade between the advantages of modularization and the cost of complexity.

Modularization

In distributing the functionality of a system among separate satellites, the system is essentially being transformed into a modular information processing network. The satellites, subsystems and ground stations make up individual modules of the system, each with well-defined interfaces (inputs and outputs) and a finite set of actions. Such systems are analogous to the distributed inter- and intranet computing networks, and as such, are subject to similar mathematics. Distributed computing is a rapidly developing field and a great deal
of work has been done to formalize the analyses [22] [23]. A lot of insight can be gleaned by adopting much of this groundwork. One beneficial aspect of modularization comes from improved fault-tolerance. System reliability is by nature hierarchical in that the correct functioning of the total system depends on the reliability of each of the subsystems and modules of which the system is composed. Early reliability studies [23] showed that the overall system reliability was increased both by applying protective redundancy at as low a level in the system hierarchy as was economically and technically feasible, and by the functional separation of subsystems into modules with well-defined interfaces at which malfunctions can be readily detected and contained. Clearly, subdividing the system into low-level redundant modules leads to a multiplication of hardware resources and associated costs. However, the impact of improved reliability over the lifetime of the system can outweigh these extra initial costs. There are additional factors supporting modularization that are specific to distributed satellite systems. As discussed earlier, the baseline costs associated with a system of small satellites may be lower than for a monolithic satellite design. Of even greater impact is the lower replacement cost required to compensate for failure. A modular system benefits not only because a smaller replacement component has to be constructed, but also because of the huge savings in its deployment. All of these factors suggest that a system should be separated into modules that are as small as possible. However, there are some distinct disadvantages of low-level modularization that must be considered. The most important of these are the costs and low reliability associated with very complex systems.
Complexity

The complexity of a system is well understood to drive development costs and can significantly impact system reliability. In many cases, complexity leads to poor reliability as a direct result of the increased difficulty of system analyses; failure modes were missed or unappreciated during the design process. For a system with a high degree of modularity, these problems can offset all of the benefits discussed above. Although each satellite in a distributed system might be less complex, being smaller and having lower functionality, the overall complexity of the system is greatly increased. The actual level of complexity exhibited by a system is difficult to quantify. Generally, however, it is accepted that the complexity is directly related to the number of interfaces between the components of the system. Although the actual number of interfaces in any system is architecture specific, it is certainly true that a distributed system of many satellites has more interfaces than a single satellite design. Network connectivity constraints mean that the
number of interfaces can increase geometrically with the number of satellites in a symbiotic architecture. This is an upper bound; collaborative systems exhibit linear increases in interfaces with the number of satellites. The complexity of a distributed system is therefore very sensitive to the number and connectivity of the separate modules. The impact of this additional complexity is difficult to evaluate, especially without a formal definition of how complexity is measured. Recent studies at MIT [24], [25] would, however, suggest that complexity can cause significant increases in development and qualification time, increases in cost, and losses of system availability. For these reasons, the level of modularization must be carefully chosen. Only with thorough system analysis and efficient program management can the impacts of complexity be minimized.
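The two scaling regimes above can be illustrated with a short sketch. A fully connected symbiotic cluster, in which every pair of satellites interacts, is assumed as the upper bound; the collaborative case assumes each satellite interfaces only with a common ground segment.

```python
# Interface-count scaling with the number of satellites N, assuming a fully
# connected symbiotic architecture (every satellite pair interacts) versus
# a collaborative one (each satellite talks only to the ground segment).

def symbiotic_interfaces(n):
    return n * (n - 1) // 2   # pairwise links: quadratic growth

def collaborative_interfaces(n):
    return n                  # one link per satellite: linear growth

for n in (2, 10, 50):
    print(f"N={n:3d}  symbiotic={symbiotic_interfaces(n):5d}  "
          f"collaborative={collaborative_interfaces(n):3d}")
```

At 50 satellites the symbiotic bound already implies over a thousand interfaces, which is why complexity is so sensitive to the connectivity of the modules.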
A Lower Bound on the Size of Component Satellites

From very basic analyses, a lower bound on the size of the satellites of a distributed system can be estimated. Recall that there is a continuity constraint on the information flow through satellites. The delivery of the information to the next node in the network need not be immediate. For some applications, it is preferable to use a store-and-forward method of delivery. Here, the information is stored on board the satellite until such a time that it can be transmitted to the destination node. The continuity constraint therefore enforces that at all times, the information flowing into a satellite must be either stored or transmitted. This leads to two simple statements, and two associated bounds. Firstly, in order to maintain extended operations, over the course of a single duty period the net information transmitted by a satellite must be equal to that received. The energy conversion system of the satellite must be able to support this net transmission of information over the same duty period. If the satellite cannot provide enough energy to allow transmission of this quantity of information, requirements cannot be satisfied. The amount of energy required depends on the integrity requirements (driving the energy per bit), the distance to the destination node (free space loss), and the transmitter/receiver characteristics. The satellite must also provide the energy needed to receive the information in the first place. This is a small factor for passive signal detection, but can dominate in active systems. Secondly, the amount of new information stored on the satellite at any instant is the difference between the rate of information collection and the rate of transmission. This gives a bound on the minimum data storage requirements placed on satellites. The net storage at some time t is the integral of this difference, from an initial time at the start of the duty cycle to the current time t. For store-and-forward systems, the value of this integral initially increases with time as more data is stored, and then decreases to zero at the end of the duty cycle, when all the data has been downloaded. The maximum value of this integral defines the data storage capacity requirement. Note that this can be a very costly requirement to satisfy, especially for remote sensing systems. Consider, for example, a single panchromatic image of a 25 km square scene at 1 m resolution with 256 shades of gray. This single, modest image requires 625 megabytes of storage capacity [26]. Compression techniques can help, reducing this figure by as much as an order of magnitude. However, in order to store any reasonable number of images, it is clear that a great deal of storage capacity is required on the satellite. Data storage devices are typically heavy and power hungry, and can consume a substantial portion of the satellite's resources. This is true for large satellites such as SPOT and Landsat, which weigh over 1000 kg, and so it would seem unlikely that small satellites with modest resources could satisfy similar storage requirements. Solid state storage devices are now available that relieve this problem somewhat. In 1993, the state of the art in solid state storage could buffer 1 Gbit of data in static RAM, consuming only 5 W of power [17]. Nevertheless, data storage requirements place a severe constraint on the smallest size for a remote sensing satellite. Of course, in distributing the task of collecting data among the many elements of a distributed system, the storage requirements for each satellite are reduced. This may actually enable large constellations for use in remote sensing applications.
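The storage bound can be sketched numerically: the required capacity is the peak of the running integral of (collection rate minus downlink rate) over the duty cycle. The collection and downlink rates and the duty-cycle profile below are illustrative assumptions; only the image-size check uses the figures quoted in the text.

```python
# Storage sizing from the continuity constraint: stored(t) is the integral
# of (collection rate - downlink rate) over the duty cycle; its maximum
# sets the required capacity. Rates and duty profile are assumed values.

# Sanity check on the image size quoted above: a 25 km x 25 km scene at
# 1 m resolution with 256 grey levels (1 byte per pixel).
pixels = 25_000 * 25_000
image_bytes = pixels * 1          # 625,000,000 bytes = 625 MB

collect_rate = 50e6               # bits/s while imaging (assumed)
downlink_rate = 120e6             # bits/s while a station is visible (assumed)
profile = [(300, True, False),    # (seconds, imaging?, downlinking?)
           (200, False, False),   # coasting, no contact
           (300, False, True)]    # ground-station pass

stored, peak = 0.0, 0.0
for seconds, imaging, downlinking in profile:
    rate_in = collect_rate if imaging else 0.0
    rate_out = downlink_rate if downlinking else 0.0
    stored = max(0.0, stored + (rate_in - rate_out) * seconds)
    peak = max(peak, stored)

print(f"image size: {image_bytes/1e6:.0f} MB, peak storage: {peak/1e9:.1f} Gbit")
```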
Example 3.6 Data Storage: Distributed Infrared Imaging System
Return to the distributed imaging system described in Example 3.4 to estimate the storage requirements on each satellite. Assume each pixel has a 4-bit value, corresponding to 16 shades of gray. The data rate of the IR detector on each satellite is then given by multiplying these 4 bits by the product of the number of pixels across the swath width of the instrument (Nx) and the rate at which they are crossed (V/x). The data must be stored until an opportunity for downlink arises. The maximum downlink interval is set by the requirement on the responsiveness of the system. For near real-time applications, downlink opportunities must come frequently. This is helpful for distributed systems since storage capacities are limiting, and the interval between downloads must be as short as possible. Assuming a 25 minute revisit time for imaging, a 5 minute interval between downlinks, and a minimum elevation from the ground station to the satellites of 20 degrees, Figure 3-9 shows the data storage and downlink communication requirements on the satellites. The figure shows that a 1 Gbit storage device is sufficient provided the system has more than approximately 10 satellites. Communication data rates are high, but manageable for constellations with more than 50 satellites at altitudes above 400 km.
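The detector data-rate product described above can be sketched as follows. The swath pixel count, ground-track speed and pixel size are assumed values for illustration, not the parameters of Example 3.4.

```python
# Detector data rate per Example 3.6: (bits/pixel) x (pixels across the
# swath, Nx) x (crossing rate V/x). Nx, ground speed V and pixel size x
# are assumed values, not taken from the example.

bits_per_pixel = 4        # 16 grey shades, as in the example
Nx = 6000                 # assumed pixels across the swath
V = 7000.0                # assumed ground-track speed, m/s (LEO)
x = 30.0                  # assumed along-track pixel size, m

data_rate = bits_per_pixel * Nx * (V / x)      # bits/s off the detector
storage_per_downlink = data_rate * 5 * 60      # buffered over the 5 min
                                               # interval between downlinks

print(f"detector rate: {data_rate/1e6:.1f} Mbit/s, "
      f"buffered per downlink: {storage_per_downlink/1e9:.2f} Gbit")
```

Under these assumptions a single satellite buffers under 2 Gbit per downlink interval; distributing the imaging task across N satellites divides the collection rate, and hence the buffer, accordingly.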
Figure 3-9: Data storage and communication data rates for a distributed imager with 25 minute revisit time, 5 minute interval between downloads. The chart plots Storage per Satellite (bits) and Downlink Rate (bits/s), on logarithmic axes from 1.00E+05 to 1.00E+10, against Number of Satellites (0 to 100) for altitudes of 200 km, 400 km, 600 km, and 800 km.

3.2.2 Clusters and Constellation Management [27, 28]

In Section 3.1.2 it was argued that by combining the capabilities of many individual elements, systems of small- or micro-satellites can be used for high rate or resolution applications. For these applications, the relative positions and dynamics of the satellites in the cluster are a critical factor in the design. There are two options:

- Local clusters, in which a group of satellites fly in formation. The relative positions of the satellites are controlled within specified tolerances.

- Virtual clusters, in which a subset of satellites from a large constellation make up the cluster. The actual constituent satellites and their positions constantly change subject to the orbital dynamics of the constellation.

The most suitable choice depends on the application. Consider, for example, using a cluster to form a sparse aperture for high resolution imaging of terrestrial or astronomical targets. By coherently adding the signals received by several satellites, the cluster would create a sparse aperture many times the size of a real aperture. The phasing and the optical paths of the electromagnetic waves must be carefully controlled so that the signals combine coherently. The tolerance is typically λ/20. Station keeping is therefore a large problem for a local cluster. Provided that the satellite positions are controlled within this tolerance, the processing requirements on the satellites are reduced. Conversely, the element positions of a
virtual cluster continuously vary, and so to correctly phase the signals these locations must be known with a high degree of accuracy at all times. As a result, the virtual cluster has slack station keeping requirements, but needs a great deal of intersatellite communication and processing to ensure coherence. This coherence issue is discussed later in Section 3.2.3. The remainder of this section details the propulsion requirements necessary to maintain the relative positions of satellites orbiting in a local cluster.
Equations of Motion

The relative motion of satellites in a local cluster can be predicted by linearized perturbations of the equations of motion about a reference orbit. The linearization is valid if the cluster diameter is small compared to the radius of the reference orbit. For circular reference orbits, the linearized set of equations are known as Hill's equations [29],

\ddot{x} - 2n\dot{y} - 3n^2 x = a_x
\ddot{y} + 2n\dot{x} = a_y
\ddot{z} + n^2 z = a_z        (3.9)
where n is the frequency of the reference orbit in radians per second, and the acceleration terms on the right-hand side represent all non-central force effects (drag, thrust, gravity perturbations, etc.). The right-handed local coordinate frame has x pointing up and y aligned with the velocity direction of the reference orbit. These equations can be used to estimate the propulsive requirements placed on satellites constrained to orbit in clusters. The different cluster configurations are defined by different degrees of freedom in (x, y, z) in Eqn. 3.9. There are, in fact, many ways to create local clusters. Two will be considered here. The first is to fly the satellites in rigid formation, maintaining their relative positions and orientation. This involves constant values of (x, y, z) for each satellite, since the position is fixed relative to the reference orbit. The second option, which usually proves to be more realizable, is to allow the cluster configuration to rotate, maintaining only the relative intersatellite separations. This option has (x, y, z) constrained to follow circular trajectories around the reference orbit. The propulsion requirements for each of these options are described below.
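As a sketch, Eqn. 3.9 can be integrated numerically to exhibit the free (unforced) relative motion. The initial conditions below are chosen to give the well-known bounded free solution x = A cos(nt), which requires an along-track velocity of -2nA; the orbit rate and amplitude are illustrative values.

```python
# Minimal RK4 integration of Hill's equations (Eqn. 3.9) with zero applied
# acceleration, illustrating bounded free relative motion about the
# reference orbit. n and A are illustrative values.
import math

n = 0.0011  # rad/s, roughly a LEO reference orbit

def deriv(state):
    x, y, z, vx, vy, vz = state
    ax = 2*n*vy + 3*n*n*x    # Hill's equations with a_ext = 0
    ay = -2*n*vx
    az = -n*n*z
    return (vx, vy, vz, ax, ay, az)

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5*dt*k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5*dt*k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt*k for s, k in zip(state, k3)))
    return tuple(s + dt/6*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Bounded free orbit: x = A cos(nt), which requires vy(0) = -2 n A.
A = 100.0
state = (A, 0.0, 0.0, 0.0, -2*n*A, 0.0)
period = 2*math.pi/n
dt = period/2000
for _ in range(2000):
    state = rk4_step(state, dt)
print(f"x after one orbit: {state[0]:.2f} m (started at {A} m)")
```

With any other initial velocity the along-track coordinate drifts secularly, which is precisely why maintaining an arbitrary rigid formation requires continuous thrusting.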
Rigid clusters

Since the reference orbit is assumed to be a circular Keplerian orbit, rigid clusters must feature some satellites in non-Keplerian orbits. These non-Keplerian orbits are characterized
by either a focus which is not located at the Earth's center of mass, or orbital velocities which do not provide the proper centrifugal acceleration to offset gravity at that altitude. The Earth's gravity will act to move these satellites into Keplerian orbits, giving rise to "tidal" accelerations exerted on the satellites that are a function of the cluster baseline and orbit altitude. To maintain relative position within the cluster, these accelerations must be counteracted by thrusting. The required amount of thrusting to maintain the cluster for a single orbit can be estimated from Eqn. 3.9, by setting all time-varying terms to zero and integrating over a single orbital period (\tau = 2\pi/n). The result is that,

\Delta V \text{ (per orbit)} = 2\pi n \sqrt{9x^2 + z^2}        (3.10)

If the cluster diameter is R_0 = \sqrt{x^2 + y^2 + z^2}, the above result suggests that the \Delta V requirements for rigid clusters scale to first order as 10 n R_0 m s^{-1} per orbit. At LEO altitudes, n \approx 0.001 rad/s, and so \Delta V \approx 0.01 R_0 m s^{-1} per orbit. For a particular propulsion specific impulse¹ (I_{sp}) and propellant mass fraction f_p, the lifetime as a function of the \Delta V per orbit is [28],

\text{Lifetime (years)} = \frac{-I_{sp}\, g \ln(1 - f_p)}{\Delta V_{\text{orbit}} \times (\text{orbits per year})}        (3.11)
where g is the gravitational field strength. For reasonable propellant mass fractions of around 10%, even propulsion systems with high specific impulse (2600 sec) cannot maintain a 100 m cluster in LEO for more than six months. This makes the implementation of rigid clusters extremely unlikely.
Circular (dynamic) clusters

An alternative to holding the clusters rigidly is to allow the satellites to rotate in circles around each other, in a plane defined by a normal aligned in the viewing direction, such that their relative separations are preserved. In this case, (x, y, z) are only constrained to lie on a circle. The period of rotation of the circle is the same as the orbital period, so that the satellites have the same natural frequency as the reference orbit. In the plane of the cluster, the general motions of the satellites are described by,
x' = 0
y' = R_0 \cos(nt + \phi)
z' = R_0 \sin(nt + \phi)        (3.12)

¹The specific impulse of a thruster is a measure of the thrust per unit mass flow rate of propellant.
where t is time and φ is a phase angle. These equations can be transformed into the Hill frame by a standard rotational transformation through the azimuth and elevation angles of the line of sight to the nadir (negative x) direction. This gives the constraints on (x, y, z) for satellites in the Hill frame. Sedwick [28] integrates Eqn. 3.9 with these constraints over all azimuth and elevation angles for a single orbit, and concludes that at worst, the ΔV per orbit scales as 3nR₀. From the same arguments as were made for rigid clusters, this leads to lifetimes of at most 18 months for LEO clusters of 100 m diameter. However, there are some viewing angles (side-looking at 30° off-nadir) at which no ΔV is required to maintain the circular configuration. These represent free-orbit solutions to the problem. If this off-axis viewing angle can be tolerated, the propulsion requirements are reduced to only that needed to overcome perturbations. Over reasonable cluster sizes, these perturbations exert negligible differential forces to distort the cluster, and only act to perturb the cluster as a rigid body. The results presented in this section would suggest that maintaining clusters is prohibitively difficult if the cluster is required to move only as a rigid body. This is unfortunately the requirement placed on optical interferometers, which need differential paths to be preserved very accurately such that the same wavefront is measured at the different apertures in real time. However, if a sparse array is to be formed at radio frequencies, there is a possibility for the signals from different satellites to be combined during post-processing after digitization. Time delay can easily be introduced during the interfering process, and the distance of any given satellite from the source is no longer an issue. This relaxes a degree of freedom in Eqn. 3.9, since the satellites are no longer bound to move in a plane. No results have yet been presented, but it is suggested that great propellant savings will be realized from allowing this behavior [28].
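The rigid and circular lifetime estimates can be reproduced from the worst-case scalings quoted above (roughly 10nR₀ and 3nR₀ m/s per orbit respectively) together with the rocket-equation ΔV budget. The 10% propellant fraction, 2600 s specific impulse and LEO orbit rate follow the text; treating the scalings as exact per-orbit costs is a simplifying assumption.

```python
# Cluster-maintenance lifetimes from the worst-case DV scalings quoted in
# the text: rigid clusters ~10*n*R0 m/s per orbit, circular (dynamic)
# clusters ~3*n*R0 per orbit. The scalings are treated as exact per-orbit
# costs, which is a simplification.
import math

g, Isp, fp = 9.81, 2600.0, 0.10   # high-Isp system, 10% propellant fraction
n, R0 = 0.0011, 100.0             # LEO orbit rate (rad/s), 100 m cluster

orbits_per_year = 365.25 * 86400 / (2 * math.pi / n)
dv_budget = -Isp * g * math.log(1 - fp)   # total DV capacity, m/s

lifetimes = {}
for name, scale in (("rigid", 10.0), ("circular", 3.0)):
    dv_per_orbit = scale * n * R0
    lifetimes[name] = 12 * dv_budget / (dv_per_orbit * orbits_per_year)
    print(f"{name:8s}: {lifetimes[name]:5.1f} months")
```

The result lands near six months for the rigid cluster and near eighteen months for the circular one, consistent with the figures quoted in the text.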
3.2.3 Spacecraft Arrays and Coherence

Some people have proposed the use of large symbiotic clustellations of satellites, each with a small antenna, for forming extremely large sparse apertures in space. This spacecraft array concept has received a great deal of attention from many sectors of the space community, due mostly to the potential it offers for high resolution sensing and communications. The symbiotic spacecraft array idea was introduced in Example 3.1 and discussed in Section 3.1.2. To recap, a large thinned aperture is formed from a set of satellites, each acting as a single radiator element. The angular resolution of any aperture scales with the overall aperture dimension, expressed in wavelengths. The SNR achievable by the array is directly proportional to the number of constituent elements. The spacing of the elements is
much larger than one-half of the wavelength, and so grating lobes are avoided only if there are no periodicities in the elemental locations. Unfortunately, there are many technical difficulties involved with the design and construction of such a system, mainly due to the requirement for signal coherence between large numbers of widely separated apertures. This is especially true for systems intended for Earth observation. Interferometric techniques are not well suited to Earth observation from orbit since the Earth forms an extended source, unlike the astronomical sources which lie embedded in a cold cosmic background. This forces a need for very high SNRs and high sampling densities [30], leading to designs featuring a large number of satellites. For high resolution imaging applications, requiring either long integration times or high SNRs, the situation is made worse by the forward motion of LEO satellites limiting the time over the target. This forces more simultaneous measurements to be made in order to reach the required SNR, and therefore requires even greater numbers of elements. Furthermore, although there may be no grating lobes to consider, the thinned and random array will exhibit large average sidelobe levels. For randomly distributed arrays, the ratio of the power in the main lobe to the average sidelobe power is approximately equal to the number of elements in the array [31]. For most detection applications, the maximum sidelobe power should be much lower (more than 10 dB lower) than the main beam power. Using this measure gives bounds on the order of 10 for the minimum number of satellites that must be used to form the sparse array. As will be shown later in Chapter 7, this is a reasonable approximation. The formation of sparse apertures using a large number of satellites is complicated by data processing, presenting a barrier to the adoption of sparse array technology.
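The sidelobe argument above yields the minimum element count directly: since the main-lobe to average-sidelobe power ratio of a random thinned array is approximately N [31], demanding sidelobes at least 10 dB below the main beam gives N on the order of 10.

```python
# Minimum element count from the sidelobe argument: for a randomly thinned
# array, main-lobe power / average-sidelobe power ~= N (the number of
# elements), so a required suppression of S dB implies N >= 10**(S/10).
import math

required_suppression_db = 10.0
n_min = 10 ** (required_suppression_db / 10.0)
print(f"minimum elements for {required_suppression_db:.0f} dB: {n_min:.0f}")

# Equivalently, the average sidelobe suppression achieved by N elements:
for N in (10, 100, 1000):
    print(f"N={N:5d}: ~{10*math.log10(N):.0f} dB below the main beam")
```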
The most generic problems, common to both active and passive clusters, involve using spacecraft arrays as receivers. The signals from each element of a receiver must be combined coherently. The data processing requirement scales quadratically with the number of elements, and the equipment becomes very costly as the aperture size grows. The actual exchange of signals between receivers and combiners also poses a difficult challenge. For an interferometer, the exchange is done simply by routing the analogue signals from the pair of collectors to a common combiner, constraining the optical paths to be equal in each case. It is difficult to adopt the same strategy for arrays with many elements. Since the satellites are remote from each other, there is no easy way of simultaneously combining the signals from all of the different elements in analogue form. For these arrays, the combining is more easily done during post-processing, after digitizing the signals while preserving the phase. This effectively limits the applicability of spacecraft arrays for passive sensing. A passive receiving spacecraft array must record information over a reasonably long period of time to integrate the SNR. This then necessitates enormous storage capacity on board each satellite, since all the phase information must be preserved. Sampling the carrier wave at
the Nyquist limit with 8 bit quantization would result in storage rates of 96 Gb/s for an X-band detector. Even with high-speed buffers, the required storage capacity after only a few seconds of integration time is prohibitive. Of course, the receiver can filter and mix the input signal down to a lower intermediate frequency (IF) before the A-D conversion, greatly reducing the load on the data processing. This results in no loss of information provided the information bandwidth is known to be small compared to the carrier frequency. In general, the bandwidth of the information may be as high as the receiver bandwidth. Sometimes, however, the nature of the target is such that the information content is known to be bandlimited over a reasonable range (kHz-MHz). In these cases digitization and storage can still be problematic, but at least manageable. An active symbiotic system may benefit here, since the characteristics of the transmitted signal are known, and the information content is limited only to the changes observed in the received signal. Spacecraft arrays are also unsuitable as active transmitters for tracking applications. These systems track targets with a narrow beam, optimizing the signal to noise from the target while nulling the clutter and noise. Correct phasing of the array at the desired angle and range to illuminate the target must be performed in real time. Returns from the target are used by a feedback controller to vary the phase at each element in order to steer the beam. To do this, each array element must have accurate information about the relative position of all other array elements. Continuous communication between satellites is needed. Furthermore, the time constant for the detection (including signal reception, combining, processing, and phasing of the transmissions) must match the dynamics of the target. For small local clusters, the slow dynamics of the array may allow this to be carried out if the processing capability exists.
For virtual clusters this would be a very tricky task given the dynamic nature of the system.
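The storage-rate arithmetic behind the passive-receiver problem is simple Nyquist bookkeeping. The 6 GHz analog bandwidth below is an assumption chosen to be consistent with the 96 Gb/s figure quoted above; the 10 MHz IF bandwidth is likewise illustrative.

```python
# Storage-rate arithmetic for a passive receiving array element: Nyquist
# sampling of the full RF band versus mixing down to a narrow IF first.
# The 6 GHz RF bandwidth is an assumption chosen to match the 96 Gb/s
# figure quoted above; the 10 MHz IF bandwidth is likewise illustrative.

bits = 8  # quantization

def storage_rate(bandwidth_hz):
    """bits/s needed to record a real signal sampled at the Nyquist rate."""
    return 2 * bandwidth_hz * bits

rf = storage_rate(6e9)        # direct sampling of the X-band detector
if_band = storage_rate(10e6)  # after mixing to a band-limited IF

print(f"direct RF: {rf/1e9:.0f} Gb/s, narrow IF: {if_band/1e6:.0f} Mb/s")
```

The three-orders-of-magnitude reduction is what makes digitization manageable when the information content is known to be bandlimited.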
3.3 Summary

Distributed architectures are enabling for small satellite designs because they expand their useful range of applications to include high rate and resolution sensing and communications. The capabilities of many small satellites are combined to satisfy mission requirements. A distributed architecture makes sense if it can offer reduced cost or improved capabilities. Distribution can offer improvements in isolation, rate, integrity and availability. The improvements are not all-encompassing, and in many cases are application specific. Nevertheless, it appears that adopting a distributed architecture can result in substantial gains compared to traditional deployments. Some of the more important advantages that distribution may offer are:
- Improved isolation, corresponding to the large baselines that are possible with widely separated antennas on separate spacecraft within a cluster.

- Higher net rate of information transfer, achieved by combining the capacities of several satellites in order to satisfy the local and global demand.

- Improved availability through a reduced variance in the coverage of target regions. This reduces the need to "overdesign" and provides more opportunities for a favorable viewing geometry.

- Staged deployment on an as-needed or as-afforded basis.

- Progressive technology insertion and modular upgradeability, reducing the impact of technology-freeze dates.

- Improved reliability through redundancy and path diversity.

- Lower failure compensation costs due to the separation of important system components among many satellites; only those components that break need replacement.
There are some problems, specific to distributed systems of small satellites, that must be solved before the potential of distributed architectures can be fully exploited. The most notable of these problems are:
- An increase in system complexity, leading to long development times and high costs.

- Inadequacy of the data storage capacity that can be supported by the modest bus resources on micro-satellites.

- Difficulty of maintaining signal coherence among the apertures of separated spacecraft arrays, especially when the target is highly dynamic.

- Need for autonomous operations; if autonomy is not implemented, operations costs will dominate, and for symbiotic systems human intervention may not be sufficiently timely.
The resolution of these issues, and the proliferation of microtechnology, could lead toward a drastic change in the satellite industry. It seems clear that distribution offers a viable and attractive alternative for some missions. Large constellations of hundreds or thousands of small- and micro-satellites could feasibly perform many of the missions currently carried out by traditional satellites. For some of those missions, the utility and suitability of distributed systems look very promising. More analysis is warranted in order to completely answer the question of where and when distribution is best applied, but the potential prospects of huge cost savings and improvements in performance are impossible
to ignore. It therefore seems inevitable that massively distributed satellite systems will be developed in both the commercial and military sectors. We are living in a time of great changes, and the space industry has not escaped. Over the last few years, "faster, cheaper, better" has been the battle cry of those engineers and administrators trying to instigate changes to improve the industry. "Smaller, modular, distributed" may be their next verse.
Chapter 4
Development of the Quantitative Generalized Information Network Analysis (GINA) Methodology

If you know a thing only qualitatively, you know it no more than vaguely. If you know it quantitatively -- grasping some numerical measure that distinguishes it from an infinite number of other possibilities -- you are beginning to know it deeply. You comprehend some of its beauty and you gain access to its power and the understanding it provides. Being afraid of quantification is tantamount to disenfranchising yourself, giving up on one of the most potent prospects for understanding and changing the world.
Carl Sagan, "Billions & Billions", 1997
4.1 Motivation

There are many different ways to design satellite systems to perform essentially the same task. In order to compare alternate designs, metrics are required that fairly judge the capabilities and performance of the different systems in carrying out the required task. In today's economic climate, there is also a requirement to consider the monetary cost associated with different levels of performance. Due to the extremely large capital investment required for any space venture, it is especially important for satellite designers to provide the customer with the best value. The case in point here is that for a distributed system to make sense compared to another way of achieving the function, it must offer reduced cost for similar levels of performance. This hints at the possible benefits of a definable cost per (functional) performance metric. Capability, performance and cost metrics can be used as
design tools by addressing the sensitivity in performance and cost to changes in the system components, or by identifying the key technology drivers. This leads to the definition of the adaptability metric that quantifiably measures the sensitivity to changes in the design or role. Any metric used for comparative analysis should be quantifiable and unambiguous. A measurable metric therefore requires a formal definition that leads to a calculable expression. Unfortunately, satellite engineering analysis has traditionally been treated on a case-by-case basis. Each new satellite system is designed and judged by its own set of rules for a specific, narrowly defined task. This has meant that any formal definition of a metric has been specific and relevant only to systems of the same architecture. It is therefore necessary to develop a generalized and formal framework for defining quantifiable metrics for cost, capability, performance, and adaptability. A major goal of this research has been to formally define the three metrics of Capability, Cost per Function, and Adaptability, such that the analysis techniques are common to all systems, regardless of application or architecture.
4.2 Satellite Systems as Information Transfer Networks

The primary enabler for a generalized analysis framework is that for all current applications, satellite systems essentially perform the task of collection and dissemination of information.
4.2.1 Definition of the Market

Information transfer systems exist only to serve a market: a demand that specific information symbols be transferred from some set of sources to a different set of, presumably remote, sinks. This origin-destination (O-D) market is distinct from the systems built to satisfy it, and is defined by the requirements of the end-users (at the sinks). In most cases the information is transferred in the form of a digital data stream.1 The information symbol is the atomic piece of information demanded by these end-users. The symbol for communication systems is either a single bit or a collection of bits. For imaging systems, the symbol is an image of a particular scene. This compound symbol has many component pixels, each of which has some value, defined by a sequence of data bits. The symbol is the image and not the individual pixel because pixels on their own carry little or no information and are of no use to the end-user, who demands images. As described in Chapter 2, the symbol for a navigation system is a user navigation solution. This is an interesting example since it demonstrates the distinction between the market

1 Even those systems featuring analogue detection, such as optical imaging, almost always feature analogue-digital conversion before transmission to the end user.
and the system implemented to satisfy it. The NAVSTAR GPS system does not transfer user navigation solutions through its satellites, but simply relays the satellite's position and time to the end-users to enable a range measurement between the user and the satellite. If the user terminals are able to calculate the pseudoranges to at least four satellites, the required information symbol can be constructed. Note that the necessary information is embedded in the signals from several different satellites, and is only "assembled" into the required form inside the user terminal. This dichotomy between supply and demand is common in network flow problems and is essential for the analysis of augmented or hybrid architectures. These are architectures composed of several dissimilar systems that together perform the overall mission. An example is the combined use of space assets and unmanned aerial vehicles (UAVs) for battlefield reconnaissance.
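The "assembly" of the navigation symbol inside the user terminal can be sketched numerically. The following is a minimal illustration, not the NAVSTAR algorithm: the satellite positions, units and measurement values are invented, and the solver is a plain Newton iteration for the user position (x, y, z) and receiver clock bias b from four pseudoranges ρᵢ = |satᵢ − user| + b.

```python
import math

def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def navigation_solution(sats, pseudoranges, guess=(0.0, 0.0, 0.0, 0.0)):
    """Newton iteration for user position (x, y, z) and clock bias b,
    given pseudoranges rho_i = |sat_i - user| + b to four satellites."""
    x, y, z, b = guess
    for _ in range(10):
        H, resid = [], []
        for (sx, sy, sz), rho in zip(sats, pseudoranges):
            r = math.sqrt((sx - x) ** 2 + (sy - y) ** 2 + (sz - z) ** 2)
            # Partial derivatives of the pseudorange w.r.t. (x, y, z, b)
            H.append([(x - sx) / r, (y - sy) / r, (z - sz) / r, 1.0])
            resid.append(rho - (r + b))
        dx = solve_linear(H, resid)
        x, y, z, b = x + dx[0], y + dx[1], z + dx[2], b + dx[3]
    return x, y, z, b

# Synthetic check: place a user, invent satellites, generate exact pseudoranges
user, bias = (1.0, 2.0, 3.0), 0.5
sats = [(20.0, 0.0, 5.0), (0.0, 20.0, 5.0), (-15.0, -10.0, 8.0), (5.0, -18.0, 12.0)]
rhos = [math.sqrt(sum((s - u) ** 2 for s, u in zip(sat, user))) + bias
        for sat in sats]
est = navigation_solution(sats, rhos)
```

Because the information symbol (the navigation solution) only exists after this computation, the market is served jointly by the satellite signals and the terminal itself.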
4.2.2 Functional Decomposition and Hierarchical Modeling

A satellite system can be represented as a modular information processing network. The satellites, subsystems and ground stations make up individual modules of the system, each with well-defined interfaces (inputs and outputs) and a finite set of actions. This abstraction allows satellite system analysis to be treated as a network flow problem. System analysis is then reduced to characterizing how to: "move some entity [information] from one point to another in an underlying network . . . as efficiently as possible, both to provide good service to the users . . . and to use the underlying transmission facilities effectively" [32]. The network representation of the satellite system provides the framework for quantitative system analysis, based on the mathematics of information transmission and network flow. If the interaction between each module and the information signal can be estimated, the characteristics of the information arriving at the sinks can be calculated. Correct representation of the satellite system as an information network requires a functional decomposition of the system into its most important functional modules. The functional modules are those elements of the system that impact the transfer of information from source to sink. Note that functional modules do not necessarily represent system hardware; a rain cloud can assuredly affect radio communication to a satellite, but it is not conventionally considered a system component. In fact, other than for component reliability estimation, the actual hardware configuration of a subsystem is of little interest to the network modeler. Of much greater importance is correctly modeling the functional interaction between a module and the information signals being transferred. Figure 4-1 shows a simplified network for a system consisting of a single communication satellite. The system transfers data between a set of users utilizing eight spot beams, which are the input and output interfaces for the satellite.

Figure 4-1: Top-level network representation of a single communication satellite (sources and sinks connected through Rx and Tx spot beams to a single satellite node)

At this most basic level of abstraction, the network is modeled to comprise only the source and sink nodes,2 the satellite node and the interfaces between them. This level of detail is probably too simplistic for any useful system analysis. Figure 4-2 shows the network for the same system modeled with a finer level of functional decomposition. In this more detailed model, the signal from a source node passes through modules representing the effects of atmospheric rain attenuation, space loss, and cross-channel interference, before being collected at a receiver module on the satellite. For diagrammatic simplicity, only one spot beam is drawn on the uplink. The signal from this receiver is passed (along with the signals from the other seven receivers not shown in the diagram) through a multi-channel module representing the satellite digital signal processor (DSP). This module interprets the information symbols and re-routes them to the correct satellite transmit modules. Again, only one channel is shown for simplicity. The downlink has similar attenuation and interference modules, a user receiver and a DSP, and terminates at a sink node. Clearly, this lower-level model is a more accurate representation of the real system. The network model can be further augmented by including additional support modules that are not part of the primary information pathway. For instance, modules representing the power generation system, the propulsion system or attitude control system of the satellite could be added. These support modules provide the other primary functional modules with enabling signals (power, propulsion, control, etc.). The functional modules must receive these enabling signals in order to transfer the information symbols correctly. The inclusion of these support modules in the network adds a further level of detail to the analyses.

2 Provided their interface with the network is similar, the users within each spot beam can be grouped as a single node.
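The modular network view lends itself to a very small simulation. The sketch below is hypothetical, with placeholder gains and noise powers rather than a real link budget; it shows only the mechanics of propagating a signal through a source-to-sink chain of functional modules like those of Figure 4-2.

```python
import math

# Each functional module is modeled by its effect on the link: a power
# gain/loss applied to the signal, plus any noise it injects (watts).
class Module:
    def __init__(self, name, gain_db=0.0, added_noise_w=0.0):
        self.name = name
        self.gain = 10.0 ** (gain_db / 10.0)
        self.added_noise_w = added_noise_w

    def transfer(self, signal_w, noise_w):
        # The module scales both signal and incoming noise, then adds its own.
        return signal_w * self.gain, noise_w * self.gain + self.added_noise_w

def propagate(path, signal_w):
    """Push a signal through a chain of functional modules, tracking noise."""
    noise_w = 0.0
    for module in path:
        signal_w, noise_w = module.transfer(signal_w, noise_w)
    return signal_w, noise_w

# Hypothetical uplink: the numbers are placeholders, not a real link budget.
uplink = [
    Module("rain attenuation", gain_db=-3.0),
    Module("space loss", gain_db=-180.0),
    Module("interferers", added_noise_w=1e-19),
    Module("satellite receiver", gain_db=60.0, added_noise_w=1e-14),
]
sig, noise = propagate(uplink, signal_w=10.0)  # 10 W transmitted
snr_db = 10.0 * math.log10(sig / noise)
```

Support modules could be added as preconditions on each `transfer` call (no power, no transfer), which is how the enabling-signal idea above would enter such a model.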
Figure 4-2: Detailed network representation of a communication satellite (source, rain, space loss, interferers, RX, DSP and TX modules on the uplink, mirrored on the downlink to the sink)

This hierarchical nature of the network modeling allows the detail and accuracy of the analyses to be customized depending on the application. For example, at the conceptual design stage, the analyses may only have to predict the feasibility of the architecture. For this level of analysis, only the essential functional modules must be included. Later in the design process, more detail can be added to obtain accurate predictions of the capabilities of the entire system.

4.3 The Capability Characteristics

Information collection and dissemination always requires the detection of information-bearing signals in the presence of noise and interference. The capabilities of digital data transfer systems can be characterized by four important quality-of-service parameters relating to the detection process and to the quantity, quality and availability of the information: signal isolation, information rate, information integrity, and information availability.

4.3.1 Signal Isolation

The system's ability to isolate and identify signals from different sources within the field of view is a critical mission driver for many applications. Obviously, a system cannot satisfactorily transfer information between specific O-D pairs unless the individual sources and sinks can be identified and isolated. Various methods are used to isolate the different signals. For communication systems, common isolation schemes separate the signals in frequency (Frequency Division Multiple Access, FDMA) or time (Time Division Multiple Access, TDMA). Also, individual spot beams can be used to access multiple sources that are spatially separated. The same techniques can be applied to radar systems. Doppler frequency shifts are used for identification of the target velocity and clutter rejection, and time gating is used for target ranging. Scanning a small radar beam over a large area allows separate targets to be isolated in space to within a beamwidth. For imaging and remote sensing systems, the same principles apply. Different sources can be identified by detecting in different frequency bands. Spatially separated sources can be isolated using a high resolution detector. An aperture can distinguish between sources that are separated by a distance no less than the resolution of the aperture. Note the one-to-one correspondence between:

The resolution of an optic and the beamwidth of an antenna or a radar

The frequency of radiation from a remote sensing pixel, the carrier frequency of a communication signal, and the Doppler shifts of a radar signal.
The generality exhibited in the mathematics of signal analysis allows these isolation relationships to be formalized.
4.3.2 Generalized Signal Isolation and Interference

Information transfer systems must be able to isolate a given signal from any others that may be present. If the different signals cannot be distinguished, the cross-source interference will introduce noise that could cause an erroneous interpretation of the information. In general, a signal can be expressed in either of two domains: the physical domain x or the Fourier domain s. These two domains are related by the Fourier transform. For electrical signals, such as in communications, the physical domain is time t, while the Fourier domain is frequency f. Either domain can be used for analysis, although it is often easier to perform the calculations in the frequency domain. For example, consider the simple linear system shown in Figure 4-3. The time-domain output r(t) is given by the convolution of the input signal i(t) and the impulse response of the system p(t). Equivalently, in the Fourier domain the output R(f) is given by multiplying the spectra of the input signal I(f) and the frequency response of the system P(f). This duality relationship is shown in Eqn. 4.1,
r(t) = i(t) ∗ p(t)  ↔  R(f) = I(f)P(f)    (4.1)

Figure 4-3: Simple linear time-invariant system (input i(t)/I(f), impulse response p(t), output r(t)/R(f))

Note that a square low-pass filter of bandwidth W Hz has a time-domain impulse response equal to a sinc function with a half-width of 1/W seconds, as shown in Figure 4-4. This basically means that two time-domain impulse signals passing through the filter can be isolated only if their time separation is greater than this minimum value. The cut-off frequency W effectively limits the filter's ability to transfer time-domain information.

Figure 4-4: A square low-pass filter P(f) of cut-off W and its time-domain response p(t), a sinc function of null-to-null width 2/W

There is an exact analogy to these relationships for optics and antenna theory [33] [34]. The corresponding Fourier-transform pair is the angle between the propagation direction of the radiation and the normal of the antenna, measured as sin θ, and a spatial coordinate along the antenna (measured in wavelengths) referred to as the spatial frequency u. It is convenient to consider sin θ as the physical variable, and u as the Fourier variable, although the choice is arbitrary due to the symmetry of the Fourier transform. The analogy with electrical signal theory allows most of the properties relating to filtering and processing of time-domain electrical signals to be extended to antennas and optics. For example, consider the one-dimensional antenna shown in Figure 4-5. The antenna images an unknown "object" distribution i(sin θ) by filtering the object signal with a low-pass filter. An aperture or optic is a spatial filter since it samples only those parts of the signal within its spatial extent. The output image (in the angular domain) is equal to the convolution of the input signal i(sin θ) and the impulse response of the aperture, defined as the radiation pattern p(sin θ). Equivalently, the Fourier-domain output is given by the product of the input signal I(u) and the aperture (illumination) distribution P(u).
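The duality in Eqn. 4.1 can be checked numerically. The sketch below uses the discrete, circular analogue of the relationship (a finite-length DFT in place of the continuous Fourier transform): convolving two sequences in the time domain gives the same result as multiplying their DFTs and transforming back. The signal values are arbitrary.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a length-N sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT (with the 1/N normalization)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def circular_convolution(i_t, p_t):
    """r[n] = sum_m i[m] p[(n - m) mod N], the discrete form of i(t) * p(t)."""
    N = len(i_t)
    return [sum(i_t[m] * p_t[(n - m) % N] for m in range(N)) for n in range(N)]

# Arbitrary input signal i(t) and impulse response p(t), length 8
i_t = [1.0, 2.0, 0.0, -1.0, 0.5, 0.0, 0.0, 0.0]
p_t = [0.5, 0.25, 0.125, 0.0, 0.0, 0.0, 0.0, 0.0]

# Time domain: r(t) = i(t) * p(t)
r_time = circular_convolution(i_t, p_t)
# Fourier domain: R(f) = I(f) P(f), then transform back
R_freq = [I * P for I, P in zip(dft(i_t), dft(p_t))]
r_freq = [v.real for v in idft(R_freq)]
```

The two results agree to floating-point precision, which is the discrete statement of the convolution theorem used throughout this section.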
r(sin θ) = i(sin θ) ∗ p(sin θ)  ↔  R(u) = I(u)P(u)    (4.2)

Figure 4-5: Basic antenna model (object distribution i(sin θ), radiation pattern p(sin θ), image r(sin θ))

Note that the angular radiation pattern of an aperture is equal to the Fourier transform of the aperture distribution. That is, it is the response of the antenna to uniform illumination over its extent. For a rectangular aperture of size D/λ, this response is a sinc function of half-width sin θ = λ/D, as shown in Figure 4-6. The position of this first null in the radiation pattern corresponds to the angular resolution of the aperture, since it determines the minimum angular separation of two point sources that can be successfully isolated. The cut-off frequency u₀ = D/λ limits the ability of the antenna in transferring angular information. This property corresponds precisely to the earlier-stated isolation capabilities of electrical filters.3
Figure 4-6: A rectangular aperture distribution P(u) of extent D/λ and its radiation pattern p(sin θ), a sinc function of null-to-null width 2λ/D

The similarities between the isolation characteristics of electrical systems and antenna systems are pervasive. The same principles of signal theory apply for both applications, and generalizations can be made about the isolation capabilities of a general system. Signals can be isolated only if they are distinct and separable. Clearly, two signals that are separated in either the physical or Fourier domain satisfy this condition. For example, two electrical signals with non-overlapping frequency bands can be isolated using a pair of bandpass filters. Similarly, two time-bounded signals transmitted sequentially can be isolated using a simple time gate. However, the condition that the signals be "distinct and separable" does not restrict them to exclusive occupation of part of one of the two domains. It is possible for a set of signals to occupy the same parts of the physical and Fourier domains, and still be distinguished, albeit with some amount of interference. To better understand what is meant by "distinct and separable", it is helpful to adopt the signal space interpretation of signal analysis. Here, a geometrical linear space is defined by a set of real or complex vectors that represent a set of real or complex signals [35] [36]. This approach is useful in signal analysis because it allows a number of mathematically equivalent problems to be treated with a common notation and a common solution. Signal spaces are Hilbert spaces [37]. A Hilbert space is a vector space on which an inner product is defined such that all norms are finite. The set of all square-integrable (L2) real signals is a real Hilbert vector space under pointwise addition and multiplication by scalars […] and zero elsewhere. This ISI term is significant if the system involves a sampling of the signal into discrete (digital) components.
In this case, F(s) is a periodic, aliased spectrum and (F(s) − 1) can have positive values. Digital communication systems can be designed to give zero interference at the sampler output by enforcing that the signals satisfy the generalized Nyquist criterion [36]. This basically requires each signal to be orthogonal to its translates by multiples of the sampling interval and also to all translates of the other signals. Of course, it is extremely unlikely that this condition will be satisfied for remote sensing systems, since the signals are externally generated.
The interference power at the output of a system is the squared magnitude of the filtered interfering signals, integrated in the domain in which the desired signal is bounded, and over the same limits. For instance, the interference power at the output of a matched filter designed to isolate a signal bounded in the Fourier domain is equal to the power spectrum of all interfering signals, integrated over the bandwidth of the matched filter. Similarly, the interference power at the output of a system designed to isolate a signal bounded to [0, x] in the physical domain is the total power of the filtered interfering signals within the physical limits [0, x].
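The idea that signals need only be "distinct and separable", formalized through the signal-space inner product, can be illustrated with two sampled FDMA-style carriers. The frequencies and window length below are arbitrary choices; over a window containing an integer number of cycles the two tones are orthogonal, so a correlator matched to one sees essentially no interference power from the other.

```python
import math

def inner_product(a, b):
    """Discrete approximation to the signal-space inner product <a, b>."""
    return sum(x * y for x, y in zip(a, b)) / len(a)

N = 1000
t = [n / N for n in range(N)]  # one-second window, N samples

# Two FDMA-style carriers with an integer number of cycles in the window
s1 = [math.cos(2 * math.pi * 5 * tau) for tau in t]
s2 = [math.cos(2 * math.pi * 8 * tau) for tau in t]

power_s1 = inner_product(s1, s1)   # ~0.5 for a unit-amplitude tone
cross = inner_product(s1, s2)      # ~0: the two tones are orthogonal

# A correlator "matched" to s1 sees almost no interference from s2
observation = [a + b for a, b in zip(s1, s2)]
recovered = inner_product(observation, s1) / power_s1  # ~1.0: s1's amplitude
```

The same inner-product computation, applied to signals that overlap in both domains (e.g. non-orthogonal carriers), would return a non-zero cross term, which is exactly the interference power described above.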
4.3.3 Information Rate

This is a measure of the rate at which the system transfers information symbols between each origin-destination pair. It is most familiarly associated with the data rate for communication systems. The revisit rate is the corresponding parameter for imaging systems. The system must deliver information symbols at a rate that matches the characteristic bandwidth of the source or the end-user. For instance, a high-speed cruise missile must be tracked with a high sampling rate. Similarly, a GPS receiver on a high-dynamic aircraft must receive information from the satellites at a rate that is sufficient to allow navigation solutions to be updated very quickly. While most information markets require a source to be sampled repeatedly, there are some that involve the transfer of only a single symbol from each source. These are "trigger" markets, in which there is a demand for potential sources to be interrogated until a particular event occurs, triggering a response. The system must be able to notify the end-users of this occurrence within acceptable time bounds, which are once again related to the dynamics of the problem. For these trigger markets the corresponding quality-of-service parameter is time rather than rate. For example, a missile warning system must be able to detect launches originating from particular ground locations within a time period that allows an effective defensive response. Note that in many cases, the triggering of sources in these trigger markets creates a whole new market, corresponding to a new set of sources with different demands. The early warning mission (in which the sources are ground cells that have the potential for launch) triggers a missile tracking mission that requires rapid sampling of a target. These two missions may or may not be addressed by the same system.
4.3.4 Information Integrity

This measures the error performance of the system. The integrity is most commonly represented by the probability of making an error in the interpretation of a signal based on noisy observations. For communications, the integrity is measured by the bit error rate.
The integrity of a search radar system is characterized by both the probability of a missed detection and the probability of a false alarm, since each constitutes an error in interpretation. Equivalently, the integrity of an imaging system could be measured by the pixel error density within the image. The error performance of data collection and transfer systems is a critical issue in their design and operation [36]. A detector uses an observation of the signal plus noise to make a decision about each information symbol. Generally, the probability of erroneously interpreting an information symbol depends on the energy in the symbol. An error can occur if noise or interference degrades the signal in such a way that an incorrect decision is made about the observation. These errors can be as benign as a single bit error in a communication message, or as consequential as a false alarm for an early warning radar system. The probability of error for a single measurement is the likelihood that the interfering and thermal noise power exceeds some threshold, equal to the difference between information data values. Consider, for example, the simplest case of an amplitude-modulated binary communication channel (binary PAM). The two data values {0, 1} are represented by two different power levels of the passband carrier wave. The separation between these power levels is d watts. If the noise component of the signal has a power level greater than d/2 watts, a data symbol {0} can appear in the observation as a {1}, or vice versa. The probability of an error of a single bit is then the probability that the noise power is greater than the separation between data symbols. This is equal to the area under the noise probability density function from [d/2, ∞), as shown in Figure 4-8.

Figure 4-8: The probability of error is the integral under the noise probability density function g(x) from [d/2, ∞)
For generality, this can be placed in the context of the signal space representation of signals introduced in Section 4.3.2. Consider an information transfer system that makes an observation, known to be equal to one of two potential symbols, but distorted by noise. The two possible information symbols have signal space representations s̃₁ and s̃₂, such that the vector between them is (s̃₁ − s̃₂). Define the length of this vector, equal to the separation between the symbols, to be d. The task of the detector is to determine which of the two possible symbols is the correct interpretation of the noisy observation. The decision rule used is based on the position of the observed signal projected into the same signal space. In general, the observation will not be coincident with either of the two possible symbols, due to the presence of additive noise. The actual position in the signal space of the observation will be equal to the position of the underlying information symbol, plus the geometrically correct vector representation of the noise, according to standard rules of vector addition. Usually the symbol closest to the observation, among all those that are possible, is chosen by the detector. For Maximum Likelihood (ML) detection with hard decisions, this corresponds to a decision threshold along the bisector between the two possible signals, at a perpendicular distance of d/2 from each. A decision error will therefore be made if the projection of the noise in the direction of (s̃₁ − s̃₂) is greater than d/2. For additive noise with a probability density function g(x), the probability of this error occurring is,

Pr(error) = ∫_{d/2}^{∞} g(x) dx    (4.5)
If there are more than two possible information symbols from which to choose, the net error probability for a given symbol is the sum of the probabilities calculated from Eqn 4.5 for each value of d/2 corresponding to the different pairs of symbols. This can be approximated from the Union Bound estimate [36], in which the assumption is made that the closest pairs of symbols dominate the sum. If a given symbol has K_min nearest neighbors at a common distance d, then an estimate for the error probability is,

Pr(error) ≈ K_min ∫_{d/2}^{∞} g(x) dx    (4.6)
When g(x) is stationary white Gaussian noise with zero mean and variance σ², Eqn. 4.6 becomes

Pr(error) ≈ K_min ∫_{d/2}^{∞} (1/√(2πσ²)) exp(−x²/2σ²) dx    (4.7)
         ≈ K_min (1/2) erfc(d/(2√2 σ))    (4.8)
         ≈ K_min Q₀(d²/4σ²)    (4.9)
         ≈ K_min Q₀(d²/2N₀)    (4.10)

where Q₀(·) is the Gaussian complementary distribution function, often simply called the "Q-function", and N₀ = 2σ² is the average noise power per Hertz. Note that the above equations represent the symbol error probability. In all but the simplest communication schemes, each
symbol represents more than a single bit of information. For example, in the Quadrature Phase Shift Keying (QPSK) modulation scheme used in most satellite communication applications, the phase of the carrier wave is varied to transmit information, such that each of four possible equal-power symbols represents a pair of data values, as shown in Figure 4-9. In most well-designed signal sets, adjacent symbols differ only by a single information bit. In these cases, an error in the interpretation of a multi-bit symbol results in only a single bit error. If each symbol represents m bits, then the probability of bit error, in terms of E_b/N₀, is,

Pr(bit error) ≈ (K_min/m) Q₀((d²/4E_b)(2E_b/N₀))    (4.11)
             ≈ K_b Q₀(γ_c (2E_b/N₀))    (4.12)

where K_b = K_min/m is the average number of nearest neighbors per bit, and γ_c = d²/4E_b is defined as the nominal coding gain [37], a measure of the improvement of a given signal set compared to uncoded binary PAM, in which γ_c = 1. For QPSK, there is no coding gain since d² = 4E_b, as shown in Figure 4-9. Also K_b = 1, and so the bit error rate (BER) is,

BER ≈ Q₀(2E_b/N₀)    (4.13)

Additional coding gain can be attained with error-correction coding, which involves further separating the symbols in signal space. For example, QPSK with half-rate Viterbi error correction has γ_c = 2, such that

BER ≈ Q₀(4E_b/N₀)    (4.14)
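Eqns. 4.13 and 4.14 are straightforward to evaluate with the standard complementary error function. The sketch below assumes only the relations stated above; the half-rate Viterbi case uses the nominal coding gain γ_c = 2, which is an upper bound on what a real decoder achieves.

```python
import math

def q0(y):
    """Q0(y) for a squared-distance argument y = d^2 / (2 N0);
    equivalent to (1/2) erfc(sqrt(y / 2))."""
    return 0.5 * math.erfc(math.sqrt(y / 2.0))

def ber_qpsk(ebn0_db, coding_gain=1.0):
    """BER ~ Q0(coding_gain * 2 Eb/N0), per Eqns. 4.13-4.14."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return q0(coding_gain * 2.0 * ebn0)

uncoded = ber_qpsk(9.6)                   # ~1e-5, the textbook QPSK value
viterbi = ber_qpsk(9.6, coding_gain=2.0)  # nominal half-rate Viterbi, gamma_c = 2
```

At E_b/N₀ = 9.6 dB the uncoded expression returns a BER near 10⁻⁵, matching the commonly quoted operating point for QPSK links.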
Note that g(x) in Eqn 4.5 is the probability density function of the noise signal at the input to the detector that makes the decisions. This may differ from the density function of the noise at the input to the antenna due to the effects of filters and amplifiers upstream of the detector. For example, consider a simple radar system. A positive radar detection is declared if the envelope (complex amplitude) of the received signal exceeds some predetermined threshold. A radar detector therefore includes an envelope detector, to measure the envelope of the signal, and a threshold detector to actually make the decisions. If the noise entering the envelope detector has a Gaussian probability density function with zero mean and variance σ², the probability density function of the noise at the output of the envelope detector is a Rayleigh distribution [39],

g(x) = (x/σ²) exp(−x²/2σ²)    (4.15)
Figure 4-9: The signal space representation of QPSK. The four information symbols {[0,0], [0,1], [1,1], [1,0]} differ in phase, while their amplitude is constant. Adjacent symbols are separated by distance d, the energy per symbol is d²/2, and two bits per symbol gives E_b = d²/4.
In this case, the probability of error, or false alarm, is given by Eqn 4.6 with K_min = 1 and d/2 = v_T, the threshold voltage, such that,

Pr(false alarm) = ∫_{v_T}^{∞} g(x) dx = exp(−v_T²/2σ²)    (4.16)
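Eqn. 4.16 can be inverted to set the detection threshold for a required false-alarm probability. The sketch below is a direct transcription of that equation; the σ = 1 normalization is an arbitrary choice for illustration.

```python
import math

def prob_false_alarm(v_t, sigma):
    """Eqn. 4.16: Pr(false alarm) = exp(-v_T^2 / (2 sigma^2)) for Rayleigh noise."""
    return math.exp(-v_t ** 2 / (2.0 * sigma ** 2))

def threshold_for_pfa(pfa, sigma):
    """Invert Eqn. 4.16 to set the threshold voltage for a desired Pfa."""
    return sigma * math.sqrt(-2.0 * math.log(pfa))

sigma = 1.0
v_t = threshold_for_pfa(1e-6, sigma)   # ~5.26 sigma for Pfa = 1e-6
check = prob_false_alarm(v_t, sigma)   # recovers the requested Pfa
```

The steep exponential means the threshold grows only slowly with the required Pfa, which is why radar integrity budgets are usually quoted in decades of false-alarm probability.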
4.3.5 Information Availability

The availability measures the instantaneous probability that information is being transferred through the network between a given number of known and identified origin-destination pairs at a given rate and integrity. The availability is a measure of the mean and variance of the isolation, rate and integrity supportable by the system, and as such is sensitive to worst-case scenarios. Note that availability has a functional definition; it is the probability that the system can instantaneously perform specific functions. In this way, the availability is not a statement about component reliabilities. At any instant, the network architecture is defined only by its operational components, and so all networks are assumed to be instantaneously failure-free. Should a component fail, the network changes by the removal of that component. Generally, the capabilities of the new network will be different than those of the previous network. For a given network, the supportable isolation, rate, integrity, and hence the availability, can vary due to:
The number of users simultaneously accessing the limited resources of the system. The availability of service to a given user will be poor if the total number of users approaches or exceeds the nominal operating capacity of the system.

Viewing geometry and coverage variations. A system that cannot support continuous coverage of a region will have a low availability for real-time applications. The availability of high-accuracy navigation solutions (SEP4 ≤ 16 m) using GPS is dependent on a favorable viewing geometry to several satellites. Spatial and temporal variations in this Geometrical Dilution of Precision (GDOP) dominate the operational availability of GPS. Imaging applications often require specific viewing geometries for each image, effectively limiting the availability of a LEO remote sensing system to those times that such a geometry occurs.

Range variations due to the different elevation angles between the users and the satellites. This is especially true for LEO communication systems in which the range, and hence free space loss, changes dramatically as the satellite passes overhead.

Signal attenuation from blockage, rain or clouds. Clearly atmospheric attenuation can vary geographically and temporally, and the impact on the availability of service can be profound. Visible or ultraviolet imaging is impossible through cloud cover, limiting the availability of such systems. A mobile user of the Big-LEO communication systems will be very susceptible to signal fade from blockage, either by buildings or foliage.

Statistical fluctuations due to noise or clutter. These random variations may be significant if the system is operating close to the limits of its capabilities.
4.4 Calculating the Capability Characteristics

These characteristics define the Capability of the system, that being the availability of providing an information transfer service between a given number of identified O-D pairs at a given rate and integrity. The Capability characteristics are probabilistic measures. The availability is a function of three variables: rate, integrity, and the number of users. For satellite applications, the information rate is usually a deterministic design decision. However, the integrity and the number of simultaneous users can be considered random variables, the former being sensitive to any variations in the signal power or noise, and the latter being dependent on the market. While it is often difficult to predict the statistics of the market, probability distribution functions for the signal power and the noise power can be predicted reasonably well from the statistics of the satellite's orbit and elevation angle, probabilistic blockage or rain attenuation models, and component performance specifications.

4 Spherical Error Probable is the sphere containing 50% of observations.
Calculating the Capability characteristics therefore involves tracking the statistics of the information signals delivered to the end-users. The network representation of satellite systems provides the framework for these calculations. Statistical distributions can be propagated through a network sequentially, calculating the changes to the distribution functions as a result of the transitions through each node along a path from source to sink. Consider an arbitrary system component with input signals X and Y and an output signal Z, as shown in Figure 4-10. X and Y can be treated as random variables with distribution functions F_1(x) and F_2(y), and probability density functions f_1(x) and f_2(y), such that,
F_1(x) = \Pr(X \le x) = \int_{-\infty}^{x} f_1(v)\,dv    (4.17)

F_2(y) = \Pr(Y \le y) = \int_{-\infty}^{y} f_2(v)\,dv    (4.18)
Figure 4-10: A simple system with input signals X and Y, and an output signal Z.

If the output z = g(x) is a function of only one input x, then the random variable Z has a distribution function F_z(z) given by,
F_z(z) = \Pr(Z \le z)    (4.19)

F_z(g(x)) = F_x(x)    (4.20)
Generally, the output is a function of more than one input, such that z = g(x, y). If X and Y are independent,

F_z(z) = \Pr(Z \le z) = \iint_{g(x,y) \le z} f_1(x) f_2(y)\,dx\,dy    (4.21)
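For realistic transfer functions the double integral of Eqn 4.21 rarely has a closed form, and it can instead be evaluated empirically. The sketch below is a Monte Carlo analogue, not the thesis's numerical method; the lognormal attenuation factor and all numbers are illustrative assumptions.

```python
import math
import random

def propagate(g, sample_x, sample_y, n=100_000):
    """Empirical distribution function of Z = g(X, Y) for independent
    inputs X and Y: Pr(Z <= z) is estimated as the fraction of sampled
    (x, y) pairs with g(x, y) <= z, mirroring Eqn 4.21."""
    zs = sorted(g(sample_x(), sample_y()) for _ in range(n))
    def F_z(z):
        # binary search: count of samples <= z
        lo, hi = 0, len(zs)
        while lo < hi:
            mid = (lo + hi) // 2
            if zs[mid] <= z:
                lo = mid + 1
            else:
                hi = mid
        return lo / len(zs)
    return F_z

# Illustrative node: received power Z = X * Y, where X is a fixed
# transmitted power and Y a lognormal atmospheric attenuation factor.
random.seed(1)
F_z = propagate(lambda x, y: x * y,
                lambda: 100.0,
                lambda: math.exp(random.gauss(-0.1, 0.05)))
print(F_z(90.0))  # Pr(received power <= 90)
```

The returned closure plays the role of the output distribution function F_z, ready to be fed into the next node along the path.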
Provided the "transfer functions" g() of each component are known, these equations describe how to propagate the probability distribution functions for the signal power and noise power through the network. The probability distribution function for the integrity of decisions made at a detector can then be evaluated, again using Eqn 4.21, with the two random variables being E_b and N_0. Note that some networks include several detectors that make interpretations of the information at intermediate points along the path from source to sink. Any information symbols that are interpreted erroneously by an intermediate detector will be received in error at the next detector before any interpretation is even performed. The net error probability (integrity) is the combination of the errors incurred at each detector. The probability distribution of these errors is once again calculated using Eqn 4.21, where the random variables are now the error probabilities for the decisions at each detector. The probability distributions for the integrity of information transfers between a given number of identified O-D pairs at a variety of different rates can thus be calculated. These distributions define the availability of providing this information transfer service.
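The combination of errors across cascaded detectors can be made concrete for the special case of independent binary-symmetric stages (an assumed channel model; real coded links behave differently): a symbol is delivered correctly only if it is flipped an even number of times along the path.

```python
def cascade_error(error_probs):
    """Net symbol error probability for detectors in series.

    Tracks the probability that a symbol has been flipped an odd
    number of times after each independent binary-symmetric stage.
    """
    p_net = 0.0
    for p in error_probs:
        p_net = p_net * (1 - p) + (1 - p_net) * p
    return p_net

print(cascade_error([1e-6, 1e-6]))  # ~2e-6: small error rates simply add
print(cascade_error([0.5, 0.1]))    # a coin-flip stage destroys the information
```

For the small error probabilities typical of working links, the net integrity is essentially the sum of the per-detector error rates.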
4.4.1 Example Capability Calculation for a Ka-Band Communication Satellite

Consider the information flow through a typical satellite from one of the proposed Ka-band communication systems. Figure 4-2 shows a possible network diagram for one such satellite. The modeled system parameters are given in Table 4.1, and correspond closely to those of a single satellite from the Spaceway system, proposed by Hughes Communications Inc. [40], [41], [42]. Starting at the left hand side of this diagram, consider first the uplink from the users to the satellite. The modeled satellite employs a TDM/FDMA scheme for each of 48 uplink spot beams. This means that each user transmits information within a specified frequency band, and at specified times. This isolates the different users of each spot beam. Note that since the maximum transmitted power of the user terminals is limited, the energy per symbol depends on the user transmission rate. Each signal then passes through the atmosphere, which attenuates the power (and introduces noise) by varying degrees depending on the local climate, the frequency of the RF carrier, and the elevation angle of the line of sight. The probability distribution for the likely attenuation can be predicted reasonably well using the familiar Crane rain attenuation model [43]. There is additional attenuation from free-space loss, again with a probability distribution due to the distribution of elevation angles for users within the field of view. The power of the signal arriving at the satellite antenna therefore has a statistical distribution. Noise power from thermal noise and cross-source interference (imperfect signal isolation) leads to small average signal-to-noise ratios. The power of each signal entering the digital signal processor (DSP) is therefore weak and varying. The DSP must detect the information symbols, and reroute them to their destination. Recall that the integrity of the detection process scales exponentially with E_b/N_0.
There is therefore a statistical distribution for the BER of each signal leaving the DSP, and the distribution will be different for different
Table 4.1: System parameters for a modeled Ka-band communication satellite

Miscellaneous system parameters:
  Mission: Broadband communications
  Market: Western European residential users
  Number of satellites: 1
  Orbit: 25°E GEO

Uplink parameters:
  Multiple access scheme: Spot beams + TDM/FDMA
  Modulation: QPSK, 1/2-rate Viterbi error correction
  Frequency: 30 GHz
  USAT EIRP: 44.5 dBW
  Number of uplink spot beams: 48
  Satellite antenna gain: 46.5 dB
  System temperature: 27.6 dBK
  Losses: 1.5 dB

Downlink parameters:
  Multiple access scheme: Spot beams + TDMA
  Modulation: QPSK, 1/2-rate Viterbi error correction
  Frequency: 20 GHz
  Number of downlink spot beams: 48
  Channels per beam: 1
  Channel bandwidth: 125 MHz
  Channel capacity: 92 Mb/s
  Satellite EIRP: 59.5 dBW
  USAT antenna gain: 43 dB
  System temperature: 24.4 dBK
  Losses: 1.5 dB
user information rates. The downlink involves a single TDM wideband carrier for each of the 48 spot beams. The net information rate of this downlink is the sum of the rates for all users within the beam. This means that the energy per symbol of the downlink stream is a function of both the user information rate and the number of users. A larger number of users at a higher rate per user results in a lower energy per symbol. The downlink signal is also attenuated by the atmosphere, free space loss and interference. Individual end-users must demultiplex the received signal, extracting only the parts relevant to them. Here, isolation of the correct information signal depends on the stability of the oscillators in the user terminals. Extraction of the wrong information is effectively a multiple-symbol error. The subsequent interpretation of information symbols is sensitive to the received energy per symbol. Recall however, that some symbols were interpreted erroneously by the satellites. These symbols are received at the user terminals in error before any interpretation is even performed. The net symbol error rate is therefore a combination
of the errors incurred at the satellite and at the user terminals. In this example, the rate of information transferred through the system for each O-D pair is a design decision. The integrity of that information, as measured by the symbol error rate, has a statistical distribution depending on the number of users, and the rate at which they transmit. The resulting availability of service varies across the range of operating conditions. The Capability characteristics for this network are shown in Figure 4-11, for two different rates and two different numbers of users. The Capability characteristics shown here were calculated using elevation angle statistics for users distributed across Western Europe, accessing a Geostationary satellite located at 25°E longitude. These characteristics can be used to determine the maximum number of users that the system can support at a particular rate and integrity. Note that the availability for 3000 users at T-1 rates (1.544 Mbit/s) is below 95% over all BERs of interest. This is a result of the demand exceeding the downlink capacity of the satellite. Users must then be queued, reducing their effective availability.
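The queueing effect noted above can be sketched with a deliberately crude model (an assumption for illustration, not the thesis's full statistical calculation): when aggregate demand exceeds the downlink capacity, at most a fraction capacity/demand of users can be served at any instant, capping the availability.

```python
def effective_availability(link_availability, n_users, rate_bps, capacity_bps):
    """Availability after queueing: the link availability scaled by the
    fraction of the aggregate demand the downlink can actually carry."""
    demand_bps = n_users * rate_bps
    queue_factor = min(1.0, capacity_bps / demand_bps)
    return link_availability * queue_factor

# 48 beams at a channel capacity of 92 Mb/s each (Table 4.1), T-1 users:
capacity = 48 * 92e6
print(effective_availability(0.99, 2500, 1.544e6, capacity))  # demand < capacity
print(effective_availability(0.99, 3000, 1.544e6, capacity))  # capped below 95%
```

With these assumed numbers, 2500 T-1 users fit within the 48 × 92 Mb/s downlink, but 3000 users exceed it and the availability drops below 95%, consistent with the behavior seen in Figure 4-11.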
4.5 Generalized Performance

The formulation of the Capability characteristics allows us to calculate the generalized Performance of satellite systems. Performance is perceived in terms of satisfying the demands of a market. This demand is represented by a set of functional requirements, specific to an individual information transfer. The requirements specify minimum acceptable values for:
Signal isolation

Information rate

Information integrity

Availability of service at the required isolation, rate and integrity.

Since the definition of availability implicitly includes values for the other characteristics, these requirements simply enforce that, for a specified level of isolation, rate and integrity, the availability of service exceeds some minimum value. For instance, consider the market for mobile voice communication. Typically, the requirement is that individual users have at least 95% probability of being able to transmit and receive from small, mobile terminals at a rate of no less than 4800 b/s with a maximum BER of 10^-3. Note that the isolation requirement enforces that the system be able to address each mobile, individual user within the distributed market. Also note that these functional requirements make no reference to

(Footnote: It is generally assumed that BERs of 10^-9 or 10^-10 are acceptable for broadband services.)
[Figure 4-11: Capability characteristics for a modeled Ka-band communication satellite. Two panels (Spaceway model; 2500 and 3000 users) plot availability (0.90 to 1.00) against integrity (10^-15 to 10^0) for user rates of 3.86E+05 and 1.544E+06 b/s.]
the size of the market being served; they simply specify the quality of service that must be provided to the users. Performance should always be defined relative to these requirements. To be unambiguous and quantifiable, Performance should represent the likelihood that the system can satisfy the functional requirements for a certain number of users from a given market. In short,

The Performance of a system within a given market scenario is the probability that the system instantaneously satisfies the top-level functional requirements that represent the mission objectives.
It is important to note that Performance is distinct from Capability, although the two are related. The Capability characterizes a particular network's ability to transfer information between a given number of identified users at different rates and integrities. There is no implicit reference to requirements within the definition of Capability, and component reliabilities are not reflected. However, a measure of Performance should include all likely operating states, and so reliability considerations are necessary. The existence of component failures means that every system has many possible network architectures corresponding to failures in different components. Each network, or system state, is defined only by the components that are operational. Each of these states will have different capabilities. By specifying requirements on isolation, rate and integrity, the Capability characteristics can be used to determine the availability of service offered by each state, for different numbers of users. If the supported availability exceeds the minimum acceptable availability specified by the functional requirements, that system state is deemed "operational". The mathematical formulation of the generalized Performance follows immediately,

The generalized Performance for a given market scenario is simply the probability of being in any operational state.
The Performance can therefore be improved either by reducing the impact of any component failures that could occur, or by improving the component reliabilities so that these failures are less likely. The former approach effectively increases the number of operational states, while the latter reduces the probability of transitioning to a failure state. The impact of component failures, blockage, or rain/cloud cover can be reduced if there are redundant information paths. This redundancy can be provided by distributed architectures featuring multi-fold coverage. For example, a mobile communication user can select, from all of those in view, the operational satellite with the clearest line of sight. This can reduce service outages and improve availability. This concept extends across almost all applications.
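The benefit of redundant information paths can be quantified under an independence assumption (illustrative only): service is lost only when every path is simultaneously unavailable, so the per-path outage probabilities multiply.

```python
def path_availability(single_path_availability, n_paths):
    """Availability with n redundant, independently failing paths:
    one minus the probability that all paths are out at once."""
    return 1 - (1 - single_path_availability) ** n_paths

# A mobile user seeing 1, 2 or 3 satellites, each with a 90% clear
# line of sight, sees the outage probability shrink geometrically:
for n in (1, 2, 3):
    print(n, path_availability(0.90, n))
```

Two-fold coverage turns a 10% outage into 1%, and three-fold into 0.1%, which is why multi-fold coverage is so effective for mobile users subject to blockage.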
4.5.1 Time Variability of Performance

The Performance can be quantified for each year over the lifetime of the satellite system to give the Performance profile. The Performance of the system generally changes in time as a result of three factors:
There are typically different rate, integrity and availability requirements placed on a satellite system at different times within its life. Consequently, the functional requirements are properly specified as an availability profile.

System components have finite failure probabilities that generally increase in time; once on orbit, a satellite system is difficult to repair. There is a higher probability of being in a failed state with a degraded availability late in the lifetime.

The number of users targeted by the system will usually change over the lifetime. As shown in previous sections, the supported availability of a system is a strong function of the number of users.
These trends can compound to give large variations in the Performance over the system lifetime.
4.6 Calculation of the Generalized Performance

Since the context of its definition includes the notion of state probabilities, the calculation of the generalized Performance is well-suited to Markov modeling techniques that determine the probability of being in any particular state at a given time. In general, Markov calculations rely on the fact that the state probabilities \vec{P}_s(t_1) at some future time t_1 depend only on the current state probabilities \vec{P}_s(t_0) and on the rate of state transitions [44],
\vec{P}_s(t_1) = A \vec{P}_s(t_0)    (4.22)
where A is the state transition matrix. Determination of this matrix requires the characterization of each state as an operational state or a failure state, since there are no transitions from failure states. Herein lies the only complication in calculating the generalized Performance compared to conventional Markov modeling. In order to ascertain whether a state is operational, the Capability characteristics of that state must be calculated and compared to the requirements. Since this is non-trivial, generation of the state transition matrix involves a large amount of computation, and in most cases dominates over the computations involved in the actual solution of Eqn. 4.22. The complexity of the Performance calculations therefore grows linearly with the number of possible states, since each must
be investigated. However, the number of possible states increases geometrically with the number of failure transitions and the number of system components. For this reason, the models usually include fewer than ten failure transitions from a subset of the most critical system components. State aggregation techniques can also be used to reduce the number of computations.
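The propagation of state probabilities in Eqn 4.22 amounts to a matrix-vector product per time step. A minimal sketch with a hypothetical two-state system (operational plus an absorbing failure state; the 0.1-per-step failure probability is illustrative):

```python
def step(A, p):
    """One discrete-time step of Eqn 4.22: p(t1) = A p(t0)."""
    return [sum(A[i][j] * p[j] for j in range(len(p)))
            for i in range(len(A))]

# Column-stochastic transition matrix: state 0 = operational,
# state 1 = failed (absorbing, so no transitions leave it).
A = [[0.9, 0.0],
     [0.1, 1.0]]
p = [1.0, 0.0]  # start operational with certainty
for _ in range(10):
    p = step(A, p)
print(p[0])  # probability of remaining operational after 10 steps
```

A real model carries one state per surviving component combination, which is exactly why the state count, and hence the work of classifying each state as operational, grows so quickly.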
4.6.1 Example Performance Calculation for a Ka-Band Communication Satellite

To illustrate calculation of the generalized Performance, return once again to the broadband communication system of Figure 4-2 and Table 4.1. For demonstration purposes, let us assume that the users of the system require availability of at least 98% for communication at a data rate R = 1.544 Mbit/s, and a BER of 10^-9. Using reasonable values for the failure rates of the most critical system components, the failure states corresponding to a violation of these requirements and the associated probabilities can be calculated. For this simple system, there are basically two different types of failure state: those that correspond to degraded payload operations that violate the requirements, and those that constitute a total loss of the satellite. These two scenarios can be modeled separately to simplify the analyses. Consider first the failure states corresponding to degraded operation of the satellite payload. The most failure-prone components along the primary information path through the network are the satellite DSPs and the satellite transmitters. The system shown in Figure 4-2 features 48 channels for each of these; one pair for each spot beam, with cross-connections to remove serial failure modes. Note that this is not representative of the proposed Ka-band systems that have multiple, redundant DSPs and transmitters. However, since this is only an example calculation to demonstrate the process, there is some merit in using a non-redundant configuration: to minimize the number of failure and operational states and to illustrate the impact of non-redundant designs. Typical failure rates for a conventional communications payload, as given in SMAD [3], are λ = 0.052 per year. This value would seem reasonable for a single transmitter channel using solid state power amplifiers. A higher failure rate of λ = 0.1 is chosen for each DSP channel, since it represents a new satellite technology. However, it is arbitrarily assumed that only one out of every five DSP failures is unrecoverable. The effective channel failure rates used in the calculation of Performance for this system are therefore λ_TX = 0.052 and λ_DSP = 0.02. If the satellite targets 2500 users with the stated set of requirements, the resulting failure states and their probabilities over the system lifetime are shown in Figure 4-12.
[Figure 4-12: Failure state probabilities for a modeled Ka-band communication satellite payload: R = 1.544 Mbit/s, BER = 10^-9, Av = 98%. The plot shows the net failure probability Pf and the probabilities of failure states FS1 = (7 Tx), FS2 = (1 DSP), FS3 = (1 DSP, 1 Tx), FS4 = (1 DSP, 2 Tx), FS5 = (1 DSP, 3 Tx), FS6 = (1 DSP, 4 Tx), FS7 = (1 DSP, 5 Tx) and FS8 = (1 DSP, 6 Tx) over 10 years.]
There are eight unique failure states. Seven of these states feature failures in a single DSP channel and up to six transmitter channels. The remaining state is characterized by seven transmitter failures. System failure occurs when either a DSP fails or seven transmitters fail, whichever occurs first. The system can tolerate up to six transmitter failures by allocating extra traffic through the remaining channels, up to the bandwidth limit of 92 Mbit/s. A DSP channel failure results in an unavoidable loss of throughput, since it is assumed that multiplexed data streams from the satellite receivers cannot be split up and redistributed upstream of the remaining DSP channels. Notice from Figure 4-12 that, with the failure rates used, the probability of system failure is very high and approaches unity after only three or four years. This is dominated by the reasonably high probability of a single DSP failure. This is the reason for implementing redundancy in all information paths through a network. The current plans for Spaceway include fully redundant cross-connected DSPs and 64-for-48 redundancy in the transmitters. Such levels of hardware redundancy, mated with technological improvements that reduce the component failure rates, result in a very small probability of payload failure (less than 0.1 over 10 years). It can be assumed that a sensible design would feature such redundancy, and at least for this example, we can ignore the effects of degraded payload operations. The second type of system failure corresponds to a total loss in operational capability of the satellite. This satellite vehicle failure, SVF, can occur when the support modules, defined in Section 4.2.2, fail to provide the functional modules with essential resources. For example, the power and propulsion subsystems, the guidance and navigation subsystem (G&N), and the spacecraft control computer (SCC) must all work under normal operations.
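The payload failure logic described above can be sketched under an assumed model of independent, exponentially distributed channel lifetimes: the payload survives up to six transmitter failures among the 48 channels, but not a single unrecoverable DSP failure. The closed form below uses the example's rates (λ_TX = 0.052, λ_DSP = 0.02 per channel per year) but is only an approximation, not the thesis's Markov calculation.

```python
import math

def binom_cdf(k, n, p):
    """Pr(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1))

def payload_operational(t, lam_tx=0.052, lam_dsp=0.02, n_ch=48, tx_spare=6):
    """Probability the payload still meets requirements at time t (years):
    at most tx_spare transmitter channels failed, and no DSP channel
    has suffered an unrecoverable failure."""
    p_tx_failed = 1 - math.exp(-lam_tx * t)   # per-channel failure probability
    p_no_dsp = math.exp(-n_ch * lam_dsp * t)  # all 48 DSP channels survive
    return binom_cdf(tx_spare, n_ch, p_tx_failed) * p_no_dsp

for t in (1, 2, 3, 4):
    print(t, payload_operational(t))  # decays toward zero within a few years
```

The exp(-48 × 0.02 × t) factor alone drives the survival probability below 10% by year three, reproducing the DSP-dominated failure behavior seen in Figure 4-12.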
Calculating the probability of SVF simply involves building a simple model of the satellite bus resources. For this simple example, the spacecraft was modeled to include two parallel SCCs, two G&Ns, and an integrated bus module representing propulsion, power and structural components. One out of every ten failures in the G&N and SCC is assumed to be unrecoverable. The equivalent channel failure rates, again taken from SMAD [3], are λ_SCC = 0.0246, λ_G&N = 0.0136, and λ_bus = 0.072, all per year. The resulting probabilities of the SVF modes are shown as a function of time in Figure 4-13. For this example, the failure probability is dominated by the probability of a bus failure. The overall probability of satellite failure exceeds 0.5 after 10 years. This value is perhaps high compared to existing Geostationary satellite systems, although several failures, attributed to SCCs and power subsystems, have recently been seen in similar designs after less than 5 years in orbit. The generalized Performance of this example satellite system is the complement of the net failure rate, dropping from unity at time 0, to a value just less than 0.5 after 10 years.
[Figure 4-13: Failure state probabilities for a modeled Ka-band communication satellite. The plot shows the net failure probability Pf and the probabilities of failure states FS1 = (2 SCC), FS2 = (1 G&N, 2 SCC), FS3 = (2 G&N), FS4 = (2 G&N, 1 SCC), FS5 = (1 Bus), FS6 = (1 Bus, 1 SCC), FS7 = (1 Bus, 1 G&N) and FS8 = (1 Bus, 1 G&N, 1 SCC) over 10 years; the bus failure state FS5 dominates.]
4.7 The Cost per Function Metric

The Cost per Function (CPF) metric is perhaps the most important concept introduced within this analysis framework. Its definition is completely generalizable and straightforward:

The cost per function metric is a measure of the average cost incurred to provide a satisfactory level of service to a single O-D pair within a defined market. The metric amortizes the total lifetime system cost over all satisfied users of the system during its life.
The mathematical form of the metric follows immediately from this definition, and is the same across all applications,

CPF = Lifetime Cost / Number of Satisfied Users    (4.23)
Note that the number of users of the system is represented by the number of O-D pairs and the information symbols they exchange. For example, the number of users of a communication service is defined by the total number of bits transferred through the system. Equivalently, the number of users for a space based search radar is the total area that is searched. However, this alone is insufficient, since the definition of a market implicitly includes minimum requirements on the isolation between sources, and the rate and integrity of the information being exchanged. Users within the market are only satisfied when the information transfers occur between the correct O-D pairs at the correct rate and with the correct integrity. The metric is therefore based on the number of satisfied users, referring to the total number of symbols transferred through the system that satisfy requirements. Before proceeding, it is helpful to introduce some examples of the CPF for different applications, in order to cement understanding of the principal terms. Table 4.2 summarizes the CPF for a mobile voice communication system [4], a broadband communication system [5], a surveillance radar system for the detection of ground moving targets (GMTI), and an astronomical telescope [6], [10].

Table 4.2: Cost per Function metrics for example applications
  Generic form: Cost per Satisfied User
  Mobile comms: Cost per Billable voice-circuit minute
  Broadband comms: Cost per Billable T1-minute
  GMTI radar: Cost per Protected km² of theater
  Astronomical telescope: Cost per Useful Image
Both of the communication systems must support a quality of service that people will be willing to pay for; a service that is "billable". The market for voice requires symbol rates that can support a voice-circuit, defined as a full duplex voice connection of predetermined quality between two users. The quantity of these voice-circuits can be measured in minutes. For broadband service, the information rate must be higher, with multimedia applications requiring data rates around T1 (1.544 Mbit/s). The surveillance radar must provide a level of service that allows a theater of a given size to be adequately protected. This requires that each square kilometer be "safety-checked" every minute. The total number of protected square-kilometers is then the total area protected each minute, multiplied by the number of these minute-long intervals in the lifetime of the system. As a result, the time dimension is not explicitly stated in the metric, but is implicit in the definition of "protected". Similarly, for the telescope the concept of the "useful image" implies a satisfactory resolution, update rate and image integrity. Again, the time dimension does not appear explicitly, being swallowed by the "useful" construct. In every case, the CPF has the dimensions of dollars per information symbol. Recall however that information symbols represent users of the system. Therefore, although the dimensions of a symbol are strictly bits, a symbol generally has an interpretation, such as a voice-circuit or an image. Indeed, by definition, the dimensionality of the CPF metric must be equivalent to dollars per user.
4.8 Calculating the Cost per Function Metric

In order to calculate the Cost per Function metric, the impact of improved Performance on the cost of a system must be determined. If the value of Performance can be quantified, the system cost can be modified to correspond to a common level of Performance. The modified system cost should represent the total lifetime cost of a system, where lifetime cost is defined to be the total expenditure necessary to continuously satisfy the top level system requirements.
4.8.1 The System Lifetime Cost

The baseline cost C_s accounts for the design, construction, launch, and operation of the system components. This baseline cost does not however account for the expected cost of failures of system components. Since the system must satisfy requirements throughout its design life, expenditure will be necessary to compensate for any failures that cause a violation of this condition. These additional failure compensation costs V_f [3] must be added to the baseline system cost to give the total lifetime cost C_L,

C_L = C_s + V_f    (4.24)
As long as it is used consistently, any parametric cost model can be used to calculate the baseline system cost. Note that a premium is paid for more reliable components. Since some costs are incurred at different times within the lifetime of the system, the cost is actually represented as a cost profile. This profile has to be modified to account for the time value of money. Costs incurred later in the system lifetime have a lesser impact on the overall system cost. A dollar is always worth more today than it is tomorrow; capital expenditure can earn interest if invested elsewhere. The yearly costs are therefore discounted according to an assumed discount rate corresponding to an acceptable internal rate of return (IRR). In order to attract investors to commercial systems, the high risk associated with space ventures necessitates a high IRR of around 30% [45]. For government projects, a discount rate of 10% is often used in costing analysis [3]. The discounted cost profile c_s(t) must then be integrated over the system lifetime to obtain the total baseline system cost C_s,

C_s = \sum_{\text{life}} c_s(t)    (4.25)
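The discounting and summation of Eqn 4.25 can be sketched directly; the cost profile and rates below are hypothetical, chosen only to contrast the 30% commercial IRR with the 10% government rate.

```python
def discounted_total(cost_profile, rate):
    """Total baseline cost C_s (Eqn 4.25): each year's expenditure
    discounted back to present value before summing."""
    return sum(cost / (1 + rate) ** t
               for t, cost in enumerate(cost_profile))

# Hypothetical profile in $M: heavy build/launch spending up front,
# then flat operations costs for nine years.
profile = [400, 600, 300] + [50] * 9
print(discounted_total(profile, 0.30))  # commercial IRR
print(discounted_total(profile, 0.10))  # government discount rate
```

The higher the discount rate, the more the late-life operations costs shrink relative to the up-front construction and launch expenditure.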
4.8.2 The Failure Compensation Cost

The failure compensation cost V_f can be estimated from an expected value calculation,

V_f = E[V_f] = \sum_{\text{life}} \left( \sum_{\text{states}} p_s(t)\, v_s(t) \right)    (4.26)
where p_s(t) is the marginal probability of entering failure state s at time t and v_s(t) is the sum of the economic resources required to compensate for the failure. Strictly, this calculation should involve all likely failure states. However, for complex systems this is prohibitive. A reasonable approximation is to truncate the model and include only the states representing the most likely failure modes. Note that v_s includes the costs of replacement satellites or components, launch costs and any opportunity costs representing the revenue lost during the downtime of the system. The calculation of v_s is architecture specific, and in most cases depends strongly on the nature of the failure mode. A failure mode and effects analysis (FMEA) may be required to estimate the replacement costs. Estimation of the opportunity costs is difficult, requiring a prediction of the failure duration. Despite these problems, v_s can be estimated with reasonable confidence using predictive methods for simple systems, and simulations for more complex systems. Of course, good market models are also required. The marginal probabilities p_s(t) of the most likely failure states are the derivatives of the failure state probabilities P_s(t) that are evaluated during the Markov calculation for
the generalized Performance. It is therefore through the failure compensation costs that Performance impacts the system lifetime costs. A higher Performance system will have a lower probability of transitioning to a failure state and consequently a lower expected value of the compensation costs.
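The expected value in Eqn 4.26 is a weighted sum over states and years. A sketch with a hypothetical two-state, three-year model (all probabilities and costs are invented for illustration, and assumed already discounted):

```python
def failure_compensation_cost(p_marginal, v_cost):
    """Expected failure compensation cost V_f (Eqn 4.26):
    sum over states and years of p_s(t) * v_s(t)."""
    return sum(p * v
               for ps, vs in zip(p_marginal, v_cost)
               for p, v in zip(ps, vs))

# p_marginal[s][t]: marginal probability of entering state s in year t.
p = [[0.01, 0.02, 0.03],     # state 1: transmitter-chain failure
     [0.001, 0.002, 0.004]]  # state 2: total satellite loss
# v_cost[s][t]: replacement, launch and opportunity costs in $M.
v = [[20, 18, 15],
     [250, 220, 180]]
print(failure_compensation_cost(p, v))  # expected compensation cost in $M
```

Even rare total-loss states can dominate V_f when their compensation costs (a replacement satellite and launch) are an order of magnitude larger than those of degraded-operation states.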
4.8.3 The System Capture

In a perfect market scenario, the system capture M_c (equivalent to the total number of satisfied users) can be chosen using the Capability characteristics and the Performance profiles. This would simply be the maximum number of users that the system could satisfy, given a set of requirements. Essentially there is a trade-off between providing basic service to a large number of users or ensuring high Performance to a small number of users. For example, a system that can serve a small number of users with a high probability could instead target a larger number of users at a lower (but still satisfactory) availability. This strategy carries the risk of being more sensitive to component failures, essentially incorporating less performance redundancy. The optimum strategy depends on the expected revenue and the estimated compensation costs, and in particular the opportunity costs associated with dissatisfied customers. Note however that it is usually incorrect to assume a perfect market, and it is then necessary to include a comparison to the size of the expected market. This step is called demand matching and is critical because a system cannot outperform the demand. Extra capacity beyond the market size brings no additional revenue or benefit, but may incur increased costs. Comparing the design capacity to the size of the local demand and taking the minimum gives the achievable capacity of the system. This is defined as the market capture. Since the size of the local demand Q is almost always time and spatially varying, the demand matching calculation involves an integration over the entire coverage region for each year of the satellite lifetime, to give a market capture profile m_c(t),
m_c(t) = \sum_{\text{market}} \min[\text{design capacity}, Q]    (4.27)
Recall that the Performance, and hence the failure compensation cost, depends strongly on the number of users addressed. Of course, the opportunity cost associated with lost revenue during down time is also dependent on the number of addressed users. The market capture profile can be used to determine the maximum number of users that can be served at different times over the lifetime of the satellite. A further complication arises if the system operation results in monetary income, as is the case for commercial communication systems. In this situation, the time value of money
means that there is also a bias in the relative "value" of market capture, with a weighting toward the start of the system's lifetime. In general, revenue should be earned as close as possible to the time that the associated costs are incurred. For example, revenue earned from the transmission of bits early in the life of a communication satellite is more important than revenue earned late in the lifetime. For this reason, for each year of the lifetime of the satellite, the capture profile m_c(t) must also be correctly discounted according to the same discount rate as was used to discount the costs. The total number of satisfied users or system capture M_c is then calculated by summing the capture profile over the entire lifetime of the system,
M_c = \sum_{\text{life}} m_c(t) \qquad (4.28)
Having determined the total system capture, the CPF can now be calculated,
\text{CPF} = \frac{C_L}{M_c} \qquad (4.29)
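The demand-matching and discounting steps of Eqns. 4.27-4.29 can be sketched in a few lines of Python. All capacity, demand, cost and discount-rate numbers below are invented for illustration; a real analysis would integrate the demand over the entire coverage region for each year.

```python
def market_capture(design_capacity, demand):
    """Eqn 4.27: achievable capture is the lesser of design capacity
    and local demand, summed over the market regions."""
    return sum(min(c, q) for c, q in zip(design_capacity, demand))

def system_capture(mc_profile, discount_rate, base_year):
    """Eqn 4.28 with time-value weighting: sum the capture profile over
    the lifetime, discounting each year back to base_year."""
    return sum(mc / (1.0 + discount_rate) ** (year - base_year)
               for year, mc in mc_profile.items())

# Invented two-region, three-year example
profile = {2000: market_capture([500, 500], [300, 900]),   # 800 users
           2001: market_capture([500, 500], [600, 900]),   # 1000 users
           2002: market_capture([500, 500], [700, 900])}   # 1000 users
Mc = system_capture(profile, 0.30, 1999)
CPF = 100e6 / Mc   # Eqn 4.29, with an assumed lifetime cost CL of $100M
```

Note that the first region's capture in 2000 is demand-limited (300 users) while the second is capacity-limited (500 users), exactly the min[] behavior of Eqn. 4.27.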
4.8.4 Example CPF Calculation for a Ka-Band Communication Satellite

The cost per billable T1-minute is the CPF metric used in the analysis of broadband satellite systems. It is the cost per billable T1-minute that the company needs to recover from customers through monthly service fees, ground equipment sales, etc., in order to achieve a specific (30%) internal rate of return. Once again referring to the example Ka-band system described in Table 4.1, the cost per billable T1-minute can be calculated from an estimate of the system's market capture and the system costs. The system is assumed to reach initial operating capability (IOC) in 1999, and to be active through the year 2010, requiring a satellite lifetime of 12 years. The calculations are all performed in fiscal year 1996 dollars (FY$96), since this would represent a reasonable project inception date, given an IOC in 1999. All costs are adjusted using the Office of the Secretary of Defense estimates [3], and discounted back to a present value in 1996 with a 30% discount rate. Consider first the evaluation of the achievable market capture of the system. The market capture depends on the size of the market accessible to the system and on the system Capability characteristics. The limiting effects of market demographics, access and exhaustion can be quantified only with an adequate market model. For an earlier study, Kelic, Shaw and Hastings [5] constructed several reasonable models for the global broadband communications market, based on current and projected internet usage and computer sales growth. Using these market models, computer simulations of several broadband satellite systems have been performed to estimate their market capture. Figure 4-14 shows the
resulting market capture profile for the modeled satellite.

Figure 4-14: Market capture profile for a modeled Ka-band communication satellite. The two curves, corresponding to an exponential-growth market model and a "last-mile" market model, represent different projections for the size and distribution of the European residential broadband market. [Plot not reproduced: simultaneous T1-connections (0 to 3000) versus year (1998 to 2012).]
The achievable capacity of the satellite initially grows as the market develops. After 2005, the market capture saturates at around 2800 simultaneous users. If additional users were addressed, the supported availability would drop below requirements, as seen in the Capability characteristics of Figure 4-11. The total market capture is the sum over all years of the market capture profile, after discounting at a rate of 30% per year to represent the net value of the revenue stream in 1996. For the exponential market model, the resulting market capture in equivalent fiscal year 1996 simultaneous T1-users is only 2560. Note that this discounted total is smaller than the true value in any individual year from 2004 onwards. This is a direct result of the diminishing value, in real terms, of any revenue earned later in the lifetime of commercial projects. The value of Mc used in the cost per billable T1-minute metric is then simply this number of equivalent simultaneous T1-users, multiplied by the total number of minutes in a year, so that Mc = 1.346 x 10^9 T1-minutes.

The total baseline cost of the satellite system is estimated including recurring and nonrecurring costs for development, construction, launch, insurance, gateways and control center operations, and terrestrial internet connections. The cost model used for this example is the same as that used by Kelic [5], drawing on industry experience and observed trends. The Theoretical First Unit (TFU) cost for communication satellites can be estimated reasonably well assuming $77,000 per kg of dry mass. The non-recurring development costs for commercial systems can be approximated at three to six times the TFU cost, depending on the heritage of the design. For this example, launch costs to GEO can be assumed at $29,000 per kg, with insurance at 20%. For linking to the terrestrial network, each OC-3 (155 Mbit/s) connection costs $8,500 for installation and $7,900 per month. This cost scales with the market capture. The expected failure compensation costs are calculated from the SVF probability profile, pf(t), shown in Figure 4-13, and the market capture curves of Figure 4-14. A satellite failure can be assumed to result in the loss of a single year's revenue, together with the cost of building and launching a replacement satellite. The calculation of the opportunity costs from lost revenue requires an assumption for the average service charge per user. A conservative estimate of $0.05 per T1-minute is used for this example. The baseline system costs and the failure compensation costs can be summed to give cL(t), the system cost profile. The baseline costs cs(t), the failure probability pf(t), the cost at stake in the event of a failure vs(t), the expected failure compensation vf(t) (approximately pf(t) x vs(t)), and the total system costs cL(t) = cs(t) + vf(t) are shown in Table 4.3.

Table 4.3: System cost profile for a single Ka-band communication satellite

Year   cs ($M)   pf      vs ($M)   vf ($M)   cL ($M)
1997   264.000   --       --        --       264.000
1998   264.000   --       --        --       264.000
1999   145.000   0.070    0.500     0.035    145.035
2000     1.000   0.067   14.720     0.980      1.980
2001     1.000   0.063   14.490     0.911      1.911
2002     2.000   0.059   14.360     0.852      2.852
2003     2.000   0.056   14.260     0.794      2.794
2004     3.000   0.052   14.320     0.749      3.749
2005     3.000   0.049   14.140     0.693      3.693
2006     3.000   0.046   13.780     0.630      3.630
2007     3.000   0.043   13.250     0.564      3.564
2008     3.000   0.040    8.170     0.324      3.324
2009     3.000   0.037    5.150     0.190      3.190
2010     3.000   0.034    2.440     0.083      3.083
Discounting the system cost profile at 30% per year gives the net present value of the costs in fiscal year 1996 dollars. Summing the discounted profile over all years gives the total lifetime cost, CL = $429M. The cost per billable T1-minute metric for this system, in an exponentially growing broadband market, is therefore simply,
\text{Cost per billable T1-minute} = \frac{C_L}{M_c} = \$0.32
This implies that the company must be able to charge users at least 32 cents per minute for this broadband service in order to obtain a 30% return on the investment.
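The discounting arithmetic of this example can be reproduced directly from the cost profile of Table 4.3. The sketch below, with the cL(t) values transcribed from the table, recovers the quoted lifetime cost and cost per billable T1-minute.

```python
# Total system cost profile cL(t) from Table 4.3, FY$96 millions, 1997-2010
c_L = [264.000, 264.000, 145.035, 1.980, 1.911, 2.852, 2.794,
       3.749, 3.693, 3.630, 3.564, 3.324, 3.190, 3.083]

# Net present value in 1996 at a 30% discount rate (Eqn 4.28 applied to cost)
CL = sum(c / 1.30 ** (year - 1996)
         for year, c in zip(range(1997, 2011), c_L))    # approx $429M

# Discounted market capture: 2560 equivalent simultaneous T1-users,
# converted to billable T1-minutes (525,600 minutes per year)
Mc = 2560 * 365 * 24 * 60                               # approx 1.346e9 T1-minutes

cost_per_T1_minute = CL * 1e6 / Mc                      # approx $0.32
```

Note how thoroughly the three early years dominate: the undiscounted 1997-1999 costs alone contribute about $425M of the $429M present value, which is why schedule slip during development is so damaging to the CPF.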
4.9 Utility of the Cost per Function Metric

Part of the utility of this cost per function metric is that it permits comparative analysis between different systems with large architectural differences, scaling their cost according to their Performance and market capture. Very large and ambitious systems can be fairly compared to smaller, more conservative systems. The cost per function metric can also be used to assess the potential benefits of incorporating new technology in spacecraft designs. New technology should only be included in the design of a new satellite if it can offer reduced cost or improved Performance. This can be evaluated with the metric, provided that both the cost and the expected reliability of the new technology can be estimated. Commonly, the largest problem encountered with incorporating new technology in space programs is schedule slip. This can have an adverse effect on the overall success of the program, extending the period of capital expenditure while delaying operations that bring revenue. These effects can also be captured by the cost per function metric. Some typical amount of program slip can be included in the cost profile cs(t), and the corresponding delay can be applied to the market capture profile. The combined effects of including the new technology will then be apparent, by comparing the cost per function metric to those corresponding to designs featuring more established technologies.
4.10 The Adaptability Metrics

The adaptability metrics judge how flexible a system is to changes in the requirements, component technologies, operational procedures or even the design mission. It is convenient to define two types of adaptability, different in both their meaning and their mathematical formulation.
Type 1 adaptability assesses the sensitivity of the Capability, cost and Performance of a given architecture to realistic changes in the system requirements or component technologies. A quantifiable measure of this sensitivity allows the system drivers to be identified, and can be used in comparative analyses between candidate architectures. As will be shown in this section, the mathematical form of the Type 1 adaptability also makes it entirely compatible with conventional economic analyses of commercial ventures. This adds enormous utility to the metric for investment decision-making and business planning.
Type 2 adaptability measures the flexibility of an architecture for performing a different mission, or at least an augmented mission set. This is particularly important for government-procured systems. In today's budget-controlled environment, expensive military and civilian space assets must be able to fulfill multiple mission needs cost-effectively. Each of these two types of adaptability has a quantifiable mathematical definition that is a simple extension of the CPF metric.
4.10.1 Type 1 Adaptability: Elasticities

Concisely stated, Type 1 adaptabilities represent the elasticity of the CPF metric with respect to changes in the requirements or the component technologies. Elasticity is a mathematical construction most often used in microeconomics. To introduce and formalize notation, it is valuable to briefly summarize the concept of elasticity within the conventional context of microeconomics. Elasticity is defined as the percentage change that will occur in one variable in response to a one percent change in another variable [46]. For example, the price elasticity of demand measures the sensitivity of the demand for a product to changes in its price, and can be written
E_p = \frac{\Delta Q / Q}{\Delta P / P} = \frac{P}{Q} \, \frac{\Delta Q}{\Delta P} \qquad (4.30)
where Q is the quantity of demand and P is the price. Most goods have negative elasticities, since price increases result in demand decreases. If the price elasticity is greater than one in magnitude, the demand is termed price elastic because the percentage change in the quantity demanded is greater than the percentage change in price. Consequently, a reduction in the price results in an increase in the total expenditure, since disproportionately more goods are sold. An increase in the price results in a reduction of total expenditure, as many fewer goods are sold. Conversely, if the price elasticity is less than one in magnitude, the demand is said to be price inelastic, and the opposite trends are observed. Finally, a value of unity for the elasticity implies that the total expenditure remains the same after price changes. Any price increase leads to a reduction in demand that is just sufficient to leave the total expenditure unchanged. Eqn. 4.30 specifies that the elasticity is related to the proportional changes in P and Q. The relative sizes of P and Q change at different points on the demand curve. Therefore the elasticity must be measured at a particular point, and will usually have very different values at different points along the curve. This of course means that the elasticity for a change in price from P1 to P2 can be quite different from the elasticity calculated in the other direction, from P2 to P1. To avoid this confusion, the arc elasticity represents the average elasticity over a small range,
\bar{E}_p = \frac{\Delta Q / Q}{\Delta P / P} = \frac{(P_1 + P_2)}{(Q_1 + Q_2)} \, \frac{\Delta Q}{\Delta P} \qquad (4.31)
The choice between using point elasticities and arc elasticities is really the prerogative of the engineer. In general, the arc elasticity is a more consistent measure of sensitivity. For the remainder of this document, the term elasticity is taken to imply arc elasticity, and the overbar is omitted from equations. Return now to the generalized analysis framework. Analogous to the elasticity of demand, the elasticity of the CPF metric is the percentage change in its value in response to a one percent change in some other relevant variable. The "relevant variable" here may be a system requirement, or a system component parameter. Indeed, it is straightforward to formulate the different requirement elasticities of the CPF at a given design point,
Isolation Elasticity:    E_{Is} = \frac{\Delta CPF / CPF}{\Delta Is / Is} \qquad (4.32)
Rate Elasticity:         E_{R} = \frac{\Delta CPF / CPF}{\Delta R / R} \qquad (4.33)
Integrity Elasticity:    E_{I} = \frac{\Delta CPF / CPF}{\Delta I / I} \qquad (4.34)
Availability Elasticity: E_{Av} = \frac{\Delta CPF / CPF}{\Delta Av / Av} \qquad (4.35)
where Is, R, I, and Av are the set of system requirements on isolation, rate, integrity and availability. Note that ΔCPF is the change in the CPF value as a result of changing a system requirement, and is formed by direct subtraction of the CPF values for the two different cases. However, the denominator of the CPF metric carries an implicit reference to these same system requirements, as discussed in Section 4.7. It is initially tempting, therefore, to question the validity of simply subtracting two CPF values that have entirely different denominators. The solution to this apparent problem lies in the fact that the CPF metric is defined as the cost per satisfied user. The denominators in all CPF metrics are therefore equivalent to a single user, and ΔCPF can be calculated directly. For example, consider the service options that can be provided by a broadband communication system. The cost per billable T1-minute can be compared directly with the cost per 1/4-T1-minute without any modifications. The difference in value ΔCPF represents the difference in cost that must be charged to each broadband user if the data rate provided to them is changed. In a similar fashion, the technology elasticities can be defined. These can be formed for any particular component of the system that may have an impact on the overall performance
or cost. Example technology elasticities are shown below,
Launch Cost Elasticity:      E_{C_{launch}} = \frac{\Delta CPF / CPF}{\Delta C_{launch} / C_{launch}} \qquad (4.36)
Manufacture Cost Elasticity: E_{C_{mfr}} = \frac{\Delta CPF / CPF}{\Delta C_{mfr} / C_{mfr}} \qquad (4.37)
Reliability Elasticity:      E_{R_s} = \frac{\Delta CPF / CPF}{\Delta R_s / R_s} \qquad (4.38)
where Claunch is the budgeted launch cost for the system, Cmfr is the manufacturing cost, and Rs is the satellite reliability. In each case, some technology is varied while the system requirements are held constant. Technology elasticities can be formed for each essential system component, reflecting the likely changes in available technology, or the variations in the system parameters that span the design trade space. This allows a quantifiable assessment of design decisions and can identify the most important technology drivers.
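A technology elasticity is computed by running the analysis at two design points and applying the arc form of Eqn. 4.36. The sketch below uses a deliberately toy cost model; the dry mass and "other costs" are invented round numbers chosen so the baseline lifetime cost matches the $429M of the Ka-band example, and Mc is taken from that example.

```python
def lifetime_cost(launch_per_kg, dry_mass_kg, other_costs):
    """Toy cost model: launch cost linear in $/kg, everything else lumped.
    A real GINA evaluation would rerun the full cost/Performance analysis."""
    return launch_per_kg * dry_mass_kg + other_costs

def cpf(launch_per_kg, Mc=1.346e9, dry_mass_kg=3000, other_costs=342e6):
    return lifetime_cost(launch_per_kg, dry_mass_kg, other_costs) / Mc

def launch_cost_elasticity(x1, x2):
    """Eqn 4.36 in arc form; the CPF values come from two full evaluations."""
    c1, c2 = cpf(x1), cpf(x2)
    return ((c2 - c1) / (x2 - x1)) * ((x1 + x2) / (c1 + c2))

# +10% excursion around the assumed $29,000/kg launch price
E_launch = launch_cost_elasticity(29_000, 31_900)   # about 0.21
```

Because the toy model is linear, the elasticity simply equals the launch share of total cost at the interval midpoint (about 21%), so the CPF is launch-cost inelastic here: a 1% launch price change moves the CPF only 0.21%.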
Utility of the Elasticities for Economic Analysis

The mathematical form of the elasticities is identical to the conventional elasticities used in econometric analysis. This allows the results from a generalized analysis of a proposed satellite system to be used in the investment decision-making process. For example, consider a broadband communication system that had been originally planned to provide users with 1/4-T1 connections. The marketing department then suggests that providing a full T1 connection would give the company a competitive advantage over all others in the marketplace. In addition, they have all the demand curves to prove it. The system engineer can respond by calculating the rate elasticity of the CPF, as described above, for a change from 1/4 T1 to full T1. Since the CPF represents the average cost to provide service to a user, it can be taken to be a surrogate for price. The rate elasticity of CPF (or price) can therefore be multiplied by the price elasticity of demand, calculated from the demand curves, to give some number X that represents the change in demand in response to the increase in price associated with improved service. Comparing this value to the rate elasticity of demand exhibited by the demand curves, a decision can be made about the rate that maximizes revenue. If X is higher than the rate elasticity of demand, then an increase in the rate results in a disproportionately larger increase in the price, driving away more customers than are attracted by the improved service. Marketing's idea to offer higher rates can be crushed by the system engineer. Alternatively, if X is smaller than the rate elasticity of demand, the engineer can confirm marketing's suggestion with quantitative numbers. Either way, the correct decision can be made, and the engineer looks good!
4.10.2 Type 2 Adaptability: Flexibility

Type 2 adaptability corresponds to the change in the CPF of a system as the design role is changed or augmented. Recall that a mission is defined by a market and a set of associated derived system requirements. A change in the design mission therefore represents a change in the market addressed and in all the system requirements. A classical elasticity formulation that relates a proportional response to proportional variations in the input cannot be constructed, because there is no obvious scalar representation of the input variations. Instead, the Flexibility F is simply defined to be the proportional change in the CPF in response to a particular mission modification,

F\big|_X = \frac{\Delta CPF}{CPF}\bigg|_X \qquad (4.39)

where X is just an identifier to specify the mission modification. This is a useful metric for comparing competing designs, since it measures just the sensitivity of the CPF to mission modifications, normalizing any differences in the absolute values of the initial CPFs. The flexibility can be an important factor in deciding between alternative architectures during the conceptual design phase of a program, especially if the mission is likely to change over the lifetime. For example, an architecture that is highly optimized for the baseline mission may have a low CPF but a very high flexibility value, implying it is very unsuited to performing any other modified mission. In all but the most predictable markets, a more prudent design choice would be a less optimized system with a lower flexibility, even at the expense of a higher CPF.
4.11 Truncated GINA for Qualitative Analysis

For purely qualitative analysis, the GINA methodology can be truncated significantly, while still providing the engineer with valuable insight. In particular, mapping the application into the generalized framework organizes the thought process and allows an unambiguous comparison to be made between competing architectures. The most important discriminators between the systems will be clearly apparent, allowing attention to be focused on the deficiencies or benefits of each architecture. For example, Table 4.4 shows a qualitative comparison of two very different architectures that have been proposed for a space based radar to detect ground moving targets. Discoverer-II, or simply "D-2" [47], proposed by the Defense Advanced Research Projects Agency (DARPA), the National Reconnaissance Office (NRO) and the Air Force, is a constellation of 24 satellites in LEO, each operating independently. The nominal design features satellites in the 1500 kg class, with peak RF power of 2 kW and antenna area of 40 m2, each costing less than $100M. Advanced radar processing techniques, such as Space-Time Adaptive Processing (STAP), will be used to cancel clutter for the Ground Moving Target Indicator (GMTI) mission, and principles of Synthetic Array Radar (SAR) will support terrain imaging. On the other hand, Techsat21 [48], as proposed by the Air Force Research Laboratory (AFRL), features symbiotic clusters of small satellites (approximately 100 kg, 200 W RF, 1 m2 of aperture) that form sparse arrays to perform the same mission. The number of clusters is at the moment undecided, depending on the eventual coverage requirements, but for comparison purposes can be taken to be the same as the number of satellites in D-2. The concept was introduced in Section 3.1.2 and is the dedicated focus of Chapter 7. Table 4.4 shows that there are several significant discriminators between these two architectures.

Table 4.4: Qualitative comparison between Techsat21 and Discoverer-II space based radar concepts using truncated GINA.

Classification
  Discoverer-II:  Collaborative constellation, ns = 1
  Techsat21:      Symbiotic clustellation, ns = 8-16

Isolation (clutter compensation)
  Discoverer-II:  Clutter cancellation through adaptive clutter processing, nulling, etc.
  Techsat21:      Clutter rejection through sparse aperture synthesis, giving a narrow main lobe and low sidelobes

Isolation (resolution)
  Discoverer-II:  Limited by PRF and antenna beamwidth
  Techsat21:      Limited by sparse aperture dimensions (cluster dimensions)

Rate (search rate)
  Discoverer-II:  Large aperture has a small FOV, and so supports a small ASR unless a small dwell time can be tolerated
  Techsat21:      Small apertures have a wide FOV that can be filled with multiple receive beams, so ASR can be high

Integrity (PD)
  Discoverer-II:  High power needed to overcome thermal noise
  Techsat21:      ns^2 coherent processing gain allows lower power transmitters

Availability
  Discoverer-II:  Dominated by coverage statistics (access, range to target, grazing angle)
  Techsat21:      Dominated by coverage statistics (access, range to target, grazing angle)

Performance
  Discoverer-II:  Depends on reliability and survivability of a single satellite
  Techsat21:      Improved reliability from built-in redundancy and graceful degradation

CPF
  Discoverer-II:  Moderate number of large satellites leads to baseline costs around $3B; poor Performance leads to high failure compensation costs
  Techsat21:      Large number of small satellites leads to baseline costs around $3B; higher Performance leads to smaller failure compensation costs

Adaptability
  Discoverer-II:  Resolution, rate and integrity are fixed by power and aperture resources; the system can easily support SAR imaging, but cannot perform airborne MTI
  Techsat21:      Capabilities can be improved by augmenting with more cluster satellites; imaging is supported, and AMTI is possible with more satellites
4.12 The GINA Procedure - Step-by-Step

The systematic procedure for applying the GINA methodology is summarized:

1. Define the mission objective. What is the real application of the system, in terms of the user needs?
2. Map the user needs into the generalized Capability parameters of isolation, rate, integrity and availability. These define the features of the information transfer that are perceived by the users as quality of service.
3. Construct the network representation from a functional decomposition of the system.
4. Determine the functional behavior of each module, in terms of what it does to impact the isolation, rate, integrity and availability. The modules generally interact with the information via the signal, noise and interference power.
5. Determine the statistical inputs to each module. Some of the modules require inputs relating to the system characteristics or other parameters, such as elevation angle, coverage or clutter statistics.
6. Choose a number of O-D pairs that will be served, and determine their isolation characteristics (domain of separation, spacing in that domain, signal spectrum, etc.).
7. For that number of O-D pairs, calculate the integrity of information transfers for a variety of rates. These are the Capability characteristics.
8. Set values for the Capability parameters corresponding to user requirements for the market.
9. Assign failure rates to each functional module that represents real hardware.
10. Use Markov modeling to calculate the state probabilities corresponding to different combinations of failed components. The sum of the probabilities for those states that satisfy requirements is the generalized Performance. Those states that do not satisfy requirements are the failure states.
11. Calculate the lifetime cost as the sum of the baseline cost and the failure compensation costs, which are the products of the failure state probabilities and the costs required to compensate for the failures.
12. For a realistic market scenario, calculate the market capture as the maximum number of users that can be addressed satisfactorily.
13. Calculate the CPF as the ratio of the lifetime cost and the market capture.
14. Calculate the Adaptability metrics by repeating the analysis after changing either a requirement or a technology.
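Steps 9-11 can be illustrated with the simplest possible reliability model: identical satellites failing independently at a constant rate, with Performance taken as the total probability of the states that still meet requirements. In this special case the Markov state probabilities collapse to a binomial sum; a real GINA model would track distinct component states and repair or replenishment transitions.

```python
from math import comb, exp

def performance(n_sats, k_required, failure_rate, t_years):
    """Steps 9-10: probability that at least k_required of n_sats
    satellites still operate at time t, assuming independent
    exponential failures (a degenerate Markov model)."""
    R = exp(-failure_rate * t_years)        # single-satellite reliability
    return sum(comb(n_sats, k) * R**k * (1 - R)**(n_sats - k)
               for k in range(k_required, n_sats + 1))

def lifetime_cost(baseline, replacement_cost, n_sats, failure_rate, years):
    """Step 11: baseline cost plus expected failure compensation, here
    priced as one replacement per expected satellite failure."""
    expected_failures = sum(n_sats * (exp(-failure_rate * (t - 1))
                                      - exp(-failure_rate * t))
                            for t in range(1, years + 1))
    return baseline + replacement_cost * expected_failures

# Invented example: 3 satellites, 2 needed, 5% annual failure rate
P5 = performance(3, 2, 0.05, 5.0)           # Performance after 5 years
```

With these invented numbers the Performance after five years is about 0.87, and it decays monotonically with time, which is exactly the behavior the SVF probability profile pf(t) of Figure 4-13 captures.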
4.13 Summary

A generalized analysis methodology has been developed that allows systems with dramatically different space system architectures to be compared fairly on the basis of cost and performance. The initial motivation was to undertake quantitative analyses of distributed satellite systems compared to traditional singular deployments. The framework is however very generalizable, and can be applied to all satellite missions in communications, sensing or navigation. The most important concepts of the Generalized Information Network Analysis (GINA) can be stated concisely:
Satellite systems are information transfer systems that serve O-D markets for the transfer of information symbols.
The Capabilities of a system are characterized by the isolation, rate, integrity and availability parameters.
Each market specifies minimum acceptable values for these Capability parameters. These are the functional requirements placed on the system.
Performance is the probability that the system instantaneously satisfies the top-level functional requirements. It is here that component reliabilities make an impact.
The Cost per Function (CPF) metric is a measure of the average cost to provide a satisfactory level of service to a single O-D pair within a defined market. The metric amortizes the total lifetime system cost over all satisfied users of the system during its life.
The Adaptability metrics measure the CPF sensitivity to changes in the requirements, component technologies, operational procedures or the design mission.
These concepts extend across almost all applications. In the next chapter the methodology is validated by applying it to the existing GPS system; it is then used in a comparative analysis of the proposed broadband communication systems, and finally in a design study of a military space based radar.

Footnote: A more detailed summary is included in the conclusions of Chapter 8.
Part II
Case Studies and Results
The previous chapters have introduced a generalized analysis framework for distributed and traditional satellite systems and have defined metrics for the quantification of cost and performance, capability and adaptability. It now remains to demonstrate the application of this methodology on some realistic space missions. All the results in the following chapters were produced using "GINALab", a Matlab/Simulink implementation of the generalized analysis methodology. Throughout the previous chapters, the generality of the approach has been stressed. To prove this claim, in the next chapters the GINA technique is applied to communications, remote sensing and navigation missions. Furthermore, in addition to demonstrating the utility of the GINA method for comparative analysis of different systems that compete in the same market, it is also shown how it may be used during the conceptual design process. The proposed broadband communication systems provide the context for the comparative analysis, while the design study addresses the military need for a space based radar to detect ground moving targets. First, though, the methodology must be validated by application to an existing distributed satellite system, giving results that are not only meaningful and reasonable, but also difficult to obtain by less sophisticated analysis techniques. The NAVSTAR Global Positioning System is ideal for this purpose, since it is a very complicated system with a large archive of measured data.
Footnote: GINALab is publicly releasable software developed by the author. To obtain a copy of the source code, contact Prof. David Miller, Space Systems Lab, Dept of Aeronautics & Astronautics, MIT, [email protected].
Chapter 5
The NAVSTAR Global Positioning System

5.1 System Overview

This section introduces the operational concept of the Global Positioning System, and is provided to familiarize the reader with the important issues before proceeding with the generalized analysis. A great deal of the text in this section is taken from the excellent references "Global Positioning System: Theory and Applications", edited by Bradford Parkinson and James Spilker [49], and "The Global Positioning System: A Shared National Asset" [50], a National Research Council report on possible future improvements to the system. "Over a long Labor Day weekend in 1973, a small group of armed forces officers and civilians, sequestered in the Pentagon, were completing a plan that would truly revolutionize navigation. It was based on radio ranging (eventually with millimeter precision) to a constellation of artificial satellites called the NAVSTARs. Instead of angular measurements to natural stars, [a method used by mariners for six thousand years] greater accuracy was anticipated with ranging measurements to the artificial NAVSTARs" [49]. The operational objectives of GPS were to provide:
- High-accuracy, real-time position, velocity and time for military users on a variety of platforms, some of which have high dynamics, e.g. high-performance aircraft. "High-accuracy" implied 20 m three-dimensional rms position accuracy or better.
- Good accuracy to civilian users. The objective for civil user position accuracy was originally taken to be 500 m or better in three dimensions.
- Worldwide, all-weather operation, 24 hours a day.
- Resistance to intentional (jamming) or unintentional interference for all users, with enhanced jamming resistance for military users.
- Affordable, reliable user equipment. This eliminates the possibility of requiring high-accuracy clocks or directional antennas on user equipment.
A quarter century later, the Global Positioning System (GPS) is almost identical to that proposed in 1973 (although it achieves better performance) and consists of three segments: the space segment, the control segment, and the user segment, as shown in Figure 5-1. The control segment tracks each NAVSTAR satellite and periodically uploads to the satellite its prediction of future satellite positions and satellite clock corrections. These predictions are then continuously transmitted by the satellite to the user as a part of the navigation message. The space segment consists of the 24 NAVSTAR satellites, each of which continuously transmits a ranging signal that includes the navigation message stating current position and time correction. The user receiver tracks the ranging signals of selected satellites and calculates a navigation solution [49].
Figure 5-1: The NAVSTAR GPS architecture (courtesy of the Aerospace Corporation) [50]
The fundamental navigation technique for GPS is to use one-way ranging from the GPS satellites. A ground receiver simultaneously tracks several satellites using a low gain antenna
feeding a bank of matched filters. Pseudoranges are measured to at least four satellites simultaneously in view by matching (correlating) the incoming signal with a user-generated replica signal and measuring the received phase against the user's (relatively crude) crystal clock [49]. The actual observable is a pseudorange, since it includes the user clock bias, ionospheric and tropospheric delays, plus relativistic effects and other measurement errors. The ionospheric group delay can be corrected by using dual frequency signals. The delay is proportional to the inverse square of the frequency, and so measurement at two frequencies allows its effect to be calculated. With ranges to four satellites and appropriate geometry, four unknowns can be determined: latitude, longitude, altitude, and a correction to the user's clock. If altitude or time are already known, a lesser number of satellites can be used [49].
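The four-unknown solution described above can be sketched as an iterated linearized least-squares solve for (x, y, z) and the clock bias expressed as a range, c*dt. The satellite geometry and bias below are invented for illustration, and atmospheric and relativistic corrections are ignored.

```python
import math

def solve4(A, b):
    """Solve a small linear system by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gps_fix(sats, pseudoranges, iterations=15):
    """Gauss-Newton solve for (x, y, z, c*dt) from >= 4 pseudoranges,
    starting from the Earth's center."""
    est = [0.0, 0.0, 0.0, 0.0]
    for _ in range(iterations):
        A, resid = [], []
        for (sx, sy, sz), pr in zip(sats, pseudoranges):
            dx, dy, dz = est[0] - sx, est[1] - sy, est[2] - sz
            rho = math.sqrt(dx * dx + dy * dy + dz * dz)
            A.append([dx / rho, dy / rho, dz / rho, 1.0])   # Jacobian row
            resid.append(pr - (rho + est[3]))               # range residual
        # normal equations: (A^T A) delta = A^T resid
        AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(4)]
               for i in range(4)]
        Atr = [sum(A[k][i] * resid[k] for k in range(len(A))) for i in range(4)]
        est = [e + d for e, d in zip(est, solve4(AtA, Atr))]
    return est

# Invented geometry: five satellites near GPS orbit radius, user on the
# Earth's surface with a 100 m clock-bias range error
sats = [(26560e3, 0.0, 0.0), (0.0, 26560e3, 0.0), (0.0, 0.0, 26560e3),
        (15000e3, 15000e3, 15000e3), (-13000e3, 16000e3, 14000e3)]
user, bias = (6371e3, 0.0, 0.0), 100.0
prs = [math.dist(user, s) + bias for s in sats]
fix = gps_fix(sats, prs)
```

With more than four satellites the system is overdetermined and the least-squares fit also absorbs measurement noise; with exactly four, the solve is exact and any pseudorange error maps directly into position and clock error.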
5.1.1 The GPS Space Segment

The GPS space segment consists of 24 satellites in 6 orbital planes. The period of the orbits is 12 sidereal hours and the inclination is 55 degrees. This configuration was determined from the requirements of full global four-fold coverage of the Earth. Geostationary orbits were not used so that, in addition to the code phase/delay measurements, carrier phase/Doppler methods could also be used for navigation solutions. The GPS satellites are three-axis stabilized, and use solar arrays for primary power. The ranging signal is transmitted using a shaped beam antenna to illuminate the Earth with the same signal power at all elevation angles. The satellite design is mostly doubly or triply redundant, and the Phase I satellites demonstrated average lifetimes in excess of 5 years (and in some cases over 12) [49]. The Block II/IIA satellites that currently populate the constellation were built by Rockwell International Satellite and Space Electronics Division, and were designed to operate for 7.5 years. A typical Block II/IIA GPS satellite is shown in Figure 5-2. One of the enabling technologies for GPS was the development of extremely accurate timing sources that were portable enough to be placed on satellites. Indeed, the placement of a very stable time reference in a position where users have maximum access is the basis for modern satellite navigation. The rubidium and cesium atomic frequency standards used in GPS allow all the satellite clocks to remain synchronized to within one part in 10^13 over periods of 1-10 days [49]. To give a sense of scale, this accuracy is equivalent to an error of about a centimeter in the distance between the Earth and the Sun.
Figure 5-2: A typical Block II/IIA GPS satellite (courtesy of the Aerospace Corporation) [50]
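The 12-sidereal-hour period fixes the orbit size through Kepler's third law; a quick check using standard constants (not values from the thesis) recovers the familiar GPS semi-major axis of about 26,560 km:

```python
import math

MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
T = 0.5 * 86164.1           # s, half a sidereal day (12 sidereal hours)

# Kepler's third law: T = 2*pi*sqrt(a^3/mu)  =>  a = (mu*T^2 / (4*pi^2))^(1/3)
a = (MU_EARTH * T**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
altitude_km = (a - 6378.137e3) / 1e3
# a comes out near 26,560 km, i.e. an altitude of roughly 20,200 km
```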
5.1.2 The GPS Ranging Signal

Each of the satellites transmits a ranging signal consisting of a low-rate (50 bits/sec) navigation message spread over a large bandwidth by a high-rate pseudorandom noise (PRN) code. The resulting signal is used to modulate a carrier at two frequencies within the L-band: a primary signal at 1575.42 MHz (L1) and a secondary broadcast at 1227.6 MHz (L2). These signals are generated synchronously, so that a user who receives both signals can directly calibrate the ionospheric group delay and apply appropriate corrections. The PRN spreading signals are chosen such that the signals from different satellites are orthogonal, providing a multiple access technique. Two different PRN codes are generated:

1. C/A or Clear Acquisition Code. This is a short PRN code with a period of 1023 bits, broadcast at a bit rate of 1.023 MHz. This is the principal civilian ranging signal, and is always broadcast in the clear (unencrypted). It is also used to acquire the much longer P-code. The use of the C/A code is called the Standard Positioning Service, or SPS. It is always available, although it may be somewhat degraded. At this time, and for the projected future, the C/A code is available only on L1 [49].

2. P or Precise Code. This is a very long code with a period of 37 weeks (reset at the beginning of each week) and a bit rate of 10.23 MHz, ten times that of the C/A code. Because of its higher modulation bandwidth, the P-code ranging signal is somewhat more precise. This signal provides the Precise Positioning Service, or PPS. The military has
encrypted this signal in such a way that renders it unavailable to the unauthorized user. This ensures that the code, unpredictable to the unauthorized user, cannot be spoofed.[1] This feature is known as antispoof, or AS. When encrypted, the P code becomes the Y code. Receivers that can decrypt the Y code are frequently called P/Y code receivers [49]. The L1 signal carries both the C/A and the P signals as in-phase and quadrature components. The L2 carrier is biphase modulated by either the C/A or the P signal. The frequency spectra of the GPS ranging signals are shown in Figure 5-3. This figure is not exactly correct, since the short-period C/A code has a discrete spectrum with line components spaced at the code epoch rate (the code repetition rate is 1 kHz). The more correct representation of the spectrum is discussed later during the generalized analysis.
Figure 5-3: Characteristics of the L1 and L2 signals (courtesy of the Aerospace Corporation) [50]
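The C/A code described above is a Gold code: the XOR of the outputs of two 10-stage maximal-length shift registers, with each satellite's PRN selected by a two-tap phase selector on the second register. A minimal sketch (the feedback taps follow the public GPS signal specification; the phase-selector pair (2, 6) shown is the standard assignment for PRN 1):

```python
def ca_code(phase_taps=(2, 6), n=1023):
    """Generate n chips of a GPS C/A (Gold) code.

    G1 feedback taps: stages 3, 10.  G2 feedback taps: stages 2, 3, 6, 8, 9, 10.
    The PRN-specific chip is G1's last stage XOR'd with two selected G2 stages.
    """
    g1 = [1] * 10          # both registers initialised to all ones
    g2 = [1] * 10
    t1, t2 = phase_taps
    chips = []
    for _ in range(n):
        chips.append(g1[9] ^ g2[t1 - 1] ^ g2[t2 - 1])
        fb1 = g1[2] ^ g1[9]                                   # stages 3, 10
        fb2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]   # 2,3,6,8,9,10
        g1 = [fb1] + g1[:9]
        g2 = [fb2] + g2[:9]
    return chips

prn1 = ca_code((2, 6))
# The first ten chips of PRN 1 are 1100100000 (octal 1440).
```

Correlating a received signal against each satellite's replica code is what lets all satellites share the same carrier frequencies, the multiple-access property noted above.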
Once the C/A code has been received and decorrelated, the navigation message contained therein specifies the satellite location, the correction necessary to apply to the spaceborne clock, the health of the satellite, the locations of the other satellites, and the information needed to lock on to the P code. The military operators of the system have the capability to intentionally degrade the accuracy of the C/A code by desynchronizing the satellite clock, or by incorporating small errors in the broadcast ephemeris. This degradation is called Selective Availability, or SA, and is intended to deny an adversary access to the high accuracy [...]

[...] 10^-10. The availability is poor at T1, dropping below 90% at a BER of only around 10^-4. This is a direct result of the fact that Cyberstar was not designed to support T1 rates on the uplink. The satellite antenna gain is some 4-5 dB lower than that of Spaceway, leading to Eb/N0's that are just too small to support high-integrity interpretation of the symbols.
Cyberstar 2, covering North America, has capabilities that are considerably worse than Cyberstar 1, dropping below 95% at a BER of 10^-9 for 1/4-T1 connections. This is due mostly to the poorer elevation angle statistics, as shown in Figure 6-3.
Cyberstar 3, covering Asia and the Pacific Rim, has similar capabilities to Cyberstar 1. It has marginally lower availabilities at high BER's due to the higher rain rates exhibited by this region, but has improved availability at high levels of integrity (BER < 10^-10) due to the higher average elevation angles (see Figure 6-3) that limit the free space loss.
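The sensitivity to antenna gain can be illustrated with the textbook BER expression for coherent BPSK/QPSK in additive white Gaussian noise, BER = Q(sqrt(2·Eb/N0)); the specific Eb/N0 values below are illustrative, not taken from the FCC filings:

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_bpsk(ebn0_db):
    """BER for coherent BPSK/QPSK in AWGN: Q(sqrt(2*Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return qfunc(math.sqrt(2.0 * ebn0))

# A 4.5 dB shortfall in Eb/N0 costs several orders of magnitude of integrity:
ber_high = ber_bpsk(13.0)        # a comfortable margin (illustrative)
ber_low = ber_bpsk(13.0 - 4.5)   # the same link, 4-5 dB down
```

With these illustrative numbers the degraded link's BER is worse by more than four orders of magnitude, which is why a 4-5 dB antenna-gain deficit is decisive at high integrity levels.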
Capability Characteristics: Spaceway

The Capability characteristics of the first four Spaceway satellites are shown in Figures 6-9 to 6-12, for the same two symbol rates (1/4-T1 and T1) and numbers of users (2500 and 3000). Only the first four satellites are shown, since Satellites 5-8 address the same market
Figure 6-6: The Capability Characteristics of Cyberstar1 in addressing the broadband communications market in Western Europe [two panels: Availability (0.9-1.0) vs. Integrity for 2500 and 3000 users, at rates 3.86E+05 and 1.544E+06]
Figure 6-7: The Capability Characteristics of Cyberstar2 in addressing the broadband communications market in North America [two panels: Availability (0.9-1.0) vs. Integrity for 2500 and 3000 users, at rates 3.86E+05 and 1.544E+06]
Figure 6-8: The Capability Characteristics of Cyberstar3 in addressing the broadband communications market in the Pacific Rim [two panels: Availability (0.9-1.0) vs. Integrity for 2500 and 3000 users, at rates 3.86E+05 and 1.544E+06]
segments, providing additional capacity as the market develops. Their capabilities should therefore be the same as those of the first four satellites. The capabilities for Spaceway are in general better than those for Cyberstar. The extra gain of the satellite antenna means that the SNR is higher and the system can support higher integrities at higher availabilities. Summarizing the behavior across the four Spaceway satellite regions:
Spaceway 1 and 2, which address the North American and Western European markets, can provide 2500 users with very high availabilities (over 98%) for both 1/4-T1 and T1 rates, over all BER's up to 10^-15. For 3000 users there is a drop in the availability at T1 due to queueing delays (the transponder capacity has been exceeded).
Spaceway 3, serving South America, has poor capabilities at T1 rates, due to the high rain rate and the fact that the elevation angle statistics have a significant tail at low elevation angles (15% probability of being less than 30°). The combination of these effects means that high levels of attenuation are likely, and so the availability of service at high rates and high levels of integrity drops to below 90%. This behavior is somewhat artificial, since the real system will feature dynamic power control for low elevation angles, which has not been modeled here.
Spaceway 4, targeting the Pacific Rim, exhibits behavior similar to Spaceway 1 and 2. Despite a high rain rate, the angle of elevation varies over only a small range between 40° and 50°. The attenuation is therefore limited, and the availability for 2500 users is over 97% for all BER's of interest.
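The elevation-angle penalty can be quantified geometrically: for a GEO satellite, the slant range, and with it the free-space loss, grows as the user's elevation angle drops. A sketch using standard constants (not thesis link-budget data):

```python
import math

RE = 6378.137e3      # m, Earth radius
H_GEO = 35786.0e3    # m, geostationary altitude

def slant_range(elev_deg, h=H_GEO, re=RE):
    """Slant range from a ground user at the given elevation angle to a
    satellite at altitude h (geometry of the Earth-centre triangle)."""
    e = math.radians(elev_deg)
    r = re + h
    return math.sqrt(r**2 - (re * math.cos(e))**2) - re * math.sin(e)

# Extra free-space loss for a 10-degree-elevation user versus the sub-
# satellite point:
extra_loss_db = 20.0 * math.log10(slant_range(10.0) / slant_range(90.0))
```

The geometric effect alone is about 1 dB of additional free-space loss at 10° elevation, before the longer atmospheric and rain path at low elevation is even considered; the rain term is what dominates the Spaceway 3 result.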
Capability Characteristics: Celestri

Finally, the Capability characteristics for the Celestri network are shown in Figure 6-13, for three different symbol rates up to 2.048 Mbit/s, and for 2000 and 3500 users per satellite. The most noticeable feature of the characteristics for Celestri is the apparent insensitivity to both symbol rate and BER. Recall that Celestri is designed to cater to users demanding high symbol rates (around 2 Mbit/s), and so there is a great deal of margin available for communication at lower rates. Notice, however, that even at the low rates the availability does not exceed 97%. This is entirely a result of the coverage statistics supported by Celestri. As shown in Figure 6-5, there is a finite (approximately 3%) probability that the highest satellite in view is still below 15°, meaning that those ground locations lie outside the antenna pattern of the satellite. Celestri therefore provides more even capabilities over the Earth, with no penalties for providing high rates, but the maximum
Figure 6-9: The Capability Characteristics of Spaceway1 in addressing the broadband communications market in North America [two panels: Availability (0.9-1.0) vs. Integrity for 2500 and 3000 users, at rates 3.86E+05 and 1.544E+06]
Figure 6-10: The Capability Characteristics of Spaceway2 in addressing the broadband communications market in Western Europe [two panels: Availability (0.9-1.0) vs. Integrity for 2500 and 3000 users, at rates 3.86E+05 and 1.544E+06]
Figure 6-11: The Capability Characteristics of Spaceway3 in addressing the broadband communications market in South America [two panels: Availability (0.9-1.0) vs. Integrity for 2500 and 3000 users, at rates 3.86E+05 and 1.544E+06]
Figure 6-12: The Capability Characteristics of Spaceway4 in addressing the broadband communications market in the Pacific Rim [two panels: Availability (0.9-1.0) vs. Integrity for 2500 and 3000 users, at rates 3.86E+05 and 1.544E+06]
Figure 6-13: The Capability Characteristics of the Celestri network in addressing the global broadband communications market (assumed average rain rate for ITU region D: Temperate) [two panels: Availability (0.9-1.0) vs. Integrity for 2000 and 3500 users, at rates 3.86E+05, 1.544E+06 and 2.048E+06]
availability is compromised somewhat. It might at first seem that a 3% loss of availability is insignificant, but this represents almost 45 minutes per day during which the system would not be available. Whether this is acceptable can be answered only when a set of market-derived requirements is defined. This is the subject of the next section.
6.2 Generalized Performance

The performance of the broadband systems can only be measured relative to a set of requirements that specify the minimum acceptable levels for the quality of service provided to the users. The broadband systems being compared are marketed as high-speed data connections for multimedia communications, and for these applications, T1 rates (1.544 Mbit/s) are considered acceptable and competitive with terrestrial services. It can be assumed that the broadband users will require at least 95% availability of service at these rates, with a BER no greater than 10^-9. Unfortunately, the results of the previous section showed that Cyberstar cannot satisfy these requirements, due to a link budget that provides insufficient Eb/N0 for these low BER's. The satellite antenna gains for the spot beams as specified in the FCC filing are suspiciously 4-5 dB lower than those of Spaceway, which target similar geographical locations from the same orbital altitude. It is the author's belief that either: (1) the satellite gains specified in the filing are unrepresentative of the real system; or (2) Loral and Qualcomm (a likely partner in this venture) will use a sophisticated error correction scheme to achieve an additional 3-4 dB of margin. This is the approach that Celestri has taken, using advanced convolutional codes to give approximately 6 dB of coding gain. An assumption is therefore made that Cyberstar will make changes to the design specified in the FCC filing such that it can satisfy reasonable market requirements for broadband applications. This assumption at least allows us to proceed with a comparative analysis based on cost and performance.[4] The capabilities of Spaceway 3 (South America) are also calculated to be inadequate at T1 rates due to the low angle of elevation to some ground locations, but as stated earlier, this behavior may be corrected with dynamic power allocation.
[Footnote 4: Specifying requirements for a rate of 386 kbit/s would have obviated the need for this assumption. However, the market models that will later be used to calculate the CPF metric were constructed based on the number of T1 users.]

Recall that the performance measures the probability of being in an operational state, where "operational" is defined to be a system state that complies with the system requirements. This is one area where there is a difference between the GEO and LEO systems. Consider first the GEO systems, in which there are basically two different types of failure
state: those that correspond to degraded payload operations that violate the requirements, and those that constitute a total loss of the satellite. As was discussed in Chapter 4, the level of payload redundancy (SSPA's etc.) on the GEO satellites is so high as to make degraded payload operations an unlikely failure mode; Spaceway, for example, has 64 transmitter amplifiers driving the 48 spot beams, so a total of 16 transmitters can fail before even a single spot beam is lost. For the GEO systems, then, the dominant failure mode is that which involves a total loss of satellite operation. These satellite vehicle failures (SVF) can involve failures in the propulsion subsystem, the guidance and navigation subsystem (G&N), the spacecraft control computer (SCC), etc. Modeling the spacecraft bus to include two parallel SCC's, two G&N's, and an integrated bus module representing propulsion, power and structural components, and using the failure rates given in SMAD [3], as discussed in Chapter 4, the probability of failure for a representative GEO broadband communication satellite is plotted as a function of time in Figure 6-14. Each of the different subsystems contributes to the overall probability of failure, but with the failure rates used, the system performance is dominated by the reliability of the integrated bus module. This results in a system failure probability that exceeds 50% after 10 years on orbit. For this analysis it can be assumed that each of the GEO satellites from either Spaceway or Cyberstar has a similar reliability profile, approximated by Figure 6-14. This sets the performance of the GEO systems for the broadband mission. Note that the satellite must be able to satisfy requirements when everything works, but beyond that, the performance is insensitive to the specific requirement values, because the satellite either works or it doesn't. In this way, setting less stringent system requirements has no impact on the performance of the system.
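The redundancy arithmetic behind this result can be sketched with constant failure rates (exponential reliability): two parallel SCC's, two parallel G&N's, and a single-string integrated bus in series. The rates below are the SMAD-derived values quoted later in this section, applied here as an illustrative check rather than a reproduction of the thesis model:

```python
import math

# Failure rates per year (SMAD-derived values quoted in the text)
LAMBDA_SCC = 0.0246
LAMBDA_GN = 0.0136
LAMBDA_BUS = 0.036 + 0.035   # integrated power + propulsion module

def reliability_parallel_pair(lam, t):
    """Reliability of two redundant units, either of which suffices."""
    r = math.exp(-lam * t)
    return 1.0 - (1.0 - r) ** 2

def p_fail_satellite(t):
    """Probability the (series) satellite has failed by time t (years)."""
    r = (reliability_parallel_pair(LAMBDA_SCC, t)
         * reliability_parallel_pair(LAMBDA_GN, t)
         * math.exp(-LAMBDA_BUS * t))      # single-string bus in series
    return 1.0 - r

pf10 = p_fail_satellite(10.0)
```

With these rates the 10-year failure probability comes out just over 50%, driven almost entirely by the non-redundant integrated bus module, consistent with the behavior described for Figure 6-14.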
This behavior is in stark contrast to that of the LEO Celestri system. Here, the availability is directly related to the number of operational satellites, and to a certain degree the Celestri constellation can suffer some number of satellite failures without compromising the availability requirement. Eventually, however, the coverage statistics of the degraded constellation become so poor that the availability drops below the minimum acceptable level. For the specified availability requirement of 95% at T1 and 10^-9 BER, this occurs after the constellation loses eight or more satellites, that is, all of its seven spares and any other. This is shown by the Capability characteristics for the degraded constellation, plotted in Figure 6-15. This result was calculated in the same way as the other characteristics, but with the elevation angle statistics of the 62-satellite constellation. The probability of losing a total of 8 satellites from the entire constellation is plotted in Figure 6-16. In creating this chart, it has been assumed that one out of every ten failures in the G&N, SCC, power or propulsion units results in a loss of the spacecraft. The failure
[Figure 6-14 legend: Failure state 1 = (2×SCC); 2 = (1×G&N, 2×SCC); 3 = (2×G&N); 4 = (2×G&N, 1×SCC); 5 = (1×Bus); 6 = (1×Bus, 1×SCC); 7 = (1×Bus, 1×G&N); 8 = (1×Bus, 1×G&N, 1×SCC)]

Figure 6-14: Failure state probabilities for a typical (modeled) Ka-band GEO communication satellite [panel: probability vs. time (0-10 years), showing total Pf and failure states FS1-FS8]
Figure 6-15: The Capability Characteristics of the degraded Celestri network after losing all seven spares and any other satellite [two panels, model celestri62: Availability (0.9-1.0) vs. Integrity for 2000 and 3500 users, at rates 3.86E+05, 1.544E+06 and 2.048E+06]
rates, taken from SMAD [3], are λ_SCC = 0.0246, λ_G&N = 0.0136, λ_power = 0.036, and λ_prop = 0.035, all per year.

Figure 6-16: Failure probability for the Celestri constellation, relative to a 95% availability requirement for T1 connections, 10^-9 BER [panel: probability of 8 or more SVF vs. year, 2002-2010]
The probability of losing these 8 satellites, and thereby failing the requirements, is significant, and approaches unity after only a few years. After this time, regular replacements must be launched to maintain operations.
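The constellation-level probability is a binomial tail: each of Celestri's 70 satellites (63 active plus 7 spares) fails independently with a time-growing probability, and requirements are violated once eight or more are lost. The sketch below applies the stated one-in-ten fatal-failure assumption; the absolute values depend on modeling details not reproduced here, so only the trend should be read from it:

```python
import math

# One in ten unit failures (G&N, SCC, power, propulsion) is assumed fatal.
LAMBDA_SV = 0.1 * (0.0246 + 0.0136 + 0.036 + 0.035)  # fatal failures/year
N_SATS = 70                                          # 63 active + 7 spares

def p_eight_or_more_failed(years, n=N_SATS, lam=LAMBDA_SV):
    """Binomial probability that at least 8 of n satellites have failed
    by the given time, each with p = 1 - exp(-lam*t)."""
    p = 1.0 - math.exp(-lam * years)
    p_lt8 = sum(math.comb(n, k) * p**k * (1.0 - p) ** (n - k)
                for k in range(8))
    return 1.0 - p_lt8
```

The probability grows monotonically as the constellation ages, which is the shape of the curve in Figure 6-16.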
6.3 The CPF Metric: The Cost per Billable T1-Minute

The cost per billable T1-minute is the CPF metric used in the analysis of broadband satellite systems. It is the cost per billable T1-minute that the company needs to recover from customers through monthly service fees, ground equipment sales, etc., in order to achieve a specific (30%) internal rate of return. The cost per billable T1-minute can be calculated from an estimate of the system's market capture and the system costs. The market capture, or achievable capacity, depends on the size of the market accessible to the system and on the system capability characteristics.
6.3.1 Modeling the Broadband Market

Recently, a flurry of studies has been conducted to quantify user behavior on the internet. Many corporations are interested in tapping into the sales potential that exists on the internet, and these studies range from internet usage to user demographics and purchasing patterns. Historical data exist on the traffic that traversed the old National
Science Foundation (NSF) backbone between 1991 and 1995. Drawing from these studies and through independent research, market models for the broadband communications systems were constructed by Kelic, Shaw and Hastings [5] for a 1995 study of these same satellite systems. Three different market scenarios were developed to attempt to simulate the potential growth of the broadband market. A third-order growth model and an exponential growth model were developed by projecting the NSF data forward. The third-order model is a very conservative estimate; the growth of internet commerce and the beginnings of internet telephony suggest that the third-order market is an unlikely scenario. The results presented in this chapter do not include the third-order market model. The exponential model is considered an upper bound, since the internet is still in its infancy and growth rates of technology are typically exponential in the early years before beginning to level off. Since the third-order and exponential market models are based on projections of the growth of internet traffic, they represent a volume of data symbols in the market. To obtain the market growth models for broadband users, the total volume in bits was divided by the connection speed for a typical user, taken to be T1 (1.544 Mbit/s). The final market model is based on computer growth trends. This "last-mile" market is the difference between the growth of computers worldwide and the growth of internet hosts. Providing the "last-mile" link from local providers to these unconnected computers is a potential market for satellite broadband data services. This market model predicts the number of broadband users directly. These three growth models are shown in Figure 6-17, in terms of the total number of simultaneous broadband T1 connections (1.544 Mbit/s).

Figure 6-17: Broadband market growth models [panel: number of simultaneous T1-connections (1.0E+02 to 1.0E+07, log scale) vs. year (2000-2010); exponential, last-mile and third-order models]
Data for each of the markets were globally distributed among countries based on either GDP or GDP per capita.[5] Population density was used to distribute the market within countries. The result is a map of the predicted broadband market, discretized into 5° longitude/latitude cells, for each year from 2000-2012. An example distribution is shown in Figure 6-18. These market models are used to calculate the market capture of the modeled systems.

Figure 6-18: The last-mile market in 2005, GDP distribution [world map of the market per 5° cell, log color scale]
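The two-stage allocation described above (countries weighted by GDP, then each country's 5° cells weighted by population density) can be sketched as follows; the country names, GDP figures and densities are toy values, not the thesis' data:

```python
def distribute_market(total_users, gdp_by_country, popdensity_by_cell):
    """Split a global market among countries by GDP share, then among
    each country's 5-degree cells by population-density share."""
    gdp_total = sum(gdp_by_country.values())
    market = {}
    for country, gdp in gdp_by_country.items():
        national = total_users * gdp / gdp_total
        cells = popdensity_by_cell[country]
        dens_total = sum(cells.values())
        for cell, dens in cells.items():
            market[cell] = national * dens / dens_total
    return market

# Toy example: two countries, three cells.
m = distribute_market(
    1000,
    {"A": 3.0, "B": 1.0},
    {"A": {"a1": 2.0, "a2": 2.0}, "B": {"b1": 5.0}},
)
# m == {"a1": 375.0, "a2": 375.0, "b1": 250.0}
```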
6.3.2 Calculating the market capture The limiting eects of market demographics, access and exhaustion can be quanti ed only with a simulation of the satellite system in a realistic market scenario. For an earlier study [5], a computer tool was developed that performs this function, simulating system operation and calculating the achievable capacity.6 The program propagates satellites and projects spot beams, with beamwidths corresponding to antenna gains, onto the Earth from the satellite positions. The beam patterns 5 The results presented in this chapter re ect only the GDP distributions,
since the previous study showed only minor dierences in the results for the two dierent distributions. 6 This program has been updated and modi ed to run under Matlab on any PC. It is a gui-clad complete simulation package including the calculations of link budgets, rain attenuation, cross-channel interference, and market access. The SkynetPro executable is publicly releasable, and may be obtained by contacting Prof David Miller, MIT Space Systems Lab,
[email protected]
194
Figure 6-19: Cyberstar's market capture map; exponential market model in 2005, GDP distribution:2400GMT for the systems were modeled using information given in the FCC lings. The market accessible to each beam is then calculated, using the market models described in the previous section. The realistically achievable capacity for each channel is the minimum of the supportable capacity of the beam, in terms of users, and the size of the market to which it has access. Example outputs from this program for Cyberstar and Celestri are shown in Figures 6-19 and 6-20 respectively. Referring rst to Figure 6-19, the image shows the projection of Cyberstar's 27 spot beams onto the Earth for the European, North American and Asian regional satellites, de ning the coverage regions used to calculate the achievable capacity. Notice that the beamwidths vary across the patterns, since some beams have higher gains to counteract the higher rain attenuation in those regions. The shading of these beams indicates the amount of market captured at a given instant (in this case 2400GMT), with lighter colors representing a larger number. Figure 6-20 is the corresponding result for the Celestri model, but at 1200GMT. Only the satellites over land masses are shown in this plot. Note that the spot beams are very small and numerous. Note that at this instant the system is well used in Europe where it is noon-time, but not in the United States, where it is still very early morning. The total market capture for a particular instant is the sum of the achievable capacities 195
Figure 6-20: Celestri's market capture map; exponential market model in 2005, GDP
distribution; 1200GMT
for all of the spot beams. This is calculated at several times over the day and then averaged to account for daily usage behavior. The simulation is performed for each year of the system lifetime for each market model to give the market capture pro les.
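The achievable-capacity rule just described (each beam captures the minimum of its link capacity and its accessible market, summed over beams and averaged over daily snapshots) reduces to a few lines; this is a sketch of the rule only, not of the SkynetPro implementation:

```python
def achievable_capacity(snapshots):
    """Average achievable capacity over daily snapshots.

    Each snapshot is a list of (beam_capacity_users, accessible_market_users)
    pairs, one per spot beam; each beam contributes the smaller of the two."""
    totals = []
    for beams in snapshots:
        totals.append(sum(min(cap, market) for cap, market in beams))
    return sum(totals) / len(totals)

# Toy example: two snapshots of a two-beam system.
snapshots = [
    [(3000, 1200), (3000, 4000)],  # beam 1 market-limited, beam 2 capacity-limited
    [(3000, 500), (3000, 100)],    # immature market: both beams under-utilized
]
avg = achievable_capacity(snapshots)
# avg == 2400.0
```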
Market Capture: Cyberstar

Cyberstar was simulated for the different market scenarios over its expected lifetime. The years of the simulation ran from 2000 to 2012. The deployment strategy assumed for the simulations is the same as that outlined in the FCC filing: the North American satellite is deployed in 2000, and the European and then Asian satellites are launched in 2002 and 2003. The achievable system capacity, assuming this nominal deployment strategy, is shown as a function of time in Figure 6-21. The exponential and last-mile market models result in similar achievable capacity profiles. Initially the system capacity is small over all the market models, with only the North American market being accessed. During this early period when the market is immature, the available market is generally small compared to the link capacity of the spot beams. The accessible market is therefore small, and even with only one satellite operational, the system is under-utilized. This of course implies that the system will bring in poor revenue during the early years, a fact only compounded by the large expenditures incurred during
the beginning of the program. The last mile market gives the largest capacity during the early years. During the deployment period, the achievable capacity of the system increases rapidly for this last mile market scenario. By the time the total complement of satellites has been launched in 2003, the achievable capacity of the system has begun to approach the saturated design capacity. The system shows a slower increase in capacity under the exponential model, approaching saturation later, in 2009.

Figure 6-21: The market capture profile for the Cyberstar system [panel: T1-minutes per year (0 to 6.0E+09) vs. year (2000-2010); exponential and last-mile markets]

Market Capture: Spaceway

Spaceway was simulated for the different market scenarios over its expected lifetime from 1999 to 2012. The deployment strategy assumed for the simulation is the same as that outlined in the FCC filing:

1999 - North America (1) and (5), Europe (2), Asia (4), South America (3)
2000 - Europe (6), Asia (8), South America (7)

The achievable system capacity for the full deployment of Spaceway is shown as a function of time in Figure 6-22. Each of the three market models results in a different, smooth curve, with no discernible performance plateaus. This behavior is a direct result of the early deployment of the full Spaceway constellation. Once all satellite resources are on orbit by 2000, the achievable capacity closely follows the maturation curve of the markets, until system saturation occurs. The last mile market gives the largest capacity during the early years (1999-2003), and increases steadily toward saturation in 2008. The system capacity for the exponential market increases in a correspondingly exponential way through the middle period of the simulation (2002-2005), and reaches saturation around 2007.

Figure 6-22: The market capture profile for the Spaceway system [panel: T1-minutes per year (0 to 1.20E+10) vs. year (1998-2010); exponential and last-mile markets]

Market Capture: Celestri

The Celestri system was simulated assuming the deployment schedule in the FCC filing, giving an IOC in 2003. The achievable market capture profiles for both market models are shown in Figure 6-23.

Figure 6-23: The market capture profile for the Celestri system; both market models; GDP distribution [panel: T1-minutes per year (1.0E+10 to 8.0E+10) vs. year from 2003]

Neither of the capacity profiles shows saturation. This means that over the entire lifetime of the system, Celestri has a larger link capacity than the global market can support. The trends shown in the figure are a direct consequence of this. The last mile market gives the largest achievable capacity until 2005; the capacity for the exponential market is the largest after 2005. It is interesting to compare these trends with the trends of the actual market growth models, shown in Figure 6-17. After accounting for the different scales (T1-connections or T1-minutes per year), the graphs are almost identical in shape and very close in magnitude. This means that Celestri essentially absorbs most of the available market, over the entire globe. The implication of these trends is that, at least for the early years of the broadband market, Celestri is over-designed. In actuality, this gives Celestri considerable headroom to complement its revenue with the telephony market. The largest market for satellite telephony lies in the same underdeveloped regions of the world in which Celestri has spare link capacity. This is an efficient use of the available resources and should give a large potential revenue.
Market Capture by Each Satellite For additional insight, and to assist in the calculation of failure compensation in the next section, these market capture pro les can be broken down into that of each individual satellite. These are shown in Figures 6-24{6-26. 3500
Simultaneous T1-connections
3000
2500
2000 Sat1: Europe Sat2: Noth America Sat3: Asia
1500
1000
500 2000
2002
2004
2006 Year
2008
2010
2012
Figure 6-24: The market capture pro les of the Cyberstar satellites; exponential market;
GDP distribution
For example purposes, consider the Spaceway satellites shown in Figure 6-25. After 2005, both of the USA and one of European satellites saturates at around 2800 simultaneous users. If additional users were addressed, the supported availability would drop below requirements, as seen in the Capability characteristics of Figure 6-9. However, in the same 199
0.00E+00 2000
5.00E+02
1.00E+03
1.50E+03
2.00E+03
2.50E+03
3.00E+03
2002
2004
2006 Year
2008
2010
Sat1: USA Sat2: Europe Sat3: S.America Sat4: Pacific Rim Sat5: USA Sat6: Europe/Africa Sat7: S. America/USA Sat8: Pacific Rim 2012
200
6.3.3 System cost

The calculations are all performed in fiscal year 1996 dollars (FY96$) since this represents the project inception date for at least Spaceway and Cyberstar. All costs are adjusted using the Office of the Secretary of Defense estimates [3], and discounted back to a present value in 1996 with a 30% discount rate. The total baseline cost of each satellite system is estimated including recurring and non-recurring costs for development, construction, launch, insurance, gateways and control center operations, and terrestrial internet connections. The cost model used for this example is the same as that used by Kelic [5], drawing on industry experience and observed trends. The Theoretical First Unit (TFU) cost for communication satellites can be estimated reasonably well assuming $77,000 per kg of dry mass. The non-recurring development costs for commercial systems can be approximated at three to six times the TFU cost, depending on the heritage of the design. Launch costs to GEO can be assumed to be $29,000 per kg, with insurance at 20%. Celestri can expect launch costs around $10,000 per kg to LEO, with the same 20% insurance. For linking to the terrestrial network, each OC-3 (155 Mbit/s)
year, the South American satellite has a very small market capture due to the immaturity of the market there. This suggests that some resources have been allocated unwisely. Satellite resources are being wasted over South America, where they are under-utilized, while the markets in the USA and Europe could support an increased service. A decision to reallocate resources would surely result in an increased system capacity, if more spectrum could be made available.
Figure 6-25: The market capture profile for the Spaceway satellites; exponential market; GDP distribution. (Vertical axis: number of simultaneous users.)
(Figure 6-26 plot: simultaneous T1-connections, 0 to 3500, versus year, 2000-2010, for the average Celestri SV.)
connection costs $8,500 for installation and $7,900 per month. This cost scales with the market capture. The expected failure compensation costs are calculated from the satellite failure probability profiles and the market capture curves. For the GEO systems, a satellite failure can be assumed to result in the loss of a single year's revenue, together with the cost of building and launching a replacement satellite. The calculation of the opportunity costs from lost revenue requires an assumption for the average service charge per user. A conservative estimate of $0.05 per T1 connection is used for this example. For the LEO Celestri system, there are no opportunity costs, and replacements are made only after 8 satellites are lost, but must then continue throughout the system lifetime to maintain a constellation of at least 63 satellites. The baseline system cost and the failure compensation costs can be summed to give c_L, the system cost profile. The baseline costs c_s(t), failure compensation costs v_f(t), and total system costs c_L(t) are shown for each system (for an exponential market) in Tables 6.4 to 6.6. Discounting the system cost profiles at 30% per year gives the net present value of the costs in fiscal year 1996 dollars. Summing over all years of the discounted profile gives the total lifetime cost, C_L. These are given below in Table 6.7.

6.3.4 Cost per Billable T1-Minute Results

The system lifetime costs and the total market capture are used to calculate the CPF metric. The cost per billable T1-minute for each of the systems, across the exponential and last-mile market scenarios, is shown in Figure 6-27.
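The cost rules of thumb and the discounting step described above can be sketched in a few lines. The $77,000/kg TFU figure, $29,000/kg GEO launch cost, 20% insurance, 30% discount rate and the Cyberstar c_L(t) profile come from the text (Table 6.4); the 900 kg dry mass and the development multiplier of 4 are illustrative assumptions only, and applying the launch cost to dry mass is a simplification. Note that the published lifetime totals in Table 6.7 also fold in inflation adjustments, so this sketch will not reproduce them exactly.

```python
def satellite_cost_estimate(dry_mass_kg, dev_multiplier=4.0,
                            launch_cost_per_kg=29_000, insurance_rate=0.20):
    """Rough FY96$ satellite cost from the text's rules of thumb.
    dev_multiplier is assumed (the text gives 3-6, depending on heritage)."""
    tfu = 77_000 * dry_mass_kg                 # Theoretical First Unit cost
    development = dev_multiplier * tfu         # non-recurring development
    launch = launch_cost_per_kg * dry_mass_kg  # simplification: dry mass used
    return tfu + development + launch * (1 + insurance_rate)

def lifetime_cost(profile, base_year, discount_rate=0.30):
    """Net present value (in base_year dollars) of a {year: cost} profile."""
    return sum(cost / (1 + discount_rate) ** (year - base_year)
               for year, cost in profile.items())

# c_L(t) for Cyberstar in $M, constant-year FY96$ (Table 6.4)
c_L = {1996: 81.96, 1997: 329.39, 1998: 377.25, 1999: 192.23,
       2000: 264.08, 2001: 69.00, 2002: 202.43, 2003: 222.43,
       2004: 79.08, 2005: 77.47, 2006: 75.18, 2007: 72.68,
       2008: 70.48, 2009: 56.83, 2010: 44.79}

print(f"900 kg GEO satellite: ${satellite_cost_estimate(900)/1e6:.0f}M")
npv = lifetime_cost(c_L, base_year=1996)
print(f"NPV of Cyberstar cost profile: ${npv:.0f}M")
```

The heavy 30% discounting is what makes early-year costs dominate the lifetime total, a point the elasticity results later in the chapter return to.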
Figure 6-26: The market capture profile for a typical Celestri satellite; exponential market; GDP distribution. (Vertical axis: simultaneous T1-connections.)
Table 6.4: System cost profile for Cyberstar; constant year FY96$

    Year    c_s ($M)   v_f ($M)   c_L ($M)
    1996      81.96       0.00      81.96
    1997     329.39       0.00     329.39
    1998     377.25       0.00     377.25
    1999     192.23       0.00     192.23
    2000     262.22       1.86     264.08
    2001      55.53      13.46      69.00
    2002     185.33      17.10     202.43
    2003     188.40      34.03     222.43
    2004      33.94      45.14      79.08
    2005      34.24      43.23      77.47
    2006      34.31      40.88      75.18
    2007      34.30      38.38      72.68
    2008      34.30      36.18      70.48
    2009      34.30      22.53      56.83
    2010      34.30      10.49      44.79
This is perhaps the most important chart in this chapter, and is deserving of some discussion. As can be seen, there is only a small difference in absolute terms in the cost per billable T1-minute across the systems, varying by at most 10 cents. This is characteristic of the high fixed costs that dominate these ventures. Summarizing the trends in the chart:
- Spaceway shows a large variation in the CPF between the market scenarios, with the exponential market giving high values for the CPF, due to the early deployment schedule of the system. Before 2005, the exponential market is immature and so the system can achieve only a low market capture to offset the high net value of the costs incurred before IOC. The last-mile market is more developed in the early years, leading to a higher utilization of the system and consequently a considerably lower CPF. Spaceway is therefore very sensitive to how the market develops, and should revise its deployment schedule to match future predictions of the market as it develops.
- Cyberstar shows a smaller variation in the CPF across markets, but has a higher average value. The small variation is due mostly to the delayed deployment of space assets, but is also a result of the fact that Cyberstar saturates very quickly under almost any reasonable market scenario. The system is relatively modest compared to the other systems, and does not need to capture a large market share to fully saturate its transponders. Of course, the smaller capacity means that the system cannot amortize the high fixed costs over as many users, and so the average CPF is higher, perhaps leading to smaller profit margins in a competitive environment. Essentially, the modest size of Cyberstar makes the venture a little less risky, but comes at the cost of smaller returns.
Table 6.5: System cost profile for Spaceway; constant year FY96$

    Year    c_s ($M)   v_f ($M)   c_L ($M)
    1996     286.78       0.00     286.78
    1997     817.88       0.00     817.88
    1998     650.44       0.00     650.44
    1999     636.18       2.19     638.38
    2000     324.11      91.96     416.07
    2001      36.02     143.79     179.80
    2002      39.56     140.76     180.32
    2003      43.29     138.54     181.83
    2004      47.40     136.88     184.27
    2005      50.85     135.26     186.11
    2006      52.30     130.14     182.44
    2007      52.87     122.74     175.61
    2008      53.12      72.86     125.98
    2009      53.10      45.27      98.36
    2010      53.20      21.04      74.25
Table 6.6: System cost profile for Celestri; constant year FY96$

    Year    c_s ($M)   v_f ($M)   c_L ($M)
    1999     593.33       0.00     593.33
    2000    2272.57       0.00    2272.57
    2001    3024.83       0.00    3024.83
    2002    3505.16       0.00    3505.16
    2003      55.02     351.49     406.51
    2004      67.83    1114.44    1182.27
    2005      86.66     691.72     778.38
    2006     107.22     591.96     699.19
    2007     127.19     536.74     663.93
    2008     145.75     487.26     633.01
    2009     161.43     442.39     603.82
    2010     175.39       0.00     175.39
- Celestri achieves lower CPFs and smaller variations than either Cyberstar or Spaceway. This double benefit comes as a result of a late deployment, allowing the market to develop before expending costly assets, and an immediately massive market capture to quickly offset the fixed costs. This is the ideal strategy, provided the system is able to capture users from the other systems that are already in place.
Table 6.7: Lifetime costs C_L for the modeled systems (net present value in FY96$)

    Cyberstar    $0.667 Billion
    Spaceway     $1.48 Billion
    Celestri     $2.33 Billion

An interesting conclusion drawn from these trends is that architectural differences are not as significant as either the deployment strategy or the overall market capture. Cyberstar,
(Figure 6-27 chart data, cost per billable T1-minute:

                  Exponential   Last-mile
    Cyberstar        $0.24        $0.20
    Spaceway         $0.24        $0.15
    Celestri         $0.16        $0.15)
with the same GEO architecture as Spaceway but with a delayed deployment, is less sensitive to market variations. The high throughput of Spaceway and Celestri results in lower average CPF values. Based on the cost per billable T1-minute metric, all of the systems studied have the potential to be competitive, with Celestri having a slight advantage. Although the results are not presented here, the simulation tool is capable of modeling a competitive environment in which systems compete for the same market. The earlier study [5] showed that at least two of these systems could co-exist and still obtain a 30% internal rate of return under these situations.

Figure 6-27: The Cost per billable T1-minute metric for Cyberstar, Spaceway and Celestri

6.4 Type 1 Adaptability Metrics

Type 1 adaptabilities represent the elasticity of the CPF metric with respect to changes in the requirements or the component technologies.

6.4.1 The Requirement Elasticities

For the broadband communication systems, changes in the system requirements correspond to different service options that can be provided to the users. The impact of these changes on the likely cost per billable minute can be measured with the requirement elasticities. As defined in Chapter 4, the requirement elasticities of the CPF at a given design point are,
    Isolation Elasticity,     E_Is = (ΔCPF/CPF) / (ΔIs/Is)    (6.1)
    Rate Elasticity,          E_R  = (ΔCPF/CPF) / (ΔR/R)      (6.2)
    Integrity Elasticity,     E_I  = (ΔCPF/CPF) / (ΔI/I)      (6.3)
    Availability Elasticity,  E_Av = (ΔCPF/CPF) / (ΔAv/Av)    (6.4)
where Is, R, I, and Av are the set of system requirements on isolation, rate, integrity and availability. For communication systems, altering the isolation requirement simply changes the multiple access specifications. For systems limited by self-interference, such as CDMA systems, this may result in changes to the CPF, but it has no effect for the systems considered here. Changing the rate provided to users is obviously a design alternative that affects the CPF. Conversely, there is no real benefit in offering improved integrity, and the very nature of multimedia applications prohibits BERs higher than about 10^-7. Changing the integrity is therefore not an option. Finally, assessing the impact of changing the availability requirement is valuable, especially for Celestri, which suffers from availability problems in the event of satellite failures. The corresponding elasticities are discussed in the following sections.
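Equations 6.1 to 6.4 can be evaluated numerically as a ratio of fractional changes. The helper below is a hypothetical illustration, not the thesis simulation code; `cpf_model` stands in for the full re-costing analysis that maps a requirement value to a CPF figure.

```python
def elasticity(cpf_model, x0, dx_frac=0.20):
    """Requirement elasticity E = (dCPF/CPF) / (dX/X), by finite difference.

    cpf_model: callable mapping a requirement value to a CPF figure
               (a stand-in for the full re-costing analysis).
    dx_frac:   fractional perturbation applied to the requirement.
    """
    cpf0 = cpf_model(x0)
    cpf1 = cpf_model(x0 * (1 + dx_frac))
    return ((cpf1 - cpf0) / cpf0) / dx_frac

# Toy check: if CPF scales linearly with the requirement, E -> 1;
# if CPF is independent of the requirement, E -> 0.
E_linear = elasticity(lambda x: 0.24 * x, x0=1.0)
E_flat = elasticity(lambda x: 0.24, x0=1.0)
print(round(E_linear, 3), round(E_flat, 3))
```

An elasticity near zero thus flags a requirement that can be relaxed or tightened with little effect on the cost per user.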
Rate Elasticity of the CPF

One option open to the designers of the broadband systems is to lower the standard data rate provided to the users. This could be expected to improve the supportable integrity and, more importantly, allow more users to be served. Since broadband users essentially just require connection services at "broadband" rates, it can be assumed that they will still purchase services at rates marginally lower than T1 if the price is right. Provided that the increase in the number of users served is more than enough to compensate for the reduction in the price charged per user, the net revenue of the system will be increased. It is valuable therefore to consider the impact on the cost per user (CPF) of lowering the rates to 1/4-T1, as measured by E_R, the Rate Elasticity of the CPF. Calculating E_R involves repeating all the analysis that led to the cost per billable T1-minute metric, but with the rate changed to 386 Kbit/s. The cost per billable 1/4-T1-minute can then be compared directly with the cost per T1-minute to calculate E_R. The difference in value (ΔCPF) represents the difference in cost that must be charged to each broadband user if the data rate provided to them is changed. The largest change in the calculation of the cost per billable 1/4-T1-minute is in the
estimation of the market capture. Strictly, to estimate the number of 1/4-T1-minutes captured by a system, a market model for the number and distribution of 1/4-T1 users is required. However, the market for 1/4-T1 users can be assumed to be the same as the market for T1 users, since the notion of a user is in this case a human consumer. Ignoring the effects of elasticity of demand (lower rates may deter consumers from purchasing service), the total number of consumers in the marketplace should not change significantly by changing the rate offered to them. The same simulations can therefore be used to evaluate the market capture of 1/4-T1 users. The most important features of these simulations compared to those for T1 users are:

- Early in the lifetime, there can be no increase in the number of 1/4-T1 users served compared to T1 users, since the systems are market limited.

- The saturation point occurs later, since more users can be addressed at lower rates.

- In general, the total market capture of 1/4-T1 users over the lifetime is greater than, but not 4 times greater than, the market capture of T1 users.

- The baseline system costs are the same, but the failure compensation costs must reflect the fact that the requirements for quality of service have changed, resulting in a lower probability of failing system requirements due to degraded operations.

Having accounted for all these issues to calculate the new CPF, E_R can be formulated. The resulting E_R for Spaceway, Cyberstar and Celestri are shown in Figure 6-28.
(Figure 6-28 chart data, rate elasticity of the CPF:

                  Spaceway   Cyberstar   Celestri
    Exponential     0.89        0.99       0.88
    Last-mile       0.87        1.00       0.72)
Figure 6-28: The rate elasticity of the CPF for Cyberstar, Spaceway and Celestri

The results shown in this chart must be interpreted carefully. An E_R = 1 indicates that the reduction in the cost per user exactly compensates for the increased number of users served. Referring to the chart, this means that Cyberstar can charge 1/4-T1 users a quarter of the price charged to T1 users, and obtain the same revenue. Equivalently, if they charged more than this, perhaps only half the price, reducing the rate of service results in a doubling of their revenue stream. This works for Cyberstar because the transponders are almost always saturated, even for low rate users. This is not the case for Celestri and Spaceway. These systems have E_R < 1, and so must charge each of the 1/4-T1 users more than a quarter of the price of a T1 user if the revenue stream is to be preserved (note that if E_R ≈ 0, the cost per user is not affected by rate, and lower-rate service cannot be offered at a discount price). Therefore, the smaller size of the Cyberstar system presents a greater opportunity for marketing the system at lower rates. Celestri (and to a lesser extent Spaceway) must hope that the market demand for broadband services is rate elastic, meaning that they will be able to attract more high-price users at the higher rates to increase their revenue stream.
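This pricing argument can be made concrete with a small calculation. Dropping from T1 to 1/4-T1 is ΔR/R = -0.75, so a linear extrapolation of the elasticity gives a revenue-neutral price multiplier of 1 - 0.75·E_R. Extrapolating the elasticity over such a large rate change is only a rough approximation, and the E_R values below are those read from Figure 6-28.

```python
def quarter_t1_price_multiplier(e_r):
    """Fraction of the T1 price a 1/4-T1 user must be charged so the
    per-user cost is still recovered, using the linear approximation
    dCPF/CPF ~= E_R * dR/R with dR/R = -0.75."""
    return 1.0 + e_r * (-0.75)

# Exponential-market E_R values from Figure 6-28
for system, e_r in [("Cyberstar", 0.99), ("Spaceway", 0.89), ("Celestri", 0.88)]:
    m = quarter_t1_price_multiplier(e_r)
    print(f"{system}: charge about {m:.0%} of the T1 price per 1/4-T1 user")
```

For Cyberstar (E_R ≈ 1) the multiplier is essentially one quarter, while the sub-unity elasticities of Spaceway and Celestri push the break-even price above a quarter, matching the discussion above.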
Availability Elasticity of the CPF

Consider the impact of lowering the availability requirement to 92%. This may be an option if the systems are sold as bulk-data transfer systems that don't need to provide continuously available real-time access. The GEO systems are unaffected by this reduction in the availability requirement, since the dominant failure mode is a total loss of the spacecraft. Celestri, however, could operate with fewer satellites if the availability requirement were lowered. This should improve the CPF. In fact, the availability elasticity of CPF for Celestri has been calculated to be 0.174. This is a surprisingly low value, much lower than the rate elasticities calculated in the previous section. Taken in context with the GEO systems, which have CPFs largely independent of availability, this result implies that broadband communication satellite systems are reasonably insensitive to the availability requirement.
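The quoted elasticity of 0.174 translates into a very small CPF change. Taking 97% as the baseline availability (an assumption for this arithmetic, based on the availability levels quoted elsewhere in the chapter) and 92% as the relaxed requirement:

```python
e_av = 0.174            # availability elasticity of the CPF for Celestri (from the text)
av0, av1 = 0.97, 0.92   # assumed baseline availability and the relaxed requirement
dcpf_frac = e_av * (av1 - av0) / av0
print(f"fractional CPF change: {dcpf_frac:+.1%}")  # about -0.9%
```

A 5-point relaxation of availability buys back less than one percent of the cost per user, which is why the text calls the systems insensitive to this requirement.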
6.4.2 The Technology Elasticities

The technology elasticities can be defined for any particular component of the system that may have an impact on the overall performance or cost. This allows a quantifiable assessment of design decisions and can identify the most important technology drivers. For communication systems, the program components that seem to dominate cost are launch, manufacture and reliability. The corresponding elasticities are,
    Manufacture Cost Elasticity,  E_Cmfr    = (ΔCPF/CPF) / (ΔCmfr/Cmfr)        (6.5)
    Launch Cost Elasticity,       E_Claunch = (ΔCPF/CPF) / (ΔClaunch/Claunch)  (6.6)
    Failure Rate Elasticity,      E_λs      = (ΔCPF/CPF) / (Δλs/λs)            (6.7)

where Claunch is the budgeted launch cost for the system, Cmfr is the manufacturing cost, and λs is the (average) satellite failure rate. These elasticities are calculated simply by re-costing the system, including the expected failure compensation costs, after changing (reducing) the relevant variable by a small amount, say 20%. The resulting elasticities for each of these technologies have been calculated for Spaceway, Cyberstar and Celestri and are shown in Figures 6-29 to 6-31.
Figure 6-29: The manufacture cost elasticity of the CPF for Cyberstar, Spaceway and Celestri. (Chart data:

                  Spaceway   Cyberstar   Celestri
    Exponential     0.60        0.71       0.62
    Last-mile       0.60        0.71       0.74)
The most important features of these charts are summarized:
- For all systems, manufacturing cost savings are the most important to the CPF, with E_Cmfr ≈ 0.6 to 0.75. This is almost twice as large as the sensitivity to savings in the launch cost, for which E_Claunch ≈ 0.2 to 0.45. The effect of improving the failure rate is relatively insignificant, having an elasticity less than 0.1 for all systems. The reason for the relative importance of manufacture costs is that they are incurred at the very start of the program. The time value of money biases these up-front costs as the most significant components of the system lifetime cost.

- Of the three systems, Spaceway is the least sensitive to manufacturing costs. This is because Spaceway, with eight satellites, realizes a larger benefit from production learning than Cyberstar, without having to build 70 satellites like Celestri.
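The time-value argument can be demonstrated with a toy present-value calculation: the same $100M saving is worth far more if it occurs at program start (manufacture) than four years into deployment (a late launch). The 30% discount rate is the one used throughout the chapter; the cost timings are illustrative.

```python
def present_value(cost, years_from_start, rate=0.30):
    """Discount a cost incurred `years_from_start` years after inception."""
    return cost / (1 + rate) ** years_from_start

pv_saving_year0 = present_value(100.0, 0)  # $100M manufacture saving, up front
pv_saving_year4 = present_value(100.0, 4)  # $100M launch saving, 4 years in

print(f"PV of saving in year 0: ${pv_saving_year0:.1f}M")
print(f"PV of saving in year 4: ${pv_saving_year4:.1f}M")  # roughly $35M
```

At a 30% discount rate, a dollar saved in year 4 is worth only about 35 cents in year-0 terms, which is why the up-front manufacture costs dominate the elasticities.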
(Figure 6-30 chart data, launch cost elasticity of the CPF:

                  Spaceway   Cyberstar   Celestri
    Exponential     0.46        0.21       0.37
    Last-mile       0.45        0.22       0.37)
Figure 6-30: The launch cost elasticity of the CPF for Cyberstar, Spaceway and Celestri

- The launch cost elasticity for Spaceway (E_Claunch = 0.46) is much higher than the value for Cyberstar (0.21), even though both systems involve launching large satellites to GEO. The reason is the difference in deployment schedule. Cyberstar has a delayed launch manifest, and as soon as the satellites are launched, they become almost fully utilized. Spaceway, on the other hand, spends a great deal of money launching satellites early in the project life, when money has an increased value, even though these satellites generate little revenue for 4 or 5 years. Celestri is moderately sensitive to launch costs simply because it has to loft at least 70 satellites plus replacements to orbit.

- The failure sensitivity is almost insignificant compared to the other two technology elasticities, but there are some important trends. Spaceway is unexpectedly the most sensitive to failure rates. The reason is again the deployment schedule: since the Spaceway satellites are on orbit earlier, they are likely to fail earlier, giving higher expected replacement costs. Any improvements to the failure rate can reduce the likelihood of these expenditures. Celestri has a higher failure rate elasticity than Cyberstar because the sheer number of satellites means that failures are very likely.

Figure 6-31: The failure rate elasticity of the CPF for Cyberstar, Spaceway and Celestri. (Chart data:

                  Spaceway   Cyberstar   Celestri
    Exponential     0.077       0.030      0.045
    Last-mile       0.083       0.033      0.045)

6.5 Summary

This chapter has described a detailed comparative analysis of three proposed broadband communication satellite systems using the GINA methodology. Models were constructed for Cyberstar, Spaceway and Celestri based on the designs listed in their FCC filings. Using these models, the Capability characteristics for each system were calculated. The results suggest that Cyberstar, as it appears in the filing, is unsuited for providing broadband communications at rates higher than 386 Kbit/s. Both Spaceway and Celestri are able to support high rate (T1) services with high levels of integrity (BER ≤ 10^-10) and availabilities exceeding 97%.

The cost per billable T1-minute metric is used to compare the potential for commercial success of each system. It is the cost per billable T1-minute that the company must recover from customers through fees in order to achieve a 30% internal rate of return. It was assumed that improvements are made to Cyberstar in order for it to be able to compete in this market. The calculations of the cost per billable T1-minute involved the development of several market models based on current internet and computer sales growth trends. Simulations of the operations of each of the systems within the realistic market scenarios, accounting for the effects of market penetration, access and exhaustion, were carried out to evaluate the market capture profiles. The system lifetime costs were estimated including the contributions from satellite construction and development, launch, insurance, gateways, internet connection hardware, gateway and control center operations, and expected failure compensation costs.

The resulting cost per billable T1-minute metrics showed that all three systems will be able to offer competitively priced services to users. Celestri achieved the lowest cost per billable T1-minute, and had the smallest variation across market models. An important conclusion to come from these results is that deployment is more important than architecture for this market. The differences between architectures (GEO versus LEO) do not impact the cost per billable T1-minute as much as the effective deployment strategies or the overall market capture. A well designed deployment strategy, tailored to match the predicted market growth, is less sensitive to variations in that market, while a large overall throughput allows the high fixed costs to be amortized over more users.

The Type 1 adaptability metrics were calculated, and basically emphasized the importance of a sensible deployment strategy. Contrary to popular belief, achieving lower launch costs is not as effective for commercial benefit as lowering the cost of the manufacture process. The driving requirement for the broadband systems is data rate, and smaller systems offer the potential for discounted service at lower rates that may realize higher revenues through increased yield.

These results are clearly significant, and can be readily applied to an economic analysis of the systems. This is a very nice feature of the objective, quantitative nature of the analysis methodology. By judging the systems only on how well they address a defined market, and by scaling their cost accordingly, the GINA methodology has enormous utility for comparative analysis. It now remains to demonstrate the usefulness of GINA for the design process.
Chapter 7
Techsat21: A Distributed Space Based Radar for Ground Moving Target Indication

The Air Force has traditionally built and operated very large and complex satellites such as MilStar, the Defense Support Program, and the Defense Meteorological Satellite Program. Recently, though, the Air Force has recognized the potential for low cost and improved capabilities that may be possible with distributed systems of small satellites. In the New World Vistas report [56], published in 1996, the Air Force Scientific Advisory Board Space Technology Panel identified the development and implementation of systems featuring formation-flying satellites that create sparse apertures for remote sensing and communications as an important goal for the Air Force in the next century. To this end, the Air Force Research Laboratory (AFRL) has initiated the Techsat21 program, an innovative concept for performing the Ground Moving Target Indication (GMTI) radar mission using a distributed satellite system. The key to the concept is a cluster of microsatellites (less than 100 kg) that orbit in close proximity, each with a receiver to detect coherently not only the return from its own transmitter, but also the bistatic responses received from the orthogonal transmit signals of the other satellites [48]. This provides the opportunity to collect many independently sampled radar signals, each with a different phase, that can be used to form a large, post-processed sparse coherent array with very high resolution and large main-lobe gain. This chapter describes some of the work that has been carried out at MIT in collaboration with the AFRL to assist in the conceptual design phase of this project. In particular, Sections 7.1 and 7.2 introduce the most important principles of space based radar, Section 7.3 describes the Techsat21 concept, and Sections 7.4 and 7.5 present a generalized analysis and some design trades for the system. This demonstrates the application
of the GINA methodology for a real design study.
7.1 Space Based Radar

There are clear advantages to placing radars in space for the purpose of surveillance, since the field of view is increased, allowing large areas to be searched. The major disadvantages are the increase in range, leading to large signal attenuation from free space loss, and, more importantly, the fact that the large area of the Earth illuminated produces severe clutter conditions. In the Space Based Radar Handbook [57], Andrews states that,

    . . . the fundamental design considerations for a space-based radar (SBR) designed for air or surface search are: (1) the radar must have enough power-aperture product to detect the radar cross section of the targets of interest at the search rate required for the application; (2) the radar must have enough angular and range resolution to locate the target with the required accuracy; and (3) the radar must be capable of rejecting clutter returns from the earth and interference from other electromagnetic transmissions to detect targets in the presence of these usually much larger unwanted signals.

Note that these three criteria describe precisely the same quality-of-service parameters defined by the GINA methodology. The probability of detecting targets is equivalent to the Integrity, since missed detections or false alarms constitute errors in the interpretation of signals; the search rate is obviously the generalized rate measure; and the ability to locate the target and reject unwanted clutter signals defines the Isolation characteristics of the radar. A space-based radar therefore benefits from a built-in capability for searching large areas very quickly (rate), but faces characteristic problems with detectability (Integrity) and clutter rejection (Isolation). These issues are discussed in the following section.
7.2 Detecting Moving Targets in Strong Clutter Backgrounds

A detailed treatment of radar detection in the presence of noise and clutter is not necessary for a basic appreciation of the issues and problems that face SBR systems. An engineering discussion of the essential concepts and mathematics of the detection process will suffice. Nevertheless, if readers wish to gain a further understanding, there are many excellent texts that describe radar detection in great detail; Skolnik [58] is considered the classic text, but some very readable alternatives are Blake [59], Barton [39] and Tzannes [60], which is interesting not only because of the author's relaxed writing style but also for its generalized approach to radar and communication systems (does this seem familiar?).

Proceeding then with the engineering discussion: all radar systems emit electromagnetic (EM) radiation and extract information from the reflected energy. Sometimes the absence of reflection is used to characterize the medium through which the electromagnetic wave propagated, but the vast majority of radars analyze the reflected signal. There are four aspects to the information in the reflected, received signal: the presence of a reflector, the EM wave's travel time to the reflector, the frequency content of the received signal, and the polarization of the reflection. Polarization is seldom used, since antennas are typically designed for only one polarization. The presence of a reflected signal is used to detect the presence of targets. The time delay between the transmitted and received signal gives the range to the target if the speed of the EM wave is known for the medium. Finally, the spectrum of the received signal can be used to indicate the velocity of the target relative to the radar by the phenomenon of Doppler shift. Every radar uses these aspects of information in the received signal in different ways, depending upon the function and role of the radar. For example, an air traffic control radar relies primarily upon the detection and ranging aspects to find and locate air traffic. A moving target indicator (MTI) radar relies heavily on the signal spectrum to separate moving targets from stationary clutter. MTI radar also uses range information to locate target position. A synthetic aperture radar (SAR) for imaging uses signal time delay to resolve objects in range and frequency information to resolve objects in cross-range. A range/cross-range image is constructed based on the presence and strength of reflection.
For the GMTI mission, the presence of a return and its frequency spectrum are used to detect the moving targets in a strong clutter background. The target velocity is a direct output of this process. For most MTI radars the target's position is estimated in terms of range from the antenna by time-gating the received signal to form range bins, and can be located in azimuth only to within the width of the radar beam.
7.2.1 Locating the Target

The acronym "radar" stands for RAdio Detection And Ranging, and of course the eventual goal of all surveillance radars is to locate targets in range and direction. In the simplest terrestrial or airborne radars, the radar scans through an arc in the horizontal plane, and the direction measurement comes from knowing where the radar is pointed when the echo is received. In this way, the confidence in locating the target in the azimuth direction is limited to the beamwidth of the antenna. The range to the target is linearly related to the time between transmitting a pulse and receiving the echo. The range resolution is related to the radar pulse duration τ. Consider a pair of targets with a separation along the line of sight of exactly half a pulse length. The echoes from these targets will overlap in time, and the return will be the superposition of the reflection from each of the targets. As a result, the targets cannot be unambiguously isolated, and so the range resolution of the radar is cτ/2, where c is the speed of light in the medium. For a radar that transmits a sequence of pulses, there also exist range ambiguities. An echo received at the radar could be the pulse that was just transmitted, reflected from a nearby target, or alternatively it could be a pulse transmitted earlier, reflected from a more distant target. In fact, the radar has no way of knowing which of its emitted pulses caused the echo, and therefore cannot know from what range it was reflected. These range ambiguities are related to the pulse repetition frequency (PRF), and are separated by a distance R_amb = c/(2 PRF).

The situation for a space-based radar is essentially the same, with a few complications, as shown in Figure 7-1. The location in azimuth AZ of a non-stationary target is still determined from the angle at which the beam is pointed, and for monostatic (single antenna) radars the resolution of this measurement is equal to the azimuthal beamwidth, θ_AZ. As a consequence of the increased range, this azimuthal uncertainty translates into a very large position uncertainty on the ground (or in the air for airborne targets). This is obviously a disadvantage of space-based radar systems. As in all radars, a target's range from the space-based radar is measured by a time of flight calculation, but now the line of sight is not horizontal. Referring to Figure 7-1, the radar beam illuminates an elliptical spot on the ground (distorted by the curvature of the Earth), and horizontal distances across this spot are related to the range along the line of sight by a sec(ψ) multiplier, where ψ is the grazing angle between the line of sight and the local surface. Note that in the communications community, this angle is called the elevation angle. However, in radar circles, the term "elevation angle" refers to the angle above the nadir direction at which the radar beam is pointed. Within this chapter, the conventions of radar will be used consistently. The range resolution on the ground is therefore cτ/(2 cos ψ), and for a flat Earth approximation this is cτ/(2 sin EL). The "range bins" on the ground are therefore of this width and aligned perpendicular to the line of sight. The same geometrical factor is applied to the range ambiguities, so that they are now separated by c/(2 PRF cos ψ). Usually, the space-based radar designer would like the radar to be range unambiguous, meaning that there are no range ambiguities within the projected radar footprint. The length of this footprint depends on the satellite altitude, the beamwidth
Figure 7-1: Space-based radar geometry. (Diagram: satellite at altitude h, slant range Rs, elevation angle EL, grazing angle ψ, azimuthal beamwidth θ_AZ, platform velocity v_p; range bins of width (cτ/2) sec ψ, shown greatly exaggerated.)
of the radar, and the elevation angle EL such that, using a flat Earth approximation,

$L_{foot} = \dfrac{h \sin(\theta_{EL})}{\cos^2(EL)}$  (7.1)

where θEL is the beamwidth in the elevation direction. This effectively places a constraint relationship on the maximum allowable PRF for a given beamwidth, altitude and elevation angle,

$PRF_{max} = \dfrac{c \cos(EL)}{2 h \sin(\theta_{EL}) \tan(EL)}$  (7.2)
This turns out to be a very crippling constraint for space-based radar for moving target indication, since it limits the PRF to reasonably low values (1000s of Hz) for any reasonably sized antenna at useful radar frequencies. A PRF higher than this results in range ambiguities across the footprint. The number of ambiguities is simply the ratio between the chosen PRF and PRFmax calculated above. Higher PRFs are desirable for detecting moving targets in strong clutter backgrounds, as shall be shown in later sections.
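The footprint and PRF constraint of Eqns. 7.1 and 7.2 are easy to evaluate numerically. The sketch below is only an illustration; the altitude, elevation beamwidth and elevation angle are assumed values chosen here, not figures from the text.

```python
import math

def footprint_length(h, theta_el, el):
    """Projected footprint length (Eq. 7.1, flat-Earth approximation)."""
    return h * math.sin(theta_el) / math.cos(el) ** 2

def prf_max(h, theta_el, el, c=3e8):
    """Maximum range-unambiguous PRF (Eq. 7.2)."""
    return c * math.cos(el) / (2.0 * h * math.sin(theta_el) * math.tan(el))

# Illustrative (assumed) values: 1000 km altitude, 1 degree elevation
# beamwidth, 45 degree elevation angle from nadir.
h = 1000e3
theta_el = math.radians(1.0)
el = math.radians(45.0)
print(f"Footprint length:     {footprint_length(h, theta_el, el) / 1e3:.1f} km")
print(f"Max unambiguous PRF:  {prf_max(h, theta_el, el):.0f} Hz")
```

For these numbers the maximum PRF indeed comes out in the low thousands of hertz, consistent with the "1000s of Hz" limit quoted above; at PRFmax, the ground-projected ambiguity spacing c/(2 PRF sin EL) exactly equals the footprint length.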
7.2.2 The Radar Range Equation

The radar range equation relates the maximum range at which targets can be detected to the transmitter power, antenna gain and area, signal to noise ratio, signal integration, system losses, thermal noise, and target radar cross section. The signal power to noise power ratio (SNR) is an important factor in the detection of valid radar returns in the presence of noise. The most general form of the radar range equation is [58],

$SNR = \dfrac{P_t G A_e \sigma_T}{(4\pi)^2 R_s^4 L_s}$  (7.3)

where Pt is the transmitter power, G is the antenna gain, Ae is the effective aperture area of the antenna, and Ls are system losses. The radar cross-section (RCS or σT) of a target is the effective area that reflects power
back to the radar receiver. Typical values for the average RCS of common targets range from 10 m² for a small boat, to 200 m² for a pickup truck [57]. In actuality, the RCS of a target depends strongly on the wavelength and on the azimuth and grazing angle perspective.
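As a rough numerical sketch of the range equation, the snippet below evaluates the received echo power and a single-pulse SNR. Note one assumption: dividing by a thermal noise power k·Tsys·Bn is the standard textbook step (thermal noise is listed as a factor above but does not appear explicitly in the extracted form of Eq. 7.3), and every parameter value here is an illustrative guess, not a figure from the text.

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def received_power(pt, g, ae, sigma, rs, ls):
    """Received echo power in the form of Eq. 7.3 (watts)."""
    return pt * g * ae * sigma / ((4.0 * math.pi) ** 2 * rs ** 4 * ls)

def snr(pt, g, ae, sigma, rs, ls, t_sys, bn):
    """Single-pulse SNR, dividing by an assumed thermal noise power k*T*Bn."""
    return received_power(pt, g, ae, sigma, rs, ls) / (K_BOLTZMANN * t_sys * bn)

# Illustrative (assumed) values: 1 kW transmitter, 40 dB gain, 10 m^2
# aperture, 10 m^2 target RCS, 1200 km slant range, 3 dB (factor 2) losses,
# 290 K system temperature, 1 MHz noise bandwidth.
s = snr(pt=1e3, g=1e4, ae=10.0, sigma=10.0, rs=1200e3, ls=2.0,
        t_sys=290.0, bn=1e6)
print(f"Single-pulse SNR: {10.0 * math.log10(s):.1f} dB")
```

With these numbers the single-pulse SNR is deeply negative, which illustrates why the pulse integration techniques of the following sections are essential for space-based radar.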
7.2.3 Detecting the Target

To best understand the radar detection problem, it will be posed in a very simple mathematical context. The radar receives a return signal r(t) and a decision must be made as to whether

$r(t) = N(t)$  (no echo present)  (7.4)

or

$r(t) = N(t) + s(t)$  (echo present)  (7.5)
where N(t) is noise and interference, and s(t) is the target return. This is of course equivalent to the general detection problem discussed in Chapter 4. In that chapter, it was also stated that the best way to process (prior to detection) a signal corrupted by noise and interference was using a matched filter, since it rejects everything except the signal-plus-noise components that exist in the same signal subspace as s(t). In less mathematical terms, this just means that the filter only lets through the signal-plus-noise components that "look like" the expected target return. For radar this involves correlating the return with replicas of the transmitted signal, since the return should look like what was transmitted, only delayed, attenuated, frequency-shifted and phase-distorted. Since the noise and interference N(t) is in general a random variable, the output of the matched filter will also be a random variable. The detection process then involves making a decision based on samples of a random variable measured from the output of the matched filter.
7.2.4 Noise-Limited Detection

If the system can be assumed to be noise-limited (with little or no clutter or interference) N(t) will be zero-mean Gaussian white noise. The decision process then simply reduces to determining whether the measured samples came from a zero-mean Gaussian pdf, or from a Gaussian pdf with a non-zero mean equal to the energy S of the desired signal s(t). If we deduce that the mean is zero, then we have decided there is no target echo; otherwise we have decided that there is a target. Errors can occur by declaring a false alarm, or worse, by missing a detection. The probabilities of these errors are a function of the decision rule, which in turn depends on the type of detector. A basic incoherent detector (peak detector) measures the complex amplitude of the received signal to determine if it exceeds a predetermined threshold. As discussed in Chapter 4, a positive radar detection is declared if the envelope of the received signal exceeds the threshold voltage vT. The peak detector is therefore actually two detectors: an envelope detector to measure the envelope of the signal, and a threshold detector to make the decision. The probability of a false alarm for each decision made by the threshold detector is given by,

$\Pr(\text{false alarm}) = P_{fa} = \int_{v_T}^{\infty} g_0(x)\, dx$  (7.6)
where g0 is the probability density function of the noise entering the threshold detector. Similarly, the probability of detection for the incoherent peak detector is,

$P_D = \int_{v_T}^{\infty} g_1(x)\, dx$  (7.7)

where g1(x) is the probability density function of the signal-plus-noise envelope at the input to the threshold detector in the case when an echo is present. The form of g1(x) depends on the nature of the transmitted radar signal and on the signal processing performed ahead of the detector.
Single pulse detection

For a decision based on a single measurement, no further processing is done between the envelope detector and the threshold detector. In this case, g0(x) and g1(x) are the same as the pdf's at the output of the envelope detector. If the Gaussian noise N(t) entering the envelope detector has variance σ², the probability density function of the noise at the output of the envelope detector g0(x) has a Rayleigh distribution [39],

$g_0(x) = \dfrac{x}{\sigma^2} \exp\left(-\dfrac{x^2}{2\sigma^2}\right)$  (7.8)
and so the probability of false alarm for each decision process is

$\Pr(\text{false alarm}) = P_{fa} = \int_{v_T}^{\infty} g_0(x)\, dx = \exp\left(-\dfrac{v_T^2}{2\sigma^2}\right)$  (7.9)
If the matched filter output is compared continuously to the threshold, independent samples of noise at a rate of Bn will give an average false-alarm rate FAR = Bn Pfa, where Bn is the noise bandwidth. This false alarm rate is an important measure of the integrity of radar systems, and is often stated as its inverse, the false alarm time. Considering now the signal component, if the signal, after downconversion to an intermediate frequency (IF), is a sinusoid with a peak amplitude A, such that s(t) = A cos(2π fIF t), the output envelope of the signal-plus-noise will have a Rician distribution [39],

$g_1(x) = \dfrac{x}{\sigma^2} \exp\left(-\dfrac{x^2 + A^2}{2\sigma^2}\right) I_0\!\left(\dfrac{Ax}{\sigma^2}\right)$  (7.10)
where I0 is the Bessel function with imaginary argument. The probability of detection given by the integral of Eqn. 7.7 with the Rician pdf has no closed form solution. There are many approximate solutions in the literature [39], [59], of which the North approximation is perhaps the simplest,

$P_D = \dfrac{1}{2}\,\mathrm{erfc}\left(\sqrt{\ln(1/P_{fa})} - \sqrt{SNR + 1/2}\right)$  (7.11)
As expected from the understanding of the generalized Integrity, the probability of correctly detecting a target is a strong function of the SNR. In actuality, it is a function of the energy in the information symbol component compared to the energy in the noise, for each decision process. For the case when only one signal measurement is used to make the decision, this is equivalent to the average SNR, and the equation above can be used to calculate the Integrity (PD). However, detection based on a single measurement is not often used in modern radar, and by definition not for MTI. The reason for this is that decisions made using only an instantaneous sample provide no information about the time-varying (motion) properties of the target and do not take advantage of the fact that the transmitted signal may have reflected from the target for an extended period. This, of course, increases the total energy from the target that is available for making a decision. Large improvements can be obtained by processing more samples of the reflected signal.
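The single-pulse relations above (Eqns. 7.9 and 7.11) are straightforward to evaluate directly. The following is a minimal sketch; the false-alarm probability of 10⁻⁶ is an arbitrary illustrative choice.

```python
import math

def pfa_rayleigh(vt, sigma):
    """False-alarm probability for Rayleigh noise (Eq. 7.9)."""
    return math.exp(-vt ** 2 / (2.0 * sigma ** 2))

def pd_north(pfa, snr):
    """North approximation to single-pulse detection probability (Eq. 7.11).

    snr is a linear (not dB) power ratio."""
    return 0.5 * math.erfc(math.sqrt(math.log(1.0 / pfa))
                           - math.sqrt(snr + 0.5))

for snr_db in (6, 10, 13, 16):
    snr = 10.0 ** (snr_db / 10.0)
    print(f"SNR = {snr_db:2d} dB -> PD ~ {pd_north(1e-6, snr):.3f}")
```

The strong dependence of PD on SNR is apparent: at this Pfa the detection probability climbs from nearly useless at 6 dB to close to unity by the mid-teens of dB.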
Noncoherent integration

For pulsed radar, as is our interest here, the transmitted signal is a sequence of pulses (modulated on a carrier frequency) and the number of pulses that reflect from the target is the product of the pulse repetition rate (PRF) and the dwell time, Td,

$n_p = T_d \cdot PRF$  (7.12)
There are several ways to use these np pulses to improve the detection. The simplest is to use some kind of linear-weighted integrator after the envelope detector to smooth the variations in the noise. This is noncoherent integration since all phase information has been removed by the envelope detector. The most common type of noncoherent integration is the uniform-weight integrator, discussed by Marcum [61] and Swerling [62]. If si is the voltage at the output of the envelope detector after receiving the ith radar pulse, then the uniform-weight integrator computes the sum,

$s = \sum_{i=1}^{n_p} s_i$  (7.13)
The effect of this operation is to lower the SNR (of each sample) that is required for detection. This is understood by noting that noncoherent integration is a smoothing process [59]. When np independent signal-plus-noise samples are summed, the standard-deviation-to-mean ratio is reduced by √np relative to the variation of the individual samples. It is this smoothing of the noise that improves the detection performance of noncoherent integrators. Specifically, since the hard-decision rule is based on a threshold value, smaller variations allow the threshold to be placed closer to the mean value of the signal-plus-noise while maintaining the same false-alarm probability. With this smaller threshold-to-mean ratio, a smaller signal can produce a threshold crossing and the sensitivity of the system is improved. The improvement of noncoherent integration for detection is therefore primarily determined by how well the noise variation can be reduced, and to some extent is independent of the actual signal characteristics. This means that noncoherent integration provides a reasonable processing gain even when the signal has a random phase and is rapidly fluctuating, as is the case with dynamic targets [59]. This distinguishes it from coherent integration (discussed in the next section) that requires the signal to have predictable or measurable phase characteristics. The probability of detection for a noncoherent integrator followed by a threshold detector is still given by Eqn. 7.7, but now the probability density function g1(x) is more complicated and the actual form of g1(x) depends on the statistics of both the signal and
the noise. A great deal of literature exists to estimate the effects of target fluctuations, and the classifications of the severity of the fluctuations as defined by Swerling [62] have become the accepted standard. Within this classification system, the Case 2 Swerling model, corresponding to a rapidly fluctuating target that gives signal fluctuations from pulse to pulse, is considered the most likely (worst case) scenario for the detection of moving targets. There are a confusing number of published approximations to the integral of Eqn. 7.7 under these conditions. A well accepted approximation that appears to match observations and exact (numerical) solutions very closely is given by Neuvy [63]. For noncoherent integration of np pulses the detectability of a Swerling 2 target can be approximated by,

$\log_{10}\left(\dfrac{1}{P_D}\right) = \left(\dfrac{2\log_{10}(n_{fa})}{3\, n_p\, SNR}\right)^{1/\alpha}$  (7.14)

where α = (1/6) + exp(−np/3) and nfa is the false alarm number,

$n_{fa} = \dfrac{\ln(0.5)}{\ln(1 - P_{fa})}$  (7.15)
The probability of detection calculated from this relationship is plotted versus the SNR (per pulse) in Figure 7-2 for Pfa = 10⁻⁶ with various values of np.

[Figure 7-2: The Neuvy approximation [63] for the probability of detection of a Swerling 2 target using noncoherent integration of pulses and a simple peak detector; Pfa = 10⁻⁶. Curves are shown for np = 2, 4, 6, 8 and 10 over an SNR range of 0–18 dB.]
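The curves of Figure 7-2 can be reproduced directly from Eqns. 7.14 and 7.15. The sketch below evaluates the Neuvy approximation; the 10 dB per-pulse SNR used in the printout is an arbitrary illustrative value.

```python
import math

def false_alarm_number(pfa):
    """Marcum's false-alarm number (Eq. 7.15)."""
    return math.log(0.5) / math.log(1.0 - pfa)

def pd_neuvy(pfa, snr, n_p):
    """Neuvy approximation for a Swerling 2 target with noncoherent
    integration of n_p pulses (Eq. 7.14). snr is linear, per pulse."""
    alpha = 1.0 / 6.0 + math.exp(-n_p / 3.0)
    x = 2.0 * math.log10(false_alarm_number(pfa)) / (3.0 * n_p * snr)
    return 10.0 ** (-(x ** (1.0 / alpha)))

snr = 10.0 ** (10.0 / 10.0)  # 10 dB per pulse
for n_p in (2, 4, 6, 8, 10):
    print(f"n_p = {n_p:2d}: PD ~ {pd_neuvy(1e-6, snr, n_p):.4f}")
```

As in Figure 7-2, the detection probability at a fixed per-pulse SNR improves steadily as more pulses are integrated.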
Coherent integration

In many modern radar systems it is possible to control, or at least compensate for, the phase characteristics of the signal. Coherent integration of the pulses is obtained by matching the IF filter to the entire pulse sequence from the target, requiring that the signal has a predictable phase relationship (coherence) over this period. A distinction is made between coherent integration and coherent processing; the former is a special case of the latter that involves coherently summing sequential samples of the signal before the envelope detector. Coherent processing simply specifies that multiple samples are processed ahead of the envelope detector, utilizing the phase information in the signal to improve detection. Other types of coherent processing are pulse compression, in which the bandwidth of the pulses is spread to improve range resolution, and synchronous detection, in which the actual decision making is performed coherently, with no envelope detection whatsoever. Coherent integration requires that the phase response of the filter brings all signal components into the same phase when they are added [39]. For a pulse sequence, the filter must be matched to the pulse-to-pulse phase variation of the target. The random fluctuations in the starting phase between pulses of the transmission can be compensated in the receiver using a reference signal from a stable oscillator locked to each transmit pulse. However, phase variations also occur due to the target's motion (through the Doppler shift), and this is not known a priori. Coherent integration must therefore involve several parallel receive channels tuned to slightly different frequencies to account for all possible Doppler shifts. This is the basic principle behind pulse-Doppler radar, to be discussed in more detail later. For now, it suffices to appreciate that only radar systems that account for target motion effects can practically achieve coherent integration.
The consequence of this is that there is an effective limit on the duration over which the integration can be performed for high-dynamic targets. It is a common rule of thumb that a receiver can maintain coherence over time scales of around 50 ms for most conventional targets. The benefit of coherent integration is an np-fold increase in the SNR compared to single pulse incoherent detection. Essentially, the signal amplitudes add coherently so that the amplitude of the resulting signal is np times the amplitude of each signal pulse, and the signal power is increased by a factor of np². Receiver noise has a random phase and amplitude from pulse to pulse. As a result, the summed noise amplitude may or may not exceed the amplitude of individual pulses. On average, the power level of the noise is increased by a factor of np as a result of the coherent summing, hence the np-fold improvement in terms of SNR. As noted earlier, noncoherent integration benefits from reducing the variation of the noise fluctuations. Coherent integration, however, is more simply an improvement in the
SNR. Coherent integration does not in fact reduce the noise variation, since both the mean noise power and the standard deviation increase by the same factor np. Thus, to achieve the same Pfa, the threshold setting must be the same as for the single pulse detection. However, the signal power has increased, and the detection capability is the same as that which would be achieved with a single pulse that was np times longer. The probability of detection can then be calculated using Eqn. 7.11, but with a SNR that is a factor np higher than for a single pulse. This is not, however, the only option. Although the coherent dwell time is restricted to be within the coherence time of the target, there is no such restriction placed on the noncoherent integration dwell time. The two techniques can thus be combined; after performing some level of coherent integration, limited by the requirement for coherence, several integrated pulses can be added noncoherently to further improve the detection characteristics. The resulting probability of detection for Swerling 2 targets can be approximated by substituting the coherently integrated SNR into Eqn. 7.14,
$\log_{10}\left(\dfrac{1}{P_D}\right) = \left(\dfrac{2\log_{10}(n_{fa})}{3\, n_i\, n_c\, SNR}\right)^{1/\alpha}$  (7.16)

where ni is the number of "pulses" that are summed noncoherently, each having a signal to noise ratio of nc·SNR from the coherent integration of nc separate radar pulses with signal to noise ratio SNR. This is the method most often used in military surveillance radar since it achieves many of the benefits of each type of integration. A final alternative for the detection is to not use an envelope detector at all. The detection process can take place completely coherently. By demodulating the filtered received signal with a synchronous replica of the transmitted carrier, the output is the baseband pulse modulation multiplied by a sinusoid at the beat frequency between the reference and the phase-distorted carrier. The presence of this beat frequency is an immediate indication that the target is moving, since a stationary target will not distort the phase of the carrier wave. This is the basic principle behind MTI radar, which will be discussed later. Note that there is a difference between the MTI mission, which simply specifies that the radar identify moving targets, and the MTI radar, which represents one such solution to this mission need.
7.2.5 Clutter-Limited Detection

So far all the discussion has involved detection that is limited by the effects of thermal noise. Unfortunately, for radars designed to look toward the ground, as for the GMTI mission, the noise power is insignificant compared to the power of the clutter returns. Main-lobe clutter is the energy backscattered from the Earth's surface within the footprint of the main beam. Main-lobe clutter is particularly severe since it is amplified by
the same antenna gain as the signal itself. The received signal-to-clutter ratio (SCR) for main-lobe clutter is therefore,

$\dfrac{S}{C} = \dfrac{\sigma_T}{\sigma_c}$  (7.17)

where σT and σc are the effective radar cross sections (RCS) of the target and the clutter respectively [57]. The effective RCS of the main-lobe clutter is given by,

$\sigma_c = A_c\, \sigma^0$  (7.18)

where Ac is the area of the Earth's surface illuminated by the radar, and σ⁰ is the average clutter cross section per unit area. From Figure 7-1, the illuminated area for each range bin is,

$A_c = R_s\, \theta_{AZ}\, N_a\, \dfrac{c\tau}{2} \sec\psi$  (7.19)
where Rs is the radar range to the surface along the line of sight, and Na is the number of range ambiguities in the antenna footprint. The main-lobe clutter is therefore directly proportional to the number of range ambiguities in the footprint. The clutter power C can be computed with a variation of the radar range equation [58],

$C = \dfrac{P_t G A_e \sigma_c}{(4\pi)^2 R_s^4 L_s}$  (7.20)

where Pt is the transmitter power, G is the antenna gain, Ae is the effective aperture area of the antenna, and Ls are system losses.
Sidelobe clutter is the energy backscattered from the Earth's surface outside the main-beam footprint, and enters the antenna through the sidelobes. The sidelobe clutter level relative to the main-lobe clutter is determined by the main-lobe to sidelobe level. To minimize this component of clutter, antennas with very low sidelobe gain are therefore desirable. The surface clutter cross section per unit area, σ⁰, is a function of many parameters including the type of terrain or sea conditions, frequency, polarization, and grazing angle, as well as radar parameters such as angular resolution, bandwidth, pulse waveform and clutter processing techniques. The enormous number of variables involved in characterizing the clutter has meant that most of the research on this issue is empirically based. Extensive databases of different values of σ⁰ for many possible conditions exist in the literature. A great deal of this is reproduced in the Space Based Radar Handbook [57]. One important aspect of the variability of σ⁰ is that in general, it increases dramatically at large grazing angles. This means that space-based radars characteristically have a "nadir hole" extending 20°–30° from the nadir direction, in which the SCR is too low for reliable
detection [57].
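Eqns. 7.17–7.19 can be combined into a quick estimate of the main-lobe SCR per range bin. All parameter values below (slant range, beamwidth, number of ambiguities, pulse length, grazing angle, target RCS, and the σ⁰ terrain reflectivity) are illustrative assumptions, not figures from the text.

```python
import math

def clutter_area(rs, theta_az, n_amb, tau, psi, c=3e8):
    """Illuminated clutter area per range bin (Eq. 7.19), square meters."""
    return rs * theta_az * n_amb * (c * tau / 2.0) / math.cos(psi)

def scr(sigma_t, sigma0, ac):
    """Main-lobe signal-to-clutter ratio (Eqs. 7.17-7.18), linear."""
    return sigma_t / (sigma0 * ac)

# Illustrative (assumed) values: 1200 km slant range, 0.5 deg azimuth
# beamwidth, 4 range ambiguities in the footprint, 1 us pulse, 30 deg
# grazing angle, 10 m^2 target over sigma0 = -20 dB (0.01) terrain.
ac = clutter_area(1200e3, math.radians(0.5), 4, 1e-6, math.radians(30.0))
print(f"Clutter area per bin: {ac / 1e6:.2f} km^2")
print(f"Main-lobe SCR:        {10.0 * math.log10(scr(10.0, 0.01, ac)):.1f} dB")
```

Even with modest assumed numbers the SCR is tens of dB negative, which makes concrete why integration alone cannot rescue GMTI detection and why the clutter must be attacked in the Doppler domain instead. Note also the linear dependence on Na: halving the number of range ambiguities doubles the SCR.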
Clutter statistics

For analysis and prediction of the performance of radar in the presence of clutter, it is necessary to have models for the statistics of the clutter amplitude. The simplest and most analytically convenient model of the clutter amplitude is to assume it has Rayleigh statistics. This is basically an assumption that the clutter return is the superposition of the returns from a large number of independent, random scatterers. The in-phase and quadrature amplitude fluctuations are then described by a Gaussian pdf, and the envelope of the clutter has a Rayleigh distribution. The validity of this assumption is dependent on the terrain, and on the radar resolution; the radar resolution cell (range and azimuth) must be large compared to the characteristic size of the scatterer variations. This is also sensitive to frequency, since the dominant scatterers are frequency dependent. The convenience of assuming Rayleigh clutter statistics is that target detection in the presence of clutter can be treated in exactly the same way as for Gaussian noise. The equations of the previous sections can all be applied, with the only modification that the SNR is replaced by the total signal-to-interference ratio (SIR),

$SIR = \dfrac{S}{N + C}$  (7.21)

where S, N and C represent the signal, noise and clutter powers respectively. If Rayleigh conditions cannot be met, a log-normal distribution of clutter has been proposed [58]. This is not discussed further, since all the analysis performed in this chapter assumed Rayleigh clutter statistics.
Impact of clutter

The real problem with clutter is that it is usually coherent between pulses. This is particularly true for the returns from stationary clutter, which is more coherent pulse-to-pulse than the moving targets that the radar is trying to detect. As a result, the integration processes described earlier do not help at all in terms of improving the SCR. Clutter, covering the entire resolution cell, and not being designed to be stealthy, has returns that are characteristically much larger than the target, and can simply overpower the target signal. There is however a way to attack the clutter problem, and that is to use the fact that the targets move, but the clutter does not. Recall from Chapter 4 that effective Isolation requires signals that are separable in some domain; motion, leading to frequency separation, is the key to dealing with clutter, and it is applied using pulse-Doppler radar.
7.2.6 Pulse-Doppler Radar

Pulse-Doppler processing is an implementation of coherent integration and uses the fact that the targets are assumed (or known) to have a radial velocity vt relative to the radar antenna. This velocity gives rise to a Doppler frequency shift f = 2vt/λ in the reflected signal that can be used to assist the detection process and identify the target. Essentially, the radar "looks" for signal components that are frequency shifted, and performs the detection processing on each of these components separately. The process by which this occurs in a standard pulse-Doppler radar is simply explained. The key to understanding pulse-Doppler processing lies in understanding the form of the frequency spectrum of the received radar signal, an example of which is shown in Figure 7-3. This continuous spectrum is the Fourier transform of a single square pulse convolved with an infinite impulse train (representing the pulsed transmission) and multiplied by a square window corresponding to the length of time Td that the radar dwells upon the target. The peaks of this spectrum are spaced by the PRF, and are of width 1/Td.
[Figure 7-3: Frequency spectrum of a sequence of square radar pulses; PRF = 3000 Hz, pulse length τ = 1/12000 seconds, and dwell time Td = 1/300 seconds (10 pulses). Normalized amplitude is plotted against frequency over ±1.5×10⁴ Hz.]
After downconversion to IF, this received signal is split by a bank of nr range gates, as shown in Figure 7-4. These range gates are filters that open and close at intervals of time corresponding to different flight times, equivalent to target range. The output from each of these gates is fed into a system of nf narrow bandpass filters
[Figure 7-4: Simplified block diagram for pulse-Doppler radar processing: an IF amplifier feeds range gates 1 through nr, each followed by a bank of narrowband filters (NBF) and amplifier detectors.]

These narrowband filters (NBF) are centered around the middle of the spectrum of the signal (the carrier frequency shifted to IF). Each filter has a bandwidth ΔfD equal to the width of the peaks 1/Td in the signal spectrum, with a center frequency at some anticipated shift, left or right, away from the middle peak. The idea is that one of the filters will "capture" the Doppler-shifted central peak. The output of this bank of filters is essentially the nf-point discrete Fourier transform of the input signal, and is in fact often implemented digitally with an FFT. Detection processing is performed on the narrowband output from each of these filters, and the system measures the Doppler shift, and hence the target's radial velocity, by identifying the filter that declares a detection. Since each filter has a bandwidth ΔfD equal to the width of the peaks in the spectrum, the maximum number of filters that can be used to measure the shifted central peak is limited by the spacing of the peaks in the spectrum. As described earlier, this is equal to the PRF, and so the maximum number of filters is,
$n_f = \dfrac{PRF}{\Delta f_D} = PRF \cdot T_d = n_p$  (7.22)

Adding more filters than this simply places a filter directly over one of the other peaks
in the spectrum, and provides no additional information about the signal but represents an ambiguity about which peak is being measured. Even with the correct number of filters, these ambiguities can occur due to the Doppler shift of the target; it is assumed that a large output from one of the filters measures a Doppler-shifted central peak, but we cannot be sure that it is not due to a shifted version of one of the other peaks. For example, the return from a very fast moving target may have shifted the entire spectrum by an amount equal to the PRF. The large output from the filter centered at zero frequency (relative to the IF) would incorrectly indicate that the target has zero radial velocity. This Doppler ambiguity is unavoidable, but its impact can be reduced by increasing the PRF such that the peaks are so far apart that no reasonable target can have such a speed as to create ambiguities [60]. Unfortunately, this comes at a price, since increasing the PRF gives more range ambiguities. The radar engineer is stuck between a rock and a hard place and can only hope to obtain a compromise. Of course, all this trading of ambiguities (or accuracies) in range (time) or velocity (frequency) is actually a statement of the Uncertainty Principle; it is simply impossible to reduce the total amount of ambiguity in both domains. It is perhaps possible that higher PRFs can be used if, as part of a pulse compression scheme, each sequential pulse is spread in bandwidth and encoded with a different orthogonal PRN code. For instance, pulse 1 is encoded with code 1, pulse 2 with code 2, etc. The number of separate codes that are used determines how often they repeat, which sets the number of range ambiguities. More codes translate linearly into fewer ambiguities. After range gating the encoded pulses, the code-modulation can be removed with a set of synchronous reference codes, and sequential pulses can be integrated as usual in the pulse-Doppler processor.
The number of Doppler ambiguities is therefore unchanged. This would seem to achieve exactly what was just stated as being impossible; uncertainty has been removed in both domains. How can this be? Well, the answer is, as always, straightforward when one understands the problem correctly. The Uncertainty Principle has not been violated since the initial conditions have been changed. By modulating the pulses with a set of orthogonal codes, the total amount of information in the radar signal has been increased linearly. We could expect, therefore, a linear reduction in the ambiguities. Whether or not this innovative (crazy?) idea can be realized in practice is not known, but the potential benefits it offers are great, since the PRF can be given any value to optimize detection. If the radar platform is moving relative to the ground, as all non-geostationary satellites do, the clutter returns from the ground also have a Doppler shift. The absolute Doppler shift is related to the projection of the platform velocity vector into the line of sight. Using the coordinate system of Figure 7-1, the absolute Doppler shift for a point on the ground in the direction (AZ, EL) from the satellite is therefore given by (ignoring Earth rotation
and curvature),

$f_d = \dfrac{2 v_p}{\lambda} \cos(AZ) \sin(EL)$  (7.23)
This means that the velocity and hence Doppler shift varies across the beam footprint, since different parts of the ground have different radial velocities compared to the antenna. For example, with a side-looking radar, the ground at the leading edge of the beam is moving toward the radar, while the ground at the trailing edge is moving away. In this case, the Doppler spread Δfd across a beam of half-width θ can be determined by evaluating Eq. 7.23 at the leading and trailing edges of the beam, where AZ_leading = 90° − θ and AZ_trailing = 90° + θ, and differencing the result,

$\Delta f_d = \dfrac{4 v_p}{\lambda} \sin(EL) \sin(\theta)$  (7.24)

The same principle holds for forward-looking radars, where there is a velocity difference between the heel and toe of the beam due to a different elevation angle. In both cases, the spread in Doppler shifts across the beam is directly related to the beam width, either in azimuth (for side-looking) or in elevation (forward-looking). Clutter signals entering the receiver through sidelobes are shifted by an amount different to the main-lobe clutter due to the greater angle away from boresight. The net result is that the clutter signal is spread over a finite bandwidth, and this fact is important for MTI. Recall that pulse-Doppler radar splits the input signal into nf separate narrowband signals, and so a different part of the clutter spectrum is passed through each filter. This has several consequences:
It is usually assumed that the filters that capture the main-beam clutter (around zero frequency relative to IF, after compensating for platform motion) are swamped by this clutter, since it has been amplified by the high gain of the antenna. These filters would have been the ones to detect very slow moving targets, since a stationary target has the same absolute velocity as the clutter. As a result, the frequency spread of the main-beam clutter determines the smallest radial velocity that a target can have and still be detected. This minimum detectable velocity (MDV) is hence directly related to the beamwidth of the antenna, and can be estimated by equating the Doppler shift of a target with velocity MDV to the Doppler shift of the beam-edge clutter. For a side-looking radar this is,

$MDV = v_p \sin(\theta) \sin(EL)$  (7.25)
The amount of clutter that competes with the signal from a target is the sum of the component of the clutter spectrum within the bandwidth of a single NBF, and the aliased clutter components that have Doppler shifts equal to the translates of this passband by integer multiples of the PRF. These aliased clutter components fold into the passband of the NBFs through the Doppler ambiguities of Figure 7-3. Clearly, smaller filter bandwidths, corresponding to long dwell times, reduce the total amount of clutter that can compete with a target signal. Recall there is a practical limit of approximately 50 ms placed on the dwell time from considerations of phase coherence. Also, higher PRFs reduce the number of aliased clutter components that can fold into the filter bandpass. Of course, this comes at the expense of range ambiguities. The actual detection for each channel can be performed by a basic peak detector, and most often features an additional level of noncoherent integration on the (coherent) output of the pulse-Doppler processor, as described in earlier sections. An alternative to the incoherent peak detector is to implement a synchronous detector. Adding this to the pulse-Doppler output creates the so-called MTI radar. The advantage of this approach is that it can be used to obtain very accurate measurements of target velocity. The cost is a loss in processing gain, due to a mismatch loss [59], and an extremely complicated implementation.
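The clutter-spread and MDV relations of Eqns. 7.23–7.25 can be put to numbers with a short sketch. The orbital velocity, wavelength, half-beamwidth and elevation angle below are illustrative assumptions, not figures from the text.

```python
import math

def clutter_doppler(vp, wavelength, az, el):
    """Absolute clutter Doppler shift (Eq. 7.23, flat non-rotating Earth)."""
    return 2.0 * vp / wavelength * math.cos(az) * math.sin(el)

def doppler_spread(vp, wavelength, half_beamwidth, el):
    """Main-beam clutter Doppler spread, side-looking radar (Eq. 7.24)."""
    return 4.0 * vp / wavelength * math.sin(el) * math.sin(half_beamwidth)

def mdv(vp, half_beamwidth, el):
    """Minimum detectable velocity, side-looking radar (Eq. 7.25)."""
    return vp * math.sin(half_beamwidth) * math.sin(el)

# Illustrative (assumed) values: 7.5 km/s orbital velocity, 3 cm
# wavelength (X band), 0.25 deg half-beamwidth, 45 deg elevation angle.
vp, lam = 7.5e3, 0.03
spread = doppler_spread(vp, lam, math.radians(0.25), math.radians(45.0))
print(f"Main-beam clutter spread: {spread / 1e3:.1f} kHz")
print(f"MDV: {mdv(vp, math.radians(0.25), math.radians(45.0)):.1f} m/s")
```

The MDV scales directly with the beamwidth, which makes numeric the argument of the next section: a sparse array with a very narrow main lobe drives the MDV down toward the slow movers that a single small aperture cannot separate from the main-beam clutter.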
7.2.7 The Potential of a Symbiotic Distributed Architecture

There are therefore some implicit characteristics of space-based radar that make the detection of ground moving targets very difficult. The most important of these are the isolation and integrity problems of reliably detecting and locating the targets to a high spatial resolution, while rejecting clutter and other interferers. These problems have led to proposed spacecraft designs that feature either very large apertures or very complicated and expensive (adaptive) clutter-processing schemes. As was suggested in Chapter 3, symbiotic architectures can offer improvements in both isolation and integrity compared to singular deployments, and can do so using reasonably modest satellite resources. It was suggested in that chapter that for the search mission, the most beneficial architecture is one that uses independent wide-angle beams on transmit (to achieve a high search rate) but coherently forms many simultaneous receive beams using the signals from all the satellites, to achieve high gain and clutter rejection. The creation of a large sparse array from a symbiotic cluster of formation-flying small satellites can therefore lead to improved capabilities by supporting a very narrow main-lobe beamwidth. This has the effect of increasing the ground resolution, reducing the main-lobe clutter and the MDV. Also, there will be no range ambiguities in the main-lobe, and the PRF can be increased. This last point is somewhat countered by the characteristically high sidelobes of sparse arrays, so that even range ambiguities in the sidelobes contribute significantly to clutter. Nevertheless, the possible improvements offered by such a system, together with the potential for cost savings from using smaller satellites at lower orbital altitudes, make
it worthy of investigation. The Air Force Research Laboratory has begun a study to do exactly this, incentivized by the Scientific Advisory Board's suggestion that developing and deploying these distributed technologies is a primary goal for the Air Force in the 21st century. Techsat21 is the name of the proposed design, and it involves using symbiotic clusters of small (or micro-) satellites to perform the GMTI mission.
7.3 The Techsat21 Concept

Techsat21 relies on forming large sparse arrays from clusters of formation-flying satellites, each weighing less than 100 kg, for the detection of ground-moving targets in a strong clutter background. To achieve the desired level of coverage, several clusters can be deployed, so the system can be classified as a clustellation within the GINA framework. An artist's impression of the system on orbit is shown in Figure 7-5, based on the current concept for the design. The gravity-gradient satellites will feature several state-of-the-art technologies including Micro-Electro-Mechanical Systems (MEMS), advanced solar arrays and batteries, modular transmit/receive modules, and, most importantly, very fast microprocessors.
Figure 7-5: Artist's impression of the operational Techsat21 system [48]

Operationally, the satellites receive and process the returns not only from their own transmitters, but also the bistatic responses from the orthogonal transmit signals of the other satellites in the cluster. Since each satellite has a different geometry to the target, the phase of the sampled radar signals is different for each satellite-ground-satellite path. This permits multiple simultaneous high-resolution, high-gain receive beams to be created during post-processing, supporting a greatly enhanced isolation and integrity capability. The key phenomenology is therefore that of sparse signal-processing arrays, and its principles must be understood before any further analysis or design can be introduced.
7.3.1 Signal Processing Arrays

In many modern remote sensing systems, the directional properties of the system (angular resolution, spatial filtering) are not only a function of the antenna, but also of the processing of the signals received or transmitted [34]. These "signal-processing antennas" include, but are not limited to: synthetic aperture antennas, which sweep out a large synthetic aperture using the motion of a real aperture; interferometers, which combine the signals from two widely spaced apertures; and sparse arrays, the primary interest here, in which antenna patterns equivalent to large filled apertures are reproduced using far fewer, widely separated antenna elements. The possible methods by which sparse arrays can be formed are numerous. Radio astronomers have long used the concept of a multiplying array, in which signals from two sub-arrays are multiplied during post-processing. This is in fact the reason that sparse arrays are classified as signal-processing arrays. However, the concepts discussed in this chapter involve only additive arrays, in which the signals from array elements are added coherently. Before discussing the different types of sparse arrays considered, it is helpful to review the mathematics used to calculate the directional properties of all arrays.
Arrays and the concept of spatial frequency

An array is an aperture excited at only discrete points or localized areas. The array consists of small radiators or collectors called elements. If the element radiation signal strengths are I_n and the elements are located at positions x_n, then the general aperture excitation can be written:

I(x) = Σ_n I_n δ(x - x_n)    (7.26)

This can also be written as:

I(x) = i(x)s(x)    (7.27)

where i(x) is an underlying current density and s(x) is a sampling function:

s(x) = Σ_n δ(x - x_n)    (7.28)
The far-field radiation pattern f(u) of such an array is given by the Fourier transform of the excitation. If all elements are illuminated with uniform strength, I(x) = s(x), and so

f(u) = F{I(x)} = ∫ I(x) e^{jkxu} dx    (7.29)
     = ∫ Σ_n δ(x - x_n) e^{jkxu} dx    (7.30)
     = Σ_{n=1}^{N} e^{jk x_n u}    (7.31)
where the Fourier variable u = sin θ. Note that f(u) is the far-field radiation pattern in terms of the electric field intensity. To convert to a power response that would be measured by a square-law detector, f(u) must be squared. Alternatively, from basic Fourier transform relationships, the square-law process corresponds to an auto-convolution of the aperture excitation, yielding the spatial frequency spectrum. The far-field power response can be obtained directly by taking the Fourier transform of the spatial frequency spectrum. These relationships are shown in Figure 7-6.
Figure 7-6: The relationship between the aperture distribution, the far-field amplitude response, the spatial frequency and the power response.
The previous development assumed that the array consisted of discrete point-source apertures, but accounting for directional elements is simple. If the array consists of identical antenna elements defined by a current density i_e(x), then the far-field pattern of a single element is given by the Fourier transform F{i_e(x)}:

e(u) = F{i_e(x)} = ∫ i_e(x) e^{jkxu} dx    (7.32)

Since the elements are located at positions x = x_n, the current density (excitation) across the array is:

I(x) = Σ_{n=1}^{N} i_e(x - x_n)    (7.33)
The radiation pattern of the whole array is the Fourier transform of this current density:

g(u) = ∫ Σ_{n=1}^{N} i_e(x - x_n) e^{jkxu} dx    (7.34)

Substituting y = x - x_n gives:

g(u) = Σ_{n=1}^{N} ∫ i_e(y) e^{jk(y + x_n)u} dy    (7.35)
     = Σ_{n=1}^{N} e^{jk x_n u} ∫ i_e(y) e^{jkyu} dy    (7.36)
     = f(u) e(u)    (7.37)
The first term is the array pattern and is defined by the geometric properties of the array. The second term is the element pattern, and is a function of the excitation of the element. Eqn. 7.37, called pattern multiplication, therefore decomposes the array pattern into properties due to the array geometry and properties due to element excitation. This shows that grating lobes can be suppressed if they lie outside of the element pattern. Eqn. 7.37 is general and can be applied to many scenarios. For example, multiple elements can be treated as a single super-element (or subarray). Their compound pattern can then be defined as the pattern of the super-element. Stacking these super-elements together and applying Eqn. 7.37, we can determine the far-field pattern of a two-dimensional planar array. Each column of N elements of the planar array is considered a single super-element. The two-dimensional array is treated as a linear array of M super-elements, having pattern f_2(u). By the principle of pattern multiplication, the array pattern is f(u, v) = e(u, v) f_1(v) f_2(u), where each element has pattern e(u, v) and f_1(v) is the pattern associated with a linear array of super-elements.
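Pattern multiplication is easy to verify numerically. The sketch below (element positions and element size are illustrative) compares the directly computed pattern of the full current density against the product f(u)e(u) of Eqn. 7.37:

```python
import numpy as np

# Sketch of pattern multiplication (Eqn. 7.37): the far-field pattern of an
# array of identical directional elements equals the array factor f(u) times
# the element pattern e(u). Positions and element extent are illustrative.

wavelength = 1.0
k = 2 * np.pi / wavelength
u = np.linspace(-1.0, 1.0, 201)            # u = sin(theta)

x_n = np.array([0.0, 1.0, 4.0, 6.0])       # element positions ({1 3 2} spacings)
y = np.linspace(-0.2, 0.2, 41)             # samples across one small element
i_e = np.ones_like(y)                      # uniform element excitation

# Array factor and element pattern computed separately
f_u = np.exp(1j * k * np.outer(u, x_n)).sum(axis=1)
e_u = np.exp(1j * k * np.outer(u, y)) @ i_e

# Direct pattern of the full current density (all shifted element copies)
x_full = (x_n[:, None] + y[None, :]).ravel()
i_full = np.tile(i_e, len(x_n))
g_direct = np.exp(1j * k * np.outer(u, x_full)) @ i_full

print(np.allclose(g_direct, f_u * e_u))    # True: g(u) = f(u) e(u)
```

The factorization is exact here because the discrete double sum over element copies separates into the product of the two single sums, just as the integral does in Eqns. 7.35-7.37.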
Sparse arrays

Having now laid out the mathematics, two possible options for forming sparse arrays applicable to Techsat21 can be introduced. The common link between both of the sparse array concepts presented here is that their element spacings are aperiodic. If the elemental spacings were periodic, there would be unwanted grating lobes in the far-field response.
Random Arrays
The random array is a sparse array with random positions of the array elements. Consider a linear array of N elements, their positions x_n being randomly distributed along a line of length D_a. Assume that all elements, regardless of their locations, are properly phased such that they form a main lobe of maximum strength along some direction θ_0 (redefine u = sin θ - sin θ_0). The complex far-field response is given by Eqn. 7.31, where the x_n are randomly distributed. The main-lobe amplitude is N, at u = 0, independent of the random locations. The width of this main lobe is mostly unaffected compared to a regularly spaced array, scaling with λ/D_a. Away from the main lobe, however, the phase angle kx_n u is a random variable, due to the randomness of x_n. Hence the unit vectors combine with random phases. The RMS amplitude grows as √N and the power as N. Thus the ratio of the average sidelobe power to the main-lobe power is N/N² = 1/N.
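These random-array statistics can be confirmed with a short Monte Carlo sketch; the aperture length, look direction and trial count below are arbitrary choices:

```python
import numpy as np

# Monte Carlo sketch of random-array statistics: main-lobe amplitude N,
# RMS sidelobe amplitude ~ sqrt(N), so the sidelobe-to-mainlobe power
# ratio is ~ 1/N. All parameter values are illustrative.

rng = np.random.default_rng(0)
N, D_a, wavelength = 64, 100.0, 1.0
k = 2 * np.pi / wavelength

trials = 2000
u_far = 0.5                        # a direction well away from the main lobe
powers = []
for _ in range(trials):
    x_n = rng.uniform(0, D_a, N)   # random element positions along the line
    f = np.exp(1j * k * x_n * u_far).sum()   # Eqn. 7.31 evaluated at u_far
    powers.append(abs(f) ** 2)

mainlobe_power = N ** 2            # all N phasors add in phase at u = 0
ratio = np.mean(powers) / mainlobe_power
print(ratio)                       # close to 1/N = 0.0156...
```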
Minimum Redundancy Arrays

Consider a conventional array of N elements, as in Figure 7-6. The spatial frequency spectrum of the array shows how the response is made up from constituent components, each related to the inter-element spacing of pairs of elements. It can be seen that for a regular array there are redundancies in the spatial frequencies. The short spatial frequencies have many components, corresponding to numerous pairs of elements with small separations. Conversely, the longer spatial frequencies, which can only be created by pairs of elements at each end of the array, have fewer components. Since each of the pairs that correspond to a given spatial frequency contributes identically to the eventual radiation pattern, it can be argued that element spacings should not be duplicated, as this corresponds to a waste of elements [64]. In terms of the response to a single source, this is quite true. An array that does not duplicate its inter-element spacings, having only one spatial component for each line in its spectrum, is known as a minimum redundancy array. For example, a 4-element minimum redundancy array has elements in positions x_1 = 0, x_2 = 1, x_3 = 4 and x_4 = 6. This arrangement, given the notation {1 3 2} to indicate the spacings, has all elemental separations between one and six, the same as a regular array of six elements, but achieves it with two fewer elements. The obvious benefits in cost and complexity from having fewer elements have made minimum redundancy arrays popular with the interferometry community [65]. For the spacecraft array concepts, such as Techsat21, reductions in the number of elements translate directly into a reduction in satellites and can dramatically lower costs. For this reason they are worth pursuing. The problem of arranging N elements along a line such that their spacings are nonredundant was first addressed by Leech [66] in the context of number theory, to define "restricted difference bases". In this work, the spacings for minimum redundancy arrays are given up to N = 11, and are reproduced in Table 7.1.
Note that there are two types of minimum redundancy array: the Unrestricted (or General) array, in which the maximum separation is allowed to increase to whatever value is necessary in order
to maximize the total number of sampled spatial frequencies; and the Restricted array, in which the spacings are set to maximize the number of contiguous spatial frequencies, accepting a penalty that some spatial frequencies are duplicated. The best option for forming sparse arrays for remote sensing is not clear, and a goal of this study was to quantify the characteristics of each option for the Techsat21 system.

Table 7.1: Minimum redundancy arrays, up to N = 11 elements; the number sequences indicate relative spacings

N    Unrestricted                Restricted
3    {1 2}                       {1 2}
4    {1 3 2}                     {1 3 2}
5    {3 1 5 2}                   {1 3 3 2}, {1 1 4 3}
6    {1 3 6 2 5}                 {1 1 4 4 3}, {1 5 3 2 2}, {1 3 1 6 2}
7    {6 3 1 7 5 2}               {1 3 6 2 3 2}
8    {1 11 8 6 4 3 2}            {1 1 9 4 3 3 2}, {1 3 6 6 2 3 2}
9    {1 3 9 11 6 8 2 5}          {1 1 12 4 3 3 3 2}, {1 3 6 6 6 2 3 2}, {1 2 3 7 7 4 4 1}
10                               {1 2 3 7 7 7 4 4 1}
11                               {1 2 3 7 7 7 7 4 4 1}
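The defining property of a restricted minimum redundancy array, that every separation from one up to the array span occurs at least once, can be checked in a few lines. The first two example spacing sequences are entries of the kind shown in Table 7.1; the third is a deliberately deficient sequence for contrast:

```python
from itertools import combinations

def covered_separations(spacings):
    """All pairwise element separations implied by a spacing sequence."""
    pos = [0]
    for s in spacings:
        pos.append(pos[-1] + s)
    return {abs(a - b) for a, b in combinations(pos, 2)}

def is_restricted_mra(spacings):
    """True if every separation 1..span occurs at least once."""
    span = sum(spacings)
    return covered_separations(spacings) == set(range(1, span + 1))

print(is_restricted_mra([1, 3, 2]))        # 4 elements spanning 6: True
print(is_restricted_mra([1, 3, 1, 6, 2]))  # 6 elements spanning 13: True
print(is_restricted_mra([1, 5, 2]))        # misses separations 3 and 4: False
```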
Both of the array concepts presented in this section have involved additive processing. An alternative is to use multiplicative arrays, in which the signals from pairs of apertures, or from sub-arrays, are fed into a circuit that produces an output proportional to the product of the two input signals. This has the advantage that it can provide angular resolution twice as fine as additive arrays, but can often lead to increased sidelobes [64]. The differences between additive and multiplicative processing are most easily seen by comparing their outputs in terms of the signals received by their antennas. For a two-element additive array
receiving signals e_1 and e_2,

Power output = e_1² + e_2² + 2e_1e_2

For omnidirectional antennas, the first two terms in this equation do not contribute to the high-resolution angular information. The cross-term is the only contributor to the angular resolution, and this is, in fact, precisely the output of a multiplicative array [64]. Note that the two self-product terms do contribute to the SNR, and so assist in the detection of a target (integrity), but not its angular location (isolation). Nothing further will be said concerning multiplicative arrays, since they were not considered for this study, although future research should address their applicability to the Techsat21 program.
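The role of the cross-term can be seen with sampled carriers; the tone frequency and the inter-element phase below are illustrative values:

```python
import numpy as np

# Two-element comparison: the additive (square-law) output e1^2 + e2^2 + 2*e1*e2
# carries its angle information only in the cross-term 2*e1*e2, which is
# (twice) what a multiplicative array would output directly.

n = np.arange(1000)
phase = 0.7                             # inter-element phase from arrival angle
e1 = np.cos(2 * np.pi * n / 100)        # 10 full cycles of a sampled carrier
e2 = np.cos(2 * np.pi * n / 100 + phase)

additive_power = (e1 + e2) ** 2
cross_term = additive_power - e1 ** 2 - e2 ** 2   # equals 2*e1*e2

print(np.allclose(cross_term, 2 * e1 * e2))       # True
# The time-averaged cross-term is cos(phase): the angle-dependent part
print(round(float(np.mean(cross_term)), 2))       # cos(0.7) ~ 0.76
```

Averaging over whole carrier cycles, the self-product terms contribute only a constant (SNR-boosting) level, while the cross-term retains the phase, and hence the angle, information.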
7.3.2 Overall System Architecture

The previous sections have introduced the concepts that will be used to choose an array type for the Techsat21 clusters, but little has been said concerning the overall implementation of the concept.
Implementing the sparse array with a satellite cluster

Most of the published work on sparse arrays assumes that there are hard-wired electrical connections between the separate antenna elements. This is, of course, not the case for Techsat21, in which each individual satellite represents an element of the array. The problems and issues that this fact raises are primarily concerned with coherence, bandwidth and processing load. Consider the basic Techsat21 architecture shown in Figure 7-7. Each of the n_s satellites transmits a different orthogonal radar signal that is received at every other satellite. The n_s time-domain signals received at each of the n_s satellites must eventually be delivered to the location at which the array signal processing will be performed. For now, do not worry too much about where and what this processor will be, since it will be discussed later; simply assume that it exists, and that the signals have to be transmitted there. To be able to form the coherent array, and for pulse-Doppler processing, the signals cannot undergo any integration prior to their delivery to the processor. This means that the cluster satellites must be able to digitally record each of the received signals, preserving all the carrier phase information. This can be done at IF after mixing (provided that the phase of the original carrier can be reconstructed), and this reduces the processing and storage requirements somewhat. After recording the signals, the digital data can be used to modulate a high-frequency carrier for transmission to the processor node or nodes. To create the sparse array directivity pattern, the array processor must then reconstruct all n_s² radar signals at their carrier frequency, and coherently sum their amplitudes. This is a non-trivial task at militarily useful radar frequencies (X-band, ~10 GHz).

Figure 7-7: Simplified Techsat21 Radar Architecture

Nevertheless, assuming that this is performed satisfactorily, the target information can then be extracted from the single-channel output of the array processor using a standard pulse-Doppler processor. Returning now to the question of where to place the array processor, the available options are that it could reside on the ground, on a single satellite, or be distributed among the cluster satellites. The first two options require enormous processing power and represent a single point of failure, while the latter involves a great deal of complexity. One of the goals of the generalized analysis is to quantify the impact of processor placement in terms of performance and cost.
Dimensionality of the array

The dimensionality of the array has not yet been mentioned. In actual fact, the Techsat21 clusters could feasibly be deployed and maintained in one, two, or even three dimensions, representing an array along a line, across a 2D plane, or within a 3D volume. The optimum
architecture will depend strongly on the mission requirements, the number of satellites in the cluster and the capabilities of each satellite, as well as orbital parameters and issues with the formation flying. One-dimensional clusters are perhaps the least complicated option, and can be formed by a simple train of satellites traveling in a single orbital plane. By looking to the side, perpendicular to the flight direction (θ_AZ = π/2), a sparse array is formed over the maximum extent of the satellite cluster, from the lead satellite to the trailing satellite. The length D_c of the cluster can be chosen freely to maximize radar performance, and since there are few or no tidal forces acting to distort it, the configuration is static in time. This has an important benefit in that it reduces the variability of the detection capabilities, thereby improving the availability. In addition, there are no real propulsion requirements beyond those necessary for absolute station-keeping, and the cluster is fail-safe, in that it requires no active control to maintain its configuration. The disadvantage of this architecture is that the array provides good angular resolution only in the azimuth direction,
θ_AZ = λ/D_c    (7.38)

while the beamwidth in elevation corresponds to the aperture size D_s of the individual satellites,

θ_EL = λ/D_s    (7.39)

Range ambiguities in the main-lobe are therefore not suppressed (since the main lobe has a large cross-track extent) and the PRF is limited by Eqn. 7.2 to small values. The fine resolution in azimuth does, however, provide for a very small MDV. The clutter suppression in any range bin is a function of the sidelobe level, which is a strong function of the number of satellites and their spacing. The next level of complexity would be to form two-dimensional arrays. These offer a huge benefit in terms of two-dimensional angular resolution, so that, at least in theory, the radar can be operated in a range-ambiguous mode (range ambiguities outside the main-lobe are suppressed by low sidelobe gain). This permits high PRFs and consequently improved clutter suppression. The MDV can be tiny, and the location accuracy of the target greatly improved over anything possible with singular deployments. All of these benefits come at the cost of increased complexity and more difficult cluster management. It has been shown [28] that realistically achievable cluster configurations can be formed in free-orbits, provided some amount of array tilt can be tolerated. The main problem with using two-dimensional clusters in free orbits is the dynamic nature of the array. Although array distortion can be limited by proper orbit selection, and actively controlled using propulsion, there will be times when the cluster is in a sub-optimal configuration.
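Plugging representative numbers into Eqns. 7.38 and 7.39 (an X-band wavelength with the study's cluster and aperture sizes; the specific values are illustrative) makes the azimuth/elevation asymmetry of a one-dimensional cluster concrete:

```python
# Azimuth vs elevation beamwidth for a 1D cluster (Eqns. 7.38 and 7.39),
# using X-band (10 GHz) and cluster/aperture sizes from the test matrix.
c = 3.0e8
wavelength = c / 10e9          # 0.03 m at X-band

D_c = 100.0                    # cluster length (m), illustrative
D_s = 2.0                      # single-satellite aperture (m), illustrative

theta_az = wavelength / D_c    # 3.0e-4 rad: very fine along-track beamwidth
theta_el = wavelength / D_s    # 1.5e-2 rad: coarse cross-track beamwidth

print(theta_az, theta_el)      # a 50x asymmetry: the root of the unsuppressed
                               # range ambiguities in the main lobe
```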
The sensitivity of the radar capabilities to distortion or rotation of the array has not yet been determined. It is hoped that future studies will address this issue. There are therefore many system variables that are important to the success of Techsat21. Just summarizing the ones that have already been discussed (in no particular order):
- The number of satellites in the cluster obviously affects the directivity and gain of the coherent array, which can impact all aspects of the radar capabilities.
- The array configuration, in terms of extent, spacing and dimensionality, is critical for the same reasons.
- The number of clusters deployed in the clustellation determines the statistics of the coverage over a theater.
- The PRF is the critical parameter in the clutter suppression, being related to the number of Doppler and range ambiguities that add to competing clutter. Large PRFs reduce the Doppler ambiguities, but small PRFs reduce the range ambiguities. The overall effect is strongly coupled to the array pattern, the waveform, the dwell time and the clutter variations.
- The dwell time on a target sets the number of pulses that can be integrated coherently and also the bandwidth of the Doppler filters, controlling the velocity sensitivity and, to a large extent, the clutter suppression. The dwell time is of course limited by the available time over a target, which is itself a function of the orbital parameters. Together with the number of incoherent pulses that are integrated and the beamwidth of the satellite transmit antenna, the dwell time effectively specifies the area search rate of the radar.
- The aperture on each satellite affects the transmit beamwidth that dominates the search rate. On the receive side, it affects the roll-off of the array sidelobes. Aperture is obviously important for SNR considerations, but the relative significance of this can only be appreciated after also accounting for clutter.
- The transmitter power on each satellite directly impacts the noise-limited capabilities through the radar range equation, but has no impact on the clutter-limited capabilities.
- The location of the processing can dominate the feasibility, cost and reliability of the system. Single-node processing may be the least complicated option, but requires enormous processing power and is a single point of failure.
This large, but not complete, list demonstrates the complexity of even a simplified system analysis of Techsat21. Design is even more challenging, since the coupling between the different variables is not immediately obvious, and it is unclear what the impact is of changing any one variable. These are precisely the conditions under which the GINA methodology can help. By performing system-level analysis, accounting for all the important functionality of the system, the impacts of different variables can be fully appreciated. The preliminary design can then reflect all the different coupled effects, reducing the potential for costly surprises later in the project.
7.4 Using GINA in Design Trades for Techsat21

The Techsat21 project is still in its infancy, and even the preliminary architectural design has not yet been finalized. To assist AFRL in the definition of a workable architecture, the GINA methodology has been applied to the problem. This section describes the modeling of Techsat21 within the GINA framework and presents the predicted capabilities for a wide range of possible architectures. The following section takes the candidate architectures that have the best capabilities, and assesses their CPF and Adaptability in order to make intelligent design suggestions.
7.4.1 Goals of the Study

A complete analysis and evaluation of all the architectural options available for Techsat21 is definitely beyond the scope of this study, and is probably more suited to an entirely dedicated research program. However, the design process has to start somewhere, and if nothing else, an investigation limited to a subset of all possible architectures has merit in eliminating alternatives or identifying viable candidates. This study is therefore limited to the evaluation of designs featuring one-dimensional clusters, and considers only the minimum redundancy arrays discussed in Section 7.3.1. Further work will assess two-dimensional clusters and other sparse array types. Thus, the primary goals of the GINA study for Techsat21 are to quantify the relative importance of the most significant architecture variables for 1D cluster configurations. These are: (1) cluster size, in terms of the number of satellites; (2) array configuration (restricted or unrestricted) and extent; (3) PRF; (4) transmitter power; (5) aperture size of each satellite; and (6) processing location. The real emphasis is on how each of these variables impacts the capabilities of the system, rather than on the performance and cost. The reason for this is that, at present, the system requirements have not been well defined. Furthermore, only by considering the capability characteristics can a feasible architecture be chosen. A shortlist of candidate architectures
has been selected, and some performance and CPF results are presented, based on an approximate set of system requirements that were agreed upon in meetings with the AFRL. The approach taken in the analysis is to model each alternative architecture within a full test matrix that covers reasonable ranges of each design variable. This method is not exactly elegant, but it is comprehensive and guarantees that the important trends are captured. The test matrix for the analysis is shown in Table 7.2.

Table 7.2: Test Matrix for Analysis of Techsat21
Variable                        Test values
Number of cluster satellites    4, 8, 11
Array type                      Unrestricted, Restricted
Cluster diameter                100 m, 200 m
PRF                             1500 Hz, 3000 Hz
Aperture size                   1 m, 2 m, 4 m
Transmitter power               100 W, 200 W, 400 W
Processor location              single satellite, distributed
Note that there are some combinations of variables that cannot be realized. For example, the smallest inter-element separation for an 11-element unrestricted array is 1/45th of its total length (see Table 7.1). This means that for a cluster that is 100 m long, satellite apertures no larger than approximately 2 m can be used. Also, although the location of the processor has an impact on the performance and cost, it should have no impact on the capabilities of the system. The trade study for the processor location can therefore be carried out after selecting the candidate architectures that have acceptable capabilities. Note that even with these reductions, the test matrix still has over 200 elements. The modeled system parameters that are constant across all cases are given in Table 7.3. These values are the result of conversations with AFRL. The orbital altitude is chosen to be 800 km based on a desire to keep the free-space attenuation low, while being high enough that the satellites have a wide FOV, and restricted to lie below the radiation belts. The radar bandwidth, at 15 MHz, is similar to previous designs. A noise temperature of 290 K is typical of ground-looking receivers, and the receiver losses are assumed to be approximately 1.5 dB. Finally, the target RCS is conservatively chosen to be 10 m² to represent a small automobile.
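The full-factorial structure of Table 7.2 can be enumerated directly. This is only a sketch: the feasibility rule below encodes just the one infeasible combination mentioned above (an 11-element unrestricted array in a 100 m cluster with apertures larger than 2 m), and the resulting counts come from this sketch, not the thesis:

```python
from itertools import product

# Table 7.2 test matrix as a dictionary of design variables
matrix = {
    "n_sats": [4, 8, 11],
    "array_type": ["unrestricted", "restricted"],
    "cluster_diameter_m": [100, 200],
    "prf_hz": [1500, 3000],
    "aperture_m": [1, 2, 4],
    "power_w": [100, 200, 400],
}

def feasible(case):
    n, atype, diam, _, aperture, _ = case
    # An 11-element unrestricted array has a smallest spacing of 1/45th of
    # its length, so a 100 m cluster cannot fit apertures larger than ~2 m.
    if n == 11 and atype == "unrestricted" and diam == 100 and aperture > 2:
        return False
    return True

all_cases = list(product(*matrix.values()))
cases = [c for c in all_cases if feasible(c)]
print(len(all_cases), len(cases))   # 216 raw combinations, 210 feasible
```

The raw count of 216 combinations is consistent with the text's statement that the test matrix has over 200 elements.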
7.4.2 Transformation of the GMTI mission into the GINA framework

Within the framework of GINA, the definition of the market and the quality-of-service parameters for the GMTI mission are specified below:
Table 7.3: Modeled Techsat21 system parameters held constant across all cases

Parameter                     Value
System altitude               800 km
Radar bandwidth               15 MHz
Coherent dwell time           50 ms
Target RCS σ_T                10 m²
Receiver noise temperature    290 K
System losses                 1.5 dB

- The market is to detect, locate and track moving targets within specified ground regions. The "users" are therefore ground locations, or cells, of a given size. The system must transfer information regarding the existence of moving targets from these ground cells to military command centers. Operated in a "track-while-search" mode, the radar can construct and maintain the tracks of targets by repeated revisits.
- Isolation is specified in terms of the ground resolution (the cell size), the MDV and the velocity precision of detected targets. The amount of interference from clutter and ambiguities is also related to the isolation capabilities, since they can result in a declaration of a target in an incorrect cell.
- Rate is equivalent to the revisit (search) rate of the ground cells within a theater of interest. This flows directly from the need to track moving targets. The update rate must match the expected target dynamics, since slow movers pose a less serious threat and can be updated slowly, while fast-moving targets must be updated often to maintain track. Note that this revisit rate specifies the update rate of each cell during the times when the theater is being searched. Any periods of time when the theater is not being accessed are omitted from this analysis. These coverage considerations are largely uncoupled from the radar issues of interest, and are only a function of the constellation/clustellation design.
- Integrity is strictly the sum of the probability of missed detection and the probability of false alarm for each radar interrogation of each ground cell. This represents the total probability of error. However, search radars are conventionally operated in a Constant False Alarm Rate (CFAR) mode, in which the rate of false alarms is held constant and the detection threshold floats accordingly. For comparative purposes, the Integrity is therefore defined as just the probability of detection, at the specified CFAR.
- Availability has the consistent definition of being the probability of achieving given values of the other capability parameters.
7.4.3 Modeling Techsat21

The network architecture for Techsat21 is, by definition, a representation of the system architecture. The topology of these networks is unaffected by the array configuration (restricted/unrestricted), since the routing of information is not a function of satellite spacing. However, architectures with different numbers of satellites have different network topologies. The network diagrams used for the generalized analysis for 4, 8 and 11 satellites are shown in Figures 7-8 to 7-10.
Figure 7-8: Network diagram for Techsat21 with n_s = 4 satellites
Figure 7-9: Network diagram for Techsat21 with n_s = 8 satellites
Figure 7-10: Network diagram for Techsat21 with n_s = 11 satellites
Starting at the left hand side of each of these diagrams, the \Radar Source" module represents the signal generator for the individual radars. Although only a single module is shown, this represents all ns of the cluster satellites. This method of model reduction is possible because each of the orthogonal transmit signals from the ns satellites are uncoupled through the network until being combined by the processor module. Each channel exhibits the same behavior through the system, and only the eects of channel failures need to be modeled. The next module represents the ns radar transmitters, in which the transmit power and aperture size are speci ed. The aperture controls not only the transmission gain of the signals, but also the one-way far- eld radiation pattern that illuminates the theater. The \Two-way Spaceloss" module calculates the r2 attenuation of the signal power that is experienced in each direction, to and from the target. This depends on the constellation altitude and on the grazing angle between the line of sight to the cluster and the target's local horizon. The grazing angle is actually represented as a probability distribution function, as shown in Figure 7-11. This distribution function was obtained by creating a histogram of the grazing angles above a mask angle of 15o for all ground locations within the eld of view of a cluster. There is no nadir-hole constraint placed upon the grazing angle since the ability of the system to detect targets at all angles will be calculated; the grazing angles that lead to SCR so high as to hinder detection will show up in the results as losses of availability. Figure 7-11 thus correctly represents the viewing statistics of a cluster that is actively searching a theater of interest. Of course, there will be times when the cluster is not in view of a theater, but this is not important since the capabilities calculated in this chapter refer only to the times when a given cluster is actively searching. 
The fact that the elevation angle is represented statistically is the reason that the spaceloss is modeled as a "two-way" loss, and not as two separate "one-way" losses; the statistics for each direction are not independent and so the net loss has the same statistics as either one alone. Returning to the network diagrams, the next module accounts for the target characteristics (σT) and the clutter returns (σ0). The power reflected from the target is simply the product of the incident power and σT. Since σT is assumed to have a constant value (10 m²), this does not change the nature of the signal's power statistics.
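The 1/R⁴ two-way dependence and the σT scaling can be illustrated with the standard monostatic radar range equation; the transmit power, gain and wavelength below are illustrative placeholders, not Techsat21 design values:

```python
import math

def received_power_w(pt_w, gain, wavelength_m, sigma_m2, range_m):
    """Monostatic radar range equation; the two-way spreading loss is the
    1/r^2 attenuation applied in each direction, i.e. 1/R^4 overall."""
    return (pt_w * gain**2 * wavelength_m**2 * sigma_m2
            / ((4.0 * math.pi)**3 * range_m**4))

# sigma_T = 10 m^2 from the text; power, gain and wavelength are illustrative
p_near = received_power_w(400.0, 1.0e4, 0.03, 10.0, 1.0e6)
p_far = received_power_w(400.0, 1.0e4, 0.03, 10.0, 2.0e6)
# doubling the range costs a factor of 2^4 = 16 in received power
```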
Figure 7-10: Network diagram for Techsat21 with ns = 11 satellites
¹ All the network diagrams are screen-shots from the software used to perform the analysis.
At this stage, the clutter return is actually represented as the clutter power per unit area for reasons that will become clear later. This is given by the product of the incident signal power and the average clutter cross section per unit area σ0. As described in section 7.2.5, σ0 varies strongly with many parameters, but particularly with terrain and grazing angle. Published data [57] for measured variations in σ0 with grazing angle for a typical terrain are plotted in Figure 7-12. The results presented in this chapter use the σ0 variation for farmland terrain. This variation is combined with that of the incident power to obtain the clutter power per unit area as a function of grazing angle. Its statistics can also be determined from the grazing angle probability distribution function. Again referring to the network diagrams, the output of the target/clutter module is split into ns separate information paths for input to the ns satellite receivers via their associated "Cluster Separation" modules. The number of these paths through the receiving array is the only difference between the diagrams of Figures 7-8 to 7-10. Note that Figure 7-10 is drawn with only a single module that represents the entire receiving array; this was done for diagrammatic simplicity, and the actual topology "underneath" this block looks just like the other networks but with 11 satellites. The "Cluster Separation" modules represent the array configuration and are used to input the relative positions of the different satellites. They appear in the diagram only because the receiver modules are standardized modules¹ with no input field that specifies position. The receiver modules specify the receiver antenna size (usually the same as the
Figure 7-11: Grazing angle probability distribution function
transmit antenna size), the noise temperature, and the circuit losses. It was stated earlier that the capabilities should be unaffected by the location of the processor, and so for simplicity it is modeled as a single module that could be a satellite, a ground station or a parallel computer formed from the ns satellites. For the performance calculations described later, account is taken of the reliability and cost implications of the different processor implementations. The processor module receives the inputs from the ns satellite receivers, with each input carrying as many as ns separate channels (the different transmitter signals). The first operation is to calculate the far-field power response of the sparse array. Using the two-way antenna pattern for each satellite as e(u), and the Fourier transform of the satellite positions as f(u), the far-field amplitude response is calculated from Eqn. 7.37. Squaring this gives the power response, as a function of azimuth angle. Sample antenna patterns for 100m diameter unrestricted arrays consisting of 4, 8 or 11 satellites, each with a 2m aperture, are shown in Figures 7-13 to 7-15. The maximum directivity at the boresight of the sparse array corresponds to an effective processing gain (after accounting for the noncoherent summing of noise) of ns². This is multiplied by the target signal power, assuming the target is located at the boresight. The clutter also undergoes pattern amplification, but the gain varies across the pattern. A map of the clutter amplification factor (due to array processing) for each ground cell at coordinates (AZ,EL) can be calculated. Multiplying this map by the clutter power per unit area for each grazing angle, and by the calculated area of each cell, gives a map of the array-processed clutter power for all ground locations.
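The far-field computation described above can be sketched as follows; the wavelength and the 8-element positions are hypothetical stand-ins (the thesis's actual minimum-redundancy layouts are not reproduced here), and the element pattern is modeled as a uniform-aperture sinc:

```python
import cmath
import math

WAVELENGTH = 0.03   # m; an assumed X-band-like value, for illustration only
POSITIONS = [0.0, 14.0, 37.0, 55.0, 73.0, 86.0, 95.0, 100.0]  # m, hypothetical
                                                              # ns = 8 layout

def element_voltage(u, ds=2.0):
    """One-way voltage pattern of a uniform aperture of diameter ds (sinc)."""
    x = math.pi * ds * u / WAVELENGTH
    return 1.0 if x == 0.0 else math.sin(x) / x

def array_factor(u):
    """f(u): the Fourier transform of the satellite positions."""
    return sum(cmath.exp(2j * math.pi * x * u / WAVELENGTH) for x in POSITIONS)

def power_response(u):
    """|e(u) f(u)|^2 with e(u) the two-way element pattern; u = sin(theta).
    The boresight peak equals ns^2 = 64 for this 8-element layout."""
    return abs(element_voltage(u)**2 * array_factor(u))**2
```

Sweeping `u` over, say, ±0.015 reproduces the kind of pattern shown in Figures 7-13 to 7-15, with the boresight peak at the ns² processing gain.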
Figure 7-12: Clutter reflectivity, σ0 (dB m²/m²), as a function of grazing angle, for several terrain environments (farmland, woodland, cities, desert)
[Plot annotations: Altitude = 800 km; average range = 1388 km; average 3dB resolution = 167 meters; average MDV = 1 m/s; PRF = 1500Hz; minimum grazing angle to avoid range ambiguities = 16°]
Figure 7-13: Far-field power response for an unrestricted minimum redundancy array; ns = 4; Dc = 100m; Ds = 2m
Recall that the ground-clutter returns also have Doppler shifts that are dependent on the positions of the clutter source relative to the radar. It is therefore possible to calculate a Doppler map giving the Doppler shifts for each ground cell at coordinates (AZ,EL). Consider now the pulse-Doppler processing. The coherent dwell time has been assumed to be 50ms, which together with the PRF specifies the number of integrated pulses and hence the number of Doppler filters used by the pulse-Doppler processor. Now, the target can be assumed to have a radial velocity and associated Doppler shift that would place its signal within the bandpass of any of these filters with equal probability. For each filter, and each range bin (corresponding to a specific value of EL), the clutter that competes with the signal is equal to the sum of the array-processed clutter powers for each of the ground cells that have the correct Doppler shift to pass through the filter. This includes the Doppler ambiguities. The probability distribution function of this competing clutter can also be calculated by combining all the relevant statistics for each of the variables in the calculation, correctly accounting for those that are independent and those that are not. The SIR values (and statistics) at the output of the pulse-Doppler processor can now
[Plot annotations: Altitude = 800 km; average range = 1388 km; average 3dB resolution = 208 meters; average MDV = 1 m/s; PRF = 1500Hz; minimum grazing angle to avoid range ambiguities = 16°]
Figure 7-14: Far-field power response for an unrestricted minimum redundancy array; ns = 8; Dc = 100m; Ds = 2m
be determined with Eqn. 7.21 using the calculated probability distributions for the signal power, the noise power and the clutter power. The effects of noncoherent integration are then modeled. The number of pulses (each with the calculated SIR) that can be integrated is the ratio between the total allowable dwell time for each cell and the coherent dwell time. The total allowable dwell time is estimated from the theater size, the transmit beam footprint and the required revisit rate. The resulting number of pulses integrated is such that there is just enough time to sample every ground cell in the theater within the required revisit interval. If the revisit rate is specified too high, the radar does not have time to visit every location even once, and the probability of achieving any particular SIR is reduced linearly by the fraction of cells that cannot be addressed. With the SIR and the number of pulses to be integrated, the probability of detection can be calculated from Eqn. 7.16, for any specific Pfa. The results presented in this chapter used a Pfa corresponding to a false alarm rate of 1/1000 seconds for each km² of theater. This value was chosen during conversations with AFRL as being a reasonable starting point for
[Plot annotations: Altitude = 800 km; average range = 1388 km; average 3dB resolution = 208 meters; average MDV = 1 m/s; PRF = 1500Hz; minimum grazing angle to avoid range ambiguities = 16°]
Figure 7-15: Far-field power response for an unrestricted minimum redundancy array; ns = 11; Dc = 100m; Ds = 2m
preliminary analysis. The availability of the calculated detection capability is defined by the probability distribution function of the corresponding SIR.
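The pulse-Doppler bookkeeping described above can be sketched as follows. The PRF and coherent dwell are from the text (note the simple product gives 75 pulses, while the thesis quotes an FFT length np of about 64); the Doppler values are illustrative:

```python
PRF = 1500.0            # Hz
COHERENT_DWELL = 0.050  # s, the assumed coherent dwell time

# integrated pulses per dwell = number of Doppler filters
N_FILTERS = int(PRF * COHERENT_DWELL)

def filter_index(doppler_hz):
    """Doppler filter that a return falls into. Shifts separated by any
    multiple of the PRF alias into the same filter (Doppler ambiguity)."""
    bin_width = PRF / N_FILTERS   # = 1 / COHERENT_DWELL
    return int((doppler_hz % PRF) // bin_width)

# A clutter cell whose aliased Doppler matches the target's competes with it
assert filter_index(230.0) == filter_index(230.0 + 2.0 * PRF)
```

Summing the array-processed clutter power of every ground cell whose `filter_index` matches the target's filter, per range bin, gives the competing clutter described in the text.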
7.4.4 Capability Results
The Capability characteristics relating PD to Availability have been calculated for each architectural option in the test-matrix and with the following quality-of-service parameters:
Theater size (number of users) = 5×10^5 km² and 10^6 km²
Revisit time = 60 seconds, 100 seconds, and 120 seconds
False alarm rate = 1/1000 seconds for each km² of theater
The results (over 200 of them) are included in the attached Appendix. To preserve the reader's sanity, the important trends displayed by these characteristics are summarized in the following sections, organized according to the PRF, since it turns out that this has one of the largest impacts on the capabilities.
Capabilities of systems with PRF=1500Hz
At this lowest PRF, the footprint is range unambiguous, and the dominant signal degradation comes from the large clutter returns attributable to a large number of Doppler ambiguities.
Restricted Arrays versus Unrestricted Arrays
There are no significant differences in the capabilities between the Restricted and Unrestricted (Generalized) array configurations. Basically, the differences in the far-field patterns cause variations in the clutter accepted in any particular Doppler filter, but average out when considered over all possible target velocities. The largest differences amount to less than 5% variation in availability, with the Unrestricted arrays being a little better than the Restricted arrays. The reason for this small improvement is a slightly finer angular resolution that limits the main-lobe clutter.
Number of Satellites
The configurations featuring 4 satellites have very poor capabilities, with availabilities less than 10% for all useful detection probabilities (PD > 0.5). The problems are many grating lobes and high sidelobes that amplify clutter significantly. Since all the architectures with ns = 4 have such poor capabilities, only a small selection of them are included in the Appendix, just to show how bad they really are. Increasing the cluster size to 8 satellites improves the capabilities to within the realms of usefulness by suppressing grating lobes and reducing sidelobe levels. The availabilities reach as high as 77% at a PD = 0.5, with a 2 minute update of the small theater. This is for a system featuring small apertures (1m) and high power (400W). The largest cluster size of 11 satellites offers the highest capabilities. For the same two minute update of the small theater, the best 11 satellite clusters can support a PD = 0.5 with 95% availability. This is a militarily useful capability. The architectures that achieve these capabilities involve small apertures (1m) and medium to high powers. In general, increasing the number of satellites results in improved capabilities through greater sidelobe suppression (see Figures 7-13 to 7-15). This reduces the impact of the Doppler ambiguities, and transitions the system so that it is more evenly noise- and clutter-limited.
Power and Aperture
The individual aperture size has a profound impact on the results. A smaller aperture provides quadratic increases in the search rate, and hence quadratic increases in the number of integrated pulses, but incurs only a linear penalty in the number of Doppler ambiguities. Conversely, a large aperture produces dreadful results, reducing the number of pulses that can be integrated. The conclusion drawn is that a smaller aperture offers much greater PD, and the best architectures in every category feature the smaller aperture size. The small apertures are compromised a little in availability at the low PD's, since this regime corresponds to very heavy clutter backgrounds (high grazing angles) in which the extra Doppler ambiguities have an impact. The high PD regime corresponds to lower clutter powers, and hence a slower antenna roll-off is not a problem. Since the aperture area goes as the square of the diameter, the smaller apertures obviously have lower SNRs. This is not too important if the power is equal to or greater than 200W, since the detection is then clutter-limited. However, reducing the power to only 100W, while also having a small aperture, means that noise becomes relatively more significant (compared to clutter) and results in a noticeable (20%) drop in the availability. Because the detection is clutter-limited, increasing the power beyond 200W has a very limited impact on the capabilities.
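The quadratic search-rate argument can be made concrete with a small sketch; the footprint area assumed for a 1m aperture is a hypothetical figure, and only the Ds scaling matters:

```python
def pulses_integrated(theater_km2, ds_m, revisit_s, coherent_dwell_s=0.050,
                      footprint_km2_at_1m=4000.0):
    """Coherent dwells available for noncoherent integration per beam position.
    The footprint area scales as 1/Ds^2 (beamwidth ~ wavelength/Ds in each
    dimension); the 1m-aperture footprint area is a hypothetical figure."""
    footprint_km2 = footprint_km2_at_1m / ds_m**2
    n_positions = theater_km2 / footprint_km2
    dwell_per_position_s = revisit_s / n_positions
    return dwell_per_position_s / coherent_dwell_s

# Halving the aperture diameter quadruples the footprint area and hence the
# number of pulses that can be integrated -- the quadratic gain in the text
n_1m = pulses_integrated(5e5, 1.0, 120.0)
n_2m = pulses_integrated(5e5, 2.0, 120.0)
```

When `pulses_integrated` falls below 1, the theater cannot be fully covered and availability is scaled down by the fraction of cells addressed, as the text describes.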
Array Diameter
Longer baselines result in finer main-lobe resolution, but can mean that more grating lobes fold into the pattern. This essentially produces "blind-spots" in the response, where targets cannot be detected. Other than this, which is only an issue for a few of the 8 satellite architectures, there are no penalties for spreading the cluster over the longer baselines. The only benefit is a marginally lower theoretical MDV, although both options provide MDVs that are lower than can be practically achieved.
Summary for PRF=1500Hz
The best option is to choose a small aperture (very important), with a high power transmitter (less important), on as many satellites as can be afforded. The array diameter does not really matter.
The candidate architectures with a PRF=1500Hz that have the best capabilities are:
8 satellites, 100m Generalized array, 1m aperture, 400W
11 satellites, 100m Generalized array, 1m aperture, 200W
The Capability characteristics for each of these are shown in Figures 7-16 and 7-17. Across all values of PD, the availability is a strong function of the update rate, since high rates reduce the time available for noncoherent integration. The range in variation with update rate is indicative of the significance of thermal noise, since this is suppressed by integration. The overall shape of the curves is dominated by clutter effects. Note that the availability drops to very low values for high PD's since they can be achieved only during
the rare circumstances when the geometry leads to low clutter returns. The value of the availability at the elbow at low PD's is somewhat representative of the relative significance of the clutter to detection, since this corresponds to the worst case clutter returns. The corresponding antenna patterns for these two architectures are shown in Figures 7-18 and 7-19. These plots give the average resolution (over all likely grazing angles) to be less than 200m and the MDV to be a very low 1 m/s. This last value is probably not achievable in practice since clutter motion effects (due to wind etc.) that have not been modeled begin to dominate at low Doppler frequencies.
Capabilities of systems with PRF=3000Hz
With a PRF of 3000Hz, there are range ambiguities across the footprint for all aperture sizes. This has the detrimental effect of adding competing clutter to all range bins, and in practice would require some additional processing to resolve the ambiguity in target location. For this analysis, the effects of the additional clutter have been modeled, but not the problems of correct target location. Since the number of range ambiguities is proportional to the length of the footprint, it could be expected that smaller apertures are penalized by having longer footprints.
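The range-ambiguity count follows from the standard unambiguous-range relation, c/(2·PRF); a short sketch (the footprint length used in the example is illustrative):

```python
C = 3.0e8   # speed of light, m/s

def unambiguous_range_km(prf_hz):
    """Maximum unambiguous range: echoes from farther than c/(2*PRF) fold
    into the receive windows of later pulses."""
    return C / (2.0 * prf_hz) / 1000.0

def n_range_ambiguities(footprint_length_km, prf_hz):
    """Number of ambiguous range intervals folded across the footprint."""
    return int(footprint_length_km // unambiguous_range_km(prf_hz))

# 1500 Hz -> 100 km unambiguous; 3000 Hz -> 50 km, so a long footprint from a
# small aperture folds several ambiguous intervals onto every range bin
folds = n_range_ambiguities(160.0, 3000.0)   # hypothetical 160 km footprint
```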
Restricted Arrays versus Unrestricted Arrays
Once again, there is no difference in the capabilities of systems featuring Restricted or Unrestricted arrays.
Number of Satellites
The 4-satellite clusters have unacceptably low capabilities and are not discussed further. The differences between the capabilities of the 8 and the 11 satellite clusters are not as great at 3000Hz as they were at the lower PRF. The reason is that the detection process is now dominated by range ambiguities, and with one-dimensional arrays, the roll-off in the response in the range direction is entirely a function of the elemental aperture size. As a result, suppressing sidelobes in azimuth by adding more satellites does not really improve the capabilities of the system. Quantitatively, the extra signal power placed on the target improves the SNR enough to give a 5-10% improvement in the availability by going from 8 satellites to 11 satellites. The shape of the characteristics does not, however, change noticeably.
Power and Aperture
As has been suggested, the aperture size is a critical parameter for the capability of the systems in the presence of range ambiguities. The slow roll-off in the patterns for
[Plots: Availability versus Integrity for Model TS8G100hpsa, with 500,000 and 1,000,000 users, at Rates of 0.0083, 0.01 and 0.0167]
Figure 7-16: Capability Characteristics for candidate Techsat21 architecture: ns = 8; Dc = 100m; Generalized Array; P = 400W; Ds = 1m; PRF=1500Hz
[Plots: Availability versus Integrity for Model TS11G100sa, with 500,000 and 1,000,000 users, at Rates of 0.0083, 0.01 and 0.0167]
Figure 7-17: Capability Characteristics for candidate Techsat21 architecture: ns = 11; Dc = 100m; Generalized Array; P = 200W; Ds = 1m; PRF=1500Hz
[Plot annotations: Altitude = 800 km; average range = 1388 km; average 3dB resolution = 250 meters; average MDV = 1 m/s; PRF = 1500Hz; minimum grazing angle to avoid range ambiguities = 20°]
Figure 7-18: Far-field power response for candidate Techsat21 architecture: ns = 8; Dc = 100m; Generalized Array; P = 400W; Ds = 1m; PRF=1500Hz
small apertures causes very large range and Doppler ambiguity problems. This has a significant effect on the capability characteristics, in both shape and magnitude. For low to mid values of PD, in conditions dominated by clutter returns at large grazing angles, the extra range ambiguities of 1m-aperture systems cause 10-20% losses in availability compared to 2m-aperture designs. At the high values of PD, achievable only in conditions with weaker clutter (small grazing angles), the small aperture allows a longer total dwell time and so more pulses can be integrated to improve SNR. This benefit is not enough, however, to outweigh the losses due to range ambiguities. The only conditions when it makes sense to have a smaller aperture is if the driving requirement is for a high search rate; under these conditions, systems with larger apertures (2m or 4m) do not have time for noncoherent integration, and their capabilities are worsened beyond that of the (already poor) capabilities of the small-aperture system. In fact, the systems with the largest aperture (4m) have this problem even at the lower search rates. Consequently, the availability supported by the systems with the 4m-apertures is almost 50% worse than mid-sized aperture systems in the useful
[Plot annotations: Altitude = 800 km; average range = 1388 km; average 3dB resolution = 250 meters; average MDV = 1 m/s; PRF = 1500Hz; minimum grazing angle to avoid range ambiguities = 20°]
Figure 7-19: Far-field power response for a candidate Techsat21 architecture: ns = 11; Dc = 100m; Generalized Array; P = 200W; Ds = 1m; PRF=1500Hz
ranges of PD. Essentially, the improvements in the SNR and the sidelobe clutter suppression do not help detection as quickly as an almost total loss of noncoherent integration hinders it. The interesting conclusion is that there appears to be an optimum aperture size for the system when using a range ambiguous PRF, and for a PRF=3000Hz, this optimum is around 2m. Transmitter power, conversely, has a very limited impact. Within the range of values that were modeled (100W, 200W and 400W), each doubling of the power resulted in approximately a 5% improvement in the availability at useful PD's. This logarithmic behavior is typical of systems operating in the linear region of the SNR vs PD curves of Figure 7-2.
Array Diameter
As for the lower PRF, the array diameter has a very limited effect on the capabilities of the system.
Summary for PRF=3000Hz
Operating in a range ambiguous mode, the architectures with the best capabilities have mid-sized apertures and high powers. The benefits offered by a higher power transmitter are limited and may not be worth the extra cost that it represents.
Of those modeled with a PRF=3000Hz, the architecture with the best capabilities has 11 satellites, each with a 2m aperture and 400W of transmit power. The capability characteristics for this system are shown in Figure 7-20. Comparing these characteristics with those of the best architectures at a PRF of 1500Hz, it can be seen that the higher PRF offers worse capabilities. The range ambiguities are simply too damaging. For this reason, none of the architectures with a PRF of 3000Hz were carried through to the CPF part of the analysis. Notice that even for a moderate-sized theater of a half-million square kilometers, none of the architectures presented can support availabilities exceeding 90% at any useful update rate (around 1 minute) for any PD greater than 0.5. These values represent something close to the minimum acceptable capabilities for a military GMTI mission, meaning that the use of one-dimensional clusters featuring minimum-redundancy arrays is probably inappropriate for an operational theater surveillance system. However, their simplicity makes them suitable for a demonstration-class mission, and their capabilities could be militarily useful for smaller-sized theaters.
7.5 The Performance, CPF and Adaptability for Techsat21 Candidate Architectures
The architectures with the best capabilities are now analyzed in terms of their generalized performance and cost. In addition, the issue of how best to implement the signal processing is addressed. Performance and cost will be used as discriminators for choosing whether to deploy a single dedicated processing satellite per cluster, or to distribute the processing among the cluster satellites themselves. The first option requires an extra satellite which must be very capable (to be able to handle the enormous processing load) and reliable (to avoid single-point failure modes). The second option adds complexity to the system, in terms of the intersatellite communication, parallelization of the algorithms and load-balancing between the satellites, but has no single point of failure.
7.5.1 Performance
The first step in quantifying the performance is to establish a set of system requirements so that the concept of mission failure can be defined. These requirements represent minimum
[Plots: Availability versus Integrity for Model TS11G100hp, with 500,000 and 1,000,000 users, at Rates of 0.0083, 0.01 and 0.0167]
Figure 7-20: Capability Characteristics for candidate Techsat21 architecture: ns = 11; Dc = 100m; Generalized Array; P = 400W; Ds = 2m; PRF=3000Hz
acceptable values for the isolation, rate, integrity and availability of system operations in a given market. For the GMTI mission this translates into the availability at specific values for the MDV and location accuracy of the target, the revisit rate of a theater of a given size, and the PD and FAR. Values for these requirements have been chosen based on conversations with the AFRL, and represent reasonable estimates that are appropriate for a preliminary study. These are:
Theater size = 10^5 square kilometers
Revisit time = 1 minute
MDV = 3 m/s, Location accuracy (resolution) = 1km
PD = 0.75
Availability = 90%
Two candidate architectures were selected, representing a small cluster (8 satellites) and a large cluster (11 satellites):
1. 8 satellites, Ds = 1m, Pt = 400W
2. 11 satellites, Ds = 1m, Pt = 200W
These architectures are considered the "baseline" systems for evaluation, and to show that they can satisfy the requirements, their capability characteristics are reproduced in Figure 7-21. Also shown in the figure are the capabilities for modified versions of the 11 satellite architecture with higher and lower transmit powers. These alternatives offer different levels of margin by which the capabilities exceed the requirements, and may result in increased performance or reduced costs. This will be discussed later in the Adaptability section. Note that the capability characteristics are independent of the processor implementation, and each of the candidate architectures could be deployed with either type of processor. The list of candidate architectures considered for the rest of the study is therefore:
1. 8 satellites, Ds = 1m, Pt = 400W
2. 11 satellites, Ds = 1m, Pt = 200W
3. 8 satellites, Ds = 1m, Pt = 400W with centralized processor
4. 11 satellites, Ds = 1m, Pt = 200W with centralized processor
One of the important differences between these lies in the probability of continued system operation, measured by the generalized performance. Reliability models are therefore needed for each architecture. Consider first the systems with distributed processors.
[Plot: Availability versus Integrity (PD) for ns=8, Ds=1m, Pt=400W; ns=11, Ds=1m, Pt=200W; ns=11, Ds=1m, Pt=100W; and ns=11, Ds=1m, Pt=400W]
Figure 7-21: Capability Characteristics for candidate Techsat21 architectures at a 1 minute update of a 10^5 km² theater; requirements are PD = 0.75, Availability = 0.9
To properly treat the problem of distributing detection-processing across a satellite cluster would be a worthy subject for its own doctoral thesis, involving the fields of computer science, antenna theory, signal theory and of course space systems engineering. We cannot hope to do justice to this problem within the confines of this chapter, and after all, the goal here is to demonstrate the application of GINA for design. All that is really needed is an approximation of its impact on the performance and cost. An assumption is therefore made that the technology exists to perform the processing, whether distributed or on a dedicated satellite, and that the reliability of the processor itself is unity. For comparing between architectures, the assumption on the reliability is equivalent to an assumption that the processing is equally reliable (but less than unity) in either case, and unity is just more convenient. Equating the reliabilities of the processors is justifiable because they essentially have to perform the same functions. The dedicated processor will have to be implemented as a parallel computer anyway, because the load is too high for any envisioned single processor. The only difference then is in the networking. Both architectures rely on intersatellite links, and the additional connectivity required of the distributed processor on the one hand adds complexity, and on the other, redundancy. These compensate each other, and the net result is that, at least to first order, the reliability of the processing
is independent of its implementation. The assumption then is that the performance is dominated by satellite failures. Satellite failures are modeled to occur at a constant failure rate, calculated from the failure rates of the important subsystems. Each cluster satellite is modeled as comprising a structural bus module, a propulsion system, a communications payload (for connectivity) and a "special payload" representing the radar package. Using data from SMAD [3] and assuming one in 10 failures results in a satellite loss, the equivalent satellite failure rate is approximated as λs = 0.026 per year. The resulting state probabilities for different numbers of satellite failures for the 8 satellite cluster are shown in Figure 7-22.
[Plot: state probabilities versus time (years 0-10) for 0 through 7 satellite failures]
Figure 7-22: The state probabilities for different numbers of satellite failures in the 8 satellite cluster; λs = 0.026
Mission failures occur if the cluster capabilities degrade, through satellite failures, to the point that requirements can no longer be satisfied. Analysis showed that the 8 satellite cluster cannot tolerate a single satellite failure if it is to satisfy the requirements given above. Therefore, the performance of the 8 satellite cluster is given by the curve corresponding to zero satellite failures in Figure 7-22, resulting in a value of only about 13% after 10 years, which is too low for most military applications. Of course, the assumed failure rates are not particularly accurate, but the real factor that drives the performance is that all 8 satellites must work for the system to satisfy requirements. This would force the scheduling of regular replenishment launches to maintain the performance at higher levels. This will be captured later in the lifetime cost calculations.
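Under the stated constant-failure-rate model, the state probabilities of Figure 7-22 are binomial; a minimal sketch, which reproduces the quoted ~13% figure for the 8-satellite cluster:

```python
import math

LAMBDA_S = 0.026   # satellite failure rate per year, from the text

def state_prob(n_sats, k_failed, years, lam=LAMBDA_S):
    """Probability that exactly k of n satellites have failed by time t,
    assuming independent, constant-rate (exponential) failures."""
    r = math.exp(-lam * years)   # single-satellite reliability at time t
    return math.comb(n_sats, k_failed) * r**(n_sats - k_failed) * (1.0 - r)**k_failed

# 8-satellite cluster that tolerates no failures: ~13% after 10 years
p8 = state_prob(8, 0, 10.0)

# 11-satellite cluster tolerating up to 3 failures (an upper bound only: it
# ignores any capability degradation between failures and reconfigurations)
p11 = sum(state_prob(11, k, 10.0) for k in range(4))
```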
Increasing the number of satellites to 11 improves things a great deal. Provided the satellites have at least 200W of power, as many as 3 failures can be tolerated by reconfiguring the array after each failure. The resulting performance curves are plotted in Figure 7-23, showing that the performance can be increased to around 65%. Also shown in this figure are the performance curves for the architectures featuring centralized processing. They are worse than the corresponding distributed processing options because the centralized processing satellite adds an additional mission failure mode, that being the single point failure of its satellite bus. In producing these curves, the failure rate for the extra processing satellite has been modeled as the same as that of the cluster satellites.
[Plot: performance versus time (years 0-10) for: 8 sats; 11 sats; 11 sats, low power; 8 sats + centralized processor; 11 sats + centralized processor]
Figure 7-23: The generalized performance of the different architectures subject to requirements for a 1 minute update of a 10^5 km² theater with PD = 0.75 and Availability = 0.9
7.5.2 The CPF Metric and the System Lifetime Cost
The CPF metric for a military search radar system is the cost per protected square kilometer, where "protected" indicates compliance with the detection requirements. The total system lifetime cost, used to calculate the CPF, accounts for the baseline costs of building and launching the satellites, and also the failure compensation costs needed in the event of a violation of system requirements. In this way, the system lifetime cost captures the performance of the system. Furthermore, since all the systems being considered in this study address the same number of square kilometers over the same lifetime with the same
requirements, the CPF is actually nothing more than a scaled version of the system lifetime cost. Since the lifetime costs are easier to comprehend than the CPFs, which have very small absolute values, they will be used as surrogates for the CPF. The system lifetime cost can be estimated using simple cost models and the failure probability profiles of the last section. The models assume a three year cost spreading, with the first launch in 2004 and IOC in 2005. The system is assumed active through the year 2014. The baseline cost is the sum of the satellite costs, and launch and insurance costs. The total satellite costs increase with the number of satellites per cluster and the number of clusters. Each of the systems considered is assumed to be deployed with 48 separate clusters in polar orbits to achieve revisits to a theater at approximately 30 minute intervals. Version 8.0 of the Aerospace Corporation Small Satellite Cost Model [13] is used to calculate the TFU satellite bus cost as,
Csat = 6.47 PEOL^0.1599 φ^(-0.356) (7.40)
where P_EOL is the end-of-life payload power, conservatively assumed to be twice the RF power of the transmitter, and θ is the pointing requirement. For Techsat21, AFRL have established a value of 2° as the pointing requirement, allowing gravity-gradient stabilization. To this bus cost must be added an estimate of the payload cost. For Techsat21 featuring distributed processing, the payload cost of each satellite is assumed to be dominated by the processors, since these represent the most advanced components. The cost of the satellite processors scales with the number of floating point operations per second (FLOPS). For the Techsat21 concept, the total number of FLOPS can be estimated as the sum of the array processing load and the pulse-Doppler processing load. To form each beam, the array processing involves summing n_s^2 signals that are band-limited to 15 MHz and (at least) Nyquist sampled. From the ratio of the antenna pattern roll-off to the maximum resolution, the number of simultaneous beams is approximately D_c/D_s, and so an estimate for the array processing load for the entire cluster is,
    FLOPS_array proc = 2 × 15 MHz × n_s^2 × (D_c/D_s)       (7.41)
The pulse-Doppler processing is implemented with an n_p-point FFT that involves approximately n_p log2(n_p) operations for each coherent dwell period of length (n_p/PRF). Accounting for all range bins, each pulse is actually a digitized 15 MHz signal of duration (1/PRF), and so the total number of FLOPS involved in the pulse-Doppler processing for the entire cluster is,
    FLOPS_p-D proc = 2 × 15 MHz × (1/PRF) × n_p log2(n_p) × (PRF/n_p) × (D_c/D_s)
                   = 2 × 15 MHz × log2(n_p) × (D_c/D_s)     (7.42)
For Techsat21 at a PRF of 1500 Hz, n_p ≈ 64 and so n_s^2 >> log2(n_p). Consequently the array processing dominates, and the pulse-Doppler processing can be neglected. The total processing load is therefore estimated at:
- 11.5 Giga-FLOPS for the 8 satellite cluster
- 21.8 Giga-FLOPS for the 11 satellite cluster

For equal load-balancing, each satellite is accountable for an equal share of this processing. The payload cost per satellite can then be estimated by scaling processing costs from conventional military satellite programs. Canavan [18] states that the cost density for processing was approximately $0.001 per FLOP in 1996. Assuming this price halves every two years gives a cost density of $8.84 × 10^-4 in 2003. For the cases featuring centralized processing, the payloads of the cluster satellites are assumed to cost a fixed $0.5M, and the above scalings are used to calculate the payload cost of the processing satellite, noting that it is responsible for the entire processing load.

The sum of the payload cost and the bus cost defines the TFU, and the non-recurring costs are estimated as four times this value. The recurring costs assume a learning curve discount of 15% over the entire production run. Launch is modeled as costing $8000 per kg of wet mass, and the satellites and dispensers are conservatively assumed to weigh 200 kg. Insurance costs are 20% of the satellite and launch vehicle costs. The resulting baseline costs, in fixed-year FY94$ for the baseline architectures, with and without centralized processing, are given in Tables 7.4-7.7.

The failure compensation costs are the expected costs required to build and launch any replacement satellites in the event of a violation of system requirements. These are calculated by an expected-value calculation, using state probability curves similar to Figure 7-22 and the average satellite costs. Finally, the lifetime costs are the sum of the baseline costs and the expected failure compensation, discounted back to 2002 at 12% per year to account for the time value of money. This is labeled as NPV Cost in the tables, indicating that it is the result of a Net Present Value calculation.
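The processing-load estimates of Equations 7.41 and 7.42 can be checked numerically. The beam-count ratio D_c/D_s is not stated explicitly in the text; the value of 6 below is an assumption chosen because it reproduces the quoted totals:

```python
import math

B = 15e6  # sampled signal bandwidth, Hz (15 MHz, at least Nyquist sampled)

def flops_array(n_s, dc_over_ds):
    """Eq. 7.41: array processing load for the whole cluster (ops/s)."""
    return 2 * B * n_s**2 * dc_over_ds

def flops_pulse_doppler(n_p, dc_over_ds):
    """Eq. 7.42: pulse-Doppler load; the dwell length n_p/PRF cancels,
    leaving only the log2(n_p) FFT factor."""
    return 2 * B * math.log2(n_p) * dc_over_ds

DC_OVER_DS = 6.0  # assumed beam-count ratio D_c/D_s (not given in the text)

for n_s in (8, 11):
    array = flops_array(n_s, DC_OVER_DS)
    pd = flops_pulse_doppler(64, DC_OVER_DS)
    print(f"{n_s} sats: array {array/1e9:.1f} GFLOPS, "
          f"pulse-Doppler {pd/1e9:.2f} GFLOPS")
# 8 sats:  array 11.5 GFLOPS, pulse-Doppler 1.08 GFLOPS
# 11 sats: array 21.8 GFLOPS, pulse-Doppler 1.08 GFLOPS
```

With D_c/D_s = 6 this reproduces the quoted 11.5 and 21.8 Giga-FLOPS totals, and shows numerically that the pulse-Doppler term is small next to the n_s^2 array term, as the text argues.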
Table 7.4: System lifetime costs for Architecture 1 (8 sats)

Year   Baseline costs ($M)   Failure comp. costs ($M)   NPV costs ($M)
2002         429.86                   0.00                  429.86
2003         859.73                   0.00                  767.61
2004        1147.83                   0.00                  915.04
2005         717.97                  67.55                  559.11
2006           0.00                  62.39                   39.65
2007           0.00                  57.54                   32.65
2008           0.00                  52.98                   26.84
2009           0.00                  48.68                   22.02
2010           0.00                  44.66                   18.04
2011           0.00                  40.87                   14.74
2012           0.00                  37.31                   12.01
2013           0.00                  33.98                    9.77
2014           0.00                  30.85                    7.92
Table 7.5: System lifetime costs for Architecture 2 (11 sats)

Year   Baseline costs ($M)   Failure comp. costs ($M)   NPV costs ($M)
2002         520.27                   0.00                  520.27
2003        1040.54                   0.00                  929.05
2004        1228.62                   0.00                  979.45
2005         708.35                   0.09                  504.26
2006           0.00                   1.18                    0.75
2007           0.00                   4.10                    2.33
2008           0.00                   8.76                    4.44
2009           0.00                  14.58                    6.60
2010           0.00                  20.89                    8.44
2011           0.00                  27.06                    9.76
2012           0.00                  32.62                   10.50
2013           0.00                  37.27                   10.71
2014           0.00                  40.83                   10.48
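The NPV column in these tables can be reproduced by discounting each year's total spending (baseline plus failure compensation) back to 2002 at 12% per year. A minimal check against the Table 7.4 entries:

```python
def npv(cash, year, base_year=2002, rate=0.12):
    """Discount a cash flow occurring in `year` back to `base_year`."""
    return cash / (1 + rate) ** (year - base_year)

# (year, baseline $M, failure compensation $M) rows from Table 7.4 (8 sats)
rows = [(2002, 429.86, 0.00), (2003, 859.73, 0.00), (2004, 1147.83, 0.00),
        (2005, 717.97, 67.55), (2006, 0.00, 62.39)]

for year, base, comp in rows:
    print(year, round(npv(base + comp, year), 2))
# 429.86, 767.62, 915.04, 559.12, 39.65
```

The recomputed values match the tabulated NPV column to within $0.01M; the last-digit differences reflect rounding in the original tables. Summing the full NPV column gives the lifetime costs plotted later (about $2.86B for the 8-satellite architecture).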
7.5.3 Lifetime Cost Results

The system lifetime costs for the baseline architectures with distributed or centralized processing are plotted in Figure 7-24. The costs for all the systems are reasonable for a mission of this type, and are comparable to the projected cost of the "Discoverer-II" system [47] proposed by DARPA, the Air Force and the National Reconnaissance Office to address a similar mission. However, the absolute values of the costs are of less interest than the relative trends. Note that the cheapest system is the 8 satellite architecture with distributed processing, followed closely by the 11 satellite architecture, again with distributed processing. This means that it is marginally cheaper to deploy the lower performance system and maintain operations through regular replenishment, rather than build the reliability into the system up front. However, the small relative difference in cost between these two options is probably within the uncertainty of the cost model, suggesting that both architectures (8 or 11 satellites) have approximately the same lifetime cost. Furthermore, the results presented here do not capture the effects of down-time that would follow the failure of a single satellite from the smaller cluster. Since continuity of service is critical for military systems used in war-time operations, this effectively invalidates the use of the 8 satellite architecture.

Adding a centralized processing satellite results in higher costs, due to a combination of higher spending during initial deployment, and increased failure compensation. Essentially, all the reliability benefits of a distributed sensor (path redundancy, reconfigurability, etc.) have been lost by adding a single point of failure in the data processing. Based just on these results, it does not seem sensible to implement centralized processing.

Table 7.6: System lifetime costs for Architecture 3 (8 sats, Centralized Processor)

Year   Baseline costs ($M)   Failure comp. costs ($M)   NPV costs ($M)
2002         476.42                   0.00                  476.42
2003         952.84                   0.00                  850.75
2004        1322.27                   0.00                 1054.10
2005         845.85                  78.25                  657.76
2006           0.00                  73.09                   46.45
2007           0.00                  68.22                   38.71
2008           0.00                  63.63                   32.24
2009           0.00                  59.29                   26.82
2010           0.00                  55.22                   22.30
2011           0.00                  51.36                   18.52
2012           0.00                  47.73                   15.37
2013           0.00                  44.32                   12.74
2014           0.00                  41.10                   10.55

Table 7.7: System lifetime costs for Architecture 4 (11 sats, Centralized Processor)

Year   Baseline costs ($M)   Failure comp. costs ($M)   NPV costs ($M)
2002         616.24                   0.00                  616.24
2003        1232.48                   0.00                 1100.43
2004        1466.51                   0.00                 1169.10
2005         850.27                  25.08                  623.06
2006           0.00                  25.39                   16.14
2007           0.00                  27.33                   15.51
2008           0.00                  30.81                   15.61
2009           0.00                  35.32                   15.98
2010           0.00                  40.26                   16.26
2011           0.00                  45.10                   16.26
2012           0.00                  49.43                   15.91
2013           0.00                  52.97                   15.23
2014           0.00                  55.57                   14.26
[Figure 7-24: The system lifetime cost of different Techsat21 architectures subject to requirements for a 1 minute update of a 10^5 km^2 theater with PD = 0.75 and Availability = 0.9. Lifetime costs ($B): 8 sat, high power, small aperture: 2.86; 11 sat, small aperture: 3.00; 8 sat, high power, small aperture & processor: 3.26; 11 sat, small aperture & processor: 3.65; staged deployed processor: 3.96.]

7.5.4 Adaptability

Since the 11 satellite configuration has some margin in capabilities, it is possible to ascertain its performance and lifetime cost under more stringent system requirements. In this way, the sensitivity of the system cost to mission requirements can be measured. Alternatively, for the same requirements, the system may be "down-sized" by reducing the transmitter power to 100W in an attempt to reduce costs while still meeting the mission goals. To be complete, it is also worth considering adding even more power to provide increased performance. This is a good idea if the resulting reductions in the failure compensation costs outweigh the increases in the baseline costs. All these issues can be addressed with the Adaptability metrics.

Staged deployment of the centralized processor

Proponents of the centralized processor concept for distributed satellite systems have claimed that one potential benefit is easy upgrading of capabilities. The rationale is that of all satellite technologies, the fastest advances are occurring in the field of computing. As a result, it may make sense to deploy the processor separately, supporting an easy upgrade as new and improved computers are developed. It had also been assumed that since the staged deployment of the upgraded processor occurs later in the lifetime of the project, the present value of the cost of the upgrade is low. Techsat21 provided an excellent test case for this concept. A two-stage deployment of the system was investigated, in which an 8-satellite architecture with a centralized processor is deployed first, followed 5 years later by a new processor satellite and an additional three receiver satellites. The results show that although these augmentations assist failure compensation and improve the capabilities, the net effect is to increase the system lifetime cost, as shown in Figure 7-24.
Requirement Elasticities for the 11-satellite Techsat21 cluster

As defined in Chapter 4, the requirement elasticities of the CPF at a given design point are,

    Isolation Elasticity,    E_Is = (ΔCPF/CPF) / (ΔIs/Is)   (7.43)
    Rate Elasticity,         E_R  = (ΔCPF/CPF) / (ΔR/R)     (7.44)
    Integrity Elasticity,    E_I  = (ΔCPF/CPF) / (ΔI/I)     (7.45)
    Availability Elasticity, E_Av = (ΔCPF/CPF) / (ΔAv/Av)   (7.46)
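Numerically, each elasticity is simply a ratio of fractional changes evaluated about the design point. The sketch below uses the PD excursion considered later (0.75 to 0.9) with hypothetical cost values; the $3.00B and $3.15B figures are illustrative placeholders, not results from the model:

```python
def elasticity(cost0, cost1, req0, req1):
    """Percent change in lifetime cost per percent change in a requirement."""
    return ((cost1 - cost0) / cost0) / ((req1 - req0) / req0)

# Illustrative: if raising PD from 0.75 to 0.90 (a 20% change) raised the
# lifetime cost from $3.00B to $3.15B (a 5% change), the elasticity is 0.25.
print(round(elasticity(3.00, 3.15, 0.75, 0.90), 3))  # 0.25
```

An elasticity near unity means cost tracks the requirement almost one-for-one; a near-zero value means the requirement can be tightened (or relaxed) with little cost consequence.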
Recall that these represent the percent change in the CPF (in this case, lifetime cost) in response to a 1% change in the requirement values. For the Techsat21 space-based radar, the system requirements that most influence the system lifetime cost are the rate, integrity and availability specifications. The isolation requirement, equivalent to the MDV or the ground resolution, is related to the array configuration (spacing) and as such does not immediately impact the system cost. Also, while the rate requirement most definitely has a profound impact on the design of feasible architectures, the designer has very little flexibility in choosing a rate requirement, since it is implicitly related to the dynamics of the targets. However, the integrity and the availability requirements are more tradeable, in that they are related to the customer's perceptions of quality. The probability of detection requirement is derived from the real mission requirement that the system be able to maintain a target track. The actual value of PD needed to do this effectively depends on the algorithms used to analyze the detection data. A slight change in PD may have a small impact on the overall success of the mission, but can have a huge impact on the lifetime cost of the system. Also, although it is unlikely that the DoD would approve a theater surveillance system with an availability lower than 90%, there is a very real possibility that decision-makers in the DoD arbitrarily choose a value higher than this as the requirement. In many cases, the value chosen is not the result of extensive research to find the value below which military operations are compromised; rather, the requirement is chosen ad hoc, based on judgement and politics. It is useful, then, to quantify the financial impact of changing the requirements on PD and availability. The 11-satellite cluster can support higher values for both PD and availability, making it flexible to changes in the system requirements. However, the performance changes, and
consequently the failure compensation increases, so that there is a measurable impact on the lifetime cost. Consider a change in the PD requirement from 0.75 to 0.9. Although the 11 satellite cluster can satisfy the requirements with a full complement of satellites, if two satellites fail, the resulting 9 satellite cluster cannot satisfy this new PD requirement. The performance of the system is therefore lower than for the previous requirement, and this results in increased failure compensation costs. Increasing the required availability to 95% has a similar impact. To quantify these effects, the corresponding requirement elasticities of the lifetime cost are plotted in Figure 7-25.
[Figure 7-25: Elasticities of the lifetime cost for the 11 satellite cluster; PD: 0.75 → 0.9; Availability: 0.9 → 0.95; Pt: 200W → 100W. The plotted elasticities are E(PD) = 0.250, E(Av) = 0.899, E(Power) = 0.029.]
The fact that both elasticities are positive is to be expected, since an increase in either requirement leads to increased costs through failure compensation. The magnitude of the elasticities shows that the availability requirement is a much larger cost driver than PD. In fact, the availability elasticity is almost unity (0.9), and so even small increases in the availability requirement result in measurable cost increases, whereas the cost is relatively insensitive to changes in PD. This result emphasizes the importance of correctly specifying the requirements during system definition; whereas engineers would likely specify the PD requirement, based on knowledge of what is needed to construct target tracks, the availability requirement, which has a much larger impact on the system cost, is most likely chosen by high-ranking decision-makers with little appreciation for the impact of their choice. If the Techsat21 system is to be deployed cost-effectively, the engineers and
the military planners must work together in defining the availability requirement, based on what is really needed for military utility, and little or no more.
Technology (Power) Elasticity for the Techsat21 cluster

Before evaluating the technology-elasticity metric corresponding to modifications in the transmitter power, valuable insight can be gained from considering just the performance implications of these changes. Referring to the performance profiles plotted in Figure 7-23, the 11-satellite baseline system benefits from being compliant with requirements even after a loss of 3 satellites. This would correspond to the 8-satellite cluster after reconfiguration. It was already stated that the 8 satellite cluster cannot tolerate a single satellite failure, even with 400W of transmit power. Therefore, the 11 satellite cluster with Pt = 400W can tolerate the same number of satellite failures as the baseline system that has Pt = 200W. Their performances will be equal, and so there is no benefit in deploying the larger power system, since it will assuredly cost more to build and launch. Conversely, if the satellites have only 100W of transmit power, there is less margin for error, and 2 satellite failures result in a violation of the requirements. This reduces the performance to approximately 25% over 10 years, but also reduces the system cost.

Now there is an engineering trade to be made, and the elasticity metric can be used to guide the decision. Specifically, a positive value for the power elasticity indicates that it is cost effective to deploy less power than the nominal 200W, since it would imply that a reduction in power results in a reduction of cost. The absolute value of the elasticity measures the relative importance of this decision to the overall system cost. In actual fact, the power elasticity is positive but almost zero, as shown in Figure 7-25. This means that although the baseline cost is less with the smaller power transmitters, this change is almost exactly counteracted by increases in the failure compensation needed to maintain operations. There is therefore only a very slight benefit in deploying smaller power transmitters. It is probably more prudent to accept the slightly higher costs of the 200W system, since it buys extra margin, reducing the probability of system downtime.
7.5.5 Conclusions of Design Trades

The generalized analysis of the Techsat21 concept has uncovered some very interesting trends:

- One-dimensional spacecraft clusters using minimum-redundancy arrays are feasible for a GMTI mission, provided that at least 8 satellites are deployed. The capabilities of these systems are adequate for a demonstration program, but probably not sufficient for an operational system.
- At the chosen operating altitude of 800 km, a range-unambiguous PRF of 1500 Hz gives the best capabilities. Higher PRFs suffer from high clutter return from the ambiguities.
- At this PRF, each satellite should be equipped with a small antenna, to improve the search rate, and enough transmitter power to satisfy SNR constraints, accounting for the n_s^2 processing gain. For an 8 satellite cluster, this corresponds to an aperture of approximately 1 m^2 and a power of 400W. For an 11 satellite cluster with apertures of 1 m^2, the power can be as low as 100W for reasonable capabilities.
- Both the 8 satellite cluster and the 11 satellite cluster can satisfy requirements for 90% availability in searching a 10^5 km^2 theater within 1 minute, at a PD = 0.75 and a FAR of 1000 seconds per square km. Both systems can support MDVs as low as 1 m/s and location accuracies of 100m on the ground.
- Extra margin in the capability is useful for improving performance, since it allows satellite failures to occur without violating requirements. The performance of the 11 satellite system is almost 65% over 10 years for the given system requirements.
- The most cost effective option, accounting for all the effects of failure compensation, is the 8 satellite cluster with distributed processing. The most prudent (and only marginally more expensive) is the 11 satellite cluster.
- A centralized processor is a bad idea for Techsat21, incurring high costs and low performance.
- The 11 satellite cluster is insensitive to changes in the PD requirement, but very sensitive to changes in the availability requirement. Consequently, the availability requirement should be set very carefully to match what is actually needed operationally.
- Reducing the power on the 11 satellite cluster does not significantly reduce costs, since the savings in the baseline cost are counteracted by higher failure compensation costs.
7.6 Summary

The Techsat21 concept is very difficult to understand without a great deal of prior experience in radar, antenna theory, signal processing and orbital mechanics. The analysis presented here, covering around 60 pages, is nothing more than a first cut at the problem.
Nevertheless, some interesting trends have been discovered, which will guide the next level of design. Most notably, it appears that one-dimensional minimum-redundancy arrays cannot provide sufficient capability to be used as the sole asset for GMTI theater surveillance. This statement must however be qualified, lest the reader misunderstand its implications. The conclusion is based on the analysis presented in this chapter, and as such is sensitive to the assumptions that were made to simplify the model. The most important of these is the clutter-processing model, which was based on simple pulse-Doppler radar techniques. In a real system, additional levels of clutter rejection and suppression could be implemented, improving the capabilities beyond those predicted. However, if complicated clutter rejection and suppression techniques are required, the total processing load (including the array processing) would become prohibitive for small satellite platforms. There are also possibilities that have not yet been explored. Preliminary results (not presented) suggest that the use of arrays in which the ratio of the element spacings is prime offers a definite potential for improvement over the minimum-redundancy arrays. Furthermore, if the encoded-pulse idea proves workable, high PRFs around 10 kHz give marked improvements in the capabilities even for 8-satellite minimum-redundancy clusters. This work is in progress at AFRL. The most important thing to realize, though, is that the inadequacies of the one-dimensional clusters are not characteristic of two-dimensional clusters. In particular, clutter returns are suppressed in both range and azimuth by the low pattern amplification in the sidelobes of the sparse array. This reduces the total clutter power entering each receive-beam by orders of magnitude (approximately 10dB for a 4×4 randomly distributed cluster) compared to one-dimensional arrays with the same number of satellites.
This allows high PRFs to be used, further reducing the Doppler ambiguities and clutter. It is almost certainly true that the capabilities and performance of two-dimensional clusters will be significantly better than the one-dimensional architectures studied here. As a result, future work should address the application of two-dimensional arrays for Techsat21. In concluding the discussion about Techsat21, it must be emphasized that the concept represents an extraordinarily elegant approach to performing a GMTI mission. Recall that the most difficult problem for a GMTI system is an isolation issue: how to isolate slow moving targets from stationary clutter. Techsat21 attacks this problem not with massive amounts of processing to correct for poorly suited sensing, as is the approach taken by Discoverer-II, but instead asks Mother Nature to work in its favor. By spreading out its apertures, it implicitly improves its ability to isolate signals from different locations, since the differences in the arrival phase of the signals are increased. This makes the isolation task considerably easier, and decouples it from the now trivial detection process.
This chapter was intended as a demonstration of the GINA methodology for a realistic design study, and the Techsat21 space-based radar provided a challenging example. As has been stressed many times, the level of complication in this design is intimidating. However, by considering only what is actually important to the mission, in terms of the generalized quality-of-service parameters that define the capabilities, and by decomposing the system into only those functional modules that affect these capability parameters, a reasonably simple model was constructed. This yielded significant results in predicting capabilities, showed important trends relating these capability characteristics to changes in the system parameters, identified architectures that are well suited to the mission while eliminating architectures that are not effective, and allowed the cost drivers to be identified. This is precisely the conceptual design process.
Chapter 8
Conclusions and Recommendations

The goal of this research was to develop a systematic approach to analyze modern satellite systems having any architecture, distributed or monolithic, for any likely mission. A generalized analysis methodology for satellite systems has been developed, and it can be used for the analysis of any space system architecture addressing any mission in communications, sensing or navigation. The generalization is possible because, for each of these applications, the overall mission objective is to transfer information between remote locations, and to do so effectively and economically. The analysis methodology is therefore a hybrid of information network flow analysis, signal and antenna theory, space systems engineering and econometrics. The most important concepts of the Generalized Information Network Analysis (GINA) can be summarized:
Satellite systems are information transfer systems
All current satellite systems essentially perform the task of collection and dissemination of information.
Information transfer systems serve O-D markets
These markets are defined by a set of origin-destination pairs, and specific information symbols that must be transferred between them.
Satellites and ground stations are nodes in a network
Information must flow through the nodes to connect the O-D pairs that define the market. At any instant, the network is defined only by its operational components, and so all networks are assumed to be instantaneously failure-free. Should a component fail, the network changes by the removal of that component.
The capabilities of the system are characterized by the isolation, rate, integrity and availability parameters
- Isolation characterizes the system's ability to isolate and identify the signals from different sources within the field of view.
- Information Rate measures the rate at which the system transfers information symbols between each O-D pair. Information must be sampled at a rate that matches the dynamics of the source or end-user.
- Integrity measures the error performance of the system, characterizing the probability of making an error in the interpretation of a symbol.
- Availability is the instantaneous probability that information symbols are being transferred through the network between known and identified O-D pairs at a given rate and integrity. It is a measure of the mean and variance of the other capability parameters. It is not a statement about component reliabilities.
Each market has associated requirements on isolation, rate, integrity and availability
Users of the system are satisfied only when information transfers occur that are compliant with these requirements. Therefore, these are the functional requirements placed on the system. A network satisfying these requirements is deemed operational.
Performance is defined relative to mission requirements
The performance of a system within a given market is the probability that the system instantaneously satisfies the top-level functional requirements. This is simply the probability of being in an operational state. It is here that component reliabilities make an impact.
The Cost per Function metric
This is a measure of the average cost to provide a satisfactory level of service to a single O-D pair within a defined market. The metric amortizes the total lifetime system cost over all satisfied users of the system during its life. The lifetime system cost includes the baseline cost and the expected failure compensation costs. Baseline costs account for the design, construction, launch and operation of the system. The failure compensation costs represent expenditure necessary to compensate for any failures that cause a violation of requirements. Since the likelihood of failure is the complement of the generalized performance, it is through the failure compensation costs that performance impacts the CPF metric.
The number of satisfied users is determined by the capability characteristics of the system and by market effects. The system capabilities define the maximum number of users that can be supported at the required rate, integrity and availability. The number of satisfied users is the smaller of the supportable capacity and the size of the local market.
The Adaptability metrics
These measure how sensitive a system is to changes in the requirements, component technologies, operational procedures or even the design mission.
- Type 1 adaptabilities are the elasticities of the CPF with respect to realistic changes in the system requirements or component technologies. This allows the system drivers to be identified, and can be used in comparative analyses between candidate architectures.
- Type 2 adaptability measures the flexibility of an architecture for performing a different mission, or at least an augmented mission set.

The benefits of the GINA methodology are several-fold. First of all, it is completely compatible with, and supportive of, formal SE practices. By being based on a functional decomposition of the real architecture, GINA builds upon basic functional analysis, adding unambiguous, objective quantification to predict capabilities, performance, cost and risk. In addition, the CPF and the Adaptability metrics support comparative analyses between competing systems with large architectural differences. GINA bases judgment on how well a system addresses a defined market and scales the cost accordingly. In this way, very large and ambitious systems can be fairly compared to smaller, more conservative systems. The mathematical form of the elasticities (Adaptability metrics) is identical to the conventional elasticities used in econometric analysis. This allows the quantitative results of systems engineering analyses to be integrated with financial analyses, so that they may be applied in forming business cases for satellite programs, or used in the investment decision-making process.

The formalism of the GINA framework has been used to obtain significant quantitative and qualitative results for a variety of applications and has allowed a comprehensive characterization of satellite systems in general. The most important of these classifications relate to distributed satellite systems. A distributed satellite system is defined as any system that uses more than a single satellite to address the market, and the cluster size specifies how many satellites are in view of a common ground location.
The categories of distribution are based on the network topology: (1) Collaborative systems feature parallel, uncoupled paths through the network, from source to sink; and (2) Symbiotic systems feature interconnected paths through several satellites before arrival at the sink. Posed within the GINA framework, and organized using the generalized classifications, the benefits offered by distributed systems are easily appreciated. Summarizing just the most significant of these:
- Symbiotic architectures offer greatly improved isolation by separating sensors over large cluster baselines, thus exploiting a different collection geometry to separate and identify different signals in phase or frequency. Collaborative clusters do not improve isolation capabilities.
- Distribution offers improvements in rate and integrity due to the ergodic property of noise, such that integrating over several collectors is equivalent to integrating over time, but incurs no penalty in rate. A higher net rate of information transfer is possible with collaborative clusters by combining the capacities of several satellites in order to satisfy the local and global demand. This is simply a result of task division. Signal-to-noise ratios can be improved linearly with collaborative clusters, through task division, and super-linearly (quadratically or cubically) with symbiotic clusters, through coherent integration. Both of these effects give exponential improvements in integrity compared to singular deployments.
- Distributed systems can exhibit higher availabilities through a reduced variance in the coverage of target regions. This reduces the need to "overdesign" and provides more opportunities for a favorable viewing geometry.
- A staged deployment of space assets, matched to the development of the market, can effectively lower the baseline costs of distributed systems compared to monolithic designs, due to the time value of money and a reduced level of financial risk.
- Distributed systems require only fractional levels of redundancy to be deployed in order to achieve high reliabilities. Thus only marginal increases in the up-front costs are needed to gain large savings in the failure compensation costs. More importantly, due to the separation of important system components among many satellites, only those components that break need replacement. This greatly reduces the failure compensation costs.
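The cost relationships running through this summary (lifetime cost as baseline plus expected failure compensation, and the CPF as that cost amortized over the satisfied users, capped by market size) can be sketched in a few lines. All numeric values below are illustrative placeholders, not figures from the case studies:

```python
def expected_failure_comp(performance, replacement_cost):
    """Expected compensation spending: the likelihood of failure is the
    complement of the generalized performance."""
    return (1.0 - performance) * replacement_cost

def cost_per_function(baseline_cost, performance, replacement_cost,
                      capacity, market_size):
    """CPF: lifetime cost amortized over satisfied users; the number of
    satisfied users is the smaller of supportable capacity and market size."""
    lifetime = baseline_cost + expected_failure_comp(performance,
                                                     replacement_cost)
    satisfied_users = min(capacity, market_size)
    return lifetime / satisfied_users

# Illustrative: $2.0B baseline, 90% performance, $0.5B replacement exposure,
# capacity for 1.2M users in a 1.0M-user market:
print(round(cost_per_function(2.0e9, 0.90, 0.5e9, 1.2e6, 1.0e6), 2))
```

Note how performance enters only through the failure compensation term, and how excess capacity beyond the local market buys nothing: both behaviors mirror the definitions above.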
Based on these arguments, deduced from organized, qualitative analysis using the GINA methodology, it would appear that the potential offered by distributed systems is very great, and their further development is strongly encouraged. These significant conclusions, which are straightforward and easily understood, were made possible by the structured approach provided by GINA.
Qualitative analysis is, however, insufficient for the SE process. To demonstrate the applicability of GINA for quantitative analysis, and to prove the claim of generality, the GINA methodology has been applied to three detailed case studies, covering a range of applications.

Validation of the technique was provided by analysis of the NAVSTAR Global Positioning System. The system was modeled to comprise modules for the satellite navigation payload, the downlink transmitters, the effects of free-space loss, satellite visibility, coverage geometry, multiple-access interference, and the user receiver functions. Inputs to the model were based on simulations of the constellation, or on measured statistics. The 50th-percentile capabilities of GPS calculated using GINA agree to within 3% of the measured 50th-percentile capabilities. This is an excellent result, providing an "acid-test" for the validity of the GINA approach. The generalized analysis also suggests that the GPS architecture is extremely robust, with the navigation accuracy degrading by only a few meters after two or three satellite failures. Since the original system requirements (16 m position accuracy at the 50th percentile) are easily satisfied by the current constellation, this degradation is insufficient to cause system failure, and the generalized performance is very high. Augmenting the system with an additional three satellites, placed in GEO, adds even greater performance. For the augmented system, military users could achieve 16 m position accuracy with 90% availability, even after 6 satellite failures. This corresponds to a performance of approximately 100%.

A comparative analysis of three proposed broadband satellite systems demonstrated the utility of GINA for a competitive assessment of commercial viability. Models were constructed for Cyberstar, Spaceway and Celestri based on the designs listed in their FCC filings. The most important results of the study are summarized below:
Cyberstar, as it appears in the filing, is unsuited for providing broadband communications at rates higher than 386 Kbit/s, while Spaceway and Celestri will be able to support high-rate (T1) services with high levels of integrity (BER of 10^-10) and availabilities exceeding 97%.
The cost per billable T1-minute is the metric used to compare the potential
for commercial success of each system. It is the price per billable T1-minute that the company must recover from customers through fees in order to achieve a 30% internal rate of return. Assuming improvements are made in Cyberstar so that it may compete in this market, the calculated cost per billable T1-minute metrics for all the systems are between $0.15 and $0.25, implying that all three systems will be able to offer competitively priced services to users. Celestri has a slight competitive advantage since it supports the lowest cost per billable T1-minute, and also has the smallest variation across market models.
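The metric can be sketched as a break-even price: the fee per billable T1-minute whose discounted revenues just cover the discounted costs at the target rate of return. All cash flows and traffic figures below are invented for illustration; they are not values from the FCC filings:

```python
def breakeven_price_per_minute(costs, minutes_sold, irr):
    # Fee per billable T1-minute such that discounted revenues equal
    # discounted costs at the target internal rate of return
    pv_cost = sum(c / (1 + irr) ** t for t, c in enumerate(costs))
    pv_minutes = sum(m / (1 + irr) ** t for t, m in enumerate(minutes_sold))
    return pv_cost / pv_minutes

# Illustrative yearly cash flows: costs in $M, billable T1-minutes in millions
costs = [900, 300, 200, 200, 200]
minutes = [0, 1000, 2500, 4000, 5000]
print(round(breakeven_price_per_minute(costs, minutes, 0.30), 2))  # 0.24
```

With these assumed figures the break-even price falls inside the $0.15-$0.25 band quoted above; the systems differ mainly in how their cost structures and market capture shift this number.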
Deployment strategies and market capture have a larger impact on commercial success
than architecture. The difference in the cost per billable T1-minute between the GEO and the LEO architectures is not as large as the differences due to a more efficient deployment strategy that is tailored to match the development of the market. Increasing the size of the market captured means that the high fixed costs can be amortized over more paying customers.
Contrary to popular belief, lower launch costs are not as effective for commercial benefit as lower manufacturing cost.
For smaller systems such as Cyberstar, offering lower-rate services at discounted prices
offers the potential for larger revenues, through increased yield. This is not the case for Celestri, which maximizes revenue by providing high-rate services. Basically, there are not enough paying customers in the market to efficiently utilize the resources of Celestri at low data rates.

In the final case study, GINA was applied to a preliminary design of TechSat21, a distributed space-based radar concept. TechSat21 features symbiotic clusters of small satellites (approximately 100 kg) that fly in close formation, creating sparse arrays to detect ground moving targets in a theater of interest. The GINA methodology allowed the capabilities to be predicted, accounting for the effects of coverage variations, clutter and noise power, and most importantly, the sparse aperture synthesis. The results are significant:

One-dimensional minimum-redundancy arrays provide insufficient capability to be used as the sole asset for GMTI theater surveillance, unless additional levels of (unmodeled) clutter processing are implemented. There are some other options for one-dimensional sparse arrays, that have not been explored, and they may offer improvement over the minimum-redundancy arrays. Furthermore, it would appear qualitatively that two-dimensional clusters offer significant potential. It should be noted that the capabilities of the one-dimensional arrays are definitely within the bounds of an effective concept demonstrator, and since the cluster can be augmented and reconfigured, traceability to an operational system is guaranteed. The efficacy of the TechSat21 concept will thus be confirmed.
Small apertures and high powers are preferable at low PRFs, while larger apertures are needed at higher (ambiguous) PRFs. The smaller apertures have a large FOV and allow more rapid searching of the theater. However, at higher PRFs, a wide FOV gives rise to range ambiguities, and the clutter returns in the range ambiguities hinder detection.
At an operating altitude of 800 km, a range-unambiguous PRF of 1500 Hz gives the
best capabilities. At this PRF, each satellite should be equipped with a small antenna and enough transmitter power to satisfy SNR constraints, accounting for an n_s^2 processing gain. For an 8-satellite cluster, this corresponds to an aperture of approximately 1 m^2 and a power of 400 W. For an 11-satellite cluster with apertures of 1 m^2, the power can be as low as 100 W for reasonable capabilities.
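The link between PRF and range ambiguity quoted above follows from the pulse-repetition interval: echoes separated by more than c/(2 * PRF) in range fold onto one another, so raising the PRF shrinks the unambiguous window. A minimal check of that standard radar relation:

```python
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range_km(prf_hz):
    # Echoes are unambiguous only within a range window of c / (2 * PRF);
    # at higher PRFs this window shrinks and folded clutter hinders detection
    return C / (2.0 * prf_hz) / 1000.0

print(unambiguous_range_km(1500.0))  # ~100 km window at the chosen PRF
print(unambiguous_range_km(5000.0))  # window shrinks at an ambiguous PRF
```

At the chosen 1500 Hz the window is roughly 100 km, while an ambiguous PRF of several kHz collapses it to a few tens of kilometers, which is the regime where wide-FOV apertures admit clutter from range-ambiguous returns.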
Both the 8-satellite cluster and the 11-satellite cluster can satisfy requirements for 90% availability in searching a 10^5 km^2 theater within 1 minute, at a P_D = 0.75 and a FAR of 1000 seconds per square km. Both systems can support MDVs as low as 1 m/s and location accuracies of 100 m on the ground.
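The search requirement above implies a demanding area coverage rate. The single-beam footprint below is a hypothetical number used only to show the arithmetic, not a TechSat21 design value:

```python
theater_km2 = 1.0e5    # theater area from the requirement above
search_time_s = 60.0   # one-minute search requirement

rate_km2_per_s = theater_km2 / search_time_s
print(rate_km2_per_s)  # ~1667 km^2 of ground must be searched every second

footprint_km2 = 3500.0  # hypothetical footprint of one beam position (assumed)
dwell_s = footprint_km2 / rate_km2_per_s  # time available per beam position
print(dwell_s)
```

The available dwell per beam position, only a couple of seconds under these assumptions, is what ultimately bounds the number of pulses that can be integrated at a given PRF.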
Based on cost-effectiveness and performance, centralized processing on a dedicated
satellite is a bad idea for TechSat21. The additional satellite adds to baseline costs, and the single point of failure reduces the performance and increases the failure compensation costs.
The system is almost insensitive to changes in the probability of detection (P_D) re-
quirement, but very sensitive to changes in the availability requirement. It is therefore important to set the availability requirement very carefully to represent the real operational needs of the mission. This necessitates significant collaboration from engineers and policy-makers, to balance what is needed with what can be afforded.
These case studies show how the generalized analysis methodology has real application in the SE process, since significant results were produced in a very short period of time (the case studies were all conducted within a 2-month period). By standardizing the representation of the overall mission objective, in terms of the generalized quality of service parameters, GINA organizes, prioritizes and focuses the engineering effort expended in satellite system analysis. The formal methodology means that the analysis of a new mission is straightforward once the mission parameters have been mapped into the GINA framework. Following the procedures described in this thesis should produce quantitative, relevant and meaningful results.

Finally, a word of caution. GINA is not an excuse for ignorance or carelessness. Successful application of GINA is contingent upon a solid understanding of the system and the mission to which it is addressed. Although it may give the recipe, it neither bakes the cake, nor teaches the cook.
8.0.1 Recommendations

The procedures described in Chapter 4 of this thesis are the product of over two and a half years of careful thought in refining concepts and eliminating irrelevant features, so that the methodology is focused on only what is really important. Clearly the main contribution of the research is the development of this methodology, and in that, there are no recommendations for improvements per se (if I knew of any shortcomings, I would have surely corrected them before writing the thesis). The main recommendation, then, is to further validate the GINA methodology through repeated application to more missions and satellite systems. Only through continued use can GINA gain acceptance into the toolbox of the space systems engineer. Some specific examples of where GINA should be applied, representing extensions of the case studies presented in the thesis, are given below:

1. GINA should be applied for a comparative analysis of the mobile communication satellite systems that will soon begin offering service. This should assist validation, and will provide an opportunity to demonstrate GINA for near real-time strategic planning. As the market develops, and the satellites become aged, the models can be updated and refined. This would allow suggestions to be made regarding constellation replenishment and augmentation, in order to optimize the cost per billable voice-circuit minute.

2. GINA should be used for an analysis of the Discoverer-II space-based radar, so that it may be fairly compared to TechSat21. The results would greatly assist decision-making for budget allocation. However, since the D-2 system parameters are classified secret, this analysis must be done outside of an academic environment.

3. The FAA plans to deploy the geostationary Wide Area Augmentation System to improve the capabilities of GPS so that satellite navigation can be adopted as the primary means of navigation for commercial air traffic. GINA is perfectly suited to predict the improvements that this will bring, and could even model the capabilities of augmented operations using GPS, inertial navigation, and existing VHF Omnidirectional Ranging (VOR) equipment.
Evaluating the capabilities of such a complex system will be difficult, but certainly tractable.

It is also suggested that the applicability of the method for missions other than communications, navigation and sensing be determined. There is a growing interest in the DoD to deploy weapons in space, using either directed (photon) energy or mass-drivers. A question arises as to whether GINA could be applied in the design of such systems. Strictly, neither of these concepts features information flow, and so GINA cannot be applied directly. However, at least with the laser weapons, the mission objective can still be posed in terms of delivering an energy signal to a sink (target) using a satellite network. The sources of the energy signal are the satellites, but they would likely receive command information from allied commanders on the ground. The system is thus a hybrid network, in which information is delivered to a set of satellites, which then act to deliver energy signals to a set of targets. There does not seem to be any reason why the GINA methodology could not be adapted to address these types of applications.

Note that, whereas the GINA procedure took over two and a half years to develop, the implementation of the methodology, that being the development of the software used to calculate the results presented in Part 2 of the thesis, lasted only nine months. This is an area where it is recommended that improvements be made. Specifically, the functional behavior of some modules can be improved to account for higher-order effects (non-Rayleigh clutter, better rain attenuation prediction, improved ionospheric models, etc.). Also, Matlab/Simulink is probably not the optimum platform for implementing GINA due to its restrictive format control on the vectors connecting functional modules. A better choice would have been one of the commercial Computer Based Systems Engineering (CBSE) tools that are more suited to functional flow concepts (see footnote 1). Integration with one of these existing tools would be the main recommendation for further development of the GINA implementation.
1. See INCOSE's web site: http://www.incose.org/