Laser Communication with Spacecraft

New technology promises to take laser communication with spacecraft from science fiction to science fact. This technology offers a new answer to a 60-year-old problem: you have had your launch and your shiny new space probe has made it to orbit without being ripped to shreds by tons of exploding chemical fuel, but once it’s up there, how do you talk to it?

The conventional answer is radio waves, but the rate of information transfer is woefully slow – just ask any NASA scientist who has tried to distill usable data out of probe transmissions. There are no ‘subspace’ communications in the real world, just lots of waiting while the precious data rolls in at the same old pace.

But, thanks to new laser technology, this is about to change.

Radio waves have been the standard since the dawn of spaceflight, but new optical communication systems have the potential to increase data rates by as much as 10 to 100 times. That means instead of painstakingly assembling still photographs, we could actually get high-res photos or even video from the surface of other planets, or moons like Titan. How cool is that!

This new communication system will also be crucial as spacecraft are sent further into the solar system, stretching conventional radio transmissions to the limit.

The key factor is that while both radio waves and lasers travel at the speed of light, lasers operate at much higher frequencies, allowing the transmission of much more data. The typical rate of information transfer today might be around a few megabits per second (Mbps). For example, NASA’s Mars Reconnaissance Orbiter sends data at a maximum rate of around 6 Mbps. Using laser technology of equivalent size and power rating would probably increase this to 250 Mbps – a huge improvement.
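To put those figures in perspective, here’s a quick back-of-envelope comparison using the ~6 Mbps and ~250 Mbps rates quoted above (the 100 MB image size is just an illustrative assumption, not a real mission figure):

```python
def transfer_time_s(data_bits: float, rate_bps: float) -> float:
    """Time in seconds to downlink a payload at a given link rate (ignores protocol overhead)."""
    return data_bits / rate_bps

image_bits = 8 * 100e6  # a hypothetical 100 MB high-res image

radio = transfer_time_s(image_bits, 6e6)    # MRO-class radio link
laser = transfer_time_s(image_bits, 250e6)  # equivalent-sized laser link

print(f"radio: {radio:.0f} s, laser: {laser:.1f} s, speed-up: {radio/laser:.0f}x")
```

At video-sized payloads the difference really bites: a day’s worth of downlink at 6 Mbps comes down in around half an hour at 250 Mbps.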

There are some possible wrinkles though. Clouds and atmospheric conditions can cause interference in laser transmissions. And receiving those transmissions will require a whole new Earth-based infrastructure – preferably in areas with clear skies.

Radio’s reliability will ensure it will endure as a communication method, but the new technology will continue to step closer to widespread application. The Laser Communications Relay Demonstration (LCRD), led by NASA’s Goddard Space Flight Center, will launch in 2019. This probe will test signals between two new ground-based stations and geostationary orbit, a distance of 40,000 km. This will be followed by the Deep Space Optical Communications (DSOC) probe, led by JPL, in 2023, which, along with other science goals, will test transmissions between Earth and its target, a nearby metallic asteroid.

In my novel, The Tau Ceti Diversion, the explorers use laser communications to stay in contact with Earth – well at least until an unnamed saboteur puts the beam out of alignment:) Read more about the story and check out the free sample chapters on Amazon!


Our Closest Earth-like Planet

In an amazing stroke of cosmic luck, our closest Earth-like planet Proxima b turns out to be orbiting our closest star, Proxima Centauri, only 4.2 lightyears away!

The Kepler space telescope has been expanding our knowledge of exoplanets – planets outside the solar system – for years now. The number of confirmed exoplanets from Kepler now exceeds 3000, and the rate at which our knowledge of these planets is increasing is truly amazing. Kepler is able to give us data on planets thousands of lightyears from our own little corner of the universe.

So it came as a surprise, a number of months ago, when the very closest potentially Earth-like, habitable planet turned out to be so close. Unlike its very bright neighbours, Alpha Centauri A and B, which can easily be seen with the naked eye, you need special equipment just to see Proxima Centauri!


Photo: space.com

Proxima Centauri is a red dwarf, one of the most common types of star in the universe. In a bit of stellar karma, it turns out that little stars like Proxima have much longer lifetimes than the bigger, brighter white or blue stars, or even our own yellow star, surviving for trillions of years – plenty of time for life to take hold if the conditions are right.

Astronomers have been trying to unlock Proxima Centauri’s secrets for more than 15 years, using two instruments from the European Southern Observatory in Chile – the Ultraviolet and Visual Echelle Spectrograph (UVES) and the High Accuracy Radial velocity Planet Searcher (HARPS). Both instruments focus on deciphering the star’s ‘wobbles’. So why did it take so long? The detection was made more difficult by sparse data, and the long-term variability of the star itself, which masked the presence of the planet. With new, key observations made in 2016, the astronomers were able to confirm not only Proxima b, but also reveal indications of a possible second planet with an orbital period of between 60 and 500 days also orbiting around Proxima Centauri.

Observations indicate Proxima b is around 1.3 times the mass of Earth, putting it into the rocky planet category. Although the planet is in the habitable zone, it orbits at only around 7.5 million kilometres, completing an orbit every 11.2 Earth days. Due to its closeness to its host star, astronomers consider it likely that the planet is tidally locked, divided into halves of night and day, and always showing the same face to Proxima Centauri. Earth orbits at 150 million kilometres, much further out from our brighter, hotter sun, but still in our habitable zone. Temperatures on Proxima b are right for surface water to exist, but much depends on the planet’s history. If its star was very active, the water may have been blown away during the planet’s early formation, whereas if the planet migrated inward at a later stage, it might be water rich.
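As a sanity check on those numbers, Kepler’s third law lets us recover the orbital period from the orbital distance. The sketch below assumes a mass of ~0.12 solar masses for Proxima Centauri – a textbook figure, not one from the article:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

m_proxima = 0.12 * M_SUN   # assumed mass of Proxima Centauri
a = 7.5e9                  # orbital radius quoted above, in metres

# Kepler's third law: T = 2*pi*sqrt(a^3 / (G*M))
period_days = 2 * math.pi * math.sqrt(a**3 / (G * m_proxima)) / 86_400
print(f"orbital period: {period_days:.1f} days")  # lands close to the quoted 11.2 days
```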

So Proxima b’s in the habitable zone, which means it may have surface water, but will it have life? On the pessimistic side, it turns out that Proxima Centauri emits powerful flares and X-ray radiation. That may work to erode the atmosphere of the planet, although we don’t know how much because we don’t know if the exoplanet has a nice, strong magnetic field like Earth that would help to preserve the atmosphere and protect any developing life.

We need to go and have a look. But how to get there?

If we could shrink down to about two inches tall, we could hitch a ride on something like NASA’s New Horizons probe, which managed its trip to Pluto in around 9.5 years at around 84,000 km/h. That would get us a sneak peek of Proxima b in around 54,400 years. Hmmn. Or maybe the hotshot Juno probe that reached a whopping 265,000 km/h? That would cut the trip to 17,157 years.

One option is to accelerate a small probe with solar sails to relativistic speeds using a high powered laser. Just such a thing has been proposed by the Breakthrough Starshot initiative. For around $18 billion we could build a system that would send wafer-thin probes to Proxima Centauri. The Earth-based laser would accelerate the probes to around 20 percent the speed of light (215.85 million km/h). That would get the tiny probes to Proxima Centauri in 20 to 25 years. What these small probes could tell us will rely very much on how powerful their miniaturized instruments were, and of course scientists being able to conceive a way for a targeted message to reach Earth with the data.
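The trip times above are easy to reproduce with a simple cruise-speed estimate (no acceleration or deceleration phases), taking Proxima Centauri to be about 4.24 light-years away:

```python
LIGHT_YEAR_KM = 9.461e12
distance_km = 4.24 * LIGHT_YEAR_KM  # Earth to Proxima Centauri

speeds_kmh = {
    "New Horizons": 84_000,
    "Juno (flyby peak)": 265_000,
    "Starshot sail (0.2c)": 0.2 * 299_792.458 * 3600,
}

# Constant-speed travel time in years for each vehicle
trip_years = {name: distance_km / v / (24 * 365.25) for name, v in speeds_kmh.items()}

for name, years in trip_years.items():
    print(f"{name:>22}: {years:,.0f} years")
```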

It’s exciting that we have an Earth-like planet so close to our solar system. How we get there is one thing, but if human history tells us anything, once we want to go there – we will find a way.

My novel, The Tau Ceti Diversion, is a story about our search for new planets to colonise outside our solar system, and is now available on Amazon! Read more about what happens in the story here!

Check out the free chapter download!


Going Faster Than Light

Going faster than light is the Holy Grail of space travel, and is often depicted in science fiction. It seems as easy as flicking the switch to jolt the ship into Hyperspace. I mean, it worked for Han Solo, right?

It was Einstein who first postulated the idea that the speed of light is constant in any “frame of reference”. Basically, no matter how fast you were going, light would always be moving away from you at the same speed. As counter-intuitive as this was, his theories of special relativity and general relativity have been borne out by direct observation and experiment.

Just about all of us use GPS data on a daily basis, with signals pinging from our smart phones through our networks to global satellites. The clocks on those GPS satellites all run at a different rate from those on Earth – a direct prediction of relativity – and corrections are used on a routine basis to bring them into line with their “stationary” counterparts. (The satellites’ orbital speed slows their clocks, while the weaker gravity at altitude speeds them up; the gravitational effect wins, so the satellite clocks actually gain around 38 microseconds per day.) Astronomers also routinely use Einstein’s predicted ‘gravity lensing’ to make observations of the universe, and have used this technique to pin down the enigmatic ‘dark matter’ that makes up so much of our universe.
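The GPS correction can be estimated from first principles. This is a simplified sketch – it ignores Earth’s rotation and orbital eccentricity, so it lands a little above the canonical ~38 microseconds/day figure – and the constants are standard values, not from the article:

```python
import math

GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
C = 2.998e8          # speed of light, m/s
R_EARTH = 6.371e6    # Earth's radius, m
R_ORBIT = 2.656e7    # GPS orbital radius (~20,200 km altitude), m
DAY = 86_400         # seconds

v = math.sqrt(GM / R_ORBIT)  # circular orbital speed, ~3.9 km/s

# Special relativity: the moving clock runs slow
sr_per_day = -(v**2 / (2 * C**2)) * DAY
# General relativity: the clock higher in the gravity well runs fast
gr_per_day = (GM / C**2) * (1 / R_EARTH - 1 / R_ORBIT) * DAY

net = sr_per_day + gr_per_day  # positive: the satellite clock gains time
print(f"SR: {sr_per_day*1e6:+.1f} us/day, GR: {gr_per_day*1e6:+.1f} us/day, net: {net*1e6:+.0f} us/day")
```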

So if Einstein’s predictions tell us we can’t go faster than the speed of light, is that it for our desire to go speeding through the Universe in our faster-than-light spaceship? Interestingly enough, not necessarily. . .

There are two potential loopholes that emerge from Einstein’s work, and both of them have to do with the way spacetime can fold up: the ‘warp drive’ and the more familiar idea of wormholes.

The warp drive, originally a concept from science fiction, is familiar from just about every episode of Star Trek. The idea for the warp drive is that spacetime would be expanded behind the spaceship, and compressed in front of it, to such a degree that the ship would seem to flash through vast distances in moments. The ship itself would not actually be moving, but be inside a ‘warp bubble’. This is a pretty exotic solution of Einstein’s equations, but physicists have shown that it is possible – at least mathematically. Despite moving so fast, the astronauts would not be subject to any inertial effects because they are not actually moving. They would, however, be in a state of ‘free fall’, due to the angle of folded space in front of them. Some people have questioned whether our warp drive pilots would get cooked by intense, blue-shifted light, but the jury seems to be still out on that one.

The warp drive has been dubbed the Alcubierre drive, after the physicist who first proposed this solution. Believe it or not, the theory was developed by Alcubierre in response to the use of the warp drive on Star Trek. The travellers on the warp-drive-capable ship would be cut off from the outside universe, riding on a ‘wave’ of compressed space, along a corridor of warped space-time that would probably have to be constructed in advance, like some sort of cosmic superhighway. Alcubierre himself muses: “We would need a series of generators of exotic matter along the way, like a highway, that manipulates space for you in a synchronized way”.

The graphic below gives a 2-dimensional representation of the spacetime around the ‘warp bubble’, stretched to create a gradient pushing the ship forward. Just don’t try to leave the bubble – you would get ripped apart.

Alcubierre space time

To make the Alcubierre drive work we need a pretty exotic fuel – either negative matter or negative energy to be precise. That’s negative matter – as opposed to dark matter (which is invisible but has weight) or antimatter (positive energy but reversed charge). Both dark matter and antimatter have been shown to exist; so far there is no proof that negative matter does. If it did, it would fall up rather than down, and would have left any solar system long ago (being repelled by ordinary matter) to drift in the middle of nowhere. So finding negative matter is going to be hard, but perhaps possible using gravity lensing techniques.

Negative energy, though – believe it or not – has been demonstrated by experiment.

In the experiment, two plates in a vacuum, positioned very close together, experience a net movement toward each other because of the ‘pressure’ difference of virtual particles being created at the quantum level around and between the plates. These are electron-antielectron pairs that burst out of nowhere for incredibly brief periods of time, then disappear as they collide (conserving energy on average). As brief as their appearance is, the particles create a real effect. That ‘pressure’ causes the predicted movement in the plates, and that equates to a net amount of energy. Since that energy is coming from ‘nowhere’ (and energy must be conserved), to make the whole system balance the plates are left with a net negative energy between them. And how much? The effect, called the Casimir effect, was measured in the laboratory in 1996 at Los Alamos. The attractive force is equivalent to 1/30,000 of the weight of an ant. We would need a lot more than that!
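For a feel for the numbers, the ideal parallel-plate Casimir pressure has a closed form: P = π²ħc/(240d⁴). (The measurement described above actually used a sphere-and-plate geometry; the formula below is the standard textbook parallel-plate version.)

```python
import math

HBAR = 1.0546e-34  # reduced Planck constant, J*s
C = 2.998e8        # speed of light, m/s

def casimir_pressure(d_m: float) -> float:
    """Attractive pressure in pascals between ideal parallel plates separated by d metres."""
    return math.pi**2 * HBAR * C / (240 * d_m**4)

# The d^-4 scaling means the effect only matters at tiny separations:
print(f"at 1 um:   {casimir_pressure(1e-6):.2e} Pa")  # ~1.3 mPa
print(f"at 0.1 um: {casimir_pressure(1e-7):.2e} Pa")  # 10,000x stronger
```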

As a civilization, we are a long way from any faster-than-light travel, even if it is possible. It’s true that Einstein’s equations give solutions that show the possibility of both the warp drive and even wormhole travel, but are these real possibilities, or mere mathematical curiosities? If it is possible, we would need an awesome amount of energy. It’s estimated that to keep a traversable wormhole open wide enough to allow human travellers to pass through, you might need as much as a Jupiter mass of negative energy. That’s clearly well beyond us now.

That doesn’t mean we can’t reach the stars, just that we can’t get there quickly!

Fusion drives, or even antimatter drives, or a combination of the two, will enable us to construct starships that could travel at respectable fractions of the speed of light.

In my novel, The Tau Ceti Diversion, the starship Starburst uses a fusion drive, assisted by an antimatter ‘burst’, to reach a new solar system and look for planets to colonize. Much of the action in the book takes place on a planet tidally locked to Tau Ceti, some 12 lightyears away.

The novel was officially launched on 1st September 2016, and is available in both electronic and print formats! Grab a copy!


Capturing our First Planetary Snapshots


Kepler has confirmed more than 1000 planets outside our solar system, but so far only a few of Earth-like size and in the habitable zone — rocky planets with just the right temperature for liquid water. None of those potential Earth-analogues has been observed directly; their presence is inferred from astronomical data, such as the wobble of the star, or the dimming of the star’s light during a planetary transit.
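The transit method is simple to quantify: the planet blocks a fraction of the stellar disk equal to (R_planet/R_star)². A quick sketch, using standard radii rather than figures from the article:

```python
R_SUN_KM = 696_000
R_EARTH_KM = 6_371
R_JUPITER_KM = 69_911

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fractional dip in starlight when the planet crosses the stellar disk."""
    return (r_planet_km / r_star_km) ** 2

# A Jupiter is easy to spot; an Earth-twin dims a Sun-like star by less than 0.01%,
# which is why space-based photometry like Kepler's is needed.
print(f"Jupiter transit depth: {transit_depth(R_JUPITER_KM, R_SUN_KM):.4f}")
print(f"Earth transit depth:   {transit_depth(R_EARTH_KM, R_SUN_KM):.6f}")
```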


So far, some pictures of other planets have been taken from ground-based telescopes, but those planets are large, bright and orbit far from their suns — not like potential Earth-twins which will be far smaller and orbit closer to their suns.

NASA scientists and engineers are working on two new technologies to help look for new planets, a starshade and a coronagraph, which will both work to block the light of the star, allowing the telescope to examine the reflected light of the planet itself.

This means we can not only take pictures of prospective Earth-like planets, but also use spectrographic analysis to work out what is in their atmospheres. This will give us clues to what might exist there. For example, evidence of plant life and animals similar to those on our Earth would show up as a series of simple signature compounds in the planet’s atmosphere, such as oxygen, ozone, water and methane.

A starshade is a type of spacecraft that actually flies in front of the telescope to block the light of the star under observation. Despite the fact it will be only tens of metres wide, it will fly quite a bit in front of the telescope — in fact around 50,000 km away — more than four Earth diameters. Getting it into space is a challenge. It will be folded up like a super-origami prior to launch, to unfurl in space, somewhat like an unwinding spring, into a crazy-sized sunflower. The pointed petals are crucial to its design: they control the light in just the right way to reduce the glare to levels where planets can be seen. The petal-fringed shape creates a softer edge that causes less bending of the light waves.

Both the starshade and the telescope will be independent spaceships, allowing them to move into just the right position for observations. The petals of the starshade need to be positioned with millimetre accuracy.

Blocking out the starlight while preserving the light emitted from the planet is called starlight suppression.

The light of a star can be billions of times brighter than the reflected light from its planet. Our own sun is 10 billion times brighter than Earth.
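That 10-billion figure can be roughly reproduced. In reflected light, the planet-to-star flux ratio is approximately albedo × (R_planet/(2a))², where a is the orbital distance. This is a crude full-phase approximation with assumed textbook values, not a calculation from the article:

```python
ALBEDO_EARTH = 0.3     # assumed Bond albedo of Earth
R_EARTH_M = 6.371e6    # Earth's radius, m
AU_M = 1.496e11        # Earth-Sun distance, m

# Fraction of the Sun's light that Earth intercepts and reflects back out
contrast = ALBEDO_EARTH * (R_EARTH_M / (2 * AU_M)) ** 2
print(f"Earth is ~1/{1/contrast:.1e} as bright as the Sun")  # order of 10 billion
```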

Coronagraphs were originally introduced in the early 20th century to study our own sun, blocking out the light from the sun’s disk to allow scientists to study its outer atmosphere, or corona; hence coronagraph. They are much smaller than the starshade, located within the telescope itself.

These starlight-blocking coronagraphs will be more sophisticated.

These new-generation coronagraphs use multiple masks, as well as smart mirrors that can deform, to suppress starlight in sequential stages. There are many other challenges in delivering the coronagraph technology, including being able to suppress or compensate for the warping and vibrations that all space telescopes experience.

What Space Tourism Needs


Want to get into space? Heck – who doesn’t!

In the early days of space exploration the vehicles were the equivalent of experimental coupes with no room in the back. Rockets like the Saturn V had a lot of power under the hood, but the capsule had no seats for the kids or friends.

Kudos to Virgin Galactic for taking the next step, with vehicles for up to six passengers. These lucky six will be paying anywhere from US$95,000 to US$250,000 depending on the length of the journey. This upgrades us from two-seater to an Orbital Minivan, but really this is still only an extreme sport for the super wealthy. Maybe not the spouse in the passenger seat and kids in the back — more like the CEO and his lucky executives.

True space tourism would be closer to the model we have today with commercial aviation, opening up the unique travel and leisure opportunities for a wider population. That would require something akin to a tourist bus.

Interestingly, the designers of the Space Shuttle originally intended it to be used as far more than a cargo carrier, with some designs carrying up to 74 passengers in a modified rear compartment or ‘passenger module’. Check out the graphic below (attribution: chron.com blog).

the_future_of_space_tourism_6 - chrondotcom blog

Even more fascinating is the fact that the Shuttle was also originally conceived with a reusable manned booster. The problem was the manned booster was about the size of an aircraft carrier. Yet if they had managed to build it, the overall cost of spaceflight might have dropped substantially, taking advantage of the fact that the fuel is only about 1% of the cost of getting into orbit.

It is interesting to note that the development of reusable boosters (unmanned), is the key focus of Space X for this exact reason (i.e. that the hardware is where the real cost is, not the fuel).

If only. . .


Reusable Rockets One Step Closer

Hey, did I mention the three books in my Jakirian Cycle are out:)


For a while now Elon Musk’s Space X has been busily working away at developing a reusable rocket system, with both a first and second stage that can be reused within hours of return.

The Grasshopper rocket is the test vehicle for the reusable first stage. Earlier in the year this reached a height of 325m and then touched down again. In its latest test flight on October 7,  the Grasshopper reached a height of 744m and landed right back down on the launch pad. It’s an awesome thing to see. Check out the footage, which was captured by a remote controlled hexacopter stationed in the sky. The rocket lands on a dime. Amazing control.

Hey, Elon Musk, can I come work for you? I’m a real engineer, honest.

OK. Back to reality.

The plans are to continue to extend the height at which the Grasshopper stops and returns to the launch pad.

In the meantime, Space X has progressed the other part of the proposed testing regime by performing the first test on a returning Falcon 9 first stage booster. The Falcon 9’s engines were re-lit twice on the way down during the September 29 test flight from Vandenberg Air Force Base in California. The two burns eased the vehicle’s return to Earth, where it eventually splashed down over the Pacific Ocean.

The Falcon 9 v1.1 carried Canada’s CASSIOPE space-weather satellite and three smaller spacecraft to orbit. As its first stage fell back to Earth, the secondary test program was initiated.

The first burn (where three engines were ignited for supersonic retropropulsion) enabled the returning first stage to survive atmospheric re-entry without burning up. The second burn (with a single engine) went well, although the splashdown was a little harder than planned due to a roll developed by the returning vehicle.

Exciting stuff, and right in line with Space X’s stated development path. The company now believe they have ‘. . . all the pieces to achieve a full recovery of the boost stage.’

One step closer to a true reusable rocket, and a system that will get us ‘up there’ at last:)

200 Year Old Technology Makes it into Space – The Stirling Radioisotope Generator

For those of you who have not heard about the Stirling engine, the technology was first proposed by Scotsman Robert Stirling way back in 1816 as an alternative to nasty steam engines, which had a habit of exploding and killing people with high-pressure steam. In nineteenth century steam engines, water inside the pressure vessel was in two phases – steam vapour and pressurised liquid, so in the case of a rupture there was an instant expansion of hot liquid into steam.

Often called an ‘external combustion engine’, Stirling engines are a sealed system with the cylinders inside working with a gas, such as air or nitrogen, which exists in a single phase.

The physical layout of the Stirling engines varies, but all have a ‘power’ piston and a ‘displacer’ that works in concert with the power piston to maintain the constant volume conditions. Each engine has a hot and cold end, with a heat exchanger at each. Inside the engine is a ‘regenerator’,  which is a physical material that stores part of the heat as it flows inside the engine and is crucial to its operation.

Stirling engines have been demonstrated at temperatures well below 100°C. Ultra-low temperature difference Stirling engines have been shown to run on a temperature difference of just 0.5°C. Like any heat cycle, a Stirling engine is driven by temperature difference, so a low hot-side temperature must be balanced by an even lower cold side where heat can be rejected. In practice, low temperature differential Stirling engines require a very large surface area for heat transfer and are consequently more expensive to manufacture than high temperature Stirling engines.

The real advantage of Stirling engines lies in their heat source flexibility. The same Stirling engine can operate with a wide range of fuels and over a wide range of temperatures.

NASA have been working for some time on a small Stirling engine for use as a power supply on spacecraft. Called the Advanced Stirling Radioisotope Generator (ASRG), it is driven by the heat from radioactive decay.

Around 1kg of Plutonium 238 forms part of the module. This generates a thermal output of around 500 Watts. The heat drives a small, single cylinder Stirling engine that produces around 140 Watts of electrical power.

Like all Stirling engines, the ASRG is a closed-loop engine. Its internal working gas will be helium. In its single cylinder, the up-down motion of the power stroke is converted into an AC electrical output by a linear alternator. This is then converted to the DC required by on-board systems.

Why would NASA bother putting something with that many moving parts on a spacecraft? Well for a start, Stirling engines are very reliable, and a large part of the work NASA is undertaking is focussed on reliability studies for the ASRG. But primarily, the ASRG will be four times more efficient per unit mass than the Radioisotope Thermoelectric Generator (RTG) it replaces. That is an impressive increase in efficiency. The RTG modules have been standard on spacecraft for the last 40 years, and use the temperature differential in thermocouples to produce power.

To reduce vibration, two ASRG units will be mounted opposite each other and synchronised so their pistons move in opposite directions to eliminate mechanical noise.

An RTG system has a typical efficiency of around 5-7%, disappointingly low considering it is driven by an 850°C plutonium power source. The ASRG’s Stirling generator would operate at around 38% efficiency with the same 850°C hot end (with heat rejected to the lonely depths of space at 90°C). In practice the ASRG’s hot end temperature, and consequently its net efficiency, is expected to be a little lower.
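Those efficiencies can be compared against the Carnot limit for the quoted 850°C hot end and 90°C cold end, which shows how much of the available thermodynamic headroom each technology captures:

```python
def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Maximum possible heat-engine efficiency between two temperatures given in degrees C."""
    t_hot_k, t_cold_k = t_hot_c + 273.15, t_cold_c + 273.15
    return 1 - t_cold_k / t_hot_k

limit = carnot_efficiency(850, 90)  # ~0.68

print(f"Carnot limit:        {limit:.1%}")
print(f"ASRG (38%) achieves: {0.38 / limit:.0%} of the limit")
print(f"RTG (6%) achieves:   {0.06 / limit:.0%} of the limit")
```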

The ASRG was demonstrated for the first time in 2012 – the first demonstration of a new nuclear system for power production since 1965. There are also moves to produce more Plutonium, again for the first time since 1965.

The ASRG could be available as early as 2015, and is designed to have a 14 year mission life.

Larger versions have been proposed to power a potential Moon base, and also a Mars base under the NASA Fission Surface Power project. So far a 40kWe version has been trialled in NASA labs (minus the nuclear fuel source i.e. just the Stirling engine component with conventional heat applied). This 40kWe version is likely to be the size of a trash can, and would provide surface power for decades with little or no maintenance.

Around 40 kWe is about the size of generator you need to power a small hybrid-electric vehicle. Maybe NASA would consider selling Plutonium cars to the public? It would be cool to drive around for a couple of decades and never fill up. When you are not driving you could plug it into your house and power both you and your neighbours.

Hey, it’s nice to dream:)

Space X Grasshopper Reusable Rocket

If you’ve been reading this blog for a while, you’ve probably heard me talking about Elon Musk’s Space X and its plans to develop a reusable rocket system. The theory is that it’s the cost of the spacecraft itself that overwhelmingly contributes to the high cost of getting into orbit. The fuel represents perhaps 1% of the total cost. So if you can develop a truly reusable rocket system you can potentially revolutionise space travel. There are a few parts wishful thinking in this, and a few parts hyperbole, but it’s an intriguing concept nonetheless. Meanwhile, Space X is forging ahead.

Space X have been developing a reusable system based on their Falcon 9 launch vehicle platform. This launch vehicle is a pretty familiar sort of beast – a two stage rocket powered by liquid oxygen and kerosene. It has established a solid performance record to date and was used by Space X for a visit to the International Space Station, the first by a commercial company.

The Space X Grasshopper is designed to take the place of the Falcon 9’s first stage. It has been in active testing since September last year. So far it has had six test flights, each gradually extending the height at which the rocket stops, hovers then touches back down. Both take off and landing are vertical (VTVL). The latest (check here for video) took the venerable Grasshopper to 325m (1066 feet), with an overall duration of 68 seconds. It’s likely the tests will extend substantially, perhaps reaching altitudes of up to 91 kilometres (57 mi) with the second generation of the test craft.

If you want a bit of entertainment, check out this video of one of the early tests that plays to Johnny Cash’s ‘Ring of Fire’. LOL.

The second generation of Grasshopper will have lighter-weight landing legs that actually fold up into the rocket. I can’t help but be reminded of those sleek 1950s art-deco SF rockets that come down to land on their legs in such a similar manner, except they (of course) had three legs whereas Grasshopper has four. The Grasshopper’s legs use a telescoping piston on an A-frame, actuated by high-pressure helium.

Plans are to start testing the descent of Falcon 9 first stages to confirm the technology. Each first stage of the Falcon 9 will be equipped and instrumented as a controlled descent test vehicle. They will initially do the propulsive return tests over water until they can complete a return to the launch site with a powered landing, perhaps as early as 2014.

Ultimately the first stage separation will occur at around Mach 6, rather than the current Mach 10 for the expendable version of the Falcon 9. This is to ensure there is sufficient fuel for deceleration, controlled descent and landing.

I have a feeling that once this system is up and running, expendable launch systems will seem like the crazy idea, not reusable ones!

But the Grasshopper, as impressive as it is, is only half the launch system. The first stage will separate and be back on the launch pad minutes after the launch. The reusable second stage will take up to 24 hours to return to the launch pad, to allow for orbital realignment and atmospheric re-entry. Both stages are envisaged to be available for reuse within hours of return.

Eventually the reusable launch system technology will be applied to both the Space X Falcon 9 and Space X Falcon Heavy launch vehicles.

I think we are watching history in the making.

The Return of Air-Breathing Engines

I was reading recently about the Skylon space plane. A pretty cool name, which reminds me of those robotic guys with the light bouncing back and forth where their eyes should be – the vintage Cylons of Battlestar Galactica.

The Skylon spaceplane is a concept for a Single Stage to Orbit (SSTO) plane, which has been a holy grail for the aerospace industry for many decades.

Although the theory of payload vs rocket mass pushes designs in the direction of multi-staging and expendable spacecraft – such as the good old Saturn V and its modern equivalent, the SpaceX Falcon 9 – the ability to reuse the same spacecraft also makes good economic sense. All rocketry components are damn expensive. Besides, it’s such a damn cool idea to be able to get into a spaceplane at the local airport, taxi down the runway and blast into orbit.
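The theory of payload vs rocket mass here is the Tsiolkovsky rocket equation, Δv = vₑ·ln(m₀/m_f). A quick sketch with typical textbook figures (assumed values, not from the article) shows why single-stage-to-orbit leaves so little mass for structure and payload:

```python
import math

DELTA_V_TO_LEO = 9_400   # m/s to low Earth orbit, including gravity and drag losses (typical figure)
V_EXHAUST_H2 = 4_400     # m/s, effective exhaust velocity of a good hydrogen/oxygen engine

# Tsiolkovsky: required ratio of liftoff mass to final (dry + payload) mass
mass_ratio = math.exp(DELTA_V_TO_LEO / V_EXHAUST_H2)
propellant_fraction = 1 - 1 / mass_ratio

print(f"mass ratio: {mass_ratio:.1f}")
print(f"propellant: {propellant_fraction:.0%} of liftoff mass")
```

Everything else – tanks, engines, wings, heat shielding and payload – has to fit in the remaining slice, which is why not having to carry oxidiser for the atmospheric leg is such an attractive trick.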

What may make this particular SSTO dream feasible is the return of the air-breathing engine. Some of you might remember the HOTOL concept from the 1980s.  The moniker stood for Horizontal TakeOff and Landing. I remember being really excited about this joint venture between Rolls Royce and British Aerospace, but apparently funding was cut in 1988 due to serious design flaws and lack of advantage over contemporary launch systems.

Like HOTOL, Skylon features air-breathing engines that use oxygen in the atmosphere as the fuel oxidant (it later switches to liquid oxygen in space). The majority of fuel tankage is reserved for hydrogen, removing one heck of a lot of weight compared to, say, a shuttle with its big external tank of hydrogen and oxygen. One key feature of the Skylon’s SABRE engines is the cooling of the intake air, which enables a doubling of the efficiency.

The estimated top speed of Skylon is over 30,000 km/h. This gives the craft plenty of scope to fill the niche left by the ill-fated Concorde, with sub-orbital flight times of around 4 hours from London to Sydney. Having suffered through two 30-hour flights to the USA in economy, I can’t wait.

The initial goal is to carry payloads to space stations by 2022. English developer Reaction Engines hope to have a working prototype flying by 2016, and a fleet of the craft over the next decade. They are impressive craft. Each will be approximately 82 metres in length with a price tag of around $1.1 billion US.

The spaceplane is a very sleek looking craft. Check out the wikipedia page for graphics.