Solving the transporter’s problem

Image of a transporter from Star Trek™ (Copyright by Ex Astris Scientia 2013)

One of the biggest problems with creating an actual transporter is data: too much of it. I have heard plenty of estimates of how much data it would take to send a human flying from one end of the world to the other in milliseconds, ranging from hard drives stacked from here to the moon up to hard drives stacked from here to the centre of the galaxy (I’m guessing the latter estimate is quite old). To send that data anywhere fast you’d need a data stream over a metre wide. Not much in our world, but gigantic in data-streaming terms.
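To get a feel for those hard-drive comparisons, here is a back-of-envelope sketch in Python. Every number in it is a rough assumption (atom count, bytes per atom, drive capacity), not a measurement:

```python
# Back-of-envelope estimate of the data needed to describe a human atom by
# atom, and how tall the resulting stack of hard drives would be.
# All figures are rough assumptions for illustration only.

ATOMS_IN_BODY = 7e27     # commonly quoted order-of-magnitude estimate
BYTES_PER_ATOM = 10      # assumed: element id plus three coordinates, crudely packed

total_bytes = ATOMS_IN_BODY * BYTES_PER_ATOM   # ~7e28 bytes

DRIVE_CAPACITY = 4e12    # a 4 TB hard drive
DRIVE_THICKNESS = 0.026  # a standard 3.5" drive is ~2.6 cm thick, in metres

drives = total_bytes / DRIVE_CAPACITY
stack_height_m = drives * DRIVE_THICKNESS

EARTH_MOON_M = 3.84e8    # average Earth-Moon distance in metres

print(f"{total_bytes:.1e} bytes on {drives:.1e} drives")
print(f"stack height: {stack_height_m / EARTH_MOON_M:.0e} Earth-Moon distances")
```

Under these assumptions the stack passes the moon roughly a million times over, which is at least in the same spirit as the stories quoted above.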

Luckily we can shrink this stream in a number of ways. First, you could use smaller wavelengths of the electromagnetic spectrum (e.g. X-rays instead of ultraviolet light). The trouble is that small wavelengths do not travel through optic fibres, so you’d need dedicated satellites to send the data to another transporter. Small wavelengths also damage living tissue, so you wouldn’t want planes flying through the beam, for instance.

Instead of going to smaller wavelengths you could also increase the carrying capacity of optic fibres. One promising new technique twists the beam of light into a vortex, which lets you send more signals at once through the same cable. Speeds have currently been clocked at 2.5 terabits per second (your internet connection is measured in megabits per second, a million times less than a terabit). It’s so fast you could stream an entire Blu-ray film in a fraction of a second; in a full second you could stream about seven.
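The films-per-second claim is easy to check. Assuming a dual-layer Blu-ray disc of 50 GB (the exact disc size is my assumption, so the result lands slightly lower than seven):

```python
# How many Blu-ray discs fit through a 2.5 Tbit/s twisted-light link each
# second? Disc capacity is an assumption: 50 GB for a dual-layer disc.

LINK_BITS_PER_S = 2.5e12        # 2.5 terabits per second
BLURAY_BYTES = 50e9             # 50 GB dual-layer disc
BLURAY_BITS = BLURAY_BYTES * 8  # 4e11 bits

films_per_second = LINK_BITS_PER_S / BLURAY_BITS
print(f"~{films_per_second:.1f} full discs per second")
```

With a somewhat smaller film file the figure of about seven per second comes out naturally.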

There is, however, another way to decrease the data you need to send: just send less. When physicists talk about a transporter they want to send information about every atom in your body to the other side of the world, so you get an exact copy of yourself where you want to go. It would be smarter to send only the most important information and let the computer extrapolate the rest. We’d want our brains scanned very precisely, because the connections between brain cells are vital to making you who you are. Other parts are less important (e.g. the exact place of a given blood cell, which changes constantly anyway).

When we look at the biochemistry inside cells, we see that all humans share a great many molecules. From the relative positions of molecules we can even deduce their current state (active or not). So instead of transmitting that you have a protein containing iron, four nitrogen atoms, carbon atoms and so on, plus all their relative positions, you just transmit “haemoglobin”, whether it is carrying oxygen, and its orientation in the cell. The computer on the other end simply builds a haemoglobin molecule: a huge saving in data, especially when you consider that we are full of standard proteins. Even non-standard proteins aren’t a problem: a protein’s more basic building block is the amino acid, so you can send the amino acid sequence and orientation instead, still a big saving. The same goes for DNA, which can basically be described by four letters, A, T, G and C; a string of these letters is enough for a computer to reconstruct the entire DNA in your body. Orientation and nearby proteins indicate whether the DNA is being copied, at rest, or curled up ready for cell division.
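The DNA case makes the size of the saving concrete. Two bits suffice for each of the four letters, versus describing every atom individually (the atom counts and per-atom byte cost below are illustrative assumptions):

```python
# Rough comparison of "send every atom" versus "send the sequence" for the
# human genome. Figures are illustrative assumptions, not measurements.

GENOME_BASES = 3.2e9      # approximate bases in a human genome
ATOMS_PER_BASEPAIR = 64   # rough average atoms per nucleotide pair
BYTES_PER_ATOM = 10       # assumed: element id plus coordinates

atom_level = GENOME_BASES * ATOMS_PER_BASEPAIR * BYTES_PER_ATOM
sequence_level = GENOME_BASES / 4   # 2 bits per letter (A, T, G, C) = 1/4 byte

print(f"atom level:     {atom_level:.1e} bytes")
print(f"sequence level: {sequence_level:.1e} bytes")
print(f"saving factor:  ~{atom_level / sequence_level:.0f}x")
```

Even with these crude numbers the sequence description is thousands of times smaller, and the same logic repeats for every standard protein.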

If we sent data in this manner we could save a lot of bandwidth, making it possible to transfer a person to another transporter in a second, and without a metre-wide beam. The downside is that on a cellular level you wouldn’t be exactly the same: some proteins might sit in slightly different places than in your original body, but you wouldn’t be able to tell the difference without an immediate molecular-level scan.

It might sound crazy to do this, but it is a trick we have learned from our own brain. Compared to a computer, the brain is extremely energy efficient (it runs on roughly 20 watts) and yet has functions no computer can replicate (consciousness, for instance). It does this by hard-wiring basic assumptions into itself. We are, for instance, very adept at recognizing faces: so adept that we recognize two symbols, :), as a smiling face, and so good that we sometimes can’t even tell which side of a mask is the front and which is the back. By using this and many more shortcuts, the brain can use its relatively limited resources more effectively and devote more of them to pressing matters.

It could even be a breakthrough in medicine. Missing an arm? Just deduce how it should look from your DNA and your body’s proportions and you’re ready to go. Rare genetic disease? Filter out the faulty genes and proteins and replace them with good ones. Got HIV? Just filter out the virus’s RNA. It could even be used to prevent ageing! Every ER might be equipped with a transporter to fix any medical emergency you sustain. There might not even be anything beyond the ER.

The holodeck: Modular Robotics

In the coming weeks I will examine a few technologies that could function like a holodeck. Today I’ll examine modular robotics.

This is my favourite holodeck replacement because it most closely resembles the original holodeck from Star Trek. Imagine a large room, about two storeys high. You enter on the first floor; the silvery grey floor you now stand on is made of robots. Half the room is filled with millions of robots, each smaller than a grain of sand.

The holodeck of Star Trek uses many exotic technologies, such as forcefields, transporters and replicators, to create a realistic fantasy world within a confined space. Great as that is, it is uncertain at best whether all the required technologies will ever become reality. It is far easier to use robotics to do pretty much the same thing, perhaps with a little help from holographic projectors.

Modular robotics are like high-tech LEGO bricks. Each module is a small computer with sensors that can connect to other modules. When they interact they essentially become a supercomputer able to rearrange itself into complex structures. The modules themselves are responsible for forming the right objects with the right characteristics (soft or hard, warm or cold, colour, large or small, square or round, etc.), while a central computer is responsible for the overall scene that needs to be created (e.g. a house with a bench in front, on which a woman sits scolding you for being late).
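That division of labour, a central computer holding the scene while each module decides only its own state, can be sketched in a few lines. Everything here is hypothetical pseud-API, not a real modular-robotics interface:

```python
# Minimal sketch of the division of labour described above: the central
# computer holds the target scene as a set of occupied grid cells, while each
# module only knows its own position and decides whether it should be solid.
# All names are hypothetical, for illustration only.

class Module:
    def __init__(self, pos):
        self.pos = pos       # (x, y, z) grid coordinate of this module
        self.solid = False   # local property the module itself controls

    def update(self, target_scene):
        # Each module derives its own state from the shared scene description.
        self.solid = self.pos in target_scene

# Central computer: describe a small 2x2 "bench" as a set of occupied cells.
target_scene = {(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)}

modules = [Module((x, y, 0)) for x in range(3) for y in range(3)]
for m in modules:
    m.update(target_scene)

print(sum(m.solid for m in modules), "of", len(modules), "modules are solid")
```

The point of the sketch is the architecture: the scene description stays tiny no matter how many modules there are, because each module does its own local reasoning.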

If you walk across a street, the scene changes accordingly. On one side of the room objects are rapidly constructed while on the other side they are broken down just as fast. The robots get from one side to the other in a way not unlike ocean currents: at ground level the robots move in one direction, while underground a torrent of robots moves in the opposite direction, effectively keeping you in the middle of the room. Far-off objects are projected on the walls and/or created with holographic projectors.

Of course the first generations of these blocks aren’t all that great. Today’s modular blocks are not intelligent and need to be assembled by hand to do anything, but they will steadily gain more and more of the functions I described above. When they reach a resolution of a square centimetre (about half an inch square), applications appear: the military could use them for urban warfare training across a large area, or an architect could show a house that hasn’t even been built yet. When they get down to one square millimetre (about 1/25 inch square) they will be good enough for wide-scale applications, from designing a production line and training workers on it to entertainment. When they get down to the size of grains of sand, I think you will have a nearly real virtual reality.

Upkeep is easy: just add a bucket of new modules every so often to replace faulty ones. Faulty modules are detected by the modules around them and kept apart until the user can discard them. Further down the line, faulty modules will be filtered out and repaired or recycled in a special part of the ‘holodeck’, eliminating upkeep altogether. On the downside, so many robots and computers will require a lot of power; to meet the demand we will need new sustainable sources such as solar, wind, geothermal or fusion power. Another downside is that it requires a relatively large space.

Robot workforce: Turbulent times

As robots become more complex they will replace more and more jobs. In this series I examine the implications of rising unemployment. Today: the second of five articles.

The first countries to notice the shift in employment will of course be third-world countries. The least educated are always the first to suffer, and the least educated are found in third-world countries. This means the income divide between rich and poor countries will once again grow, and any progress made in recent years will be undone. Food shortages will lead to riots and revolts. Third-world countries do have one advantage over other countries: a relatively high percentage of the population is still employed in agriculture, and those people can, in effect, feed themselves. For people in cities the economic downfall might be harder.

In the typical European welfare state we won’t see all that much at first. The jobless will get benefits from the state, and businesses might pay higher taxes to help the state deal with the higher unemployment. Unfortunately this cannot last, and eventually the welfare state collapses; within several years at best, or several days in a worst-case scenario, many people will be without income of any kind. Countries without this support system will find more and more people roaming the streets, and welfare organizations will be overwhelmed. When the support system collapses, protests and even riots are likely in many cities. It is not unthinkable that wars over resources will break out.

We will see a rise in anti-technology parties similar to the extreme-right fascist parties. They will not target foreigners (as much) but will orchestrate terrorist attacks against companies they hold responsible for their misfortune. As unemployment rises, these parties might gather a considerable following; some countries might even get fascist governments that distrust other countries which seemingly embrace the new technologies. This could destabilize the area in which such a country is situated, creating tension in the region.

Fascist or not, all governments will initially react by suppressing the advancement of robotics in the workplace. It is human nature to shun new things and hold on to what you had, especially when you were in the lead; we have seen this with the entertainment industry fighting a losing battle against internet piracy. This is a temporary measure and cannot work in the long run. Countries embracing the new technologies will eventually find themselves in the lead: their increased production will undercut prices in other countries, helping them recover from the recession. When others get in on it we will see a new global economic boom, and a new structure of society will emerge.

Will Moore’s Law hold true?

Moore’s law states that the number of transistors in an integrated circuit doubles roughly every two years. This has held true ever since Gordon Moore’s first observation in the 1960s, and it has been at the basis of our rapid increase in technology. There might be a problem, however: the current technology is pretty much at its limit. Transistor growth will slow in the next few years and might actually stop unless we get a new technology altogether.
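The doubling rule is easy to turn into a projection. Starting from the Intel 4004 of 1971 with roughly 2,300 transistors (rounded historical figures):

```python
# Moore's law as stated above: transistor counts double roughly every two
# years. Project the count forward from a rounded historical starting point.

def transistors(start_count, start_year, year, doubling_years=2):
    """Project a transistor count forward assuming periodic doubling."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# Intel 4004 (1971) had ~2,300 transistors; project forty years to 2011.
projected = transistors(2_300, 1971, 2011)
print(f"projected for 2011: {projected:,.0f}")   # ~2.4 billion
```

Forty years is twenty doublings, a factor of about a million, which lands in the billions, the right order of magnitude for high-end chips of that era.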

I think we will find something to perpetuate Moore’s law. For one thing, integrated circuits are not the first incarnation of computing technology: before them we had discrete transistors, vacuum tubes and even computers based on mechanical components, and all of these followed a similar pattern of increasing computing power over time. That makes it likely that another technology will come along in time to keep Moore’s law true.

We still have a long way to go before computers can’t get any more powerful due to the basic laws of physics.