Teleportation, brain mapping and artificial intelligence sound great, until we take a much closer look at them...

For Science: Teleporting yourself is a bad idea. Here's why


Scott Sutherland
Meteorologist/Science Writer

Monday, July 6, 2015, 2:16 PM

Teleportation, brain mapping and artificial intelligence would certainly make for an interesting future, but when we look a bit closer, we might want to leave these techs in the realm of science fiction.

We're getting very good at turning science fiction into science reality these days, but in some cases, we should probably take a closer look before we make that leap, just to be sure we know what we're getting ourselves into. Here are three perfect examples of Sci-Fi technologies that might be better off staying on the page.

Teleportation

In Sci-Fi universes like Star Trek, people did this all the time, beaming from one place to another, and the writers even came up with ideas (with plenty of hand-waving around the details) for how it was done.

The closest we could come in the real world is a process where a person's body is scanned down to the quantum level, capturing all the information about that body (including memories) - very likely destroying the body in the process - and then that data is transmitted somewhere else and used as a blueprint to rebuild the body there.

The person who steps off the pad at the other end will be identical to the person who was scanned in every way - physically, in personality, and with all memories leading up to the scan intact. To them, they are the person who was scanned, and no one - not anyone who knows that person, and not even the universe itself - would be able to dispute their claim.


Image Credits: Earth - NASA Earth Observatory, Moon - Gregory H. Revera/Wikimedia Commons

Technically, no one would even know that anything was wrong with the process. However, there is one fundamental problem here.

In order for the teleportation to take place, the original person is destroyed. From that person's perspective, they cease to exist the moment the scan is done, and they're gone forever. While it would certainly be nice if some unknown aspect of the physics of our universe allowed our consciousness to make the journey as well, a simple application of Occam's razor suggests that the original is gone and a copy continues on in their place.

A good example of this: if the scanner gathered the information about a person's body, transmitted that data to the receiver and then failed to destroy that person, the receiver would still build another "them" at that location. Barring some highly improbable scenario, such as quantum entanglement somehow applying to consciousness, the person standing on the scanner would have no connection to the person at the receiver, and vice versa. They couldn't see or experience anything from their distant counterpart. There would simply be two of them in existence, in different places.

Star Trek got around this sticking point by saying that the person's "matter stream" is transmitted to the receiver (or to the surface of a planet) and reconstructed there. In that case, there might not be any issue, but in essence you're still destroying the person and rebuilding them, so it may still represent the same problem. The system becomes a way to clone at a distance, while killing the original person in the process.

So, when it comes down to it, this would be a great way to move inanimate objects. It would make moving far simpler. It would even be great for producing products, like the Star Trek replicators. For moving living creatures, though, the system runs up against a moral and ethical line that we probably shouldn't cross.

Upload Yourself, Live Forever?

Have you heard about the 2045 Initiative? The project aims to turn humans from biological to technological entities over the next 30 years.


Credit: 2045 Initiative.

First, sometime before the year 2020, we would have robotic "avatars" controlled via virtual reality. Then, by 2025, these avatars would be sophisticated enough that we could transplant our human brains into them to extend our lives. By 2030-2035, the avatars would have artificial brains that could contain human thoughts and memories, and then by 2045, we could cast off the robot avatars entirely to experience the universe via holograms.

Sounds great, right? Human immortality within the next three decades.

There's a problem with this, though, which again involves perspective. There's no such thing as "transferring" our memories and personality to a computer. The best we could do is make a copy, which would create an artificial version of us. We would still be living in our biological bodies after the copy was made, and we would eventually grow old and die. The copy would live on, but unfortunately, it still wouldn't be us.

The people involved with the project apparently understand this, as their emphasis is on transplanting the brain, or on transferring memories only at the time of death, so as to get around the sticky moral, ethical, and potentially legal problems of copying someone who is still alive.

So, while this is certainly a great idea up to the 2020-2025 stage (human brains in robot avatars), proceeding beyond that really isn't what it's hyped up to be.

Artificial Intelligence (AI)

Science fiction writers have been warning us about this for years. HAL 9000, the Cylons, SkyNet and the Terminators, the AIs of The Matrix, VIKI in I, Robot, the Borg, Ultron, GLaDOS and Wheatley, the Reapers of Mass Effect. The list goes on and on. Even two of the most brilliant people alive today - Elon Musk and Stephen Hawking - are pretty set against it, with Musk likening its development to "summoning the demon" and Hawking warning that it could "spell the end of the human race."

Artificial intelligence could usher in a new age of advancement and enlightenment. Computers capable of thinking like humans, and ultimately beyond our capabilities, may develop new materials and new technologies that could end up advancing human society by leaps and bounds, potentially turning us into an interstellar civilization. However, there's a running theme in science fiction where AIs and humanity eventually come into conflict with one another, and it seems like it's usually our fault.

Developing an advanced intelligent computer seems to be an inevitability for us, and if some of the more interesting ideas about intelligence turn out to be correct, one may even develop without our direct influence.

If that can happen in such a way that both the AI and humanity understand that the other party deserves the right to exist and the right to be free, and if we can continue on as equal partners into the future, it may be one of the best things to come about through technology.

If that can't happen, and many science fiction stories hinge on the premise that it can't, it would probably be better to leave advanced, sentient AIs to those stories rather than bring them into reality.

Sources: Big Think/Michio Kaku | 2045 Initiative | Science Fiction/Human Nature
