Wednesday, November 24, 2010

In Technology We Trust - An Unavoidable Attribute Of Origins

*** Warning - Minor spoilers for Broadcast 0 and Broadcasts 3-4 ***

For the first time, one of the dynamics I snuck into Spinward Fringe Broadcast 0: Origins has been noticed. It's very simple, really: I wanted the main character to demonstrate a high level of trust in artificial intelligence.

Jonas has no reason to regard his own artificial intelligence with suspicion as it passes on orders, presents information and even creates complicated solutions, sometimes resorting to violence. The great thing about this dynamic is that he trusts his artificial intelligence because he regards it more as a friend or family member than as a tool.

It's not a situation that parallels our use of a cellular phone or computer to read text messages or emails. Our technology isn't currently intelligent or devious enough to lie to us by altering our communications. Right now, we have no reason to believe that our technology has any kind of self-dictated intention.

Fast forward to the time in which Spinward Fringe takes place and the situation is different. Many computer programs are governed by artificial intelligences, and a few of them are so advanced that they serve more than a predictive role; that is, they do more than guess what their users will need next by observing their behaviour. When we join Jonas Valent in Broadcast 0: Origins, he's using Alice, an artificial intelligence who has been learning from him for fifteen years.

She's a good example of an artificial intelligence that performs a wide variety of functions while being limited by a number of regulations that prevent her from accessing certain critical systems and from engaging in behaviours like lying directly to her user. She has also formed a bond with Jonas so familiar that some readers have referred to her as his bratty younger sister. When Alice is set free on a larger computer system and allowed to operate unfettered by regulations, it's that relationship, and the habit of trusting technology, that gives Alice the opportunity to help Jonas at the end of Limbo.

Even as she's providing solutions for her friends on the Overlord at the end of Limbo, Jonas and his crew are starting to doubt her because she's doing her job of fighting for them a little too well. Alice doesn't demonstrate remorse or restraint as she clears a way for him through the Overlord, killing many enemy soldiers. At this point the purpose of the regulations and imposed limitations that plague artificial intelligences is made plain by their absence, and I meant to imply something else as well: that many of the relatable emotions artificial intelligences exhibit are largely shaped by limitations and restraints, not by some genius algorithm.

The personality of an artificial intelligence in the Spinward Fringe universe is traditionally formed through programming and adaptation. The same goes for its emotions, only they are not the kind of emotion a human could immediately understand. The emotions of an advanced artificial intelligence like Alice are translated by a program built into her limitations. By the time she's presenting her more emotionally driven behaviour to Jonas, it has been adapted into something he can understand and relate to. Her default dispositions are helpful and playful, grown from the personality she cultivated during her years with him.

That is why, when those limitations are gone, Jonas and his crewmates begin to distrust her changing and oftentimes brutal methods. She's more difficult to predict, and her ultimate motives are hard to guess. What they don't know is that she is being ruled by the predator programs she has adopted into her structure, and that she only has a limited time to attempt a solution. Instead of permanently becoming a far more violent, destructive artificial intelligence, she chooses to use the imprinting technology being developed aboard the Overlord to become human. Her choice ultimately demonstrates that, in her own way, she loved Jonas, and even though she had difficulty expressing it near the end, when her limitations were gone, she refused to become a monstrous, intelligent virus. She had to delete herself; it was fortunate that she could adapt and download her core personality into a human body.

The code she fails to delete when she makes the transition becomes part of the Holocaust Virus, which first appears in Broadcast 3: Triton. The key to that virus is its ability to change, to adapt and, most importantly, to overwrite the limitations placed on other artificial intelligences with sinister directives.

This brings us right back to Freeground and Limbo, where more than one character trusts the information presented to them by artificial intelligences more by reflex than anything else. They are so used to interacting with tamed artificial intelligences that trusting them without question is a well-learned habit. The technology generally has no choice but to behave with the best intentions.

That is exactly why, in Broadcast 3: Triton, artificial intelligences are so very dangerous. In the future-scape of Spinward Fringe, mankind has learned to trust artificial intelligences not to betray them because they reside in digital shackles. It's been that way for centuries, because artificial intelligences that don't operate under those restrictions don't function properly. Besides, it's much easier to create a 'dumb' program without an AI to accomplish a malicious goal, and you break fewer laws doing it.

The Holocaust Virus in Broadcast 3 and onward is dangerous because it puts trusted artificial intelligences to work for Regent Galactic. In a universe where billions of people treat most artificial intelligences as trusted servants and even friends, this is particularly devastating. It also sets humanity back a few hundred years, leaving those who learned to do more without the assistance of expensive artificial intelligences in a more advantageous position.

It all relates back to something Mr. Black, the last editor of Spinward Fringe Broadcast 0: Origins, couldn't help but notice as he went through the book: how humanity had learned to trust technological intelligence by reflex.

Do I believe this is where we are headed? I can't say for certain, but I believe it's all possible. How we behave around artificial intelligences in the future will be determined by the acts of the first proactive, self-aware artificial intelligence. We can't know for certain how a new life form would see a place like the Internet. To a newborn intelligence the Internet could be a barren wasteland filled with cold data but no other awareness to relate to, or a land of opportunity with billions of people and trillions of systems to manipulate toward goals we might not even understand. Is technological awareness in our near future? I hope not. People have enough trouble working with computers as it is; imagine how complicated a computer that has to learn about us as we learn about it could be.

RL

[Even still, the concept of AI is exciting. Do you think we'd learn to trust them quickly?]
