
The Terminator won't kill you if you become one! The challenges artificial intelligence and Big Data leave us

Even compared with 2000, we can call ourselves fortunate: we already live in a future the average person could not have dreamed of 20 years ago. We have simply grown used to all the changes creeping into our lives and no longer notice them. Meanwhile, experts in analytics, Big Data and artificial intelligence have taken up their pens and are ready to reshape our way of life in the coming decades. Should we fear AI, and what should we do with "big data"? We fantasize and figure it out together with Huawei.

Autonomous robots and a model of the brain

Even among scientists directly engaged in AI research, there is no consensus on when we will create machines whose abilities match a human's. Some are confident a breakthrough will come in the next 20 years; others note that a problem this complex may not be solved even in 1,000 years.

When electronic computers appeared in the 1950s, the public and even scientists themselves were amazed at their capabilities and the prospects opening up in the near future. As early as the 1960s, AI pioneers claimed that within 20 years humanity would create robots capable of any work a person can do. Oh, how wrong they were.

Since then, most AI developments that flashed through the media have resembled circus performers trained in a single trick: playing chess, for example. There was no question of any test of intelligence in their case. These were programs with no competence outside chess. Even IBM's Deep Blue, which beat Garry Kasparov and was a marvel of engineering, was an extremely limited device, about as close to universal AI as a worm is to flying to the moon.

Over the years, rhetoric about AI has far outstripped our ability to realize the fantastic ideas. And all because for a long time we believed that digital computers and the human brain have a lot in common. But the crown of nature is far more complex and intricate than any computer created by man.

Since the early 2000s, IBM has been sponsoring the Blue Brain project, whose task is to create a working model of the human brain. But in 10 years it only managed to approach an artificial counterpart of the rat brain, its neurons and the millions of connections between them, and that required the computational power of a supercomputer. Given the growing capacity of supercomputers and data storage, one would hope that by the end of the 2020s or the beginning of the 2030s we will nevertheless read about a new achievement in modeling the human brain.

Meanwhile, all existing robots can be roughly divided into two groups: those an operator controls remotely, and those that demonstrate at least some autonomy within limited functionality.

ASIMO is the fruit of 20 years of work by scientists at Honda. In some respects it may even be cooler than you: at the very least, the robot dances well. ASIMO can also recognize gestures and respond to them (it may wave back at you), track moving objects, execute voice commands, and reproduce fragments of speech in different languages. But all these abilities are strictly programmed by a team of scientists. It is a set of small programs that make ASIMO an imitation, easily mistaken for AI if you don't look too closely.

Of course, over time the robot will gain additional functionality, learn to reliably recognize objects that fall into its camera lenses and distinguish a person from a chair on the fly. But none of this endows the device with thinking or conscious choice. We are still damn far from real AI.

While some attempt to create full-size robots as independent units, other scientists look a little further into the future and develop modular machines. The concept is that individual cube-shaped modules will have their own functions, their own intelligence, and the ability to cooperate with their brethren, assembling into more complex and capable designs. Something like self-organizing Lego sets that can flexibly adapt to changing working conditions.

Let’s be friends with AI

Looking further ahead, one can fantasize that the evolution of highly intelligent robots will ultimately lead to their acquiring some kind of consciousness. At that moment they will become the new crown of the evolution of mind on Earth, one we meddled in with our own hands. Better-adapted creatures will push us into petting zoos: our "children" will take us off to a kind of nursing home and print copies of themselves, flooding the planet (and perhaps the Galaxy) in an endless quest for perfection. But that future will happen without us.

Of course, we are not so stupid as not to provide a switch that can stop this whole robotic bacchanalia of superintelligence. Certainly, the more intelligent and independent AI becomes, the more fuses we will build into it. And at the slightest suspicion that it has attained consciousness, people will not hesitate to switch off the too-clever representatives of the new evolutionary branch.

Some researchers in the field suggest that the entire evolution of artificial intelligence should be steered by our own hands toward creating a "friendly AI." The term was coined by Eliezer Yudkowsky. He suggests not imposing Asimov's laws on robots but designing them so that helping humans, rather than a desire to kill them, comes naturally. The harsh reality, however, is that far more money is invested in machines for waging war.

The most logical and probable scenario seems to be one in which we do not sleep through the moment but follow the path of merging with our creations. Why create someone smarter than us if the same technologies can turn you into a superhuman? Another question is how organically computer consciousness can be woven into our natural one. Neuroses, schizophrenia and split personalities? Quite possible; we are only fantasizing.

Who would refuse a personal assistant (far smarter than Siri) and an unlimited store of information in their head? Neurointerfaces that would improve the "hardware" performance of our brain are being developed around the world. Relatively recently, a group of American scientists reported a neuroimplant that improves memory by 15%. The device works as a stimulator, sending electrical impulses to the brain to help it when it tries to store new data. Another hundred years, and we will be able to load advanced concierges into our brains to supply us with information, help with decisions and relieve us of everyday routine.

“Big Data” is a big responsibility

In the meantime, one can dream about how today's voice assistants will become more capable: better at remembering and understanding the context of communication with the user, and at executing commands given not only in a formal, stylized language but in ordinary conversational speech.

Such systems will likely be in demand not only in everyday life but also in diagnostic medicine, at the first stage of a patient's visit to the doctor. They will be able to conduct an interview, find out where and how a person hurts and when the pain began, make a preliminary diagnosis and refer the patient to a specialized doctor. Helping with this will be the enormous baggage of experience and statistics left by experienced doctors around the world: a large body of data collected and analyzed within another phenomenon of the digital age, Big Data.

Every year, thousands of medical papers are published and hundreds of millions of diagnoses are made worldwide. Neural networks can be "fed" these large volumes of data (blood tests, ultrasound scans, X-ray images), in which they can find patterns in the development of pathologies, raising early diagnosis to a new level. General practitioners will be able to make far more accurate diagnoses when a neural network, given the symptoms and test results, can suggest the most likely options.
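As a rough illustration of the idea (not of any system mentioned above), here is a minimal sketch in Python: a logistic-regression "diagnostic aid" trained on synthetic lab results. All feature names, thresholds and data are invented for the example.

```python
import math
import random

# Invented toy data: two synthetic indicators per "patient" and a
# synthetic ground truth (pathology when the combined signal is high).
random.seed(0)

def make_patient():
    marker = random.uniform(0.0, 1.0)  # synthetic blood-marker level
    score = random.uniform(0.0, 1.0)   # synthetic imaging score
    label = 1 if marker + score > 1.0 else 0
    return (marker, score), label

data = [make_patient() for _ in range(500)]

w = [0.0, 0.0]  # model weights
b = 0.0         # bias
lr = 0.5        # learning rate

def predict_prob(x):
    """Estimated probability of pathology for one patient."""
    z = w[0] * x[0] + w[1] * x[1] + b
    z = max(-30.0, min(30.0, z))  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-z))

# plain stochastic gradient descent on the log-loss
for _ in range(300):
    for x, y in data:
        err = predict_prob(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

correct = sum((predict_prob(x) > 0.5) == bool(y) for x, y in data)
accuracy = correct / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

A real diagnostic model would be trained on genuine anonymized records and validated on held-out data; this sketch only shows the mechanics of learning a pattern from labeled examples.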

But flirting with "big data" also has a flip side. Daria Parkhomenko, a senior lecturer at the department of engineering psychology and ergonomics at BSUIR, told us about it.

– Predicting a person's life expectancy from certain indicators is already possible today. This idea can be developed to the point where employers will soon take a "digital footprint" or medical indicators into account. That could become the deciding factor when choosing between two specialists, and not in favor of the one with a chronic illness, for example.

In reality, though, the technology cannot predict with 100% certainty who will show the best result over the long haul. Besides, there are already frightening stories about a future where everything is governed by a social rating system that determines a citizen's standing by assessing their behavior: watching them through video cameras, tracking how they spend money, pay bills and interact with other people. Such a system is already being tested in China. Behave yourself and you get a decent education, a job and decent living conditions; misbehave and you get the opposite.

And what if the algorithm is imperfect and fails? What is the price of a mistake in determining a particular person's social rating? Meanwhile, a future where medical insurance is issued, loans are granted and admission to prestigious universities is decided on the basis of social-network profiles and rating systems is creeping ever closer, and it poses a number of serious ethical challenges to humanity.

From the cradle to the prison bunk

In addition to forecasting exchange rates, stock prices and fluctuations in property prices, banks will be able to apply "big data" analysis, and the programs built on it, to sorting people by reliability. Yes, today we have credit histories, which should never be allowed to deteriorate. But over time, analysis of social-network profiles may be added to them.

– Indeed, the banking sector today actively analyzes "big data", namely the user's "digital footprint": their credit history and the publicly available information about them on the Internet and social networks. This is done to mitigate risk. By analyzing a potential client's behavior, one can determine whether they will be a trustworthy borrower. On the one hand this is good for the bank; on the other, the person becomes less private.
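To make the idea of footprint-based scoring concrete, here is a purely hypothetical sketch. The feature names and weights are invented for the example; real bank models are trained on historical repayment outcomes, not hand-set rules.

```python
# Hypothetical risk scoring from a "digital footprint".
# All feature names and weights below are invented for illustration.

def risk_score(profile):
    """Return a score in [0, 1]; higher means higher estimated risk."""
    score = 0.0
    # credit history: missed payments dominate the score
    score += 0.5 * min(profile.get("missed_payments", 0), 10) / 10
    # income stability, as a bank might infer it
    score += 0.3 * (1.0 if profile.get("unstable_income") else 0.0)
    # sparse public footprint treated as mild extra risk
    score += 0.2 * (1.0 - profile.get("profile_completeness", 0.0))
    return min(score, 1.0)

applicant = {"missed_payments": 2, "unstable_income": False,
             "profile_completeness": 0.8}
print(round(risk_score(applicant), 2))  # prints 0.14
```

The privacy trade-off the quote describes is visible even in this toy: the more of the profile the scorer is allowed to see, the finer its discrimination between applicants.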

And what if we go further and recall "thoughtcrime", or the pre-crime department from the film Minority Report, in which Tom Cruise's character solved murders that had not yet been committed?

– As they say, if you want to know a person's future behavior, study their past. It is on historical data that neural networks build their predictions of a system's or a person's future behavior. I tend to doubt the 100% accuracy of AI predictions precisely because self-learning algorithms are created by people. Within the philosophy of technology there is a thesis that, in creating tools and technologies, a person creates a continuation of himself. I wonder: if to err is human, could it happen that in creating an algorithm we build the probability of error into it and accidentally misread the future? For neural networks to work as flawlessly as in the film, a lot of time must pass and a unique algorithm must be written.
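The quote's premise, predicting future behavior from historical data, can be sketched with the simplest possible model: a first-order Markov chain over an action log. The log and action names here are entirely made up for the illustration.

```python
from collections import Counter, defaultdict

# Invented action log for one "person"
history = ["work", "shop", "work", "work", "shop", "travel",
           "work", "shop", "work", "work"]

# Count how often each action follows each other action
transitions = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    transitions[prev][nxt] += 1

def predict_next(action):
    """Most frequent action observed to follow `action` in the log."""
    if action not in transitions:
        return None  # no history, no prediction: the model only extrapolates
    return transitions[action].most_common(1)[0][0]

print(predict_next("work"))  # prints "shop" for this toy log
```

Note how the model inherits its maker's framing, echoing the quote: it can only ever repeat patterns present in the recorded past, and an unrepresentative log yields a confidently wrong forecast.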

But who holds the most information about us? One might think our governments do. In fact, Internet giants like Google, Apple and Microsoft know far more about us. These guys may be aware even of what happens in our bed (by indirect evidence, of course). What will happen if such companies, with their unique technologies, become able to predict the behavior of people around the world and to manipulate them, or to turn this knowledge to their own benefit?

– This scenario is quite likely. Digital inequality already exists in society. Google and Yandex alike have many technologies aimed at analysis and forecasting. It is only logical that the more resources you have, the more sophisticated your technology and the more accurate your forecasts.

In our society, benefit is often one of the most important motivations, and more often than not it is material gain. The only question is how it will be achieved and, in the worst case, at what cost.

We can say with confidence that an intellectual-digital stratum is forming today. The digital divide is evident even at the ATM: even now you can sometimes find people nearby who do not know how to use one, are afraid of making a mistake and ask bank employees for help. What, then, of far more complex technologies?

Human society is hierarchical. "Who owns the information owns the world" is a phrase from the last millennium. History keeps moving in a spiral, and today it shows us the formation of a possible dictatorship of technology, especially if ethics is removed from the equation.

The main thing is to remain human in this hell

The ethical issues of introducing new technologies are something we should negotiate on this shore, before we sail out of the digital age into the era of machines smarter than humans.

– In creating robots, algorithms and technologies, a person should first of all think about how to make them useful to other people. At the same time, it is important to sort out the questions related to personal data, both at the level of legislation and at the personal level, by raising our awareness of "digital hygiene."

Most people's awareness of the "digital footprint" we leave is still very low today. However, there are positive trends. For example, a law on the protection of personal data is now being drafted; it will regulate the handling of individuals' personal data. It is important that the specialists who create and develop technologies better understand their influence on people and the consequences that may arise.

In my opinion, people engaged in such serious matters today need good training in the humanities, at the very least systematic knowledge of philosophy, psychology and sociology. After all, without knowing how to set tasks for AI correctly, and by creating powerful tools without love and care for people and their safety, one can do great harm even with the best intentions.
