When most people hear the term "robot," they picture sentient (albeit mechanical) beings. That perception perpetuates the belief that robots will soon take their place among humans as equals. The blending of robotics and artificial intelligence (AI) has produced some interesting inventions, but none of them are cognitive, nor do they have a sense of self. They are mechanical devices, some of which people are trying to make appear human; indeed, an entire body of study is dedicated to making robots look more human.
However, the discussion here is about software robots, or simply bots. Bots live in the cloud or on computers their owners believe to be private, and they are leveraged on the Internet for a wide range of nefarious activities. Originally, bots were used to distort the reviews of products sold online: a bot would give a product 5 stars (or 1 star) and then write a fake review. It is now recognized that many product reviews are compromised and reflect little of the genuine consumer perspective.
Although product reviews can hurt (or improve) sales, bots today are leveraged on the Internet to perform more sophisticated tasks. Bots are now complemented with AI techniques, which allows them to proactively identify attack vectors and strike without human intervention. That use case could be an article unto itself.
The power of this technology allows a single individual to create hundreds, sometimes thousands, of bots. Hundreds of bots performing the same task can introduce bias into a data set. For example, consider product reviews on storefront websites. These are attacked so heavily in some parts of the world that the ability to leave a review has been removed altogether. Historically, there has been a belief that large data sets cannot be compromised because deep learning algorithms train on millions of records. However, 10,000 bots operated by 10 people can distort a data set very easily. Compromising crowdsourced data logically leads to the distortion of big data applications such as AI techniques (e.g., machine learning and deep learning algorithms). If a hacker is targeting a deep learning application, they can use bots to insert data patterns into the training set that later allow the hacker to evade detection.
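To make the scale concrete, consider a minimal sketch with hypothetical numbers (every figure below is an assumption chosen for illustration, not measured data). Sitewide, 10,000 bot reviews barely move the average, but the same bots aimed at a single product overwhelm its genuine ratings:

```python
# Hypothetical illustration: how coordinated bot ratings skew an aggregate score.
# All figures are assumptions chosen for the example, not measured data.

genuine_reviews = 1_000_000   # organic ratings across a storefront, averaging 4.2 stars
genuine_avg = 4.2
bot_reviews = 10_000          # e.g., 10 operators each running 1,000 bots
bot_rating = 1.0              # coordinated 1-star reviews

poisoned_avg = (genuine_reviews * genuine_avg + bot_reviews * bot_rating) \
               / (genuine_reviews + bot_reviews)
print(f"Sitewide average: {genuine_avg:.2f} -> {poisoned_avg:.2f}")  # 4.20 -> 4.17

# The same bots concentrated on one product with only 500 genuine reviews:
target_avg = (500 * 4.2 + bot_reviews * bot_rating) / (500 + bot_reviews)
print(f"Targeted product:  4.20 -> {target_avg:.2f}")                # 4.20 -> 1.15
```

The lesson is that "millions of records" protects only the global average; any targeted subset of the data remains easy to distort.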
Another disconcerting use of bots is misinformation campaigns. Bots are created and paired with social media and corporate accounts populated with information that makes them appear human. They seemingly have families, addresses, phone numbers, personal interests and friends, the absence of which was historically used to identify bots. Once established, a bot can be used to propagate misinformation. Targeting executives through their social media or enterprise accounts can shape their perceptions and, in turn, the decisions they make for their organizations, seriously affecting the value of the firm and its associated marketplace.
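A minimal, hypothetical heuristic sketch shows why populating a bot's profile defeats that historical approach (the fields, weights, and thresholds below are illustrative assumptions, not a vetted detection model):

```python
# Hypothetical profile-sparsity heuristic of the kind historically used to
# flag bot accounts. A deliberately "dressed" bot sails straight past it.

from dataclasses import dataclass

@dataclass
class Account:
    has_photo: bool
    friend_count: int
    profile_fields_filled: int   # e.g., hometown, employer, interests
    posts_per_day: float
    account_age_days: int

def bot_score(acct: Account) -> float:
    """Return a 0..1 score; higher suggests automation."""
    score = 0.0
    if not acct.has_photo:
        score += 0.25
    if acct.friend_count < 10:
        score += 0.25
    if acct.profile_fields_filled < 3:
        score += 0.20
    if acct.posts_per_day > 50:      # inhuman posting cadence
        score += 0.20
    if acct.account_age_days < 30:   # freshly created account
        score += 0.10
    return min(score, 1.0)

bare_bot = Account(False, 2, 1, 120.0, 7)
dressed_bot = Account(True, 250, 8, 12.0, 400)   # populated to appear human
print(bot_score(bare_bot), bot_score(dressed_bot))   # 1.0 0.0
```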
Bots have also been used to conduct distributed denial-of-service (DDoS) attacks against organizations and their service providers. When existing valid accounts are infected, they are referred to as "zombies." DDoS attacks were among the first uses of zombie computers as bots. Zombie accounts and computers reduce costs for the hacker because the resources already exist and can simply be leveraged.
Cybersecurity professionals are continually confronted with bot attacks on organizations’ reputations and infrastructures. There are several due diligence steps that can be taken to stay vigilant:
- Staff must be properly educated so that they can help protect the organization from bots. The level of sophistication and interactivity a bot can exhibit is unsettling from a cybersecurity perspective.
- It is no longer only large financial institutions that are susceptible to bot attacks; organizations across all sectors are targets. Previously, who got attacked was largely a question of the attacker's available resources. That is no longer the case. All staff must do their part to keep the organization's infrastructure safe and secure. User accounts, both professional and personal, should be reviewed regularly and deleted if no longer used, since unused accounts often serve as attack vectors; bots readily turn once-valid accounts into zombies. (A minimal stale-account sweep is sketched in the first example after this list.)
- A proactive publicly available information (PAI) monitoring program must be implemented for websites where customers or employees can leave reviews of the organization. Negative reviews can depress revenue. Enterprise websites (and the third-party websites they reference) should also be monitored. (One signal such a program might watch is sketched in the second example after this list.)
- If an enterprise is developing big data solutions using PAI or other large data sources, the origins of those data need to be validated. In addition, the sources used to collect the data need to be as diverse as possible to minimize bias.
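For the account review item above, a minimal sketch of a stale-account sweep follows. The record format, fixed dates, and 90-day threshold are assumptions for illustration; a real deployment would query the organization's identity provider rather than a hard-coded list:

```python
# Hypothetical stale-account sweep: flag accounts unused for a set period so
# they can be reviewed and deleted before they become bot or zombie fodder.

from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)   # assumed policy threshold

accounts = [
    {"user": "jsmith",  "last_login": datetime(2024, 1, 5)},
    {"user": "svc_old", "last_login": datetime(2023, 2, 11)},
    {"user": "mlee",    "last_login": datetime(2024, 6, 30)},
]

now = datetime(2024, 7, 1)  # fixed "today" so the example is reproducible

for acct in accounts:
    idle = now - acct["last_login"]
    if idle > STALE_AFTER:
        print(f"REVIEW: {acct['user']} idle {idle.days} days -- candidate for deletion")
```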
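And for the PAI monitoring item, one signal such a program might watch is a burst of identically scored reviews landing in a short window, a common signature of a coordinated bot campaign. The feed format and thresholds below are assumptions for illustration:

```python
# Hypothetical review-burst detector for a PAI monitoring program: flag any
# window where a large share of reviews carry the exact same star rating.

from collections import Counter

def flag_bursts(reviews, window_hours=24, min_count=50, min_share=0.8):
    """reviews: iterable of (timestamp_hour, star_rating) tuples."""
    by_window = {}
    for hour, stars in reviews:
        by_window.setdefault(hour // window_hours, []).append(stars)
    alerts = []
    for win, ratings in sorted(by_window.items()):
        stars, count = Counter(ratings).most_common(1)[0]
        if count >= min_count and count / len(ratings) >= min_share:
            alerts.append((win, stars, count))
    return alerts

# 60 one-star reviews landing within one day, among 70 total -> flagged
feed = [(5, 1)] * 60 + [(5, 5)] * 10 + [(40, 4)] * 8
for win, stars, count in flag_bursts(feed):
    print(f"ALERT: day {win}: {count} identical {stars}-star reviews")
```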
Although robots are also used for good, that perspective can be lost amid the abundance of nefarious activities to which robots (and bots) are being applied. Writer and professor of biochemistry Isaac Asimov's first law of robotics states: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." The second and third laws build on the first.
I can assure you that Asimov's vision is not being heeded. Although governments are using robots to fire weapons and control many aspects of war, such applications are beyond the scope of this discussion. It is up to cybersecurity professionals to keep staff informed through security training and to protect infrastructures to mitigate the power of bots.
Bruce R. Wilkins, CISA, CRISC, CISM, CGEIT, CISSP
Is an independent consultant working in the science and technology and advanced concepts domains. He delivers secure innovation across an ever-changing technological landscape, ranging from medical and communications systems to advanced military applications requiring a wide range of compliance and security engineering solutions.