It took millions of years for the first bot to launch but less than a century for them to outnumber their human creators. Join us on a journey through recent history to discover their humble beginnings and the power they exert today. There are hundreds of points along the bot timeline worth mentioning, but we’ve highlighted a few across each decade to help orient you to their evolution.
Before the internet was available, bots ran on local machines or intranets, automating tasks like message filtering or simulating conversation as rudimentary chatbots.
ELIZA, released in the mid-1960s, was one of the first notable chatbots, simulating conversation with early natural language processing techniques. Its primary purpose was not to understand input or provide meaningful responses, but to engage users in conversation through simple pattern matching. ELIZA parsed user input, transformed it into prompts or questions using predefined patterns and templates, and then reflected those prompts back to the user, creating the illusion of a conversation.
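To illustrate the idea, the pattern-match-and-reflect loop can be sketched in a few lines of Python. This is not ELIZA's actual script (the original used its own rule language and a much larger rule set); the rules and reflections below are illustrative assumptions:

```python
import re

# Each rule pairs a regex with a response template; the matched
# fragment is reflected back to the user as a question.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]

# Swap first and second person so the echo reads like a reply.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."  # default prompt when nothing matches
```

Given "I need my mother", the first rule fires and the bot replies "Why do you need your mother?", no understanding required, which is exactly the illusion ELIZA relied on.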
USENET, an early online discussion system, debuted in the late 1970s. Although it did not employ bots and most of its services relied on human operators, USENET administrators developed scripts to automate tasks such as filtering out spam and unwanted content, managing newsgroups, and performing maintenance. Task automation like this provided the use cases that today's bots build upon.
Malicious bots had not yet been deployed at this stage because the internet had yet to launch (and nobody wanted to attack themselves).
On January 1, 1983, the Internet launched. As it took shape, so too did the earliest internet bots.
Internet Relay Chat (IRC) bots appeared soon after IRC itself launched in 1988. They automated various functions within IRC channels, such as managing user lists, performing searches, and providing services like weather updates or game scores.
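At its core, an IRC bot is a read-parse-reply loop over IRC's line-based text protocol. A minimal sketch of the parsing step in Python, where the channel and `!hello` command are illustrative assumptions and a real bot would wrap this logic around a TCP socket:

```python
def handle_irc_line(raw: str):
    """Return the reply a minimal IRC bot would send for one raw
    protocol line, or None if no reply is needed (sketch only)."""
    # Keep-alive: servers send "PING :token" and expect "PONG :token" back,
    # or the client is disconnected.
    if raw.startswith("PING"):
        return "PONG" + raw[len("PING"):]
    # Channel messages look like ":nick!user@host PRIVMSG #chan :text".
    prefix, _, message = raw.partition(" :")
    if " PRIVMSG " in prefix and message.strip() == "!hello":
        channel = prefix.split(" PRIVMSG ")[1].split()[0]
        return f"PRIVMSG {channel} :Hello!"
    return None  # everything else is ignored by this sketch
```

Everything an early IRC bot did, from user-list management to weather lookups, was a variation on this pattern: recognize a line, compute something, write a protocol line back.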
Malicious bots had yet to appear in this period, but the internet saw its first examples of malicious self-propagating software, such as the Morris Worm (1988).
The emergence of Web 1.0 drove the development of web crawlers to index the influx of uploaded content, along with increasingly sophisticated chatbots that could now be served over the internet. Additionally, the first botnets foreshadowed the internet’s turbulent future with malicious bots.
WebCrawler, launched in April 1994, was one of the first search engines to use bots for web page indexing. It quickly gained popularity as one of the first engines to provide full-text indexing and searching for the web, using a crawler bot to collect information from web pages and build an index for search queries. WebCrawler played a significant role in the early development of web search, setting the stage for the engines that followed.
The once-popular chatbot ALICE launched in 1995, showcasing advances in simulating human-like conversation. Three decades after predecessors like ELIZA, ALICE (also known as Alicebot or the AIML (Artificial Intelligence Markup Language) bot) covered numerous topics and conversation scenarios. It matched user inputs against an extensive database of AIML patterns and answered with predefined statements or templates. ALICE’s capabilities were recognized in 2000 and 2001, when it won the Loebner Prize, an annual competition for chatbot intelligence.
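AIML expresses those conversation rules as XML "categories", each pairing an input pattern with a response template. A minimal illustrative example (these categories are invented for illustration, not drawn from ALICE's actual knowledge base):

```xml
<aiml version="1.0.1">
  <!-- One category pairs an input pattern with a response template. -->
  <category>
    <pattern>MY NAME IS *</pattern>
    <!-- <star/> substitutes whatever matched the wildcard. -->
    <template>Nice to meet you, <star/>!</template>
  </category>
  <category>
    <pattern>WHAT ARE YOU</pattern>
    <template>I am a chatbot written in AIML.</template>
  </category>
</aiml>
```

Scale this up to tens of thousands of categories and you have, in essence, how ALICE held a conversation.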
Googlebot, Google’s web crawler bot, traces back to 1996, when it began as part of BackRub, the research project that became Google, and it has significantly impacted web search and indexing ever since. Googlebot collects data from web pages, analyzes the content, and sends it back to Google’s servers, where it is processed and indexed so Google’s search algorithm can rank results for users. Over the years, Googlebot has evolved to handle the ever-expanding web and increasingly complex web technologies, respecting website owners’ instructions through mechanisms like robots.txt files and adhering to web standards.
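A robots.txt file is simply a plain-text file served at a site’s root that groups crawl directives under user-agent names. A short example (the paths and domain here are illustrative):

```text
# Served at https://example.com/robots.txt
User-agent: Googlebot
Disallow: /drafts/        # ask Googlebot not to crawl this directory

User-agent: *             # rules for all other crawlers
Disallow: /private/

Sitemap: https://example.com/sitemap.xml
```

Compliance is voluntary: well-behaved crawlers like Googlebot honor these rules, which is one of the clearest lines between good bots and bad ones.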
Created in 1999, PrettyPark and Sub7 were among the first recorded botnet malware. PrettyPark was a worm that spread through email attachments and stole information from the host computer, such as instant messaging login names, passwords, and telephone numbers. Sub7 was a trojan that stole information by keylogging infected computers; it could also capture audio and video if the host had a microphone or camera.
Chatbots emerged to serve the boom of instant messaging usage on the web, and botmasters of all ages launched volumetric bot attacks at a scale never seen before.
ActiveBuddy (later acquired by Microsoft) launched SmarterChild in 2001. SmarterChild was an AI-powered chatbot for AOL Instant Messenger (AIM) that later expanded to other instant messaging products like MSN Messenger and Yahoo Messenger. Users could engage the bot in real-time conversation, asking questions, requesting information, or even playing games. The bot used natural language processing (NLP) techniques to understand user inputs and generate appropriate responses.
In 2000, Mafiaboy, a 15-year-old hacker, brought down CNN, Dell, E-Trade, eBay, and Yahoo! (the largest search engine at the time) by launching DDoS attacks from a botnet of compromised college computer networks. The FBI and Canadian police later caught the hacker, who was charged with 66 counts of mischief for attacks that caused an estimated $1.1 billion in damages.
In 2007, the Storm botnet rose to prominence. Commanding one of the largest bot networks in history, approximately 2 million computers, Storm’s spambots were estimated at their peak to be responsible for nearly 20% of all spam on the internet.
The development and integration of AI technologies allowed more sophisticated conversational bots to emerge on social media platforms like Facebook (now Meta), while bot management tools hit the market to combat attacks on organizations from botnets like Mirai and 3ve.
Apple introduced Siri, a conversational AI assistant, in 2011, bringing voice-activated virtual assistants into the mainstream. Siri’s use of natural language processing (NLP) and machine learning enables it to understand and respond to user queries and commands for tasks like information retrieval, navigation, multimedia control, and more.
In April 2016, Facebook (now Meta) opened its Messenger platform to developers, enabling them to build chatbots and integrate them into the messaging experience. The tools, resources, and APIs gave way to new chatbot features like quick replies, persistent menus, reviews, and the ability to handle payments, among others. The platform also marked a significant step in expanding the capabilities of Messenger beyond simple person-to-person communication.
In 2012, the Carna botnet performed what’s known as the “Internet Census of 2012”. While we usually see botnets only as vehicles for malicious bot attacks, this is an instance of a botnet operating illegally yet providing legitimate benefits to society. Like your favorite antihero, the Carna botnet broke the law for the greater good, in this case to generate public insights on the general state of the internet. The census found that hundreds of thousands of routers lacked even basic security and that, of the roughly 4.3 billion possible IPv4 addresses, only 1.3 billion showed any signs of usage at the time. The data uncovered by the Carna botnet led to increased security awareness, improved security practices, and likely vulnerability patching where possible.
**Carna’s 2012 visualization of IPv4 usage layered against the day and night cycle**
In late 2016, the Mirai botnet rose to infamy. Known for some of the most significant distributed denial of service (DDoS) attacks on record, Mirai used volumetric attacks exceeding one terabit per second to bring down some of the most powerful internet infrastructure and service providers, including OVH and Dyn. For nearly a day, companies that relied on those services, including Reddit, Spotify, and GitHub, were inaccessible while practitioners combated the attacks.
For nearly half of this decade, the 3ve click-fraud botnet imitated clicks on websites across the globe. Estimated to have brought its operators around $30 million in revenue over its 2013-2018 lifespan, the botnet was dismantled through the joint efforts of the FBI, Google, Amazon, ESET, Adobe, and Malwarebytes, and eight perpetrators were charged in 13 criminal counts.
Bots have become more prevalent than ever before. Conversational chatbots went viral with the debut of GPT-3, and several new social media bot platforms have been created to help users develop bots that customize their experiences. Malicious bot attacks are more prominent than ever and show no signs of stopping.
OpenAI unveiled GPT-3 in 2020, a significant milestone in conversational AI language models for its ability to generate human-like text. For many non-technical audiences, its widespread virality legitimized the capabilities of conversational bots: not only can it answer questions in a human-like manner, it can also navigate ethical concerns in its responses, among much more.
Discord and Twitter introduced developer platforms for creating bots, opening up opportunities for developers to build innovative bots that automate interactions and enhance user experiences. Bots have become an integral part of these platforms, providing functionality beyond what is offered by default and enabling users to tailor experiences to their preferences.
In June 2022, Google announced it had mitigated the biggest DDoS attack ever reported. The attackers used over 5,000 IPs from 132 countries to sustain the attack for over an hour. Peaking at 46 million requests per second, the attack was the equivalent of “receiving all the daily requests to Wikipedia (one of the ten most trafficked sites in the world) in just ten seconds”, Google’s team noted.
Over the last half-century, bots have matured from concept to actualized and now optimized pieces of software. While their technological advancements brought welcome additions to our lives, they’ve also enabled some of the most dangerous cyber threats. Time will tell how bots develop, but we can expect that artificial intelligence and machine learning will act as a launching pad for new variants. If you need help managing bots from any decade, Fastly can help.