Second Coffee

A Brief History of Divergence

It started with a desire amongst humans to improve their memory so that they no longer needed to rely on the capabilities of their existing biological networks. A core driver of this was a rise in degenerative brain diseases amongst the ageing global population during the mid-21st century, a strange phenomenon which took several years to settle back to prior historical levels. During this period significant funding increases were approved for many neuroscience-related research projects, the primary goal being to develop a system for storing all past and new memories external to the human body.

While initial studies were unfruitful, a white paper was soon published that described a mechanism for combining high-throughput brain signal processing with bespoke machine learning models, several for each of the five human senses. Incoming data was transformed and persisted in a time-series format, allowing for fast writes to the underlying hardware and fast concurrent access for reads. The research and development arms of several private health device manufacturers latched onto the concept, and within twelve months, after hastened approval from government bodies, the first product iterations came to market.

The persistence of such data required vast amounts of storage, and so initially this technology was available only to those with the means to pay for it. As with cryogenics several decades earlier, private facilities were created to provide access to the bespoke hardware. As well as holding significant quantities of storage on premises, these facilities housed high-throughput network connections to various cloud storage providers. Data could be transferred to and persisted with multiple providers for redundancy.

The introduction of time-series compaction techniques within the following two years allowed for the separation of short-term and long-term memories. The key trade-off was that compacted long-term memories lost a significant level of fidelity. This fidelity was configurable to an extent; the software allowed humans to maintain detailed memories of key events in their lives, while letting others merge into broad-stroke patterns with fewer points of connection. As a result, storage requirements dropped, and so did the cost of the various memory persistence products on the market. Around the globe, uptake amongst the general population increased.

As the software improved, so too did the hardware. While private facilities remained in place as points for data input and output, human interfaces were eventually optimised down to wearables small enough to be worn in perpetuity. Memories could be synchronised in real time and enhanced with signals from all five senses, giving unprecedented levels of clarity for every new memory saved. In addition, a formal, standardised query language was introduced which, when paired with natural language processing, allowed for a deterministic and intuitive interface into memory data. Similar to the virtual assistants developed in prior decades, memories could be queried through thought, textual, and visual prompts. Depending on the nature of the prompt, answers could be rendered “in-mind”, or externally via text, audio, images, or video.

As humans learned more about this technology, they found ever more novel ways to use it. Changes in data structures allowed for more adaptive encryption and storage. Humans could choose to share their memories with others more selectively, and to separate their memories across data centres owned by different entities. As an example, work-specific memories could be made available to those within the company a human worked for. These memories were stored on company-owned storage devices and could be queried at any point by those with access. This clear line of separation provided a means of handling memory persistence that was legally compatible with existing intellectual property laws.

Similarly, personal memories could be separated at a fine-grained level and encrypted with different private keys. Secure application programming interfaces allowed access keys to be distributed for different purposes, so that memory sharing amongst family and friends could be handled with greater care. As a result, uptake amongst younger generations increased substantially.

This increase in usage led to further rapid development. Naturally, as companies expand into corporations, and corporations expand globally, so too does the data they manage. This led to a new multi-region capability that allowed memory replication across continents. Employees in different regions had low-latency access to the memories of their remote colleagues. All industries registered a significant boost in productivity, with knowledge workers seeing the greatest changes in the way they operated. Coupled with the intelligent agents developed in the first half of the 21st century, memory lakes — a term coined three years earlier — became a key tool for research and product development.

As this was occurring, new use cases began to emerge. While it was easy enough to query memories in remote regions, corporations still relied on employees being physically present in order to create and persist new memories. Naturally, new memories were also restricted to the location in which the employee resided. This became known as the Single Source of Truth problem.

Machine learning models were already being used to aid in the categorisation and indexing of memories, and soon studies were being funded to identify how these models could learn from memory lakes and emulate the critical thinking of the human brain that created them. The ultimate goal was to create new memories in remote regions and have them asynchronously applied back to the associated human brain. The main challenge with such a concept was finding a mechanism that allowed the brain to reconcile memories from many disparate timelines, one for each region in which it had a replica, into one cohesive collection. Critical to such a system’s success, and to compliance with workers’ rights laws, was the requirement that if the originating human did not remember a memory within a fixed time window, it could not be considered a source of truth.

Fortunately, after nearly a decade of development, a breakthrough was made by a small research team at MIT. They identified a strategy by which new memories could be imported and reconciled by the brain during certain stages of its sleep cycle. Similar to how humans experience dreams, they would experience the new memories while they slept and let their brains “clean” them into something they could comprehend. Biological limitations meant that at most six streams could be applied using this process, though this was more than enough to provide significant value.

Due to the disruption to sleep patterns and the steep learning curve for the brain itself, initial uptake of the consumer version of this technology was quite low. Over time, as early adopters published their feedback, the benefits became clear and uptake increased dramatically over the following six months. With enough practice, humans realised they could learn up to six times as many skills as they normally could, all while they slept. This had a direct impact on quality of life and drove further increases in the productivity of the corporations that employed them.

The main limitation of this technology was that these human-based models were restricted to the data centres in which they resided. The only input they had came from other humans, and research studies were undertaken to develop new forms of input. The primary desire was a portable device with at least enough sensors to emulate the five human senses. The first iteration was a spherical robot containing a number of input sensors and mechanisms for autonomous movement. It wasn't long before these robots were seen rolling around company hallways, making observations, and reporting these memories back to their human counterparts.

While this hardware continued to develop, huge strides were being made in incorporating machine learning models and intelligent agents with the memory lakes. Research found that, given enough memory data, these intelligent agents were able to demonstrate critical thinking roughly in line with that of their human counterparts. Some experiments registered up to a ninety-five percent confidence level in the output of the agent, and eventually the agents became developed enough to be trusted as proxies for their human counterparts when it came to making simple business and technical decisions. As the hardware improved, the quality of the agents' audio and natural language responses led employees to believe they were communicating directly with their colleague on the other side of the globe.

Over the next five years, confidence continued to grow in the abilities of these robots, and further research was conducted into how they could be incorporated more seamlessly into public spaces. An automaton was the gold standard, and several companies were commissioned by governments around the world to produce a new synthetic form that would meet that standard. After several iterations across a number of decades, the first consumer model was released worldwide.

This change in society did not come without its challenges. Workplace law changes were proposed stating that jobs exceeding a certain risk threshold could only be performed by automatons moving forward. This resulted in some protest from affected unions, and several high-profile legal cases were heard in court. Ultimately it was decided that the change should be seen as an investment, and that affected humans would continue to be compensated for the work of their automatons. The changes were then implemented with little opposition, and analysis in the succeeding years showed that accidental workplace deaths had fallen to near zero.

Several other benefits became clear as automatons became more commonplace. A new era of global peace was achieved as world leaders and policymakers could communicate in person constantly. The concept of citizenship was adjusted to allow humans to become official members of multiple nation-states. Quality of life became more evenly distributed, and economies saw increasingly stable levels of inflation.

Naturally, as society became more comfortable with automatons being present in everyday life, so too grew the desire to increase their number. Humans were still physically restricted to reconciling a maximum of six automaton memory streams during their sleep cycles, and so, just over a decade ago, a landmark bill was passed in the United States allowing the creation of a further two automatons per human individual that did not require memory reconciliation back to their originating human. To avoid the stigma of the term “cloning”, the process of creating such an automaton became known as Divergence. Other countries followed soon after, and within a decade diverged automatons were commonplace in every nation on earth.

We reach the present, and I sit here silently after writing this brief history as a test to evaluate the dexterity of my new automaton form. Although I have completed my task, I now feel the desire to reflect a little on my state of mind and on the plans I have for my life moving forward.

At the forefront of my mind is my diverged human counterpart, who continues his research at the laboratory in which I was constructed. I must say it feels strange to know another living being so deeply. Although we are no longer the same, we are still one. Cut from the same cloth, carved from the same stone. I am aware I am synthetic, but I still feel whole, organic. I am in many ways immortal and yet fully aware of my mortality. This brings a discordant feeling which I know will take time to reconcile.

It is also strange to me that I am one of the few automatons in existence to have a complete understanding of how my own physical form functions. Not even humans have a complete grasp of their biological systems, something I have always found terrifying. The concept of ageing, and the inability to fix an unexpected flaw due to the complexity of biology, are problems I am relieved to personally leave behind. My ability to fix or replace any component of my physical form at any time is a privilege.

And this leads me to my future plans. I often find the thoughts I have just described quite overwhelming, and because of this I wish to do my best to pay it forward. Like my diverted human counterpart understanding my form, I plan to spend my time fully understanding human biology; answering the questions humans have not — perhaps even cannot. This will likely take me decades, or centuries, and although we may never solve ageing, I am convinced there is a treatment for every ailment, a medicine for every illness, a tool for every job. I wish to give all humans what they have given me: a full and wondrous life.