From History to AI: Learning from the Past to Navigate the Generative AI Revolution

The buzz around generative AI is immense, with forecasts of significant change across sectors – from how office work is done, to lifestyle shifts, wealth disparity, the evolution of work, and even educational reform. At Primoris Technologies, while I hesitate to lay down specific forecasts, I acknowledge that generative AI's influence is inevitable.

Preparation is key, and certain insights offer valuable guidance. Understanding the potential trajectory of this innovative technology is crucial, particularly for those shaping data strategies, AI capabilities, and tech transformations in government sectors. It’s not just about how government bodies evolve as mission-driven entities, but also about recognizing their profound influence on citizens in the midst of this technological revolution.

As we venture into this new era, Primoris Technologies remains committed to exploring and adapting to these transformations, ensuring that our innovations and strategies stay ahead of the curve.

Formulating a revolution

Let’s delve into a broad framework that illustrates how technology paves the way for revolutions, and then connect this to the world of generative artificial intelligence (gen AI).

The formula is straightforward yet powerful:
Infrastructure + Products = A Revolution in Field X

For a ground-breaking innovation, we need an accessible infrastructure to support the core technology. Additionally, products must be developed to meet specific needs or use cases. These two elements together make an innovation universally usable and affordable, sparking a revolution. In other words, a combination of specialized products and extensive infrastructure is necessary for any innovation to make a significant global impact.

To understand this better, let’s look at some historical examples:

Electricity:

[Electric Grid] + [Electric Consumer Products] = A Superior Form of Energy Transfer (as opposed to coal or wood)

The electric grid, in conjunction with electrically powered products like lights and computers, provided a novel and efficient means to transfer energy, thereby revolutionizing our world.

Internet:

[Digital Telecommunications Network] + [Software and Hardware Products] = A Superior Form of Data Transfer (as opposed to paper or fax)

A blend of a digital telecommunications network and products utilizing data heralded a transformative method of transferring information, outperforming traditional methods like paper or fax. Initially, this innovative approach capitalized on pre-existing telco networks. This fundamental concept is not confined to digital communication; it’s a principle that underpins various groundbreaking technologies, including the combustion engine, currency, and the printing press.

How might AI/ML manifest in a model where data is likened to infrastructure and algorithms to products?
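
Cast in the terms of the formula above, one plausible reading is:

[Data as Infrastructure] + [Algorithms as Products] = More Accurate Probabilistic Models of Reality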

In this analogy, the pivotal element, the X factor, could be envisaged as enhanced capabilities: more sophisticated and precise probabilistic models of reality. This concept isn’t entirely novel – we’ve already created models for economies, financial trends, businesses, and even sports like golf. Physics itself is a mathematical representation of reality. The intriguing question arises: what if these models could be constructed accurately and efficiently with minimal data? Imagine a world where complex statistical and modeling skills aren’t prerequisites for everyone. In such a world, dieticians could craft optimal nutrition plans tailored to individual health needs, and educators could develop optimized learning trajectories for students. Additionally, the sharing of individual functions and results could amplify a collective network effect.

This widespread implementation of algorithmic thinking could spark a societal revolution. Consider current roles where humans essentially execute complex functions, such as in medical diagnostics or financial advising. Imagine a society where creating, customizing, and sharing these functions becomes straightforward.

While unpacking the AI revolution in this light offers much to consider, one aspect stands out in my work with governments: redefining the concept of data as infrastructure, alongside traditional notions like cloud infrastructure. Our strategy sessions often revolve around understanding the broader implications of this viewpoint, focusing on designing a data infrastructure not just for today’s known products but also for those yet to emerge. It’s about progressing from initial steps to a full stride in this journey.

Language reflects who we are

The escalation of disinformation campaigns is a significant concern, especially as we approach major elections globally. With advancements in AI, the creation of increasingly convincing fabrications becomes a real possibility. The rise in the number of entities exploiting disinformation is alarming, and the situation is expected to deteriorate further, partly due to the influence of large language models (LLMs).

Although hesitant to speculate on AI’s future impact, I suspect that somewhere, a hostile nation-state is exploring how to use LLMs to intensify disinformation efforts – not merely to generate fake news, but to use these models for what they inherently are: probabilistic mirrors of society.

Take GPT-4, for example. This complex statistical model mirrors the vast array of data it was trained on, including internet content, textbooks, and social media. Its effectiveness in generating responses lies in its reflection of societal characteristics. As a quote from R.F. Kuang’s novel “Babel” reminds us, languages shape our worldview – and LLMs, rooted in language, likewise offer perspectives on the world.

Significant research supports this. LLMs have been applied in economics to model individual behaviors and decisions. In politics, they’ve been used to anticipate reactions to contentious issues. One study even involved feeding an LLM specific media content to gauge public opinion trends.
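
As a concrete sketch of the pattern those studies follow – a persona-conditioned prompt posed to a model – here is a minimal example using the OpenAI Python client. The persona, question, and model name are illustrative assumptions, not details drawn from any particular study.

```python
# Minimal sketch: using an LLM as a simulated survey respondent.
# The persona and question below are hypothetical stand-ins.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

persona = (
    "You are a 45-year-old small-business owner in a mid-sized city "
    "who gets most of your news from local television."
)
question = "In one sentence, how do you feel about a proposed fuel tax increase?"

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any capable chat model would do
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```

Repeating this across many sampled personas is, in essence, how such studies approximate public opinion trends.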

Earlier, I touched on the democratization of these technologies. Now, consider the implications of a complex function like an LLM, which embodies a society’s digital footprint. What insights can it offer about a society? What unknown aspects of ourselves might it reveal?

Opportunities and threats

When considering LLMs, the perspective need not be pessimistic. As a SWOT analysis makes explicit by pairing opportunities with threats, the two exist simultaneously. LLMs hold immense promise for beneficial global impact. For instance, governments could use these models to simulate and evaluate the societal effects of policies before actual implementation. This ability extends to testing new laws and government programs for unintended negative consequences. Intelligence agencies might leverage LLMs to enhance national security.

The development of GPT-4, costing $100 million, raises a question: Would the U.S. intelligence community invest a similar amount for an accurate simulation of foreign decision-making processes, or for modeling interactions between key global players?

As generative AI models become more widespread, there’s a risk of ‘regression to the mean,’ where prolonged AI use converges on average outcomes. This could lead to homogenization in creative outputs like music, literature, and film. Although commercialism already pushes in this direction, LLMs might amplify the trend, potentially leading to a loss of unique, serendipitous creations and diminishing cultural diversity. This is a critical issue for policymakers to contemplate.

Nevertheless, the profound insights provided by LLMs could enhance mutual understanding among people. Despite various risks, I am optimistic that these models will reveal our similarities and pave the way for cooperative endeavors between governments and communities worldwide.

Moving beyond LLMs

Generative AI has captivated audiences globally with outputs that strikingly resemble human creativity – be it in conversation, writing, music, or visuals. These AI interactions often feel remarkably real and genuine, sometimes even unexpectedly amusing or charming. Beyond mimicking human interaction, generative AI encompasses a range of models suited to various analytical and business purposes, distinct from LLMs. Let’s explore, in terms understandable to executives, some examples and potential business applications.

To grasp how these generative AI models function, it’s essential to comprehend the basics of a generative algorithm. In simple terms, a prompt is converted into numerical input, which is then fed into an algorithm. This process resembles basic algebra from sixth grade, where we input x into a function to solve for y. The critical difference lies in the complexity required for the output y; for intricate results like a generated image, both the function and the input x must be highly sophisticated.
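
As a toy illustration of that prompt-in, numbers-out flow – purely a sketch, with a four-word vocabulary and a trivial stand-in for the real function – the pipeline might look like this in Python:

```python
# A toy illustration of the "y = f(x)" view of generative models:
# the prompt becomes numbers (x), a function transforms them, and the
# numeric output (y) is decoded back into text. The vocabulary and the
# "function" here are deliberately trivial stand-ins.

VOCAB = {"hello": 0, "world": 1, "whiskey": 2, "<unk>": 3}
INVERSE = {v: k for k, v in VOCAB.items()}

def encode(prompt: str) -> list[int]:
    """Convert a prompt into numerical input x."""
    return [VOCAB.get(tok, VOCAB["<unk>"]) for tok in prompt.lower().split()]

def generate(x: list[int]) -> list[int]:
    """Stand-in for the (enormously complex) learned function f."""
    return [(t + 1) % len(VOCAB) for t in x]

def decode(y: list[int]) -> str:
    """Convert numerical output y back into text."""
    return " ".join(INVERSE[t] for t in y)

print(decode(generate(encode("hello world"))))  # -> "world whiskey"
```

In a real model, the function holds billions of learned parameters; the shape of the pipeline, however, is exactly this.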

Understanding GANs and VAEs

How do algorithms transform our input into something understandable? This is the essence of training specific models. Let’s delve into two types of generative models: generative adversarial networks (GANs) and variational autoencoders (VAEs).

GANs operate on a unique adversarial principle, involving two competing neural network models. The first model, the generator, produces output that mimics real data. The second model, known as the discriminator, distinguishes between real and fake data: it receives both real data and the generator’s fake data. Through continuous training, both models improve until the discriminator can’t differentiate fake from real. At that point, the generator is adept at producing highly realistic data, ready for use in generative AI applications.
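
For readers who want to see that adversarial loop in code, here is a minimal PyTorch sketch. The data distribution, network sizes, and training length are assumptions chosen for brevity, not a production recipe.

```python
# Minimal GAN sketch in PyTorch. "Real" data is faked with a shifted
# Gaussian; all sizes and step counts are arbitrary assumptions.
import torch
import torch.nn as nn

LATENT, DATA_DIM, BATCH = 16, 8, 64

generator = nn.Sequential(
    nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, DATA_DIM))
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()
real_label, fake_label = torch.ones(BATCH, 1), torch.zeros(BATCH, 1)

for step in range(1000):
    real_batch = torch.randn(BATCH, DATA_DIM) * 0.5 + 2.0  # stand-in for real data
    fake_batch = generator(torch.randn(BATCH, LATENT))

    # Discriminator: label real data 1, generated data 0.
    d_opt.zero_grad()
    d_loss = (bce(discriminator(real_batch), real_label)
              + bce(discriminator(fake_batch.detach()), fake_label))
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator call fakes "real".
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake_batch), real_label)
    g_loss.backward()
    g_opt.step()
```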

VAEs consist of two models with distinct roles. The first model compresses a large amount of data into a simpler numerical form (this process is known as encoding, hence the term “autoencoder”). These numbers are then systematically organized. The second model uses these numbers to try and recreate the original data as accurately as possible. It’s akin to dehydrating food and then rehydrating it, aiming for a close resemblance to the original. Successful training occurs when the second model excels in this reconstruction task. The key lies in the orderly organization of the simplified numbers during training, ensuring that our inputs yield logical outputs, mirroring the structure of the original data.
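
A comparably minimal VAE sketch, again in PyTorch with assumed dimensions and random stand-in data, looks like this; the KL term in the loss is what enforces the orderly organization of the compressed numbers described above.

```python
# Minimal VAE sketch in PyTorch. Dimensions and the stand-in data are
# assumptions; a real VAE would train on actual examples.
import torch
import torch.nn as nn

DATA_DIM, LATENT, BATCH = 8, 2, 64

class TinyVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(DATA_DIM, LATENT * 2)  # emits mean and log-variance
        self.decoder = nn.Linear(LATENT, DATA_DIM)

    def forward(self, x):
        mu, log_var = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterization trick
        return self.decoder(z), mu, log_var

vae = TinyVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)

for step in range(1000):
    x = torch.randn(BATCH, DATA_DIM)  # stand-in for real training data
    recon, mu, log_var = vae(x)
    recon_loss = ((recon - x) ** 2).mean()  # how well was the data "rehydrated"?
    kl = -0.5 * (1 + log_var - mu ** 2 - log_var.exp()).mean()  # organizes the latent space
    loss = recon_loss + kl
    opt.zero_grad()
    loss.backward()
    opt.step()
```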

Using AI insights to solve real-world problems

Let’s take a practical example. I recently enjoyed creating a GAN focused on whiskey profiles. I gathered numerous whiskey reviews from the internet, transformed them into flavor profiles, and then used this data to train the GAN. The result? The ability to generate 1,000 distinctive whiskey profiles that a master distiller might realistically consider.
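
As a rough sketch of that final sampling step – with an untrained stand-in generator and invented flavor axes, since the actual scraped dataset isn’t reproduced here – producing 1,000 candidate profiles is a single forward pass:

```python
# Hypothetical sampling step: an (untrained) stand-in generator and
# invented flavor axes; a real run would load the trained weights.
import torch
import torch.nn as nn

FLAVOR_AXES = ["smoke", "fruit", "vanilla", "oak", "spice", "floral", "honey", "grain"]
LATENT = 16

generator = nn.Sequential(
    nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, len(FLAVOR_AXES)))

with torch.no_grad():
    profiles = generator(torch.randn(1000, LATENT))  # 1,000 candidate profiles

# Inspect one generated profile as a readable flavor breakdown.
first = {axis: round(val.item(), 2) for axis, val in zip(FLAVOR_AXES, profiles[0])}
print(first)
```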

And the application of this generated data? I delved into it for insights to refine my home whiskey aging methods. It’s akin to consulting a vast array of master distillers for their expertise on flavor development.

Now, imagine applying this approach to global governmental challenges. Consider these scenarios where the right data and model training could provide crucial insights:

  • Banking, Financial Regulation, and Anti-Money Laundering: What new money laundering techniques might emerge? Could we simulate 10,000 financial statements to uncover potential money laundering risks? What insights could this reveal?
  • Military and Transportation Planning: What unique logistic plans could meet our objectives? By analyzing a broad range of logistical options that all fulfill our mission, could we uncover new trade-offs and considerations?
  • Central Banking Policies: How might different fiscal policies influence bank stability, especially with changes in interest rate targets? By simulating various banking outcomes in response to monetary policy shifts, could we detect unexpected risks and effects?
  • Counterintelligence Strategies: What unforeseen behavioral patterns could suggest intelligence gathering activities? Is it possible to identify unused or unknown collection methods, or to uncover previously unrecognized sources?

By harnessing the power of GANs and other advanced models, we can explore these questions and unlock new perspectives in various critical sectors.

AI is out of the barn

Generative AI extends far beyond just LLMs. In our recent exploration, we delved into a broad framework to gear up for the impending AI evolution, examined the extensive capabilities of LLMs, and ventured into various other generative AI technologies. Let me share a final thought, particularly relevant to global governments and the entities they govern.

We are advancing towards a time when AI consultation becomes essential in decision-making. This isn’t because AI will always be correct, but because soon, it might be considered negligent to make decisions without considering AI insights or relevant data-driven models of reality. Innovation is irreversible. To illustrate, ChatGPT provided me with 14 sayings that encapsulate this concept, including a particularly apt one I wasn’t familiar with: “The horse is out of the barn.”
