
AI Invention

Language is where generative AI has the most potential.


I write about the big picture of artificial intelligence.


Nov 7, 2022




Generative AI is the subject of tremendous buzz today.


Generative AI refers to artificial intelligence (AI) that can create new content, as opposed to merely analyzing or acting on existing data. No topic in technology is currently generating more buzz and interest.


Text-to-image AI has been the blazing-hot heart of today's generative AI mania. Based on straightforward written prompts, text-to-image AI models create original, detailed images. The most well-known of these models are Stable Diffusion, Midjourney, and OpenAI's DALL-E.


The sudden emergence of these text-to-image models over the summer is what sparked the current generative AI frenzy, which has brought billion-dollar funding rounds for fledgling startups, extravagant company launch parties, nonstop media coverage, and waves of businesspeople and venture capitalists hastily rebranding themselves as AI-focused.


It makes sense that text-to-image AI has particularly piqued the public's interest. Images are perfect for going viral: they are visually appealing, simple to consume, and entertaining to share.



Text-to-image AI is also a very potent piece of technology. These models are capable of creating images that are breathtaking in their originality and sophistication. This column has covered the enormous promise of text-to-image AI before, both last month and in early 2021. Advertising, gaming, and filmmaking are just a few of the industries that image-generating AI will change.


Yet the ability of machines to create written and spoken language will prove significantly more transformational than the ability of machines to create visual content.


Language is humanity's most significant invention. More than anything else, it is what distinguishes us from every other species on the planet. Language lets us reason abstractly, form sophisticated concepts about the world as it is and as it might be, communicate those ideas to one another, and build on them across generations and geographies. Almost nothing about modern society would be possible without language.


In the timeless 2014 blog post "Always Bet On Text," Graydon Hoare persuasively argues for the many advantages of text over other data modalities. Text is the most flexible communication technology; it is the most durable, the cheapest, and the most efficient; it is the most useful and versatile in social contexts; it can convey ideas with a precisely controlled level of precision and ambiguity; and it can be indexed, searched, corrected, summarized, filtered, quoted, and translated. It is no coincidence, Hoare notes, that all forms of literature and poetry, history and philosophy, mathematics, logic, programming, and engineering rely on textual encodings for their ideas.


Language is used in every sector of the economy, in every enterprise, and in every transaction. Without language, society and the economy would grind to a halt.


Thus, the ability to automate language opens up unprecedented opportunities for value creation. Whereas the effects of text-to-image AI will be felt most sharply in a handful of sectors, AI-generated language will change how every company in every industry operates.


Let's go over a few sample applications to show the scope and depth of the upcoming transformation.


From Sales to Science


In terms of commercial adoption, the first real "killer application" for generative text has been copywriting: AI-generated website copy, social media posts, blog articles, and other written marketing content.


Over the past year, AI-powered copywriting has seen astounding revenue growth. Jasper, one of the top companies in this field, only launched 18 months ago but is already expected to generate $75 million in revenue this year, making it one of the fastest-growing software startups in history. Jasper recently disclosed a $125 million funding round at a $1.5 billion valuation. Unsurprisingly, a plethora of rivals has appeared to compete for this market.


However, copywriting is just the start.


Large language models (LLMs) are ready to be applied to automate many components of the broader marketing and sales stack. Expect to see generative AI products that, among other things, automate outbound emails from sales development representatives (SDRs), accurately answer prospects' questions about the product, handle email correspondence with prospects as they move through the sales funnel, give human sales agents real-time coaching and feedback on calls, and summarize sales conversations and suggest next steps. As more of the sales process gets automated, human sales professionals will be freed up to concentrate on the distinctively human components of selling, such as customer empathy and relationship building.


In the legal industry, generative AI will substantially automate contract drafting. LLM-powered software will eventually handle much of the back-and-forth between legal teams on deal documents, understanding each client's objectives and preferences and automatically hashing out the language in transaction agreements accordingly. Post-signing, generative AI tools will also substantially ease contract management for businesses of all sizes.


Legal research, discovery, and other parts of the litigation process will likewise be transformed by language models' powerful ability to summarize and answer questions about text documents.


In healthcare, generative language models will help clinicians draft medical notes. They will answer questions about a patient's medical history and summarize electronic health records. They will help automate time-consuming administrative work like revenue cycle management, insurance claims processing, and prior authorizations. Before long, they will be able to propose diagnoses and treatment plans for individual patients by combining a deep command of the research literature with a particular patient's biomarkers and symptoms.


Generative AI will revolutionize customer service and call centers across sectors, including hotels, e-commerce, healthcare, and financial services. The same goes for internal IT and HR helpdesks.


Language models can already automate many tasks that take place before, during, and after customer service interactions, such as in-call agent coaching and post-call documentation and summarization. Soon they will be able to handle the majority of customer support interactions end-to-end without a human, and not in the stiff, fragile, rules-based way that automated call centers have operated for years, but in fluent natural language that is practically indistinguishable from a human agent.


Simply put, almost any interaction you as a customer need to have with a brand or company, on any subject, can and will be automated.


Generative language models will change how we manage structured data, an essential activity at the core of most businesses. According to recent research from Stanford, language models are remarkably effective at a range of data cleaning and integration tasks.



Even though they were never trained for these tasks, they can perform entity matching, error detection, and data imputation. An entertaining demo that recently circulated on Twitter hints at how generative AI will change the way we use tools like Microsoft Excel.
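
To make this concrete, here is a minimal sketch of how a data imputation task might be posed to a language model as a plain-text prompt, roughly in the few-shot spirit of that line of research. The complete() function, the field names, and the example records are placeholders for illustration, not any particular vendor's API or the Stanford team's code.

```python
# Minimal sketch: posing data imputation to a language model as a text prompt.
# complete() is a placeholder for whichever LLM API you use; the records
# below are invented for illustration.

def complete(prompt: str) -> str:
    """Stand-in for a completion call to a hosted language model."""
    raise NotImplementedError("Wire this up to your model provider.")

def serialize(record: dict) -> str:
    """Flatten a structured record into plain text the model can read."""
    return "; ".join(f"{key}: {value}" for key, value in record.items())

def impute_field(record: dict, target_field: str, examples: list) -> str:
    """Ask the model to fill in a missing field, few-shot style."""
    parts = ["Fill in the missing field based on the rest of the record."]
    for example_record, answer in examples:
        parts.append(f"{serialize(example_record)}\n{target_field}: {answer}")
    parts.append(f"{serialize(record)}\n{target_field}:")
    return complete("\n\n".join(parts)).strip()

# Hypothetical usage: infer a missing state from a restaurant listing.
# impute_field({"restaurant": "Joe's Cafe", "city": "San Francisco"}, "state",
#              examples=[({"restaurant": "Lou's Diner", "city": "Austin"}, "TX")])
```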


Journalism and news reporting will increasingly be automated. While human investigative journalists will still seek out stories, generative AI models will increasingly produce the articles themselves. Soon, a large portion of the online media we consume every day will be produced by AI.


Legislators will use LLMs to help draft legislation. Regulators will use them to help translate legislation into detailed rules and codes. Bureaucrats from the federal level down to the local level will use them to help administer the state's many operations, from processing permit applications to enforcing minor fines.


In academia, generative language models will be used to draft funding proposals, to summarize and analyze the literature, and, yes, to write research papers (by students and professors alike). There will undoubtedly be scandals involving students who use generative language tools to have their class essays written for them.


Generative language models will also hasten scientific discovery itself. LLMs will assimilate the entire body of published knowledge and research in a field and then be able to propose solutions and promising new research directions.


This is not a hypothetical future scenario; it has already been done. Researchers at UC Berkeley and Lawrence Berkeley National Laboratory recently demonstrated that large language models can extract latent knowledge from the existing body of materials science literature and then suggest novel materials to investigate.


It is worth quoting directly from their Nature paper: "Here we demonstrate that materials science knowledge present in the published literature may be efficiently stored as information-dense word embeddings without human supervision. These embeddings encapsulate sophisticated materials science ideas like the periodic table's underlying structure and links between structure and property in materials without explicitly including chemical knowledge. We also show that materials for practical applications can be recommended by an unsupervised method years before they are actually discovered."
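
To give a flavor of the underlying technique, here is a generic sketch (not the authors' actual pipeline): train unsupervised word embeddings on tokenized abstracts, then query the vector space for materials that sit near a functional keyword such as "thermoelectric." The tiny corpus below is invented, and a real corpus would involve far more data and preprocessing.

```python
# Generic sketch of the idea behind the paper, using off-the-shelf word
# embeddings (gensim Word2Vec); the corpus here is invented for illustration.
from gensim.models import Word2Vec

# Each "sentence" is a tokenized abstract; a real corpus has millions of them.
abstracts = [
    ["bi2te3", "is", "a", "well", "studied", "thermoelectric", "material"],
    ["we", "measured", "the", "thermal", "conductivity", "of", "snse"],
    ["snse", "shows", "promising", "thermoelectric", "performance"],
]

# Skip-gram embeddings learned with no labels and no chemical knowledge.
model = Word2Vec(sentences=abstracts, vector_size=100, window=8,
                 min_count=1, sg=1, workers=1, seed=0)

# Materials whose vectors lie closest to "thermoelectric" become candidates
# worth investigating, even if never explicitly studied for that application.
print(model.wv.most_similar("thermoelectric", topn=5))
```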


The Language of Code


One of the most promising commercial applications of generative language models is software development itself: LLMs have the potential to transform how software gets built.


Software is written in programming languages, whether Python, Ruby, or Java. Like natural languages such as English or Swahili, programming languages are expressed symbolically and have their own internally consistent syntax and semantics. It follows that the same powerful new AI techniques that can acquire astounding command of natural language should also be able to master programming languages.


Today's world runs on software. The software market is thought to be worth half a trillion dollars, and the modern economy could not function without it. The capacity to automate software's creation therefore represents an astoundingly vast economic opportunity.


Microsoft is the 800-pound gorilla and the category's first mover. Earlier this year, in collaboration with its subsidiary GitHub and its close partner OpenAI, Microsoft introduced Copilot, an AI pair programming tool. Copilot is powered by Codex, a large language model from OpenAI (which is in turn based on GPT-3).


Soon after, Amazon released CodeWhisperer, its own AI pair programming tool. Google has built a comparable tool as well, though it is used only internally and has not been made available to the public.


Even though these products are still relatively new, it is already clear how disruptive they will be.


Using Google's AI code completion tool resulted in a 6% reduction in coding time, according to recent research, and roughly 3% of the new code written by employees who use the tool is now generated by the AI.


Even more impressive is new research from GitHub, which found that using Copilot can cut the time a software engineer needs to complete a coding task by 55%. According to GitHub's CEO, up to 40% of the code written by developers who use Copilot is now produced by the AI.


Now picture these productivity gains multiplied not just across Microsoft and Google but across today's entire software industry. Value creation worth countless billions of dollars is up for grabs.


Is Microsoft's Copilot destined to dominate this market? Not necessarily.


For starters, many businesses will prefer to work with a neutral startup that deploys its solution on-premises rather than expose their entire internal codebases to a tech giant like Microsoft in the cloud. This will be especially true in heavily regulated sectors such as financial services and healthcare.


Copilot also faces an intriguing organizational challenge, because Microsoft, GitHub, and OpenAI jointly build and maintain the product. These are three distinct organizations with different teams, cultures, and cadences. This sector is moving at breakneck speed, and rapid product iterations and short development cycles will be crucial as the technology and the market evolve. The Microsoft/GitHub/OpenAI trio may struggle with coordination and agility against more nimble rivals.


Most important, the field of software development is vast. AI-generated code will not be a winner-take-all market. Just as today's software engineering stack consists of a rich, diverse ecosystem of tools, the field of AI code generation will see many different winners.


For instance, startups that focus solely on automating code maintenance, code review, documentation, or front-end development may thrive. Promising young companies have already flooded into the market to pursue these opportunities.


Zooming Out


Having surveyed a range of potential commercial applications for generative language models, it is worth making three broad observations.


First, some readers may wonder whether the use cases described here are genuinely feasible, especially readers who have not spent much time working directly with today's language models. Will generative language models really be capable of producing a contract, exchanging emails with a prospective customer, or drafting a piece of legislation, not just in a tightly controlled demo or research setting, but amid all the messiness of the real world?


The answer is yes.


In earlier pieces, we went in-depth on the technological advances supporting the current language AI revolution. However, one crucial point deserves to be made here: the vast majority of content produced by people—messages, ideas, and proposals—is not original.


This may sound harsh. But the majority of website content, emails, customer service interactions, and even most legislation contain very little meaningful originality. The word choices differ, but the underlying structure, semantics, and ideas are predictable and consistent, echoing language that has been written or spoken a million times before.


Today's AI has been trained on vast corpora of existing material, which allows it to learn these underlying structures, semantics, and concepts, and, when prompted, to convincingly reproduce them in new output.


Our current state-of-the-art language models could not produce the disruptive creativity of, say, Friedrich Nietzsche, whose groundbreaking ideas reframed decades of earlier thought. But how much of the material that people produce on a daily basis, whether in one of the use cases above or in any other context, fits that description?


We will discover that a surprisingly large portion of human language production, namely the portion that is essentially non-original, can be effectively automated by LLMs.


The second broad observation concerns a key reason why generative language models will become so powerful: because text is both their input and their output modality, the output of one language model can serve as the input to another. This is a major distinction between language models and text-to-image models. It may seem like a minor point, but its consequences for generative AI are significant.


Why does this matter? Because it enables what has come to be known as "prompt chaining."


Large language models are tremendously powerful, but many of the jobs we will want them to do, particularly those that require intermediate steps or multi-step reasoning, are too complex to be handled in a single run of the model. With prompt chaining, one big goal can be broken down into smaller subtasks that the language model handles one after another, with the output of one subtask serving as the input to the next.


Thanks to sophisticated prompt chaining, LLMs can accomplish tasks orders of magnitude more complex than would otherwise be possible. Prompt chaining also lets models pull in data from other tools (such as searching Google or extracting information from a specified URL) by including that activity as one of the steps in the chain.
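
As a minimal illustration of the pattern, here is a sketch of a multi-step question-answering chain. The call_llm() function is a placeholder standing in for whichever language model API is used, not any particular vendor's interface.

```python
# Minimal sketch of prompt chaining: break one big task into subtasks and
# feed each subtask's output into the next prompt. call_llm() is a
# placeholder for whichever language model API you use.

def call_llm(prompt: str) -> str:
    """Stand-in for a completion call to a hosted language model."""
    raise NotImplementedError("Wire this up to your model provider.")

def answer_with_chain(question: str, documents: list) -> str:
    """Answer a question by summarizing sources first, then synthesizing."""
    # Step 1: one model call per document, condensing it to relevant facts.
    notes = [
        call_llm(
            f"Summarize the facts in the text below that are relevant to the "
            f"question '{question}':\n\n{doc}"
        )
        for doc in documents
    ]
    # Step 2: the outputs of step 1 become the input of the final call.
    combined = "\n\n".join(f"Note {i + 1}: {note}" for i, note in enumerate(notes))
    return call_llm(
        f"Using only the notes below, answer the question '{question}' and "
        f"cite the note number supporting each claim.\n\n{combined}"
    )
```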


Dust, a young startup, has built tooling that makes it easy to design and work with prompt chains for generative language models. One of its demo applications is a web search assistant: given a user's question (for instance, "Why was the Suez Canal blocked in March 2021?"), it pulls the top three Google search results, summarizes them, and then generates a final answer with citations.


For many LLM applications, the most natural way for humans to use them will soon be iterative and collaborative: the end user will be the human in the loop. For example, the human user might give the model an initial prompt (or prompt chain) to produce a given output, review the output and then tweak the prompt to improve it, run the model several times on the same prompt to pick the most relevant versions of its output, and finally refine that output by hand before deploying the language for its intended use.


Many of the illustrative applications covered above will benefit from this kind of workflow: writing academic papers, news stories, contracts, and more.


For some lower-stakes use cases, like producing outbound sales emails or website copy, the technology will soon be advanced and reliable enough that customers, drawn by the potential productivity gains, will feel comfortable automating the application end-to-end with no human in the loop.


On the other hand, some safety-critical use cases, such as using generative models to diagnose and recommend treatments for individual patients, will for the foreseeable future require a human in the loop to review and approve the models' output before any action is taken.


However, there is no denying that generative language technology is advancing quickly, almost impossibly quickly. Expect industry heavyweights like OpenAI and Cohere to release new models in the coming months that represent significant advances in language capability over current models.


Longer term, the trend will be clear-cut and unavoidable: as these models improve and the products built on top of them become easier to use and more deeply integrated into existing workflows, we will entrust AI with a growing number of societal responsibilities with little to no human oversight. More and more of the use cases above will be handled end-to-end, in a closed loop, by language models that we have given the authority to make decisions and take action.


Readers today may find this intriguing, or even alarming. But as time goes on, we will get used to the idea that machines can perform many of these tasks more efficiently, quickly, cheaply, and reliably than people can.


There will be significant disruption, enormous value creation, painful job displacement, and numerous new, multi-billion-dollar AI-first enterprises in the near future.


 


The author is a Partner at Raylteck.