Generative AI models open new opportunities for content creation in businesses
Generative AI refers to a category of artificial intelligence algorithms that are capable of creating new and unique content based on a set of input data or parameters. These algorithms are trained on a large dataset and can generate new data that is similar to the training data, but not identical. Generative AI is often used in applications such as image and video synthesis, natural language generation, and speech synthesis.
One of the most popular types of generative AI is the Generative Adversarial Network (GAN). GANs consist of two neural networks, a generator and a discriminator, that work in opposition to one another. The generator creates new data, while the discriminator attempts to distinguish the generated data from the real data. Through this process, the generator learns to create data that is more similar to the real data, and the discriminator becomes better at identifying fake data. This process continues until the generated data is indistinguishable from the real data. GANs have been used to generate realistic images, videos, and even 3D models.
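To make the adversarial loop concrete, here is a minimal, self-contained sketch (illustrative only: the 1-D data, linear generator and logistic-regression discriminator are simplifying assumptions, not how production GANs are built) in which a generator learns to match a target distribution by trying to fool a discriminator:

```python
import numpy as np

# Toy 1-D GAN: the generator is a linear map of noise, G(z) = a*z + b,
# and the discriminator is logistic regression, D(x) = sigmoid(w*x + c).
# Both are trained with plain alternating gradient steps.
rng = np.random.default_rng(0)

REAL_MEAN, REAL_STD = 3.0, 1.0   # the "real data" distribution
a, b = 1.0, 0.0                  # generator parameters
w, c = 0.0, 0.0                  # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(2000):
    # Discriminator step: push D(real) up and D(fake) down.
    x_real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. w and c.
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: update a, b to fool the (frozen) discriminator.
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    # Gradient of the non-saturating loss -log D(fake), via the chain rule.
    grad_x = -(1 - d_fake) * w
    a -= lr * np.mean(grad_x * z)
    b -= lr * np.mean(grad_x)

samples = a * rng.normal(0.0, 1.0, 1000) + b
```

Even in this toy setting, the alternating updates show the core dynamic: the discriminator's gradient tells the generator which direction makes its samples look more "real", and the mean of the generated data drifts towards the mean of the real data.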
Generative AI is a powerful tool that has the potential to revolutionise various industries, but to fully realise its capabilities, human input is essential.
The process of fine-tuning and optimising the model requires a human touch, as does interpreting and applying the output. It's a partnership between man and machine, where the creativity and intuition of humans are combined with the analytical abilities of AI to achieve a synergistic effect.
The output can also be diverse, ranging from new content and translations to answers to questions, sentiment analysis, summaries, and even videos. These "universal content machines" have an endless number of potential applications in business, some of which we will discuss later on. They can automate content creation, provide insights through sentiment analysis, and much more. The possibilities are truly limitless.
Generative models offer a wide range of possibilities across various business functions, but they are most commonly utilised in the realm of marketing. One example is Jasper, a marketing-specific version of GPT-3 that can generate a plethora of customer-facing content such as blogs, social media posts, web copy, sales emails, advertisements, and more. Jasper is a true expert in its field, continually refining its output through A/B testing and optimising its content for search engine placement. Additionally, Jasper fine-tunes its GPT-3 models with the best outputs from its customers, resulting in substantial improvements, as reported by Jasper's executives. While most of its customers are individuals and small businesses, some larger companies also make use of its capabilities. For instance, at VMware, the cloud computing company, writers utilise Jasper to generate original content for marketing efforts, from emails to product campaigns to social media copy.
Moreover, Kris Ruby, the owner of Ruby Media Group, a public relations and social media agency, is now utilising both text and image generation from generative models. She believes that these tools are highly effective at maximising search engine optimisation (SEO) and, in PR, at crafting personalised pitches to writers. However, she also notes that these new tools open up a whole new frontier of copyright challenges, which is why she assists her clients in creating AI policies. Kris believes that the use of these tools is a delicate balancing act between human creativity and AI capabilities. "The AI is 10%, I am 90%," she says, because there is so much prompting, editing and iteration involved.
Additionally, DALL-E 2 and other image generation tools are already being leveraged for advertising purposes. For example, Heinz used an image of a ketchup bottle with a label similar to Heinz's to argue that "This is what 'ketchup' looks like to AI." Of course, it only meant that the model was trained on a relatively large number of Heinz ketchup bottle photos. Nestle used an AI-enhanced version of a Vermeer painting to help promote one of its yogurt brands. Stitch Fix, the clothing company that already uses AI to recommend specific clothing to customers, is experimenting with DALL-E 2 to create visualisations of clothing based on requested customer preferences for colour, fabric, and style. Mattel is using the technology to generate images for toy design and marketing. It's exciting to see how these cutting-edge tools are being used to enhance advertising campaigns and bring new levels of creativity to the industry.
GPT-3 has proven to be a powerful generator of computer program code, making it a valuable asset for developers. Its Codex program, which is specifically trained for code generation, can produce code in various programming languages when given a description of a "snippet" or small program function. Microsoft's GitHub also offers a version of GPT-3 for code generation, called Copilot. The latest versions of Codex can identify bugs and fix mistakes in its own code, and even explain what the code does. Microsoft's goal is not to replace human programmers but to make tools like Codex and Copilot "pair programmers" that work alongside humans to enhance their speed and effectiveness. This can be a game-changer in the world of coding and programming, making the development process faster and more efficient.
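To show what "given a description" means in practice, here is a hedged sketch of how such a request could be assembled against OpenAI's (now-legacy) Completions API. The endpoint and the `code-davinci-002` model name reflect the public API at the time Codex was available; the API key and the prompt are placeholders, and no request is actually sent here:

```python
import json
import urllib.request

def build_codex_request(description: str, api_key: str) -> urllib.request.Request:
    """Assemble (but do not send) a code-generation request."""
    payload = {
        "model": "code-davinci-002",   # the Codex model name at the time
        # The natural-language description is framed as a Python comment,
        # so the model continues with the code itself.
        "prompt": f"# Python 3\n# {description}\ndef ",
        "max_tokens": 256,
        "temperature": 0,              # low randomness suits code generation
    }
    return urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_codex_request("return the n-th Fibonacci number", "sk-placeholder")
```

The completion returned for such a prompt would be the body of the function; tools like Copilot wrap this same idea inside the editor, using the surrounding file as the prompt.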
LLM-based code generation, like Codex, has been met with positive reviews for its ability to generate small code snippets with ease. However, the true test of its capabilities lies in its integration into larger programs and its ability to adapt to specific technical environments. This is where human programming expertise is still crucial. Deloitte, after months of experimentation, found that Codex not only increases productivity for experienced developers but also enables programming capabilities for those with no experience. It's clear that the use of these tools can bring a new level of efficiency and accessibility to the world of programming and development.
Deloitte recently conducted a six-week pilot with 55 developers, using Codex as a code generation tool. The results were impressive: a majority of users rated the accuracy of the resulting code at 65% or better, and the pilot found a 20% improvement in code development speed for relevant projects.
The firm also utilised Codex to translate code from one language to another. Despite the success of Codex, Deloitte concluded that professional developers would still be needed for the foreseeable future, though the increased productivity from Codex could potentially decrease the number of developers required. As with other generative AI tools, Deloitte found that the better the prompt, the better the output code. It is clear that the use of Codex and other AI tools can bring a new level of efficiency and productivity to the field of code development, but they are not replacing the role of human programmers.
As language models become more advanced, they are transforming the way conversational AI, or chatbots, function. With their ability to understand conversation and context at a higher level, they are providing more human-like interactions. Facebook's BlenderBot, for instance, is specifically designed for dialogue and can hold long conversations with humans while maintaining context. Google's BERT and LaMDA are also being utilised to understand search queries and create more advanced and sophisticated chatbot experiences. Even Google's engineers have been impressed, with some believing that the LLMs were sentient beings, even though these models simply predict the words used in conversation based on past conversations.
Despite their advancements, language models are far from being the ultimate conversationalists. Because they are trained on past human content, they tend to replicate any racist, sexist or otherwise biased language that they were exposed to during training. While the companies that created these systems are working to filter out hate speech, it is still a work in progress.
Generative language models are transforming the way companies organise and access their knowledge. With their ability to process and understand large volumes of text-based information, LLMs can be fine-tuned to become powerful tools for knowledge management within an organisation. Instead of relying on traditional manual methods to create structured knowledge bases, companies can now harness the power of LLMs to easily access and retrieve information with simple prompts. This is an exciting new application of these models and has the potential to greatly streamline the way companies manage their internal knowledge.
Companies like Morgan Stanley are turning to LLMs like OpenAI's GPT-3, fine-tuning them on industry-specific content such as wealth management, making it easier for employees to search for existing knowledge within the company and to create tailored content for clients.
While these systems have the potential to streamline the knowledge management process, it's important to note that users may require training to create effective prompts, and that the outputs of the LLMs may still need review before being applied. Despite this, LLMs have the potential to revolutionise the field of knowledge management and allow companies to scale their knowledge more efficiently and effectively.
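As an illustration of the retrieve-then-prompt pattern behind this kind of knowledge management (the tiny "knowledge base", document names and scoring scheme here are invented for the sketch), internal documents can be ranked against a question and the best match folded into the model's prompt:

```python
import math
from collections import Counter

# A toy in-memory "knowledge base" of internal documents.
DOCS = {
    "travel-policy": "employees book travel through the internal portal",
    "expense-policy": "expenses over 100 dollars require manager approval",
    "onboarding": "new hires receive a laptop during onboarding week",
}

def bow(text: str) -> Counter:
    """Bag-of-words representation of a text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(question: str):
    """Pick the most relevant document and ground the LLM prompt in it."""
    q = bow(question)
    best_id = max(DOCS, key=lambda d: cosine(q, bow(DOCS[d])))
    return best_id, (
        f"Answer using only this internal document:\n"
        f"[{best_id}] {DOCS[best_id]}\n\n"
        f"Question: {question}\nAnswer:"
    )

doc_id, prompt = build_prompt("Do expenses need manager approval?")
```

Production systems replace the bag-of-words score with learned embeddings and a vector index, but the shape is the same: retrieve the relevant internal knowledge first, then let the model answer from the retrieved text, with a human reviewing the output before it is applied.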
As the capabilities of generative AI systems continue to advance, so too do the legal and ethical issues surrounding their use. One such issue is the emergence of "deepfakes", artificially generated images and videos that can be used to mislead or deceive. In the past, creating deepfakes required significant technical expertise, but with the democratisation of these tools, virtually anyone can now produce them.
In response, OpenAI has implemented a "watermarking" system to identify and track fake images generated by DALL-E 2, but as the technology for creating generative videos becomes more accessible, there will likely be a need for further safeguards and regulations to prevent their misuse. Another concern is the question of ownership of the content created by generative AI, as the output is influenced by the vast amount of data used to train the models.
As these technologies continue to evolve, they will bring substantial work for intellectual property attorneys in the coming years.
The power of generative AI is just beginning to be understood by businesses. With the ability to generate a wide range of written and visual content, these systems are poised to revolutionise the way we create and manage information.
As these technologies continue to evolve, we can expect to see them increasingly integrated into our daily work lives, from crafting emails and reports to providing first drafts of computer programs and presentations. However, with this new level of automation comes a host of legal and ethical questions, particularly around issues of ownership and intellectual property protection. As we continue to unlock the potential of these systems, we must be prepared to navigate the complex and far-reaching implications they may have on knowledge and creativity.