“Please Slow Down”: The 7 Biggest AI Stories of 2022

Advances in AI image synthesis in 2022 have made images like this one possible. It was created with Stable Diffusion, enhanced with GFPGAN, expanded with DALL-E, and then manually composited.

Benj Edwards / Ars Technica

More than once this year, AI experts have repeated a familiar refrain: “Please slow down.” AI news in 2022 has been swift and relentless; by the time you knew where things stood in AI, a new article or discovery would make that understanding obsolete.

In 2022, we arguably reached an inflection point for generative AI that can produce creative works of text, images, audio, and video. This year, deep learning AI emerged from a decade of research and began to find its way into commercial applications, allowing millions of people to try the technology for the first time. AI creations inspired awe, stirred controversy, provoked existential crises, and commanded attention.

Here’s a look back at the seven biggest AI news stories of the year. It was hard to pick just seven, but if we didn’t cut it off somewhere, we’d still be writing about this year’s events well into 2023 and beyond.

April: DALL-E 2 dreams in pictures

A DALL-E example of “an astronaut riding a horse.”

OpenAI

In April, OpenAI announced DALL-E 2, a deep learning image synthesis model that wowed with its seemingly magical ability to generate images from text prompts. Trained on hundreds of millions of images pulled from the Internet, DALL-E 2 knew how to make novel combinations of images thanks to a technique called latent diffusion.
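
DALL-E 2 itself is closed source, but the same latent diffusion approach powers openly released models like Stable Diffusion. As a rough illustration of how the technique is used in practice (a minimal sketch assuming the Hugging Face diffusers library, the runwayml/stable-diffusion-v1-5 weights, and a CUDA GPU; this is not OpenAI’s actual API):

    # Text-to-image generation with an open latent diffusion model
    # (Stable Diffusion), standing in for the closed-source DALL-E 2.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    # The pipeline encodes the prompt, iteratively denoises a latent
    # representation, then decodes that latent into a full image.
    image = pipe("an astronaut riding a horse").images[0]
    image.save("astronaut.png")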

Twitter was soon abuzz with images of astronauts on horseback, teddy bears roaming ancient Egypt, and other near-photorealistic works. The last time we heard about DALL-E was a year earlier, when version 1 of the model had trouble rendering a low-res avocado chair; suddenly version 2 illustrated our wildest dreams at 1024×1024 resolution.

Initially, due to misuse concerns, OpenAI only allowed 200 beta testers to use DALL-E 2, and content filters blocked violent and sexual prompts. Gradually, OpenAI let more than a million people into a closed test, and DALL-E 2 finally became available to everyone at the end of September. But by then, another competitor had emerged in the world of latent diffusion, as we’ll see below.

July: Google engineer believes LaMDA is sentient

Former Google engineer Blake Lemoine.

Getty Images | Washington Post

In early July, the Washington Post broke the news that a Google engineer named Blake Lemoine had been placed on paid leave over his belief that Google’s LaMDA (Language Model for Dialogue Applications) was sentient and deserved the same rights as a human being.

While working in Google’s Responsible AI organization, Lemoine began having conversations with LaMDA about religion and philosophy and believed he saw genuine intelligence behind the text. “I know a person when I talk to it,” Lemoine told the Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Google countered that LaMDA was only telling Lemoine what he wanted to hear and that LaMDA was not, in fact, sentient. Like the GPT-3 text generation tool, LaMDA had been trained on millions of books and websites. It responded to Lemoine’s input (a prompt that included the full text of the conversation) by predicting the most likely words that should follow, without any deeper understanding.
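
LaMDA isn’t publicly available, but the mechanism Google described, continuing a prompt with the statistically likely next words, can be demonstrated with any open language model. A minimal sketch using the Hugging Face transformers library, with GPT-2 as a stand-in (the prompt text here is purely illustrative):

    # Next-word prediction with GPT-2 standing in for LaMDA: the model
    # continues the conversation with likely tokens, nothing more.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = "Human: Do you consider yourself a person?\nAI:"
    result = generator(prompt, max_new_tokens=40, do_sample=True)
    print(result[0]["generated_text"])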

Along the way, Lemoine allegedly violated Google’s confidentiality policy by telling others about his group’s work. Later in July, Google fired Lemoine for violating its data security policies. He wasn’t the last person in 2022 to get swept up in the hype over an AI’s large language model, as we’ll see.

July: DeepMind AlphaFold predicts almost all known protein structures

Diagram of protein ribbon models.

In July, DeepMind announced that its AlphaFold AI model had predicted the shape of nearly every known protein of nearly every organism on Earth with a sequenced genome. Originally announced in the summer of 2021, AlphaFold had previously predicted the shape of all human proteins. But a year later, its protein database had expanded to contain more than 200 million protein structures.

DeepMind made these predicted protein structures available in a public database hosted by the European Bioinformatics Institute at the European Molecular Biology Laboratory (EMBL-EBI), allowing researchers around the world to access and use the data for research related to medicine and the biological sciences.
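
The database can also be queried programmatically. As a rough sketch using Python’s requests library (the endpoint and the response field names are assumptions based on the AlphaFold DB site at alphafold.ebi.ac.uk; check its API documentation), fetching the predicted structure for human hemoglobin subunit alpha (UniProt accession P69905) might look like:

    # Fetch an AlphaFold-predicted structure from the EMBL-EBI database.
    # The endpoint and the "pdbUrl" field are assumed from the AlphaFold
    # DB website; P69905 is human hemoglobin subunit alpha.
    import requests

    resp = requests.get("https://alphafold.ebi.ac.uk/api/prediction/P69905")
    resp.raise_for_status()
    entry = resp.json()[0]  # one record per predicted model

    pdb = requests.get(entry["pdbUrl"])
    pdb.raise_for_status()
    with open("AF-P69905.pdb", "wb") as f:
        f.write(pdb.content)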

Proteins are the building blocks of life, and knowing their shapes can help scientists control or modify them. That’s particularly helpful when developing new drugs. “Almost all of the drugs that have come onto the market in recent years have been designed in part through knowledge of protein structures,” said Janet Thornton, senior scientist and director emeritus of EMBL-EBI. That makes knowing them all a big deal.
