Author: Cade Metz

Translator: Gui Shuguang

Publisher: CITIC Publishing Group

Publication date: January 1, 2023

Preface

The full name of NIPS is "Neural Information Processing System"Although the name suggests a deep exploration of the future of computers, NIPS is actually a conference focused on artificial intelligence. What do you think about this?

A scholar born in London, Hinton had been exploring the frontiers of artificial intelligence at universities in the UK, the US, and Canada since the early 1970s, and he came to NIPS almost every year.

The concept of the neural network dates back to the 1950s, but the early pioneers never got the technology to work as they had hoped. By the start of the 21st century, most researchers had abandoned it, convinced it was a technological dead end and puzzled by the seemingly arrogant, fifty-year effort to make mathematical systems mimic the human brain. The researchers who kept at it often disguised their work when submitting papers to academic journals, replacing the term "neural network" with language less likely to offend their fellow scientists. A few, however, still believed the technology would eventually deliver on its promise, and Hinton was one of them.

Hinton and his students changed the way machines see the world. They built a so-called neural network, a mathematical system loosely modeled on the web of neurons in the brain, that could recognize common objects such as flowers, dogs, and cars with unprecedented accuracy. Hinton and his students showed that a neural network could learn this very human skill by analyzing vast amounts of data. He called it "deep learning," and its potential was enormous: it promised to change not only computer vision but everything from conversational digital assistants to autonomous vehicles to the development of new drugs.

The machines he designed could not only recognize objects but also recognize spoken words, understand natural language and carry on conversations, and perhaps even solve problems that humans could not solve on their own, providing new and more precise ways to explore the mysteries of biology, medicine, geology, and other sciences.

Even at his own university, this was an odd position to hold. For years he asked the school to hire another professor to join him in the long, winding struggle to build machines that could learn on their own, and for years the school refused. "One crazy person doing this is enough," he said.

But in the spring and summer of 2012, Hinton and two of his students made a breakthrough: they showed that a neural network could recognize common objects with an accuracy surpassing any other technology. In the nine-page paper they published that autumn, they announced to the world that this technology was as powerful as Hinton had long claimed.

A few days later, Hinton received an email from an artificial intelligence researcher named Yu Kai, who was working at the Chinese tech giant Baidu at the time.

The two men differed in background, age, culture, language, and geography, but they shared a common interest: neural networks. They had first met at an academic workshop in Canada, part of a grassroots effort to revive a field of research that had gone nearly dormant in the scientific community and to rebrand the idea as "deep learning." Yu Kai was one of the people spreading that belief. After returning to China, he brought the idea to Baidu, where his research caught the attention of the company's CEO. When the nine-page paper came out of the University of Toronto, Yu Kai told Baidu's leadership that they should recruit Hinton as soon as possible. In the email, he introduced Hinton to a Baidu vice president, who offered $12 million for just a few years of Hinton's work.

Prompted by his students, Hinton realized that Baidu and its competitors would be far more willing to pay huge sums to acquire a company than to spend the same money hiring a few new employees out of academia. So he founded a small company of his own, naming it DNNresearch in a nod to their focus on "deep neural networks," and asked a lawyer in Toronto how to maximize the price of a startup with only three employees, no products, and almost no track record. As the lawyer saw it, he had two options: hire a professional negotiator, which carried the risk of angering potential acquirers, or set up an auction. Hinton chose the auction. In the end, four companies joined the bidding for his new company: Baidu, Google, Microsoft, and DeepMind.

At the time, DeepMind was a two-year-old startup that most of the world had never heard of. Based in London and founded by a young neuroscientist named Demis Hassabis, it was about to become the most famous and influential artificial intelligence laboratory of its era.


The bidding climbed so high that Hinton shortened the window for new bids from an hour to thirty minutes. The offers quickly rose to $40 million, $41 million, $42 million, $43 million. "It feels like we're in a movie," he said. One night, near midnight, when the price reached $44 million, he paused the auction again. He needed to get some sleep.

The next day, about thirty minutes before the auction was due to resume, he sent an email saying the start would be postponed. About an hour later, he sent another: the auction was over. At some point during the night, Hinton had decided to sell his company to Google rather than push the price any higher. In his email to Baidu, he said he would forward any further messages it sent to his new employer, though he did not say who that employer was.

Later, he admitted that this was what he had wanted all along. Even Yu Kai had guessed that Hinton would end up at Google, or at least at another American company, because the condition of Hinton's back made a trip to China out of the question. Still, Yu Kai was pleased that Baidu had secured a place in the auction. By pushing its American competitors to the limit, he believed, Baidu's leadership had come to realize how important deep learning would be in the years ahead.

Contents

Part One: A New Kind of Machine: The Perceptron

Part Two: Who Owns Intelligence?

Part Three: Turmoil

Part Four: Humans Are Underrated


Excerpt: Part Three: Turmoil

13 Deceit: GANs and "Deepfakes"

Oh, you really can make a photorealistic face.

In the autumn of 2013, during a job interview at Facebook, Ian Goodfellow strolled through the courtyard of the company's campus with Mark Zuckerberg, listening to Zuckerberg's philosophical musings about DeepMind. Then he turned Zuckerberg down; he preferred a job at Google Brain. But for the moment, his career was on hold. He had decided to stay in Montreal for a while. He was still waiting for his doctoral thesis committee to convene, having made the mistake of inviting Yann LeCun to join the committee before Facebook announced its new artificial intelligence laboratory. He also wanted to see where things would go with a woman he had just started dating. And he was still writing a textbook on deep learning, though progress was slow. He spent much of his time sitting around drawing elephants, then posting the drawings online.

This sense of drifting came to an end when one of his university lab mates found a job at DeepMind, and the researchers in the lab arranged a farewell party at a bar at the end of Avenue du Mont-Royal. The bar, called Les 3 Brasseurs ("The Three Brewers"), was the kind of place where twenty people could show up unannounced, push a few tables together, and sit down over rounds of craft beer. By the time the researchers began debating the best way to build a machine that could create photorealistic images on its own, photos of dogs or frogs, or faces that looked completely real but did not actually exist, Goodfellow was already slightly tipsy. Several of his lab mates were trying to build such a machine. They knew that a neural network could be trained to recognize images, and that if the process were run in reverse it could also generate them; that was what the DeepMind researcher Alex Graves had done when building a system that generated handwriting. But the method could not produce photo-level images with sharp detail, and the results were not convincing.

Goodfellow's lab mates, however, had a plan. They would run a statistical analysis of each image the neural network generated, identifying the frequency and brightness of certain pixels and how those pixels related to others. Then they would compare these statistics with the statistics of real photos, which would show the network where it had gone wrong. The problem was that they had no idea how to encode all of this into the system; it might require billions of statistics. Goodfellow told them the problem could not be solved. "There are too many different statistics to track," he said. "This isn't a programming problem, it's an algorithm design problem."

He proposed a completely different solution. What they should build, he explained, was a neural network that learned from another neural network. The first network would create an image and try to fool the second into believing it was a real photo. The second would point out where the first had gone wrong, and the first would try again. If the two networks dueled long enough, he said, they could produce an image that looked like the real thing. Goodfellow's colleagues were unmoved; they said his idea was even worse than theirs. And if Goodfellow had not been drinking, he might have reached the same conclusion. "Training one neural network is hard enough," a sober Goodfellow would have said. "You can't train a second network inside the learning algorithm of the first." But at that moment, he believed it could work.

That night, when he returned to his one-room apartment, his girlfriend was already asleep. She woke to greet him, then drifted back to sleep. He sat at a desk beside the bed, still a little drunk, the glow of his laptop screen lighting his face in the dark. "My friends are wrong!" he kept telling himself. Piecing together a dueling network from old code left over from other projects, he trained this peculiar new design on several hundred photos while his girlfriend slept beside him. A few hours later, it worked just as he had predicted. The images were tiny, about the size of a thumbnail, and a little blurry, but they looked like photographs. He later said it had been a stroke of luck: "If it hadn't worked, I might have given up on the idea." In the paper he published on the idea, he called it a "generative adversarial network," or GAN. Across the worldwide community of artificial intelligence researchers, he became known as "the father of the GAN."
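To make the "duel" concrete, here is a minimal sketch of how a generative adversarial network is typically trained today, written in PyTorch. It is not Goodfellow's original code: the toy data (bright squares on a dark background, standing in for real photos), the tiny network sizes, and the hyperparameters are all illustrative assumptions chosen so the example runs on its own.

# A minimal GAN sketch (illustrative only, not Goodfellow's code).
# Assumed setup: "real" images are dark 16x16 pictures containing one
# bright 4x4 patch; both networks are small multilayer perceptrons.

import torch
import torch.nn as nn

torch.manual_seed(0)
IMG = 16       # toy image side length
NOISE = 32     # size of the generator's random input
BATCH = 64

def real_batch(n):
    """Stand-in for real photographs: one bright square per image."""
    x = torch.zeros(n, IMG * IMG)
    for i in range(n):
        r, c = torch.randint(2, IMG - 6, (2,)).tolist()
        img = torch.zeros(IMG, IMG)
        img[r:r + 4, c:c + 4] = 1.0
        x[i] = img.flatten()
    return x

# Generator: turns random noise into an image it hopes will pass as real.
G = nn.Sequential(
    nn.Linear(NOISE, 128), nn.ReLU(),
    nn.Linear(128, IMG * IMG), nn.Sigmoid(),
)

# Discriminator: scores an image as real (1) or generated (0).
D = nn.Sequential(
    nn.Linear(IMG * IMG, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

loss = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(2001):
    # Train the discriminator: tell real squares from the generator's fakes.
    real = real_batch(BATCH)
    fake = G(torch.randn(BATCH, NOISE)).detach()
    d_loss = (loss(D(real), torch.ones(BATCH, 1)) +
              loss(D(fake), torch.zeros(BATCH, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: produce fakes the discriminator calls real.
    fake = G(torch.randn(BATCH, NOISE))
    g_loss = loss(D(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    if step % 500 == 0:
        print(f"step {step}: d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")

The generator never sees a real photograph directly; it improves only by responding to the discriminator's verdicts, which is exactly the back-and-forth Goodfellow described at the bar.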

When he joined Google in the summer of 2014, he promoted the GAN as a way to accelerate progress in artificial intelligence. In describing the idea, he often invoked Richard Feynman. On the blackboard in Feynman's classroom were once written the words: "What I cannot create, I do not understand." These were also the words Yoshua Bengio, Goodfellow's adviser at the University of Montreal, had used when a delegation from Microsoft came courting him at a café near the school. Like Hinton, Bengio and Goodfellow believed that Feynman's maxim applied not only to people but also to machines: what artificial intelligence cannot create, it does not understand. And they believed that creating would help machines understand the world around them. "If artificial intelligence can imagine the world in realistic detail, learning how to imagine realistic images and realistic sounds, this will advance its understanding of the structure of the real world," Goodfellow said. "It can help the AI make sense of the images it sees and the sounds it hears." Like speech recognition, image recognition, and translation, the GAN was another leap forward for deep learning. At least, that is what deep learning researchers believed.

In a talk at Carnegie Mellon University in November 2016, Yann LeCun called GANs "the coolest idea in deep learning in the last twenty years."

When Geoff Hinton heard this remark, he jokingly counted back the years, as if to make sure GANs were not being ranked as cooler than backpropagation, before conceding that LeCun's claim was close to the truth.

Goodfellow's work sparked a long line of projects that refined, expanded, and challenged his big idea. Researchers at the University of Wyoming built a system that generated tiny but near-perfect images of insects, churches, volcanoes, restaurants, canyons, banquet halls, and more. A team at NVIDIA developed a neural network that could transform a photo of a hot summer day into a scene from winter. A team at the University of California, Berkeley designed a system that could turn horses into zebras and Monets into Van Goghs.

These were among the most eye-catching and entertaining projects in both industry and academia.

Then, the world changed.