In 1997 in New York, the IBM computer Deep Blue won a chess match against Garry Kasparov. It was the first time a machine defeated the world chess champion under tournament conditions.
In 2011, another IBM computer—no, let's avoid that: In 2011, a second IBM computer, Watson, took part in the television quiz show Jeopardy! to compete against its former champions. Watson had to understand questions and give answers in natural human language.
The computer was not connected to the internet.
However, it had learned from 200 million pages of structured and unstructured content, taking up four terabytes of disk storage. Watson won the $1 million first prize.
In March 2016, AlphaGo, a computer program from Google DeepMind created to play the board game Go, beat the world champion Lee Sedol. Man and machine played a five-game match in Seoul; Lee Sedol won only the fourth game.
AI has already led to breakthroughs in medical diagnostics.
In a 2013 experiment, artificial intelligence was tasked with detecting breast cancer. A neural network was trained on tens of thousands of mammographic images to find signs of the disease.
But the neural network learned that looking for the tumors themselves was not as important as spotting other modifications of the tissue that aren’t in the immediate vicinity of the tumor cells. This was a new and important development in breast cancer detection.
Magenta is a Google Brain project, and its objective is to figure out whether machine learning can be used to create compelling art and music, and how we should go about it.
The team that created Magenta used TensorFlow, a Google machine learning library. Have a listen to Magenta’s first computer-generated song, composed without any human assistance.
In February 2016 in San Francisco, Google sold 29 paintings at a charity auction. All of them had been made by Google’s artificial intelligence.
And that’s not all AI can do. It can also drive cars, write poems, and much more.
What it can’t do is write code. Or can it?
In December 2015, Google released the TensorFlow library to the public as open-source software for machine learning.
Why did Google give away this powerful piece of software for free? According to Prof. Christian Bauckhage of the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) in Germany, you can find the answer in Google’s history. About 10 years ago, Google open-sourced the Android operating system for smartphones. Nowadays, 85% of all smartphones in the world run on Android.
“This is what they are trying to do right now. 10 years from now, the idea is that 80% of AI will run on Google TensorFlow,” Prof. Bauckhage said at the CeBIT conference in 2016.
A few weeks after Google’s release, Microsoft open-sourced its deep learning framework, the Computational Network Toolkit, now called the Microsoft Cognitive Toolkit.
After another few weeks, Facebook open-sourced its own artificial intelligence library, called Caffe2.
In 2015, Andrej Karpathy, a former Stanford Computer Science PhD student and now Director of AI at Tesla, used recurrent neural networks to generate code. He took a Linux repository (all the source and header files), combined it into one giant document of more than 400 MB of code, and trained the RNN on it.
He left it running for the night. In the morning, he got this:
Sample code generated by Artificial Intelligence
Literally overnight, the AI generated code including functions and function declarations. It had parameters, variables, loops, and correct indentation. Brackets were opened and later closed. It even had comments.
The AI made some mistakes of course. In some instances, variables were not used. In others, variables which had not been declared earlier were used. But Karpathy was satisfied with the result.
“The code looks really quite great overall. Of course, I don’t think it compiles but when you scroll through the generated code it feels very much like a giant C code base,” Karpathy wrote on his blog.
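Karpathy’s experiment used a multi-layer RNN, but the core idea, predicting the next character from what came before, can be sketched with a much simpler model. The bigram frequency model below is a toy stand-in (not a neural network), and the C snippet it trains on is invented for illustration:

```python
# Toy character-level language model: a bigram frequency table rather
# than an RNN, but built on the same "predict the next character" idea.
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count how often each character follows each other character."""
    model = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start, length):
    """Greedily emit the most likely next character at every step."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return "".join(out)

# A tiny invented C snippet standing in for Karpathy's 400 MB Linux corpus.
corpus = "int main(void) { int i; for (i = 0; i < 10; i++) { } return 0; }"
model = train_bigram_model(corpus)
print(generate(model, "i", 5))  # prints "int in"
```

Even this crude model picks up surface statistics of C, which is why a far more capable LSTM trained on hundreds of megabytes can produce output that “feels like” real code.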
Microsoft and Cambridge University researchers have developed an artificial intelligence that can write code, called DeepCoder.
The tool can write working code after searching through a huge code database. It then tries to make the best possible arrangement for the harvested code fragments and improves its efficiency over time.
Yet, this doesn’t mean the AI steals code, or copy-pastes it from existing software, or searches the internet for solutions. The creators of DeepCoder expect that it will participate in programming competitions in the near future.
Sample program in Domain Specific Language (DSL) created by DeepCoder
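DeepCoder’s actual DSL and guided search are more sophisticated, but the core loop, enumerating candidate programs over a small set of primitives until one reproduces the given input/output examples, can be sketched in a few lines. The primitives, function names, and examples below are invented for illustration:

```python
# Toy program synthesis by brute-force search, in the spirit of
# DeepCoder: find a sequence of DSL primitives matching the examples.
from itertools import product

PRIMITIVES = {
    "sort":    sorted,
    "reverse": lambda xs: xs[::-1],
    "double":  lambda xs: [x * 2 for x in xs],
    "drop_negatives": lambda xs: [x for x in xs if x >= 0],
}

def run(program, xs):
    """Apply each named primitive in order to the input list."""
    for name in program:
        xs = PRIMITIVES[name](xs)
    return xs

def synthesize(examples, max_len=3):
    """Search short programs first; return one consistent with all examples."""
    for length in range(1, max_len + 1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, i) == o for i, o in examples):
                return program
    return None

# Two input/output examples; the search discovers "sort, then double".
examples = [([3, 1, 2], [2, 4, 6]), ([5, 4], [8, 10])]
print(synthesize(examples))  # prints ('sort', 'double')
```

The real system replaces this blind enumeration with a neural network that predicts which primitives are likely to appear, drastically pruning the search, but the input/output-example formulation is the same.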
According to Marc Brockschmidt of Microsoft Research, who is part of the project, such a system could be very useful to non-coders. They’d only have to describe their program idea and wait for the system to create it.
“We might end up having such a system in the next few years. But for now, DeepCoder’s capabilities are limited to programs consisting of five lines of code,” he said.
You can find DeepCoder’s documentation here.
Since this is a primarily Python-focused blog, we would be remiss if we didn’t give you at least one Python example.
In June 2016, a French engineer writing under the nickname BenjaminTD published a blog post in which he explained how he was “teaching an AI to write Python code with Python code.”
He used Long Short-Term Memory (LSTM), one of the most popular architectures of recurrent neural networks, and fed it lots of Python code drawn from libraries such as Pandas, NumPy, SciPy, Django, scikit-learn, PyBrain, Lasagne, and Rasterio. The combined file weighed 27 MB.
The AI then generated its own code. It was defining inits:
...using boolean expressions:
...and creating arrays:
If you look at the arrays carefully, you will find a syntax error. Benjamin’s code is far from perfect. But the engineer thinks that it’s not bad for a network that had to learn everything from reading example code.
“Especially considering that it is only trying to guess what is coming next character by character,” he argued in his blog post.
Diffblue, a company spun out of the University of Oxford’s Computer Science department, released a tool that lets developers harness the power of AI to generate unit tests for their code.
Programmers often see writing unit tests as a necessary evil, so the launch of the product will be a welcome respite for many of them. It is also the first time such a tool has been made available to the whole community at no cost, as Diffblue Playground and Diffblue Cover.
According to Peter Schrammel, Diffblue’s CTO, access to AI-powered automated unit testing tools had been limited to commercial enterprises before.
Diffblue’s use of AI allows it to mimic the way human developers carry out tests to make sure their code performs correctly. Moreover, the tool takes just seconds to generate the tests, and requires no extra effort from the user.
The technology behind Diffblue is a significant contribution to the developer community, as it allows anyone, from an aspiring programming student to a highly qualified professional, to save time generating tests and rely on the AI-powered tool to do the legwork for them.
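Diffblue targets Java, but the shape of the output such a generator aims for can be sketched in this blog’s language of choice: tests that pin concrete inputs to the observed behavior of the function under test. The function and test cases below are invented for illustration and are not Diffblue output:

```python
# An invented function under test, plus tests of the shape an automated
# generator typically emits: concrete inputs, observed outputs.
import unittest

def apply_discount(price, percent):
    """Reduce price by a percentage, never going below zero."""
    return max(0.0, price * (1 - percent / 100))

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_percent_keeps_price(self):
        self.assertAlmostEqual(apply_discount(99.0, 0), 99.0)

    def test_discount_over_100_clamps_to_zero(self):
        self.assertAlmostEqual(apply_discount(50.0, 150), 0.0)

if __name__ == "__main__":
    # exit=False keeps the interpreter alive after the test report
    unittest.main(exit=False)
```

Writing this boilerplate by hand is exactly the tedium such tools promise to remove: the human still decides what correct behavior is, while the machine churns out the cases.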
Another tool that takes advantage of AI to make developers’ lives easier and increase their productivity is Microsoft’s Visual Studio IntelliCode.
It’s the next-generation version of IntelliSense, the highly popular code completion tool, and was made generally available in May 2019.
While IntelliSense would provide the user with an alphabetical list of recommendations, scrolling through which could prove troublesome and time-consuming, IntelliCode recommends the most likely method or function based on the developer’s previous usage. The more it’s used, the more accurate its predictions become.
To make it effective at providing developers with contextual recommendations, the makers of IntelliCode “fed” the tool the code of thousands of open-source GitHub projects with at least 100 stars.
Although using the tool doesn’t guarantee the code will be error-free, it does enhance the coding experience and help developers boost their productivity.
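The core ranking idea, ordering completion candidates by how often the developer has actually picked them rather than alphabetically, can be sketched in a few lines. The class and method names below are invented for illustration and have nothing to do with IntelliCode’s real implementation:

```python
# Toy usage-frequency ranking for code completion candidates.
from collections import Counter

class CompletionRanker:
    def __init__(self):
        self.usage = Counter()

    def record(self, name):
        """Remember that the developer picked this completion."""
        self.usage[name] += 1

    def suggest(self, candidates):
        """Most-used candidates first; alphabetical as the tie-breaker
        (the old IntelliSense-style ordering)."""
        return sorted(candidates, key=lambda n: (-self.usage[n], n))

ranker = CompletionRanker()
for pick in ["append", "append", "extend", "append", "sort"]:
    ranker.record(pick)
print(ranker.suggest(["clear", "sort", "extend", "append"]))
# prints ['append', 'extend', 'sort', 'clear']
```

The “more it’s used, the better it gets” behavior falls out naturally: every recorded pick updates the counts that drive the next ranking.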
One of the latest tools that claims to auto-generate code using AI, helping programmers speed up their work, is GitHub Copilot.
Described as “Your AI pair programmer,” this extension to Visual Studio Code has been trained on billions of lines of public code and works with a number of frameworks and languages. The tool is powered by Codex, a new AI system built by OpenAI.
According to its creators, Copilot is fast enough to be used as you type, allowing you to quickly browse through alternative suggestions and manually edit suggested code. The tool also adapts to your edits, gradually “learning” to match your coding style and preferences.
Although many of the reviews Copilot gathered were positive, there have also been some critical voices.
The Free Software Foundation has branded the tool “unacceptable and unjust” and called for white papers that address the legal and philosophical questions raised by it.
Firstly, the Foundation argues, Copilot requires running software that is not free, such as Microsoft’s Visual Studio IDE or Visual Studio Code editor. Secondly, the tool is a “service as a software substitute,” which in practice means handing someone else power over your own computing.
The Foundation said that Copilot’s use of freely licensed software has serious implications for the free software community and that the code snippets and other elements copied from GitHub-hosted repositories could result in copyright infringement.
The fast.ai blog found that “the code Copilot writes is not very good code” and it is “generally poorly refactored and fails to take full advantage of existing solutions.”
The technology is still in an early preview. According to the blog author, to become a truly helpful tool, it would need to “go beyond just language models, to a more holistic solution that incorporates best practices around human-computer interaction, software engineering, testing, and many other disciplines.”
In November 2017, Andrej Karpathy published a blog post titled Software 2.0 in which he argued that there has been a fundamental paradigm shift in how humans build software.
According to Karpathy, there is a new trend in software development that is able to rapidly advance the process, minimize human involvement and improve our ability to solve problems.
The emergence of Software 2.0, Karpathy argued, means that developers will no longer need to write code. They will just find the relevant data and feed it into machine learning systems which will then write the required software.
A division of labor, he predicted, will ensue: “2.0 programmers will manually curate, maintain, massage, clean and label datasets,” while 1.0 programmers will “maintain the surrounding tools, analytics, visualizations, labeling interfaces, infrastructure, and the training code.”
According to Karpathy, Software 2.0 will be written in a “much more abstract, human unfriendly language,” and humans will no longer be directly involved in writing it.
Karpathy’s article attracted a lot of criticism, with some experts questioning whether software engineering, the way it’s done now, will indeed become redundant in the foreseeable future.
Instead of being made obsolete by artificial intelligence, human developers are more likely to harness its potential to reduce certain repetitive and time-consuming tasks and automate processes.
The Hollywood fiction of AI supplanting humans hasn’t come true yet. We are far from 2001: A Space Odyssey-like scenarios of rogue AI turning against its human masters and killing off space crews.
That does not stop filmmakers from generously employing the theme of an AI rebellion in their works.
But can we be so sure that real-life AI can be controlled?
In 2016, Microsoft released a Twitter bot called Tay. It was designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter. Just 16 hours after its launch, Microsoft was forced to shut Tay down because the bot had begun to post offensive tweets.
That’s not the only AI mishap on record. In early 2017, Facebook had to shut down its bots Bob and Alice. They were created to carry out conversations between humans and computers, but when the bots were directed to talk to each other, they started to communicate in a way that was impossible for people to understand.
A few months later, a Chinese chatbot called Baby Q was switched off after it started to criticize the Chinese Communist Party, calling it “a corrupt and incompetent political regime.”
So, is AI a threat or an opportunity? Elon Musk is known for his scepticism about the technology. His worry is what will happen when machines become smarter than humans.
“Even in the benign scenario, if AI is much smarter than a person, what do we do? What job do we have?” he asked.
There is no doubt that computers will be much better at programming in the near future than they are now, which brings us to a rather scary conclusion.
“It’s just a matter of time until neural networks will produce useful code. So things are looking bleak for computer scientists like me,” Prof. Bauckhage believes.
But is the future really that dark? According to Armando Solar-Lezama of MIT, tools like DeepCoder do have the potential to automate code development, but AI isn’t going to take away the jobs of developers. Instead, a system based on program synthesis can be used to automate the tedious parts of code development while the developers focus on complex tasks.
“Eventually, yes. But by that point, society will be very used to dealing with that kind of societal change. The millions of paid drivers replaced by self-driving cars will have long since forced our political and economic systems to figure out how to deal with these transitions. We have joked around the office that software development will be one of the last professions left.”
—Will Iverson, CTO at Dev9
Regardless of whether our worries are justified, the fact is that nearly a third of software developers fear that artificial intelligence will eventually take their jobs. In an Evans Data Corp. survey, 550 software programmers were asked about the most worrisome thing in their careers. The most common response (29%) was:
“I and my development efforts are replaced by artificial intelligence.”
According to Janel Garvin, CEO of Evans Data, the concern about becoming obsolete due to the spread of AI-powered tools “was also more threatening than becoming old without a pension, being stifled at work by bad management, or by seeing their skills and tools become irrelevant.”
There is no doubt that the technology will continue to develop and grow smarter. Eventually, it might become smarter than humans. How can we handle such a possibility? Stephen Hawking, too, saw a real danger in computers developing intelligence of their own. But he also offered advice:
“We urgently need to develop direct connections to the brain so that computers can add to human intelligence rather than be in opposition,” Hawking said.
Should you start looking for AI to make your software specifications a reality?
Probably not yet. It will take some time before AI is able to create actual, production-worthy code spanning more than a few lines.
Software development is an inherently complex endeavor. The process of creating code from scratch consists of a number of elements that need to blend together seamlessly to form a functional product.
Although advances in AI have been plentiful and far-reaching, the technology on its own certainly isn’t enough to replace humans, and it doesn’t look like it will be able to any time soon.
Even if AI-powered machines can be used to work in collaboration with humans to produce code, it will take some time before they can learn to interpret the business value of each feature and advise on what to develop next.
Instead of wondering whether machines will take developers’ jobs, a better use of your time is to stick with human programmers and designers who have the know-how and the creativity to deliver software your users love.