AI Index 2019 assesses global AI research, investment, and impact

Leaders in the AI community came together to release the 2019 AI Index report today, an annual attempt to examine the biggest trends shaping the AI industry, breakthrough research, and AI’s impact on society.

It also examines trends like AI hiring practices, private investment, AI research contributions by nation, researchers leaving academia for industry, and the role AI plays in specific industries. The report also notes strides in reducing the time it takes to train AI systems and in lowering computing costs, two of the biggest hindrances to AI adoption.

“In a year and a half, the time required to train a large image classification system on cloud infrastructure has fallen from about three hours in October 2017 to about 88 seconds in July 2019,” the report reads.

Some highlights:

  • AI is the most popular area for computer science PhD specialization, and in 2018, 21% of graduates specialized in machine learning or AI
  • From 1998 to 2018, peer-reviewed AI research grew by 300%
  • In 2019, global private AI investment was over $70 billion, including $37 billion in startup investment, $34 billion in M&A, $5 billion in IPOs, and $2 billion in minority stake deals. Autonomous vehicles led global investment in the past year ($7 billion), followed by drug and cancer research, facial recognition, video content, fraud detection, and finance
  • China now publishes as many AI journal and conference papers per year as Europe, having passed the USA in 2006
  • More than 40% of AI conference paper citations are attributed to authors from North America, and about 1 in 3 come from East Asia
  • Nations like the Netherlands, Denmark, and Argentina lead the world in the number of
  • Singapore, Brazil, Australia, Canada and India experienced the fastest growth in AI hiring from 2015 to 2019
  • The vast majority of AI patents filed between 2014 and 2018 were filed in nations like the U.S. and Canada; 94% of patents were filed in wealthy nations
  • Between 2010 and 2019, the total number of AI papers on arXiv increased twentyfold

The report is compiled by the Stanford Human-Centered AI Institute in collaboration with people from OpenAI, and originated in 2016 as part of AI 100, a century-long Stanford study of AI’s progress and impact.

“What we set out to do was to be religious about the quality and objectivity of the data,” Stanford University professor and steering committee chair Yoav Shoham told VentureBeat in a phone interview.

Shoham has been on the AI Index steering committee since the beginning and chaired the group that put the report together. Other committee members include MIT economist Erik Brynjolfsson, Partnership on AI executive director Terah Lyons, and representatives from SRI International, Harvard University, OpenAI, and the McKinsey Global Institute.

The work is intended to help the general public understand progress in the field, inform policymakers about how their countries compare with other nations, and guide business decision makers.

Now in its third year, the report draws on three times more data sources than at its launch, authors told VentureBeat, and for the first time comes with a Global AI Vibrancy tool, a way to compare countries across 34 axes.

Shoham called it premature to make national AI rankings as some previous works have done.

“It’s tempting to just do a ranking of countries, just measure some things, add a bunch of numbers, and say, you know, U.S. is number one and China is number two, and what have you,” he said. “We didn’t want to do that because when you do that, you distort things and there’s so many dimensions you could look at. And eventually, it’s a good idea to have something like a ranking but we think it’s way premature to do it.”

The Global AI Vibrancy tool gives users the choice to measure by overall numbers as well as per capita trends, revealing hot spots like Israel, which produces more per capita deep learning research than any other country, or advanced AI leaders like Finland and Singapore.

Earlier this year, a consultancy firm working with the United Nations determined that roughly 30 nations currently have national AI strategies.

For example, according to Elsevier’s Scopus, which tracks publication rates for repositories like arXiv, Europe produces more AI research papers than any other part of the world, but Israel has the highest per capita deep learning research output and the United States produces the most cited AI research.

Corporate or industry affiliation with AI research is growing, and is most likely to occur in the U.S., China, Japan, France, Germany, and the U.K.

“10 years ago, 20 years ago, all innovation happened in academia, and then industry picked up bits and pieces of it, perfected it and commercialized it. That’s no longer true. The lines are blurred and people cross over,” he said. “I think the leading academic institutions are coming to terms that this is the new normal.”

Though 60% of PhD candidates go to industry over academia today, compared to 20% in 2004, academic research still outpaces government and corporate papers, making up 92% of AI publications from China, 90% from Europe, and 85% from the U.S., according to the report.

The report also assesses progress in benchmarks and methods for tracking AI across disciplines like image classification, as well as progress in methods for training AI systems for common use cases like translation, or ActivityNet for event recognition in videos.

In some regards, Shoham says, results are mixed, as AI systems that achieve high scores on a benchmark may prove more brittle than those scores indicate.

Shoham points to conversational AI, his field of research, for an example. Some systems may perform well on a benchmark like Stanford’s SQuAD question answering test but appear to be overfit to narrow tasks.

“The thing is these are highly specialized tasks and domains, and as soon as you go out of domain, the performance drops dramatically and the community knows it,” Shoham said. “There’s a lot to be excited about genuinely, including all these systems that I mentioned, but we’re quite far away from human-level understanding of language right now. So we try to be nuanced about that in the report.”

The report also cites instances of human-level performance by AI systems, such as DeepMind’s AlphaStar beating a human at StarCraft II and the detection of diabetic retinopathy in images of eyes using deep learning.