Robin.ly held its semi-annual conference, AI Commercialization: Trends and Challenges, on June 1, 2019. The event took place at the Computer History Museum in Mountain View, California, from 1pm to 6pm.
Renowned AI tech leaders with extensive industry experience discussed current AI applications and the pathway to commercialization across different verticals. Featured guest speakers included Tao Wang (former Co-Founder of Drive.ai), Dr. Dileep George (Co-Founder and CTO of Vicarious AI), and Dr. Yang Liu (Head of the AI Lab at LAIX). In addition, Louay Eldada (CEO and Co-Founder of Quanergy), Ruslan Belkin (CTO of Nauto), and Nalin Gupta (Co-Founder of Auro, acquired by Ridecell) joined Tao Wang in an inspiring panel discussion about the future of transportation.
The conference was led by Alex Ren, founder of Robin.ly and TalentSeer. As Alex explained during his opening speech, the purpose of this conference and of Robin.ly is to “build a comprehensive community to empower the next generation of AI Leaders, so we can help them succeed. We created this platform to connect engineers and researchers, business leaders and investors, to share, learn and get inspired by each other”.
The focus of the conference was to take a deep dive into the world of AI, questioning and examining the latest trends through the lens of commercialization. Tech leaders shared their experience and insights across different verticals during our featured talks.
Tao Wang, the former Co-Founder of Drive.ai, discussed the challenges and opportunities created by autonomous driving. The self-driving vehicle industry has come far, with Level 3 technology reaching maturity and Level 4 technology undergoing heavy research. Currently, there are still issues to be discussed and solved regarding the commercialization of Level 4 autonomous vehicles, such as how to provide stronger guarantees that dangerous situations will be avoided. Tao Wang explained that this is largely an issue of motion planning: the process of teaching an AI to navigate its physical surroundings in the safest and most efficient manner. Motion planning involves getting the AI to understand and interpret maps, as well as to react to sudden changes in its environment.
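To make the idea of motion planning concrete, here is a minimal, purely illustrative sketch: a Dijkstra search over a toy occupancy grid that finds the lowest-cost route around an obstacle. This is not Drive.ai's planner; real systems plan in continuous space with vehicle dynamics, and the grid and coordinates here are invented for illustration.

```python
from heapq import heappush, heappop

# A toy occupancy grid: 0 = free road, 1 = obstacle (e.g. a stopped vehicle).
GRID = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

def plan(start, goal):
    """Dijkstra search over the grid; returns the cheapest list of
    cells from start to goal, or None if the goal is unreachable."""
    frontier = [(0, start, [start])]  # (cost so far, cell, path taken)
    seen = set()
    while frontier:
        cost, (r, c), path = heappop(frontier)
        if (r, c) == goal:
            return path
        if (r, c) in seen:
            continue
        seen.add((r, c))
        # Expand to the four neighboring cells that are on the map and free.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
                heappush(frontier, (cost + 1, (nr, nc), path + [(nr, nc)]))
    return None

route = plan((0, 0), (2, 3))
```

The planner steers around the blocked cells while still taking the shortest available route, which is the "safest and most efficient" trade-off in miniature; reacting to a sudden change amounts to updating the grid and re-planning.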
One of the most interesting challenges is that the deep learning algorithms used for this technology operate as a black box. Only the inputs and outputs are observable, while the intermediate processing is never revealed: the internal workings of these systems are either unobservable or unexplainable given our current understanding of computer science.
Another challenge, or limitation, of deep learning is the Long Tail Problem: each rare occurrence requires substantial quantities of data for the system to either make sense of it or successfully avoid it. In relation to autonomous driving, Tao Wang used the example of a car encountering a group of people wearing dinosaur costumes. While an AI can be trained to avoid hitting people, there are still factors which might confuse it into not recognizing that they are, in fact, people. The problem is that each individual rare occurrence adds to the amount of data needed, and because rare occurrences are practically infinite in variety, the data required becomes practically infinite too.
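A small simulation makes the long tail visible. Assuming (for illustration only) that driving scenarios follow a Zipf-like frequency law, a fleet's logged data is dominated by a handful of common situations, while most rare scenarios barely appear at all, no matter how much total data is collected:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical: scenario of rank k occurs with probability proportional to 1/k.
num_scenarios = 1000
weights = [1 / k for k in range(1, num_scenarios + 1)]

# Simulate a fleet logging 10,000 driving scenarios.
observed = Counter(random.choices(range(num_scenarios), weights=weights, k=10_000))

# The most common scenario shows up over a thousand times...
common_count = observed[0]
# ...while the vast majority of rare scenarios are seen fewer than 10 times.
rare = sum(1 for k in range(num_scenarios) if observed.get(k, 0) < 10)
```

Those under-sampled tail scenarios are exactly the "dinosaur costume" cases: each one individually is vanishingly rare, so a data-hungry learner never accumulates enough examples of any of them.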
There is also a cost factor to consider, as Level 4 autonomous vehicles are expensive from both a research and an operations perspective. This does not, however, mean they are unviable. Each autonomous vehicle must be equipped with more sensors and processors than a standard modern car, and the extensive research required pushes the price up substantially further. There is also a necessary question of legal liability and compliance with road laws which needs to be examined in full. At the federal level in the US, very little legislation has been crafted with autonomous vehicles in mind; almost all laws are relevant only to human drivers. However, in 2015, the Surface Transportation Reauthorization and Reform Act was passed, which expressed a level of approval towards the autonomous driving industry, but only from a research angle. Before commercial autonomous vehicles can be approved nationwide, the US Department of Transportation still needs to make significant changes which allow for non-human drivers to be considered.
Dr. Yang Liu, Head of the AI Lab at Liulishuo (LAIX), presented her views on the intricate relationship between language and AI. Liulishuo is a company focused on advancing the AI industry from a language and linguistics perspective. Yang has published over 140 peer-reviewed scientific papers and was previously an associate professor at the University of Texas. Robin.ly has interviewed Dr. Yang Liu previously.
Language and linguistics have always been of particular interest to the AI world because they are necessary for allowing humans and AI to communicate with one another. When AI manages to grasp speech, dialogue, and linguistics at the same level we can, there will no longer be a barrier between the two. Dr. Yang Liu covered several methods used in this field, such as deep neural networks (DNN), recurrent neural networks (RNN), convolutional neural networks (CNN), and long short-term memory (LSTM) networks. A lot of this technology is already commercialized in tools like Siri, Alexa, and Google Assistant, but these merely scratch the surface of how complex language processing can get. Dr. Yang Liu’s focus is on making the technology even more intricate and versatile.
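The idea that makes RNNs and LSTMs suited to language is a hidden state that carries context from one word to the next. Here is a minimal, purely illustrative recurrent step in plain Python; the fixed weights are invented for demonstration, whereas a real RNN or LSTM layer learns them from data and adds gating machinery:

```python
import math

def rnn_step(hidden, token_vec, w_h=0.5, w_x=1.0):
    """One recurrent update: the new hidden state mixes the previous
    hidden state with the current input, squashed into (-1, 1) by tanh."""
    return [math.tanh(w_h * h + w_x * x) for h, x in zip(hidden, token_vec)]

# Process a "sentence" of 2-dimensional token vectors one token at a time;
# the hidden state accumulates information about everything seen so far.
hidden = [0.0, 0.0]
for token in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:
    hidden = rnn_step(hidden, token)
```

After the loop, `hidden` depends on the whole sequence and its order, which is what lets such models handle speech and dialogue rather than isolated words.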
Dr. Yang Liu also discussed the importance of natural language processing (NLP), the task of getting programs to recognize, understand, and interpret human languages. The goal is to get AI to treat language in the same way humans do, or at least as close as logically possible from a syntactic perspective. NLP can be used to aid with numerous problems such as speech recognition, reading comprehension, automatic language generation, and helping with presentations.
In her presentation, Dr. Yang Liu discussed BERT (Bidirectional Encoder Representations from Transformers), introduced in a 2018 Google paper. BERT’s key contribution is that it successfully applied bidirectional training to language modeling, making it an innovation in the field of NLP: its Transformer encoder (the component that reads natural language) processes an entire block of text at once, rather than reading it left-to-right or right-to-left. This greatly improves comprehension beyond that of earlier systems.
BERT’s main asset is that it is highly capable of making predictions within texts. In Masked LM (masked language modeling) pre-training, 15% of the tokens in a sequence are randomly hidden and BERT learns to predict them from the surrounding context; through Next Sentence Prediction (NSP), it learns to predict with reasonable accuracy whether one sentence follows another in a paragraph. This is huge for the NLP field, as it shows there is an AI tool which can treat language in a way somewhat similar to how humans treat it.
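The masking step of this pre-training scheme can be sketched in a few lines. This is a simplified illustration of BERT-style masked language modeling (the real recipe also sometimes replaces tokens with random words or leaves them unchanged, and operates on subword tokens); the sentence and mask rate here are just for demonstration:

```python
import random

random.seed(42)

tokens = ("the conference was held at the computer history museum "
          "in mountain view california").split()

# Randomly hide ~15% of the tokens; the model is trained to
# recover each hidden original from the surrounding context.
mask_rate = 0.15
masked, targets = [], {}
for i, tok in enumerate(tokens):
    if random.random() < mask_rate:
        masked.append("[MASK]")
        targets[i] = tok  # the label the model must predict at position i
    else:
        masked.append(tok)
```

Because the model sees the unmasked context on both sides of every `[MASK]`, the training objective is inherently bidirectional, which is the property the talk highlighted.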
Dr. Dileep George, CTO and Co-Founder of Vicarious AI, talked extensively about how we can make artificial intelligence more intelligent. Dileep George is also the Co-Founder, along with Jeff Hawkins, of Numenta, a neuroscience and machine learning organization. Dr. Dileep George explained that present-day AI is considered “old brain”, meaning that it functions in a fashion similar to relatively unintelligent animals such as birds, amphibians, rodents, and other creatures which evolved early in natural history.
Animals which we consider “old brain” are lacking in areas such as vivid imagination and sophisticated reasoning skills. Dileep George notes that while these animals are deficient in many necessary skills, they are still highly competent in their specific niches. This is where AI is currently stuck, which is still impressive to say the least, but the industry is now focused on reaching the status of “new brain”, where AI instead functions closer to primates, dolphins, and whales.
To do this, AI needs to get smarter. Dr. Dileep George is well versed in both AI and neuroscience research, giving him a unique grasp of the task at hand. He believes that studying the neocortex and its function is the key to advancing AI: a deeper dive into how living creatures function is paramount to creating powerful and multi-faceted AI. This is because, in humans and some animals, almost all of the brain’s most impressive and sophisticated capabilities derive from the neocortex. As Dileep explained, the neocortex is quite literally the “new brain”, with our more basic and simple instincts deriving from other regions of the brain.
In 2016, Dr. Dileep George and Vicarious made a huge breakthrough in AI and machine learning by developing a new neural network able to solve CAPTCHAs using fewer than 1,000 examples. The Recursive Cortical Network (RCN) can do so because it is structured in a similar fashion to the visual cortex of the human brain. To understand the significance of this, it helps to compare the RCN to Google’s convolutional neural network, which was also able to solve CAPTCHAs but needed millions of example images beforehand to do so.
Three additional autonomous driving industry leaders joined Tao Wang and Alex Ren in the panel discussion. Louay Eldada is the CEO and Co-Founder of Quanergy, an AI remote sensing technology organization whose tools detect and map environments with precision using LiDAR sensors. Ruslan Belkin is the CTO of Nauto, an AI-focused fleet safety service whose technology aids distracted drivers by alerting and coaching them to prevent collisions and dangerous accidents. Nalin Gupta is the Co-Founder of Auro, a Level 4 autonomous driving company which was recently acquired by Ridecell, an end-to-end car-sharing service. And Tao Wang is the former Co-Founder of Drive.ai, a self-driving on-demand vehicle company currently operational in Texas.
The topic of the panel was the future of transportation, with members sharing their judgments and views on the current state of AI and travel, and how it can be explored from a financial and entrepreneurial perspective. When asked, Louay Eldada explained the inevitability of the technology, stating that “autonomous vehicles are going to happen soon, regardless of different opinions”. Nalin Gupta agreed with this view, adding that he “won't put a specific time on when we will see autonomous vehicles because there are so many arguments and opinions about it. But yeah, in some cases, we will see autonomous vehicles soon”.
The significance of urban infrastructure was further discussed, to which Louay Eldada shared his view that “as a prerequisite to having fully autonomous vehicles, you need vehicle connectivity” especially within cities. Both Nalin Gupta and Ruslan Belkin agreed, with Belkin adding that “it is going to happen outside the United States because we don't invest in infrastructure, and therefore nothing's going to happen in terms of smart cities”. However, Tao Wang took a different approach, saying that we need to think about “the highest leverage in terms of changing the infrastructure”, meaning that minimal infrastructure amendments may be sufficient and, in some cases, more desirable than larger-scale changes.
A hot topic at the panel was Elon Musk’s disapproval of LiDAR sensors; in one statement he said that “Anyone relying on LiDAR is doomed”. Louay Eldada began by calling it “a comment which almost does not deserve a reply”, adding that “the fundamental premise of his argument, which is that they are too expensive for the job, is completely wrong today” as LiDAR can now cost only “a few hundred dollars”. Ruslan Belkin took the view that “it depends on how much risk you willing to take, I think Elon is willing to take more risk than other people”. Tao Wang added that “anyone who only uses cameras will not be able to eventually remove the driver”, further noting that “right now, I don't see another sensor that can beat LiDAR in its own merits”.
Perhaps the most pressing question of the conference was how to promote safety without sacrificing innovation. Tao Wang provided a strong opinion on the matter, asking “is the current testing [used by autonomous vehicle manufacturers] procedure safe?” He elaborated by saying that “the direction the industry should push to is towards more standardized tests, but in private environments, such that you can test a bunch of different scenarios and not jeopardize the complexity of these scenarios”. Nalin Gupta explained his view on the “tug of war between innovation and safety” by discussing an example: GM’s Super Cruise driver-assistance system. “GM Super Cruise is a very good example [of both innovation and safety] where they've rolled out an autopilot system. But unlike Tesla, they are doing it in a much more responsible way where they have an IR camera which is tracking the state of the driver. So in that case, you cannot fool the car by just putting your hand on the steering wheel, but really looking out the window.”
The Robin.ly semi-annual AI Commercialization: Trends and Challenges conference was a significant day for the artificial intelligence industry, as it provided both expert insights and networking potential. Not only did the conference analyze the world of AI as it currently is, but it also explored its possibilities for the future.