The challenges of commercialising AI research in Australia

Jonathan Chang, Silverpond CEO, talks to Dr Subhash Challa, CEO at SenSen Networks, about how to connect AI research with the real world, and why Australia is the perfect country for AI start-ups.

Jonathan: Please introduce yourself and SenSen Networks.

Subhash: I am the CEO of SenSen Networks and founded it in 2007. The focus of our company from day one was how to extract data that is embedded in video and other information streams to improve business operations. Most people think CCTV is about security and privacy, but really it is like the eyes of a business owner. They want to keep track of how things are going in their business, where they can improve, where they can add value, how they could automate some things. So there are lots of opportunities if you can turn video into accurate business information that businesses can consume. We also integrate other sensor data, like GPS data or depth information like LIDAR. So it is not just restricted to video, although video is more dominant because it is rich and captures a lot of things that people intuitively and naturally understand. Combining these information sources creates opportunities to solve problems that were once considered impossible to solve.

SenSen Networks was listed on the ASX 10 years ago and its market cap now is around $120 million. We employ 140 people, with over 100 people in India. Our R&D team is in Melbourne, which is where I am located, and we have a corporate office in Sydney. We also have sales support offices in Singapore, the USA and Canada.

Jonathan: It sounds like you have a global view of AI’s capability. What is your sense of the pace and potential?

Subhash: A particular technology development happened in 2014-15. A group of scientists and engineers came up with a way to make deep neural networks learn in a reasonable amount of time. At the same time, graphics processing units (GPUs) were getting faster and faster. So there was a serendipitous marriage between deeper neural networks and the ever-faster GPUs to train them on. They could then be applied to a bunch of different problems that are traditionally very difficult: voice recognition, image recognition, pattern recognition and so on. It was immediately recognised that this could make computers learn very fast and do some tasks that were difficult to do before.

There is some hype around AI and what it can do, but if you take the hype out, the real AI opportunity is using deep machine learning to solve real world problems that in the past were traditionally very difficult to solve. And there are a lot of problems in the world that are hard to solve, even with proper training and clever thinking from computer scientists and engineers. This opportunity is massive because you are not inventing anything new in that space. You are effectively solving old problems with a new tool.

Jonathan: Does this mean we can now solve every problem in the world?

Subhash: Everything could become a bit better, and the need is universal, but AI is not a panacea. You cannot solve everything with AI. For some things, humans still need to get involved. And deep learning is not yet at a point where unsupervised learning is practical. It still has to be heavily supervised, and you need to be clever about how you train things.

I think a lot of hype is going on in Singapore around AI being able to solve old problems, so there are AI initiatives with good budgets to spend on Smart Cities, Smart Enterprises and related initiatives. Singapore is an early adopter, so SenSen Networks has been there for a while. Australia and Canada are other early adopters. The USA is surprising; there are areas where they are very advanced and others where they are really behind. You think everything the USA does is advanced, but we go into their smart cities and they are not. There are driverless cars and Elon Musk is sending up rockets, but this is way too advanced for what the world needs now.

“The real AI opportunity is using deep machine learning to solve real world problems that in the past were traditionally very difficult to solve.”

Jonathan: How do you see the capability differences between these various places? You have set up an international team so there would have been decisions about where to locate them. And I guess there is a shift now towards remote work. So there is probably even more incentive for SenSen to consider hiring from a truly global talent pool.

Subhash: Obviously, if you are solving old problems with new tools and using some tricks of the trade, that opens up a massive commercial upside to any company that figures out how to do it. Over time, it will become democratised so everybody can do it. But at this stage, only a few people have the knowledge. That talent pool is very sparse, so these people will make a lot of money. We need a talent pool with a deep understanding of AI. Not only from a machine learning, deep learning point of view, but signal processing, data analysis and other related fields. People who are in that space have a great advantage because they are solving these problems for the first time, so they will charge a premium. Products will attract a premium, although that is balanced by the cost they remove, a cost that businesses or society were previously bearing through very inefficient methods.

It’s not just about being technologically smart. We need people who know the tricks in the front end, can take a business problem, formulate it in the context of AI, figure out what needs to be done, then outsource the related activity to someone and work with that person. That class of worker is extremely rare, even in big companies like Google and Facebook.

Jonathan: Over the last 3 to 4 years, governments and businesses have spent a lot of money trying to educate and train data scientists and machine learning engineers. Do you feel like this has not raised them to the level of competence you think is really needed in the industry?

Subhash: I do. The government is providing funding and there is a lot of hype, so people want to get into AI. They are doing degrees in, for example, data science. This needs a deep understanding of statistics, but instead students are just using a programming tool like MATLAB. That is not data science. That is just clicking some buttons. You must have insight. What is a correlated variable? How do you formulate the problem? When they have a problem to solve, a lot of people do not know what they are doing. Having a degree does not mean anything.

“You must have insight. Having a degree does not mean anything.”

Jonathan: This touches on a topic I am personally interested in, which is the challenge we face as an industry to counter the hype. How does the market trust some of the claims that are out there? SenSen competes in the roads, parking and building sectors, for example, and there must be many other companies, from start-ups to medium and large businesses, making claims about their products. How do we manage that?

Subhash: All serious clients now conduct head-to-head competitions. They say to whoever is making a claim about problem-solving to prove it. Put it in our environment and let it work. Implement it on a small scale. If it works, then we pay. And depending on how large the scale of the opportunity is, businesses might be prepared to take a punt and do it.

That is a typical procurement process nowadays. In any kind of serious AI application, you have to survive the test. And that means a real problem has to be solved better than doing it manually or any other way and it has to be done better than your competitors.

Jonathan: In some senses, many of these customers are not sophisticated enough to run a true evaluation with an accuracy metric. Do you think that, as an industry, we should move towards standardisation?

Subhash: This could be helpful, but the actual definition of accuracy varies across the board for different kinds of applications. It is very difficult to come up with a standard as there will be thousands of niche products. Saying my AI solution will work 99% of the time does not mean anything. Also, AI works in the context of an application. What activity can you do and what are the operating conditions? So a standard must be specific to the application and operating conditions.

You might be working, for example, in the roads space. Lighting, night time, daytime, speed, how fast things are moving, all these things matter. So how will you run a test session? You can give the system a bunch of faces and match them against the real world, but that has many different complexities. We just completed a trial with a government agency where they wanted to detect people using smartphones while driving. We had a camera system mounted on a gantry on the side of the road looking into the vehicle windshields. If people are driving at over 100 km/h, how can the system pick up whether drivers are using their phone or not? In that situation, what is accuracy?
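
To make that concrete, here is a minimal Python sketch of how accuracy could be reported per operating condition rather than as one headline number. The detection records, the condition labels and the breakdown logic are all hypothetical illustrations, not SenSen’s method:

```python
# A minimal sketch: accuracy broken down by operating condition instead of
# one overall figure. All records and condition labels are hypothetical.

from collections import defaultdict

# Each record: (condition, ground_truth_phone_in_use, system_prediction)
results = [
    ("day_under_80kmh", True, True),
    ("day_under_80kmh", False, False),
    ("day_over_100kmh", True, False),    # missed at high speed
    ("night_over_100kmh", True, False),  # missed at night and high speed
    ("night_over_100kmh", False, True),  # false positive in low light
    ("night_under_80kmh", False, False),
]

by_condition = defaultdict(lambda: {"correct": 0, "total": 0})
for condition, truth, predicted in results:
    bucket = by_condition[condition]
    bucket["total"] += 1
    bucket["correct"] += int(truth == predicted)

# The single headline number hides the per-condition variation shown below.
overall = sum(b["correct"] for b in by_condition.values()) / len(results)
print(f"overall accuracy: {overall:.0%}")
for condition, bucket in sorted(by_condition.items()):
    print(f"{condition}: {bucket['correct']}/{bucket['total']} correct")
```

On figures like these, a system can look respectable overall while failing badly at night or at highway speed, which is exactly what a single accuracy number conceals.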

Jonathan: So what is the best approach?

Subhash: Define a testing methodology that businesses can implement in their own evaluation. So if you are looking for matching, if you are looking for detection, whatever you are looking for, you want the use case to satisfy these criteria. That would potentially standardise tests, because we could give a tool to the people who are doing the evaluation. This is how I would test multiple vendor systems.
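
A testing tool of that kind could be as simple as a shared labelled test set plus one script every vendor is scored with. The sketch below is illustrative only; the vendor names, their outputs and the ground truth are hypothetical:

```python
# A minimal sketch of an evaluation tool a buyer could run against several
# vendor systems on the same labelled test set. All data here is hypothetical.

def precision_recall(ground_truth: list[bool], predictions: list[bool]) -> tuple[float, float]:
    """Return (precision, recall) for binary detection results."""
    tp = sum(1 for t, p in zip(ground_truth, predictions) if t and p)
    fp = sum(1 for t, p in zip(ground_truth, predictions) if not t and p)
    fn = sum(1 for t, p in zip(ground_truth, predictions) if t and not p)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Shared ground truth for the agreed test scenario.
ground_truth = [True, True, False, True, False, False, True, False]

# Each vendor's detections on exactly the same inputs.
vendor_outputs = {
    "vendor_a": [True, True, True, True, False, False, False, False],
    "vendor_b": [True, False, False, True, False, True, True, False],
}

for vendor, predictions in vendor_outputs.items():
    p, r = precision_recall(ground_truth, predictions)
    print(f"{vendor}: precision={p:.2f} recall={r:.2f}")
```

Because every vendor is scored by the same script on the same data, the buyer compares like with like instead of relying on each vendor’s self-reported accuracy.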

“It is very difficult to come up with a standard as there will be thousands of niche products.”

Jonathan: Do you think there should be an industry-wide agreement on quality? I suspect some industries would have to get together to agree on certain standards.

Subhash: I agree. Accuracy is very misunderstood and is confusing for many people. For example, with one customer we are tapping into their traffic networks, and the network creates alerts when a vehicle breaks down or somebody is driving the wrong way. The alerting is part of an AI system. The system generates so many false detections and false alarms that they have to employ 2 or 3 people just to reject them.

We designed a false alarm rejection AI system for them, the automated, AI-based False Alarm Reduction and Management System, or AI-FARMS. We learned about all the false alarms and true alarms being generated, then created a detector that rejected false alarms. This has accuracy challenges too, because it will not “reject” all false alarms and will not “accept” all true alarms. But overall, the number of false alarms has gone down. We squeezed 99% of the false alarms out of the system, but there is still a 1% false alarm rate because not everything can be filtered out, and some of the true alarms might be rejected as false alarms when all alarms are passed through the system.

So should AI-FARMS be rejected? No. Firstly, the original system has its own accuracy issues, because it raises alarms when there is nothing there and misses things when something is actually happening. Then we have our false alarm system, which also has inaccuracies. Our client struggled to understand what accuracy means in this context. We had to really put it out there and say: you are wasting three hours a day looking at false alarms, but our system will reduce it to three minutes. The productivity we have delivered and the accuracy we delivered will not negatively affect your operations; they will improve them. That is the definition of value and accuracy in that context. So it changes from use case to use case.
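
As a rough illustration of the trade-off Subhash describes, here is a small back-of-the-envelope calculation. Only the 99% false-alarm reduction comes from the interview; the alarm volumes, per-alarm review time and assumed true-alarm miss rate are hypothetical figures chosen to land near the quoted three hours and three minutes:

```python
# A back-of-the-envelope sketch of a false-alarm filter's trade-off.
# Only the 99% rejection figure is from the interview; everything else
# is a hypothetical assumption for illustration.

raw_false_alarms_per_day = 600        # hypothetical volume from the source system
raw_true_alarms_per_day = 20          # hypothetical
seconds_to_review_one_alarm = 18      # hypothetical operator effort per alarm

false_alarm_rejection_rate = 0.99     # "we squeezed 99% of the false alarms out"
true_alarm_miss_rate = 0.01           # hypothetical cost introduced by the filter

remaining_false = raw_false_alarms_per_day * (1 - false_alarm_rejection_rate)
missed_true = raw_true_alarms_per_day * true_alarm_miss_rate

before_minutes = raw_false_alarms_per_day * seconds_to_review_one_alarm / 60
after_minutes = remaining_false * seconds_to_review_one_alarm / 60

print(f"review load before filter: {before_minutes:.0f} minutes/day")
print(f"review load after filter:  {after_minutes:.1f} minutes/day")
print(f"expected true alarms lost: {missed_true:.2f} per day")
```

The point is the one made above: the filter introduces its own small error, but its value has to be judged against the cost of reviewing every raw alarm, not against perfection.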

It is great to keep publishing white papers on how to measure accuracy for different things. We will let the industry adapt them and other people comment on them and say, oh we can improve this accuracy measure a little bit more.

Jonathan: I was thinking of working with, say, Standards Australia to provide some guidance on standards around these things, in the hope of introducing trust. There are a lot of cowboys out there making wild claims about their systems. When you were measuring these false positives and false negatives, I presume there were related costs. How do you manage these costs? Do we rely on external verification by humans to help manage the risk of a bad detection causing someone harm?

Subhash: When the AI system makes an error, there is a cost. In terms of managing it, I think it is a very slippery slope when you cannot take liability for your software, whatever it is doing. So right now, and into the near future, AI should only ever be used to solve problems that are not mission-critical, which are not life or death.

Let us say somebody sells you a system they say will alert you when there is a fire at your home. That is their mistake. If your house burns down, you can go to the sensor company and say your system was supposed to pick up this fire, but it didn’t, so you have to pay me a million dollars for my house. So as a vendor, you will only ever say you will put an alarm in the customer’s house and alert them, but they still need other ways of checking. This will probably provide some comfort to them if the system works really well. Maybe it catches a fire 90% of the time when one is about to happen. Maybe another time it does not. But right now, without the system, you have no idea whether a fire is coming or not. With the system, things are 90% better.

We must show the customer we are doing something and moving them forward. We may have replaced the whole system, but that does not mean you go to sleep and do nothing yourself. This is one of the reasons driverless cars are stuck right now, even though technologically a car can drive itself, at least on some roads. The biggest issue is if it hits someone. Is it the fault of the car’s brain? Or is it the fault of the person who was suicidal and jumped in front of the car? It becomes nobody’s problem. That liability is what is holding the industry back.

It’s possible we might have the technology to solve a problem but can never implement it. There are lots of problems caused by humans doing dumb things where we can improve productivity and safety with a better approach. The cost of an error should be judged as a fraction of the cost the organisation was already bearing before it put in this technology.

In some sense, we are reducing the cost of bad things happening to customers, not eliminating it. No vendor can stand behind a system they claim will eliminate the whole cost.

“Right now, and into the near future, AI should only ever be used to solve problems that are not mission-critical, which are not life or death.”

Jonathan: I think it is true that there are some unrealistic claims around the perfection of these systems.

Subhash: AI is a great tool but you cannot take it all or throw it all out. Accept there are many great things that can be done, but do not expect it to be magical.

Jonathan: Involving humans in the loop to effectively oversee these systems is needed.

Subhash: Those kinds of applications will be the most successful. They will generate a tangible and significant reduction in effort. Something that would take one person a whole day to do might only take them an hour, with the rest of the task automated using an AI-enabled machine.

Jonathan: What do you see as the biggest challenge facing our industry? Is it talent or technology or market regulation?

Subhash: I think we are too slow. There are too many problems to solve and too few people to solve them.

Jonathan: There is a lot of university adoption of AI, plenty of PhD graduates are emerging, and academic researchers are increasingly applying deep learning to their research projects. Is that enough to solve all these problems?

Subhash: No, it is not that simple. I mean, of course, there will always be very clever people, and universities are the breeding ground for that. But most people are just getting a degree because they want a job, and in the AI area, they can get a job. The main challenge is that they are extremely disconnected from industry. They do not know what the real problems are. The drivers and motivation and how they get rewarded are all geared towards academic glory. I ran that race myself some years ago. I did great work as a Professor with my team and wrote a book and many academic papers. But nobody reads them. This is happening to hundreds of thousands of researchers in thousands of universities, and it is such a waste of effort.

As a PhD student, you are learning how to write research papers, and that is good because you are in the training phase. But in the end, you just know how to write papers. Yet there are so many real industry problems without the people to solve them because everyone is too busy doing business to step back and think about problem-solving. It is a massive disconnect.

Jonathan: Do you think the situation will change in Australia because the university system can no longer rely on foreign students to bankroll it? The Minister for Education is now suggesting universities should be looking at commercialising more research.

Subhash: You cannot commercialise research ideas using the current model. When I was in academia, I was a big critic of how IP is taken out of the university and commercialised. I did some work at a university and that idea became the foundation for my startup business. At that time, most universities (although not mine) were saying that in this situation the academic institution wanted to be a 70% stakeholder and the inventors got 30%. Then you take your idea to the venture capitalists (VCs): Series A, Series B, Series C, Series D of capital raising. Everybody takes 25% to 30% along the way. What is left for the founder, 1%? Why would the founder be motivated by that?

We are setting up research commercialisation in a way that makes it unfundable by VCs. It is completely wrong. Yet we have the idea that academic brains are so great. We have to unlock that value.

“AI is a great tool but you cannot take it all or throw it all out. Accept there are many great things that can be done, but do not expect it to be magical.”

Jonathan: You have obviously taken a lot of personal risks as an entrepreneur. Is that a challenge that academics might find difficult?

Subhash: Yes, it can be. Entrepreneurship really is all about risk-taking. I left my academic career and put my house on the line. I pivoted myself from writing unreadable research papers to having a business conversation to close a deal. I had to fundamentally transform myself into a different person. I did it because my house was on the line. It forced me to think like that. I could not go back, because if I did, my house would be gone under that business model.

You need a risk-taking culture for entrepreneurship and commercialising IP to work. Australia does not give you the chance to take a risk. It protects you. There is no incentive to take risks, so people are comfortable in Australia. There has to be some chance of losing something for a person to fight hard to not lose it.

What we need to create in Australia is an environment where we back people who take a risk — but only the people who are actually taking the risk. When you raise investment capital, you are sharing the risk and sharing the rewards. Investors are sharing a risk you have already taken. If your risk is zero, they are not sharing anything. At the moment, being an entrepreneur feels like too much risk for the money. That is the problem. If we solve that, there is an opportunity to create an ecosystem of entrepreneurs. 

Jonathan: There is a next generation of young Australians coming up who are building AI startups. Do you feel Australia is ready to enable a local AI startup ecosystem?

Subhash: Things are definitely much better than when I started, when there was nothing. It was hard to raise money then. In the last 3 to 4 years, a venture capital pool has been created by people who made money in other places. They are providing funds to back breakthrough ideas. I pretty much gave up on the venture capital route and went down the public capital raising road.

Actually, Australia is a very good place for startups. For one thing, it is the right size for all kinds of early-stage adoption. Many big companies want to test an idea on that scale. There are 20 to 30 million people here so companies can try out different concepts. There are all kinds of variables to test before they hit the US and Russia and other big markets. Our local startups already have that risk mitigated because they can build something here then take it to the world.

Australian government initiatives like giving special visas for good talent are really good too. They attract some great people who are high profile in their field.

Jonathan: Are you seeing that happening now that more overseas talent is trying to migrate to Australia?

Subhash: Definitely. Top people want to have a safe environment for their kids and also work in a very high-tech space. We have that combination in Australia, and it is unbeatable. By marketing that, we could attract a lot of high-quality people. Nobody can deny that Australia has a great medical system, fantastic lifestyle, great weather and is the right size to try out any startup idea then sell it to the world. I mean, what a business case!

I think Australia should be promoted more to overseas talent as combining a great lifestyle with being a good home for startups.

“Australia should be promoted more to overseas talent as combining a great lifestyle with being a good home for startups.”

Jonathan: Do you think the idea of conducting commercialisation and testing in the AI space in Australia then selling to the rest of the world could be difficult? Overseas situations are different in terms of regulation, for example. Face recognition is a good example. There is a different kind of race overseas and you cannot necessarily just migrate your product.

Subhash: AI is all about automation. It’s taking data then doing something with it. And that technology can definitely be developed here. But the reality is, you must be in a position to adapt your product from customer to customer, even in Australia. We have over 30 city council customers. In my first meeting with a new council customer, there is always someone saying this council is very different from any council in the world. It’s the same with other sectors. Even casinos, we hear the same thing. So all players are very different, or perceive themselves to be. We have to be mindful that whatever AI system we build must be adaptable to different cultures, markets, operating conditions and so on.

This situation is not unique to Australian companies. It exists for American companies and in other countries. In terms of building in Australia and going global, SenSen is a prime example of where we did just that. Whether I am selling in Las Vegas, Chicago, Canada, Europe, the Middle East or Singapore, everything is built, tested and trialled here in Australia before it goes anywhere else.

People do not expect a giant company to come from Australia. Atlassian has helped to change that, but we are not yet in a place where people in other countries think great companies come from Australia. I believe Australia has a phenomenal future if we play it right, because this is one of the best places in the world to be.

Jonathan: Do you have any advice for people starting an AI career today?

Subhash: My main suggestion is to get the basics right. AI is going to be here for a while, so be more than the tools person. You can learn that on YouTube. Even lecturers are focused on tools. They can tell you how to train a deep neural network, but you do not need a degree for it. You can just watch a video. If you appear to be just a tools person, that’s how you will be treated. Instead, be someone who can use AI to make a big difference. Then you will be valued.

Understand the foundations and go beyond deep learning. Deep learning is just one aspect of AI. You should know signal processing; you should know statistics and all kinds of linear algebra. Real AI students know all the foundations that make up AI and how to apply them to real-world problems. They get the trade-offs and challenges. They appreciate the complexities around understanding accuracy, how a system can recover from an error, and that AI will not always be perfect. This is a different space to the standard transaction-oriented systems that people might have seen before.

It requires what I call RI — Real Intelligence. AI is what machines do. Real thinking is what humans do. That is a core skill and young people should focus on that.

“If you appear to be just a tools person, that’s how you will be treated. Instead, be someone who can use AI to make a big difference.”

Dr Subhash Challa is the CEO and founder of ASX-listed SenSen Networks Inc. He led its technical team to develop the world’s first configurable data fusion software platform, SenDISA. This ground-breaking platform can be configured on demand to meet the requirements of a range of use cases needing multi-camera, multi-sensor data fusion and data analytics solutions.
Dr Challa has a PhD in Aerospace and Electronics Systems and has had a distinguished academic career.
 
He held Professor and Senior Principal Scientist positions at NICTA and the University of Melbourne and was Professor of Computer Systems at the University of Technology Sydney (UTS). He was a Visiting Scholar at the Harvard University Robotics Lab and a Tan Chin Tuan Fellow at Nanyang Technological University in Singapore.
Robotics, multi-sensor technology and object tracking are specialist areas for Dr Challa. He was Research Leader of the Centre for Autonomous Vehicles and Robotics and Director of NeST (Networked Sensor Technologies), both at UTS. His work has been published widely in academic publications and textbooks on these and related topics.