Anton van den Hengel’s journey from intellectual property law to computer vision pioneer


Anton van den Hengel, an international pioneer in computer vision and its many applications, departed the University of Adelaide in South Australia to join Amazon as director of applied science in April 2020. He is creating a new, world-class machine-learning hub in Adelaide and supporting Amazon’s business through the development and application of state-of-the-art computer vision and scalable machine learning.

In 2018, van den Hengel became the founding director of the Australian Institute for Machine Learning (AIML), Australia’s first institute dedicated to machine learning research. By the time he left to join Amazon, AIML was 140 people strong and ranked near the top of institutional world rankings for computer vision research. He remains the part-time director of AIML’s new Centre for Augmented Reasoning, whose mission is to build core artificial intelligence (AI) capability in Australia.

Van den Hengel has authored more than 300 research papers, commercialized eight patents, and been chief investigator on research projects funded by many Fortune 500 companies.

But it could all have been so different. The young van den Hengel first got into computer science simply to support his efforts to become an intellectual property lawyer. In fact, he completed his law degree.

Amazon in Australia

Research teams in Adelaide are developing state-of-the-art, large-scale machine learning methods and applications involving terabytes of data. They work on applying ML, and particularly computer vision, to a wide spectrum of areas.

“I’d bought the suit, tie, and bright white shirt and was all ready to start my first day as an entry level lawyer,” he recalls. “Then, instead, I turned around and went straight back into the University of Adelaide. I spent the next couple of decades there.”

What followed was a master’s, then PhD in computer science and, ultimately, building up the University of Adelaide’s forerunner to AIML, the Australian Centre for Visual Technologies.

The chance to have an impact

What turned van den Hengel around was the chance to study computer vision.

“I saw the opportunity to engage with something that I realized was going to have incredible impact,” he says. Computer vision and its applications are everywhere today, but in the early 1990s, things were very different. “It’s hard to believe now but at the time there were maybe 1000 people in the world working on computer vision, at a time when there weren’t any digital cameras,” he reminisces. “Most papers in CV were at least half about how people had taken the images.”

Van den Hengel understood that humans are primarily visual animals and he clearly saw the inevitability of computers using vision to sense, and ultimately interact with, the world. “But back then, having a computer that could actually either measure or impact upon the real world was virtually unbelievable,” he says.

Since then, he says, computer vision has transformed from a heavily mathematical field with 300 people at every conference who all knew each other, to conferences of many thousands of people and auditoria full of companies trying to attract staff and sell things.

“The economic value of computer vision has gone through the roof,” he says.

Computer vision is a fundamental technology, van den Hengel says, because it relates the real world to symbols. “Humans reason about things in terms of symbols, so ‘cat’, ‘sky’, ‘car’, ‘road’, and ‘fish’ are all symbols, right? Computer vision takes visual signals from the real world and relates those signals to symbols,” he says.
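The signal-to-symbol idea can be sketched in a few lines. The toy example below is entirely illustrative (the labels and prototype colours are invented, and real computer vision uses learned models rather than mean-colour matching): it maps a raw visual signal, a grid of RGB pixels, to the nearest symbolic label.

```python
# Illustrative sketch: mapping a raw visual signal (a grid of RGB pixels)
# to a symbol via nearest-prototype matching. Prototype colours are invented.
PROTOTYPES = {
    "sky":  (110, 170, 230),   # pale blue
    "road": (90, 90, 95),      # asphalt grey
    "cat":  (180, 130, 70),    # ginger-ish
}

def mean_colour(image):
    """Average the RGB values over every pixel in a 2-D grid."""
    pixels = [px for row in image for px in row]
    n = len(pixels)
    return tuple(sum(px[c] for px in pixels) / n for c in range(3))

def signal_to_symbol(image):
    """Return the label whose prototype colour is closest to the image mean."""
    mc = mean_colour(image)
    def dist(label):
        proto = PROTOTYPES[label]
        return sum((a - b) ** 2 for a, b in zip(mc, proto))
    return min(PROTOTYPES, key=dist)

# A 2x2 "image" that is almost uniformly pale blue.
patch = [[(108, 172, 228), (112, 168, 232)],
         [(109, 171, 229), (111, 169, 231)]]
print(signal_to_symbol(patch))  # "sky"
```

The hard part, as the next paragraph explains, is that real scenes vary infinitely, which is why hand-built rules like these prototypes give way to learned models.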

“That’s been the critical missing piece of the puzzle. For decades it was predicted that by the year 2000 we would have robots doing the housework and many other ‘magical’ things, but we came up short because there’s an infinite variation of things out there in the real world and it’s much harder to get a computer to reason about our physical environment than anybody imagined.”

Looking for answers

This missing piece is tackled by a subfield of computer vision known as visual question answering (VQA). The idea is to enable computers not only to understand the content of an image (or video/livestream) in a more semantic, human-like way, but also to answer questions posed in natural language about that image. For example, “Where was this photo taken?”, “Does it look like the person on the picnic blanket is expecting someone?”, “What’s the color of the dog nearest the stop sign?”.
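A toy sketch can make the VQA setup concrete. In the example below the “vision” stage is replaced by a hand-built scene description of the kind a detector might produce, and the “language” stage by simple keyword matching; the objects, coordinates, and question handling are all invented for illustration and are nothing like a real VQA model.

```python
# Toy VQA sketch: hypothetical detections standing in for the vision stage.
SCENE = [
    {"label": "dog",       "colour": "brown", "x": 40},
    {"label": "dog",       "colour": "white", "x": 220},
    {"label": "stop sign", "colour": "red",   "x": 250},
]

def nearest(label, to_label):
    """Find the object with `label` closest (by x position) to `to_label`."""
    ref = next(o for o in SCENE if o["label"] == to_label)
    candidates = [o for o in SCENE if o["label"] == label]
    return min(candidates, key=lambda o: abs(o["x"] - ref["x"]))

def answer(question):
    """Answer one narrow class of colour questions about the scene."""
    q = question.lower()
    if ("color" in q or "colour" in q) and "dog" in q and "stop sign" in q:
        return nearest("dog", "stop sign")["colour"]
    return "unknown"

print(answer("What's the color of the dog nearest the stop sign?"))  # "white"
```

The flexibility van den Hengel describes is exactly what this sketch lacks: a real VQA system must handle questions it has never seen, posed at run-time, over images it has never seen.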

Van den Hengel is the world’s most-cited researcher in VQA by an enormous margin, with close to 22,000 citations.

“I got into it very early because I saw it as a threshold change in the way that artificial intelligence works,” van den Hengel says. “What’s interesting about VQA is that you ask the question at run-time and need the answer immediately, so it needs to be very flexible, unlike current machine learning applications, which are often fixed, single-purpose solutions to specific problems.”

In other words, it needs to be closer to true artificial intelligence – often referred to as artificial general intelligence.

In that vein, imagine a robot that could follow natural-language instructions, based on a greater understanding of what it sees around itself. It’s a sci-fi dream, but for how much longer?

In 2018, using a vision-and-language process similar to VQA, van den Hengel and a team of colleagues from across Australia developed a simulator that uses imagery taken from inside real buildings to teach virtual agents to navigate using visually grounded instructions, such as: “Head upstairs and walk past the piano through an archway directly in front. Turn right when the hallway ends at pictures and table. Wait by the moose antlers hanging on the wall.” It is only a matter of time before we can talk to our self-driving cars in a similar manner when necessary, says van den Hengel.
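The instruction-following idea can be illustrated with a toy agent (not the simulator described above, which grounds instructions in real imagery): it parses a route given as plain text into turn-and-move actions on a grid and reports where it ends up. The vocabulary and grid world here are invented for illustration.

```python
# Toy instruction-following agent on a grid; headings run clockwise.
HEADINGS = ["north", "east", "south", "west"]
STEPS = {"north": (0, 1), "east": (1, 0), "south": (0, -1), "west": (-1, 0)}

def follow(instruction, pos=(0, 0), heading="north"):
    """Execute 'forward', 'left', and 'right' tokens from the instruction."""
    h = HEADINGS.index(heading)
    x, y = pos
    for word in instruction.lower().replace(",", " ").split():
        if word == "forward":
            dx, dy = STEPS[HEADINGS[h]]
            x, y = x + dx, y + dy
        elif word == "left":
            h = (h - 1) % 4
        elif word == "right":
            h = (h + 1) % 4
    return (x, y), HEADINGS[h]

print(follow("forward, forward, right, forward"))  # ((1, 2), "east")
```

In the real task, of course, “forward” has to be grounded in what the agent sees ("walk past the piano"), which is where the vision-and-language machinery comes in.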

The power of neural networks

Rapid developments in machine learning are behind the recent supercharging of computer vision research.

“In the last 10 years of computer vision, we have essentially trained deep-learning neural networks to replace all of these lovely computer-vision algorithms that we’d previously come up with for solving a whole bunch of problems,” he says. “In fact, neural networks are so much better at it, they went from being just an interesting solution to a puzzle to being a practical solution to some of the core challenges we face.”

While at the University of Adelaide, van den Hengel applied advances in ML and computer vision to make the world better in a variety of ways. These included working with Adelaide-based medical technology company LBT Innovations to create an automated pathology machine called APAS (Automated Plate Assessment System) Independence, which can screen and interpret high volumes of pathology plates.

“There’s a shortage of trained pathologists, partly because it’s not a lot of fun sitting all day doing chemistry and looking at samples. APAS does the drudge work of the visual inspection process,” he says. The device was FDA approved in 2019.

Beyond computer vision, van den Hengel is currently the chief investigator for the Australian National Health and Medical Research Council’s Centre of Research Excellence in Healthy Housing, which is using ML to help deliver better outcomes within the Australian housing system, not only in terms of housing, but also in terms of health.

“People who are homeless suffer diseases and injuries, which put them into hospital, and homelessness can see people spiral into a set of difficult conditions that are very expensive for society to address,” he says. “It’s actually cheaper to house somebody than to fix the impact of homelessness. So where can we intervene in the housing process in a way that benefits everybody and also saves money?”

Not all of van den Hengel’s work is quite so serious, however.

“The paper I’m most happy about but that gets the least recognition is one that tells you how to build real Lego models of objects in images,” he says. “It’s got brilliant maths in it; some of my favorite maths. And it incorporates gravity, structural considerations and, you know, fantastic maths.” And did he mention the maths?

Van den Hengel has even used ML to design an IPA beer.

“Collecting the data was a real trauma: we had to drink, and rate, a lot of beer,” he laments. He named the resulting ale The Rodney, in homage to the Australian AI researcher and roboticist Rodney Brooks, whose work resulted in the Roomba vacuum cleaner.

Joining Amazon

Always an advocate for Australia on the world stage, van den Hengel was keen to play a leading role in Amazon’s research push into the country. “It was a fantastic opportunity to start a new group in Australia for a company like Amazon.”

Typically, when academics transition to Amazon, they talk about the increase in pace from academia to industry. Van den Hengel bucks that trend.

“I was running a group with 140 people, trying to make enough money to pay them, keep the doors open, deliver on projects for tens of millions of dollars, doing PR, you name it,” he says. “Here, I’ve got about 25 world-class people with PhDs who work for me and 12 interns.”

Van den Hengel noted that Amazon is a results-focused environment. “At Amazon you are expected to deliver, but you do it with an engineering team and support systems all geared towards delivering customer benefit.”

So what is van den Hengel delivering on? A current project is applying visual inspection methods to help to make sure that Amazon customers get the best fresh produce possible.

“Visual inspection is a magnificent challenge and a core problem in computer vision,” he says, “and addressing it means we can make sure that when a customer receives a delivery of, say, tomatoes, they are as perfect as can be.”

Another key project involves using computer vision and ML to develop a deeper understanding of the hundreds of millions of items in the ever-changing Amazon catalogue. The catalogue holds a trove of information, both in the text of product descriptions and in the images supplied by sellers.

“Making the most of the information contained in these two sources of information – which is essentially what humans do – is an interesting challenge, because it relies on the relationships between visual signals and symbols,” he explains, adding that cracking this challenge will help customers who are using Amazon search find the product that best matches their need “even if they’re not entirely sure how best to specify it themselves.”
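As a hypothetical sketch of combining the two sources, the snippet below ranks invented catalogue entries by fusing a text-overlap score with an image-derived colour feature. The data, feature, and scoring rule are all made up for illustration and say nothing about how Amazon search actually works.

```python
# Invented mini-catalogue: each product has text plus an image-derived feature.
CATALOGUE = [
    {"title": "red ceramic mug",   "img_colour": "red"},
    {"title": "blue travel mug",   "img_colour": "blue"},
    {"title": "red running shoes", "img_colour": "red"},
]

def score(product, query_words, query_colour):
    """Simple late fusion: add a text-overlap score to an image-match score."""
    text_score = len(set(product["title"].split()) & query_words)
    image_score = 1 if product["img_colour"] == query_colour else 0
    return text_score + image_score

def search(query, colour):
    """Return the title of the best-scoring product for the query."""
    words = set(query.lower().split())
    return max(CATALOGUE, key=lambda p: score(p, words, colour))["title"]

print(search("red mug", "red"))  # "red ceramic mug"
```

The point of the sketch is the fusion step: neither signal alone always resolves the query, which mirrors the challenge of relating visual signals and symbols that van den Hengel describes.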

Despite the considerable demands of managing a growing team, van den Hengel is determined to remain hands-on with his own research. “Amazon’s an innovative company, and really, truly innovating in a way that’s going to provide something of value to customers that nobody else can means that you need managers who deeply understand where the technology can go,” he says.

So where is the technology going?

“I think the whole retail field is moving towards a better understanding of the nature of objects in the world and how humans relate to those objects, or products,” he says. “And that’s something that computer vision is particularly well-placed to deliver.”
