People of ACM - Jim Hendler

August 9, 2018

You are credited with being an originator of the semantic web, an initiative to extend the World Wide Web to include machine-readable markup, rather than solely focusing on human presentation (as traditional World Wide Web languages do). What advances have helped the semantic web develop in the last 10 years?

In his original design of the World Wide Web, ACM A.M. Turing Award winner Sir Tim Berners-Lee pointed out that his vision of the Web, from the very beginning, focused on the relationships between entities, not just their display. For example, if I have a link to a photo and the words on my page read, “This is a picture of my daughter dressed for her high school prom,” then a human reading it would know the relationship between me and the person in the picture (parent-child), would know roughly how old the child is, and so on. To a computer, however, there’s just a link, unless it can understand natural language and a great deal about the world.
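
To make that concrete, here is a minimal sketch of how the parent-child relationship in that sentence could be stated in machine-readable form, using the Python rdflib library. The URIs and the choice of schema.org properties are illustrative assumptions, not anything from the interview:

```python
# A minimal sketch: encoding "this photo shows my daughter" as
# machine-readable triples with rdflib (pip install rdflib).
# All URIs below are made up for illustration.
from rdflib import Graph, Namespace, Literal

SCHEMA = Namespace("https://schema.org/")
EX = Namespace("http://example.org/")

g = Graph()
g.bind("schema", SCHEMA)

# The relationships a human infers from the sentence, stated explicitly:
g.add((EX.jim, SCHEMA.children, EX.daughter))       # parent-child link
g.add((EX.prom_photo, SCHEMA.about, EX.daughter))   # who the picture shows
g.add((EX.daughter, SCHEMA.description,
       Literal("dressed for her high school prom")))

print(g.serialize(format="turtle"))
```

With markup like this, the parent-child relationship is an explicit, labeled edge a program can follow, rather than something it would have to infer from the surrounding English text.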

While AI has made great strides in the past few years, computers are still nowhere near humans in understanding these human relationships. Similar problems come up in e-business (which picture on the page goes with which price), social networks, photo-sharing sites, etc. In the past decade, there has been significant progress in standardizing how to include more of this information as machine-readable web markup, and in how it can be used in e-commerce, advanced search, and many other areas. “Knowledge graphs,” which are essentially the labeled links of these kinds of relations, have become an entire field in their own right, with work going on around the world in both research and application settings exploring how this information is best collected and used.
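
To give a rough sense of what those labeled links make possible, the sketch below (continuing the hypothetical graph above, with the same made-up URIs) runs a SPARQL query through rdflib to answer a question that no plain hyperlink could: which photos depict a child of a given person.

```python
# A sketch of querying a tiny knowledge graph with SPARQL via rdflib.
# The graph and URIs are the illustrative ones from the previous sketch.
from rdflib import Graph, Namespace

SCHEMA = Namespace("https://schema.org/")
EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.jim, SCHEMA.children, EX.daughter))
g.add((EX.prom_photo, SCHEMA.about, EX.daughter))

# "Which photos depict a child of ex:jim?" -- answered by following
# the labeled links, not by reading the text of any page.
results = g.query("""
    PREFIX schema: <https://schema.org/>
    PREFIX ex: <http://example.org/>
    SELECT ?photo WHERE {
        ex:jim schema:children ?child .
        ?photo schema:about ?child .
    }
""")
for row in results:
    print(row.photo)   # -> http://example.org/prom_photo
```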

Although semantic web technologies are now being deployed by large organizations such as the US Department of Defense, they are still not pervasive. What are the remaining challenges of making the semantic web universally adopted, and when might this be a reality?

This question, which is one I am often asked, reflects a common misunderstanding. In fact, the semantic web is very heavily used and nearly pervasive in most large web applications and companies. It surprises many people to learn that semantic markup, in the form of simple ontologies (machine-readable vocabularies) designed to improve search, is now found on a very large percentage of the pages crawled by major web companies. This markup, known as schema.org, is now a basic aspect of web search, and is heavily used by Google, Bing and many others. This and other semantic techniques are also used by Facebook, Amazon, Baidu, Flipkart and many others around the world as an important technology for their web platforms. Wikipedia is available in a semantic form, called DBpedia, which has been the basis of many projects, both academic and applied, and many large libraries, museums, media companies and governments around the world make their holdings available as “Linked Open Data,” another form of semantic web technology in wide use. Many prominent scientific data-sharing projects also make their metadata available through semantic web standards.
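
For readers who have not seen it, schema.org markup typically appears as a small JSON-LD block embedded in a page’s HTML. The sketch below shows a hypothetical example and parses it to recover the triples a search engine would see; the product details and URIs are invented, and it assumes rdflib version 6 or later, which reads JSON-LD without a separate plugin.

```python
# A sketch of the schema.org markup a page might embed, and how a
# crawler could turn it into triples. Product details are invented.
# Requires rdflib >= 6.0, which parses JSON-LD natively.
from rdflib import Graph

jsonld = """
{
  "@context": "https://schema.org/",
  "@type": "Product",
  "name": "Example Widget",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD"
  }
}
"""

g = Graph()
g.parse(data=jsonld, format="json-ld")

# Each (subject, predicate, object) triple is now machine-readable,
# which is how a search engine can pair the right price with the
# right product without parsing the page's prose.
for s, p, o in g:
    print(s, p, o)
```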

That said, many things in the original vision need to be continually updated and improved for the new ways the web is used (mobile platforms, messaging-based systems, voice agents), as well as for applications that apply AI to big data collected from the web. So the reality is here today, but there is also plenty of research left to be done.

In the book you recently co-authored with Alice Mulvehill, Social Machines: The Coming Collision of Artificial Intelligence, Social Networking and Humanity, you discussed the future of cooperative work between humans and computers. How will future human-computer interaction be different than it is today?

The phrase “social machines” has been used in a number of ways; some people use it just to describe large networks like Twitter, others to describe crowdsourcing, online games, etc., and some to talk about how AI is making computers increasingly present in our social interactions, especially when coupled with mobile phones and home devices like Alexa and Google Home. In our book we address all of these, but we particularly focus on raising awareness of both the power of modern AI and its limitations. Those limitations mean that, at least for the foreseeable future, systems that combine humans and computers are likely to be more powerful than computers alone. In the book we have a chapter outlining a future encounter with the medical system: the computer may be able to find relevant literature, or identify potential genetic tests, that would be important to a patient’s condition. However, a human doctor, with their much better understanding of human relationships, might be better able to determine when to apply that knowledge in the case of a particular patient. Similarly, when we look at a future full of autonomous cars, we’re still assuming we, the humans, will be the ones who tell them where we want to go!

In short, AI will be a disruptive technology for society over the coming decades. It will change the future of work in ways we cannot yet predict. We know there will be segments of the population put out of work, but we also know that AI can be used to improve health care, raise the quality of professional services, and even help scientists address the major challenges facing society. Our book is aimed at helping people who are not experts better understand what the tradeoffs may look like as this “double-edged sword” of AI increasingly enters our previously “humans-only” world.

You were recently appointed Chair of the ACM US Technology Policy Committee. What brought you into working on policy, and what unique role do you think the Committee can play in informing and/or shaping public policy?

Actually, my interest in policy grew as a natural extension of my work over the past 25 years. I’ve worked at DARPA and other agencies, not just developing the AI technologies described above, but helping policy makers understand their appropriate use. I worked as an advisor to the White House helping to create Data.gov, the US open data sharing platform, and met with many governments around the world to explain the benefits of open data. I’ve also served on committees or been an advisor for the US Department of Defense, NASA, the Department of Energy, the Department of Homeland Security, intelligence agencies, New York State and various policy-related non-governmental organizations on issues including the appropriate use of AI, web privacy, big data analytics, cybersecurity, net neutrality, and other policy matters.

The more I have been involved in this work, the more I have realized that there is a pressing need for technical knowledge among people working for government agencies, members of Congress and their staffs, and judges and others in the legal system. Because their work can be contentious, especially in today’s highly charged political climate, it’s been clear to me for some time that understandable technological information, presented by trusted experts in a neutral and apolitical manner, is an absolutely critical prerequisite to good government and sound public policy.

ACM’s Technology Policy Council (TPC) will be able to play precisely that role for policy makers around the world on a host of both currently pressing and newly emerging technology policy issues. Computing technology already has tremendous policy implications for voting, privacy, cybersecurity, AI and algorithmic accountability, and many other legal issues relating to the governance of computers and networks. There are also emerging technologies, such as blockchain and quantum computing, that are raising serious policy concerns.

ACM’s US Technology Policy Committee (USTPC) is able to reach out to our current membership, and beyond to the larger ACM community, to provide policy makers with exactly the kind of timely, understandable, authoritative and trustworthy guidance they need to address these very tough current and future challenges relating to computing and the impact of computer technologies on society. I’m very pleased and proud to have been picked to chair this group and to serve on ACM’s TPC. This is a terrific resource, and I look forward to helping to make ACM a trusted, “must-consult” organization for policy makers and their staffs throughout the US government and, through the ACM TPC, for governments around the world.

James A. “Jim” Hendler is the Director of the Institute for Data Exploration and Applications and the Tetherless World Professor of Computer, Web and Cognitive Sciences at Rensselaer Polytechnic Institute (RPI). He also serves as the Director of the joint Rensselaer-IBM Health Empowerment by Analytics, Learning and Semantics (HEALS) project. Hendler is the author of over 400 publications in areas including the semantic web, artificial intelligence, and agent-based computing.

Among his honors, he was named an ACM Fellow for contributions to artificial intelligence and the semantic web. In July, Hendler was appointed to a two-year term as Chair of ACM’s US Technology Policy Committee.