Yanhong Annie Liu and Scott Stoller, professors in the Department of Computer Science at Stony Brook University, have been awarded $1.5 million from the National Science Foundation for their research project “From Clarity to Efficiency for Distributed Algorithms.” The funding is a four-year Computing and Communication Foundations grant, following a two-year exploratory research grant.
Liu has researched algorithms and languages for more than 20 years and now focuses on methods for developing the most efficient and reliable algorithms for distributed systems. Stoller’s research spans concurrent and distributed systems, computer security, and program analysis and verification.
Distributed systems — multiple computers communicating with each other by passing messages — are an important part of our everyday lives. All large companies from Amazon to Google to Walmart use distributed systems, which require algorithms (step-by-step procedures for solving problems) to coordinate the computers so that a service runs smoothly. These algorithms are extremely complex, yet they must be highly efficient, reliable and verifiable. That is where “clarity to efficiency” becomes crucial; it is at the center of Liu’s years of research.
“In computer science, problem solving involves going from clear specifications for what to compute to efficient implementations for how to compute,” said Liu. “Unfortunately there is a conflict between clarity and efficiency: clear specifications usually correspond to straightforward implementations, not at all efficient, while efficient implementations are usually sophisticated, not at all clear.”
Liu’s research addresses the conflict between clarity and efficiency by providing a systematic method to go from clear specifications to efficient implementations. She and her collaborators are designing and implementing a new language, DistAlgo, specifically for more efficient development of distributed algorithms. DistAlgo enables complex distributed algorithms and systems to be more easily described, efficiently implemented and, at the same time, verified to ensure that these systems perform properly on the web.
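The clarity-versus-efficiency tension can be made concrete with a small sketch in Python (an illustrative toy, not DistAlgo code; the example query and all names are invented here). A clear specification recomputes its answer from scratch on every query, while a derived efficient implementation maintains the same answer incrementally as the data changes:

```python
# Clear specification: recompute the query from scratch each time.
# Obviously correct, but scans all readings on every call.
def count_over_threshold_spec(readings, threshold):
    return sum(1 for r in readings if r > threshold)

# Derived efficient implementation: maintain the count incrementally.
# Constant work per update and per query, at the cost of less obvious code.
class CountOverThreshold:
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0

    def add(self, reading):
        if reading > self.threshold:
            self.count += 1

    def remove(self, reading):
        if reading > self.threshold:
            self.count -= 1

    def query(self):
        return self.count
```

Both versions compute the same result; the point of a systematic derivation is that the efficient version is obtained from the clear one by a mechanical transformation, rather than written and debugged by hand.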
Most people don’t think about how things work on the web, and they certainly don’t think about distributed algorithms. But these intricate processes are behind every online transaction you make. When you buy something from Amazon, for example, you enter your payment information; it is then replicated to multiple servers, rather than stored on a single server that could lose it in the event of a failure such as a network error, power outage or disk failure. If the servers end up holding different information, they must somehow reach consensus, agreeing on one correct result to ensure that your payment went through and your product can be shipped. Distributed algorithms coordinate this activity to handle and recover from all of these thorny scenarios, keeping the website running smoothly and efficiently.
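The agreement step described above can be illustrated with a toy sketch in Python (purely illustrative: real systems rely on full consensus protocols such as Paxos, and the function here is invented for this example). A client accepts a value only if a strict majority of server replies agree on it:

```python
from collections import Counter

def majority_value(replies):
    """Return the value reported by a strict majority of server replies,
    or None if no value has majority agreement."""
    if not replies:
        return None
    value, votes = Counter(replies).most_common(1)[0]
    return value if votes > len(replies) // 2 else None
```

For instance, with replies `['paid', 'paid', 'error']` from three servers, the majority value is `'paid'`, so the transaction can proceed even though one server reported a failure.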
“These algorithms are so important, but also so difficult. That’s why the research is very important,” said Liu. “We must get the algorithms right, in existing and new services, because we don’t want these services that we use every day on the web to go wrong.”