My main area of research is distributed computing. I research how to best support complex scientific applications on a variety of computational environments, including campus clusters, grids, and clouds. I have designed new algorithms for job scheduling, resource provisioning, and data storage optimization in the context of scientific workflows.
Since 2000, I have been conducting research in scientific workflows and have led the design and development of the Pegasus software, which maps complex application workflows onto distributed resources. Pegasus is used by a broad community of researchers in astronomy, bioinformatics, earthquake science, gravitational-wave physics, limnology, and other fields.
I am also the Principal Investigator for the CI CoE pilot, which provides leadership, expertise, and active support to cyberinfrastructure practitioners at NSF Major Facilities and throughout the research ecosystem. Its goal is to enable the ongoing evolution of our technologies, practices, and field, ensuring the integrity and effectiveness of the cyberinfrastructure upon which research and discovery depend.
In addition, I am interested in issues of distributed data management, high-level application monitoring, and resource provisioning in grids and clouds.
For the latest news, please check the Pegasus blog.
I am looking for a Postdoctoral Researcher to work at the intersection of Distributed Computing, Big Data, and Machine Learning.
I am looking for two Programmer Analyst IIs to work on the development of technologies that support the execution of scientific workflows on distributed systems.
I am looking for strong Ph.D. students who want to work in the area of distributed computing, with an emphasis on scientific workflows, cloud computing, and data and resource management.