Data Intensive Science

The GNA-G Data Intensive Sciences Working Group (DIS-WG) will undertake an in-depth consideration of the so-called Computing Models that encompass the workflow and operations of the participating science programs: how each program processes, distributes and makes data available for analysis, and how the data resulting from analyses is propagated and shared.

The WG will work with the science programs’ data and computing management teams to formulate, design and prototype the new network services and new modes of R&E network operations that coordinate the use of network, computing and storage resources distributed across a global footprint.

Apart from prototyping the pre-production and production systems, this WG will aim to serve the users and partner projects by helping to meet their needs for network resources as part of their scientific workflows. The “users” can be seen from (at least) two points of view: groups working on behalf of a project to distribute and make the necessary datasets available for analysis, while developing mechanisms such as data placement and caching; and individuals who want to access, and sometimes move, data as part of their analysis work.

Objectives:

  • Coordinating among the needs, methods and technologies of major data-driven science programs;
  • Prototyping systems and technologies supporting workflows and associated network services;
  • Developing a global architecture for coordinated data distribution in science workflows; and
  • Supporting research data discovery.


Contact details:

Working Group Chairs

  • Harvey Newman, Caltech
  • Tom Lehman, ESnet
  • Julio Ibarra, AmLight/FIU

Working Group Contributors

  • LHCONE
  • AmLight
  • StarLight
  • Pacific Wave
  • Pacific Research Platform (PRP/NRP)
  • ESnet
  • Internet2
  • GEANT
  • NORDUnet
  • SURF
  • KISTI/KREONet
  • AARNet

GNA-G Leadership Team Liaison

  • Harvey Newman, Caltech

Resources and more information

A major focus of the group is to address the growing demand for network-integrated workflows, comprehensive cross-institution data management, automation, resource management, and the move towards federated infrastructures encompassing networking, compute, and storage.

This working group is distinguished by its goals to share, deploy and develop tools and services that support science programs across a multidomain footprint, starting with a global persistent testbed. Members of the WG will, at the same time, partner in joint development of generally useful tools and systems that help operate and manage research and education networks with limited resources across national and regional boundaries.

A complementary mission of the DIS-WG is to provide a forum and venue for raising awareness and mutual understanding, and for sharing experience and tools between the network community and the science communities, in the process of meeting the needs of both.

One driver of the immediate need for this WG is the exponential growth of network traffic, together with the projected network capacity and distributed workflow requirements of major data intensive science programs in the coming years: needs that cannot be met through technology evolution and price/performance improvements alone within a constant budget. This is especially true for the next phase of the Large Hadron Collider (LHC) program, the High Luminosity LHC (HL-LHC), where the experiments have called for terabit/sec links along intercontinental paths by approximately 2028. The same considerations apply to programs with similar needs, such as the Vera Rubin Observatory, DUNE at LBNF, and the Square Kilometre Array (SKA).
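As a rough illustration of the budget argument above, the sketch below compares demand growing faster than the capacity a flat budget can buy. The growth rates are illustrative assumptions, not figures from the WG or the science programs:

```python
# Illustrative model of the capacity/demand gap described above.
# ASSUMPTIONS (not WG figures): traffic demand grows ~45%/year, while a
# constant budget buys ~25% more capacity per year via price/performance gains.

def project(initial_tbps: float, annual_growth: float, years: int) -> list[float]:
    """Return year-by-year projections starting from initial_tbps."""
    return [initial_tbps * (1 + annual_growth) ** y for y in range(years + 1)]

demand = project(initial_tbps=1.0, annual_growth=0.45, years=8)    # program needs
capacity = project(initial_tbps=1.0, annual_growth=0.25, years=8)  # flat-budget capacity

for year, (d, c) in enumerate(zip(demand, capacity)):
    status = "OK" if c >= d else "SHORTFALL"
    print(f"year {year}: demand {d:5.1f} Tb/s, affordable capacity {c:5.1f} Tb/s -> {status}")
```

Under any such pair of rates the gap compounds every year, which is why the WG emphasizes coordinated use of network, computing and storage resources rather than capacity growth alone.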

Mission Goals of the Group


The mission of this WG spans a diverse set of themes: 1) coordinating among the needs, methods and technologies of major data-driven science programs; 2) prototyping systems and technologies supporting workflows and associated network services; 3) developing a global architecture for coordinated data distribution in science workflows; and 4) supporting research data discovery. Given the breadth of the mission, actionable milestones and deliverables will need to be addressed by subgroups and/or task forces working on specific work items, adapting to the availability of manpower and of existing tools and services that will enable short-term practical progress.

Among the partner projects, in addition to those that serve individual science programs, there are projects whose aim is to provide shared computing and storage resources, or networks, to multiple programs. In the former category a notable example is the Open Science Grid (OSG); in the latter category are LHCONE, AmLight, StarLight, Pacific Wave, the Pacific Research Platform (PRP/NRP), as well as ESnet, Internet2, GEANT, NORDUnet, SURF, KISTI/KREONet, AARNet, etc.

The GNA-G, and this WG in particular, provides a natural context for this work, given the inter-regional extent of these programs, with participating facilities and sites located in several regions of the world, and the essential need for common services that meet the needs of the scientific community using a shared set of transoceanic, national and regional networks.

The envisaged program and mission of this WG will take advantage of current development projects and the ongoing work of such projects and activities as the AutoGOLE/SENSE WG, which provides the capability to set up and allocate end-to-end network paths with bandwidth guarantees, and to coordinate the use of network resources with computing and storage resources at multiple sites. This program will also take advantage of ongoing work in other projects with similar aims, such as AmLight, Virtual Dedicated Networks, NOTED, etc., and of other paradigms such as network slicing, and will seek to provide mechanisms to interface to and/or mediate among regional and national developments as needed.
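To make the kind of service interaction envisaged here concrete, the sketch below shows a client requesting a bandwidth-guaranteed end-to-end path from an orchestrator. The endpoint URL, payload fields and token are hypothetical placeholders, not the actual AutoGOLE/SENSE API:

```python
import requests

# Hypothetical orchestrator endpoint and token; the real AutoGOLE/SENSE
# interfaces differ -- this only illustrates the shape of the interaction.
ORCHESTRATOR = "https://orchestrator.example.org/api/paths"
TOKEN = "EXAMPLE-TOKEN"

def request_path(src: str, dst: str, gbps: int, hours: int) -> str:
    """Ask the (hypothetical) orchestrator for a guaranteed end-to-end path."""
    resp = requests.post(
        ORCHESTRATOR,
        json={
            "source": src,            # e.g. a site's data transfer node
            "destination": dst,
            "bandwidth_gbps": gbps,   # guaranteed rate, within available limits
            "duration_hours": hours,  # reservation window
        },
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["path_id"]     # handle used to monitor or release the path

if __name__ == "__main__":
    path_id = request_path("caltech-dtn", "cern-dtn", gbps=100, hours=12)
    print("reserved path:", path_id)
```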

Deliverables of the Data Intensive Sciences WG

  1. Set up a group of data management and development points of contact (POCs) with the partner organizations;
  2. Develop roadmaps for the estimated requirements of the partner science programs, and a complementary roadmap of the affordable aggregate network capacity along the various routes that interconnect the partners’ sites. This implies engagement through the POCs to understand the requirements resulting from each program’s workflow, together with technology tracking, projections and operational scenarios to match the affordable capacity to the requirements;
  3. Work with the AutoGOLE/SENSE WG to define and evolve a common set of services, along with the interfaces to the partner projects’ data management software stacks and the services needed to support their workflows;
  4. Coordinate GNA-G Programmable Network efforts across and among the NSF IRNC, FABRIC, AutoGOLE and other testbeds to create an at-scale network testbed infrastructure for prototyping and development;
  5. Develop an architecture, proof-of-concept software, and demonstrations to help develop and validate the operational aspects, required parameters, and performance of the common services and interfaces to the various science programs’ workflows;
  6. Work with the Telemetry WG to define and evolve the network monitoring services needed to support the partner organizations’ workflows;
  7. Build a software infrastructure to interface with partner organizations and projects:
    • Define interfaces/APIs to work with each of a starting list of partners’ data management systems, and with the tools used for production dataset processing and distribution for analysis (a hypothetical adapter sketch follows this list);
  8. Define and develop tools that allow partner organizations to allocate bandwidth along defined paths, within available limits, coexisting with best effort services;
  9. Define and develop mechanisms and tools that allow flows to be identified and associated with a series of “priority” activities of the major partners. Under constrained conditions, provide functions that allow each partner to prioritize its allocations;
  10. Define and develop mechanisms and tools that allow fair sharing among multiple partners using the shared global testbed (a minimal fair-share sketch follows this list);
  11. Develop metrics, algorithms and services that seek to optimize operation of the testbed according to the metrics;
  12. Work with the partners to set up a process by which the methods and tools developed on the testbed are integrated into pre-production services supporting the workflows of the partners;
  13. Work to scale the prototype and pre-production services to production, on an agreed-upon timescale set by the major milestones of the partner programs.
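As one way to picture the interface work in item 7 above, a minimal Python abstraction over a partner's data management system might look like the following. Everything here is a hypothetical illustration; the actual APIs will be defined together with the partners:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class TransferRequest:
    """A dataset movement a partner's workflow wants the network to support."""
    dataset: str           # logical dataset name in the partner's catalog
    source_site: str
    dest_site: str
    volume_tb: float       # data volume to move, in terabytes
    deadline_hours: float  # workflow deadline for completion

class DataManagementAdapter(ABC):
    """Hypothetical adapter that each partner data management system
    (e.g. Rucio, used by the LHC experiments) would implement, so the WG's
    services can see upcoming transfers and report network capabilities back."""

    @abstractmethod
    def pending_transfers(self) -> list[TransferRequest]:
        """Return the transfers the partner expects in the near term."""

    @abstractmethod
    def report_path(self, request: TransferRequest, gbps: float) -> None:
        """Tell the partner system what guaranteed rate was secured."""
```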
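To make the prioritized fair sharing of items 9 and 10 concrete, here is a minimal weighted max-min ("progressive filling") sketch for a single bottleneck link. The partner names, weights, demands and capacity are illustrative assumptions, not agreed GNA-G policy:

```python
# Weighted max-min fair sharing of one link among partners: raise everyone's
# share in proportion to its weight until each partner is capped by its own
# demand or the link is full.

def weighted_max_min(capacity: float, demands: dict[str, float],
                     weights: dict[str, float]) -> dict[str, float]:
    alloc = {p: 0.0 for p in demands}
    active = set(demands)                      # partners still wanting more
    remaining = capacity
    while active and remaining > 1e-9:
        total_w = sum(weights[p] for p in active)
        # Largest uniform increment that no active partner's demand blocks.
        step = min(remaining / total_w,
                   min((demands[p] - alloc[p]) / weights[p] for p in active))
        for p in list(active):
            alloc[p] += step * weights[p]
            remaining -= step * weights[p]
            if alloc[p] >= demands[p] - 1e-9:  # demand satisfied: drop out
                active.discard(p)
    return alloc

# Example: a 400 Gb/s link shared under assumed priority weights.
print(weighted_max_min(
    capacity=400.0,
    demands={"LHC": 300.0, "SKA": 150.0, "DUNE": 50.0},
    weights={"LHC": 2.0, "SKA": 1.0, "DUNE": 1.0},
))
```

In this example DUNE's full 50 Gb/s demand is met first, after which the remaining capacity is split 2:1 between the LHC and SKA flows, illustrating how per-partner priorities and inter-partner fairness can coexist on a shared testbed.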

Partners