Stanley N. Cohen Papers 1948-2016



Biographical/Historical Note

By the early 1980s, U.S. Government leadership in high-performance computing was being challenged from abroad, especially by Japan. In the United States, seemingly insatiable demand on supercomputing resources came both from users at Federal labs conducting research to support agency missions and from researchers at U.S. universities conducting a wide range of investigations, particularly in science and engineering, often for Federal agencies. DOE, NASA, NIST, NOAA, NSA, and NSF (all would become HPCC agencies), and U.S. universities and corporations were early purchasers of what were usually general-purpose supercomputing systems that were expensive to buy, house, and operate. In 1985 NSF funded the establishment of five supercomputer centers to serve the university community nationwide (rather than individual campuses), and States and corporations often helped procure the systems at those centers. NIH-funded university-based researchers often computed at the NSF-funded Pittsburgh Supercomputer Center. DOE's ESnet for the energy sciences, the NASA Science Internet, and NSFNET were agency-funded coast-to-coast networks that connected their researchers to their supercomputer centers. R&D in computing, networking, and applications was often conducted in partnerships that included Federal agencies, U.S. universities, and U.S. industry.

Officially launched by the High-Performance Computing Act of 1991 (Public Law 102-194), the U.S. High Performance Computing and Communications (HPCC) Program was and is (with different names along the way) a coordinated multi-agency Federal R&D program. Its purpose "is to help ensure the continued leadership of the United States in high-performance computing and its applications." The Global Change Research Program, established by the Global Change Research Act of 1990 (P.L. 101-606), is the only other program of that type required by law.

The HPC Act was the culmination of efforts by Federal science agencies, research universities, computing systems manufacturers, telecommunications providers, the library community, and others. Those efforts included workshops, reports, HPCC activities within the Executive Branch of the U.S. Government, bills introduced in the U.S. Congress, and hearings and reports on those bills. For example, (D)ARPA, DOE, NASA, and NSF documented their growing need for "supercomputers" to tackle "Grand Challenge" science and engineering problems central to their agency missions, and for high-capacity, high-speed networks to connect the researchers that they funded to supercomputer centers. Reports often recommended improvements in procurement, access, design, capability, and algorithms and software, more trained personnel, and Federal coordination. Albert Gore Jr. (D-TN), a member of the U.S. House of Representatives from 1977 to 1985 and a U.S. Senator from 1985 to 1993, led the efforts that culminated in the passage of the Act, beginning with his Supercomputer Network Study Act of 1986 (P.L. 99-383). He introduced high-performance computing bills and held hearings on those bills in the 100th Congress (January 3, 1987-January 3, 1989), the 101st (1989-1991), and the 102nd (1991-1993).

The purpose of the HPC Act "is to help ensure the continued leadership of the United States in high-performance computing and its applications" by [1] "expanding Federal support for research, development, and application of high-performance computing" in areas including a National Research and Education Network, information infrastructure including databases and access to supercomputers, software technology and tools, applications software, computing systems and subsystems, Grand Challenges, and basic research; promoting "greater collaboration among government, Federal laboratories, industry, high-performance computing centers, and universities"; and expanding "the number of researchers, educators, and students with training in" HPC and access to HPC resources; and [2] "improving the interagency planning and coordination" of this Federal R&D "and maximizing the effectiveness of the Federal Government's high-performance computing efforts."

The Act states that the President of the United States "shall implement a National High-Performance Computing Program" and that the Director of the Office of Science and Technology Policy (OSTP) in the Executive Office of the President shall "provide for interagency coordination of the Program", "consult with academic, State, industry, and other appropriate groups conducting research on and using high-performance computing", and submit an annual report to Congress.

To implement its requirements, in September 1992 OSTP established the HPCC Program, an HPCCIT (adding Information Technology) Subcommittee, and the National Coordination Office (NCO) for HPCC, naming NLM Director Dr. Donald A.B. Lindberg as the first NCO Director. Lindberg served as NCO Director until March 1995. He served as NLM Director from 1984 until retiring in 2015. The NCO was housed at NLM from September 1992 until it moved to NSF space in December 1995. The HPCCIT Subcommittee had been established in 1990 within the Executive Branch and reported to OSTP/FCCSET/CPMES, and NLM had been a member since its earliest days. After the NSTC succeeded the FCCSET in 1993, the HPCCIT reported to the NSTC Committee on Technology. President Bill Clinton and Vice President Gore strongly supported the HPCC Program during their two terms in office (1993-2001).

The eight original HPCC agencies had the following responsibilities:
  • DARPA: R&D in fiber optic networking technologies (building on networking research begun in 1969 with ARPAnet, which later adopted the TCP/IP protocols used in the Internet)
  • NSF: Computing and networking infrastructure for university-based researchers in all science and engineering disciplines, including funding the operation of NSFNET and five supercomputer centers
  • DOE: R&D on and evaluation of HPCC systems, computational research with emphasis on energy applications, computational science education; fund the operation of ESnet and supercomputer centers at DOE labs
  • NASA: HPC R&D in computational science with emphasis on aerospace sciences, earth and space sciences, and remote exploration and experimentation; operate supercomputer center; fund NASA network operation
  • NLM: Medical applications, medical computation, network access for academic medical centers, training and career development
  • NIST: Measurement R&D for HPC systems and networks, interoperability standards and guidelines
  • EPA: Research to advance and disseminate computational techniques and software tools for ecosystem, atmospheric chemistry, and atmospheric dynamics models
  • NOAA: R&D in weather prediction and ocean sciences, particularly forecast models, computational fluid dynamics, support for agency missions; operate supercomputer centers

From FY 1992 through FY 1996 the HPCC Program was organized into five components:
  • HPCS: High Performance Computing Systems
  • NREN: National Research and Education Network
  • ASTA: Advanced Software Technology and Algorithms
  • BRHR: Basic Research and Human Resources
  • IITA: Information Infrastructure Technology and Applications, added in FY 1994

By the end of FY 1996 the Program had achieved its two quantitative goals by demonstrating (a) teraflops (trillions of floating point operations per second) computing speed technologies and (b) gigabit (billions of bits per second) networking.

By design the Program is balanced across its components and of sufficient scope to meet Program goals, build on agency strengths, accelerate technology transfer, and overcome barriers, including cost barriers.

With different technologies evolving at different speeds, beginning in FY 1997 the Program was organized into Program Component Areas, each with its own goals and schedules:
  • HECC: High End Computing and Computation, which in FY 2000 split into two PCAs: High End Computing Infrastructure and Applications (HEC I&A) and High End Computing Research and Development (HEC R&D)
  • LSN: Large Scale Networking
  • HCS: High Confidence Systems, which in FY 2000 became High Confidence Software and Systems (HCSS)
  • HuCS: Human Centered Systems, which in FY 2000 became Human Computer Interaction and Information Management (HCI&IM)
  • ETHR: Education, Training, and Human Resources, which in FY 2000 became Social, Economic, and Workforce Implications of IT (SEW)
  • SDP: Software Design and Productivity, added in FY 2000
  • CSIA: Cyber Security & Information Assurance, added in FY 2007

In FY 2016, the successor Networking and Information Technology Research and Development (NITRD) Program implemented an amended HPC Act that added responsibilities. There were 14 NITRD agencies: NSF, DoD Service research organizations, DOE Office of Science and others, NIH, DARPA, NIST, NASA, Department of Homeland Security, NOAA, DOE/NNSA, HHS/AHRQ, EPA, National Institute of Justice, and NARA. The Program reorganized again effective FY 2017.

A founding HPCC agency, NASA led the coordination of HPCC R&D in systems software for scalable parallel systems and in demonstrating teraflops performance on those systems. NASA's own software focus was on Grand Challenge applications in computational aerosciences, computational Earth and space sciences, and remote exploration for space flight. NASA supercomputing centers at Goddard Space Flight Center (GSFC) in Maryland and Ames Research Center (ARC) in California housed systems that were scaled up over time to support sustained teraflops performance on those applications. Through its innovations in performance-based award management, NASA accelerated productive use of experimental HPC systems by Grand Challenge research teams. NASA funded high-speed networks that connected those systems to each other and to Grand Challenge researchers in universities, industrial research centers, and government labs. NASA and NSF teamed to evaluate commercial parallel scalable HPC systems.

By assembling the first all-commodity Linux cluster in 1994, GSFC created a gigaflops workstation that was a less expensive alternative to proprietary hardware/software systems. This approach, called Beowulf, was soon adopted worldwide for clusters of all sizes, up to the most powerful systems.

NASA selected and funded the open-source Earth System Modeling Framework to reduce the software complexity and increase the interoperability of Federally-funded weather and climate modeling applications. The ESMF is now used worldwide for this purpose.

NASA participated in HPCC/IITA through its R&D in digital library technologies and remote sensing databases.

The Coalition for Academic Supercomputing Centers (CASC) was founded in 1989 to "further the development of a national infrastructure of large-scale computing and appropriate network resources" by the directors and other senior personnel at supercomputer centers at U.S. universities. It began with 11 supercomputer centers in 10 states.

CASC later became the Coalition for Academic Scientific Computation, reflecting its expansion beyond "big iron" vector supercomputers to include parallel systems and then clusters, beyond supercomputer centers to include university-funded research programs, and to include Federal centers. As of 2017 CASC has 86 member organizations. Researchers – and their students – at CASC centers are funded by HPCC agencies including DARPA, DOE, NASA, NIH, and NSF. Four Federal labs with ties to nearby universities – Argonne with U. Illinois at Chicago, NERSC with UC-Berkeley, PNNL with U. Washington, and ORNL with U. Tennessee at Knoxville – have been CASC members. CASC members have over the years developed relationships with HPCC agencies; an example is NOAA's work with UT-Austin on hurricane projections, including Hurricane Katrina in 2005. Representatives from HPCC agencies, especially NSF, DOE, NIH (especially NLM), DoD (especially DARPA), NASA, and OSTP have regularly spoken at the semi-annual CASC meetings.

Acronyms in this Biography/Historical Note are spelled out in the Access and Use section.