Hewlett Packard Enterprise is one of the largest HPC and enterprise computing vendors in the world. The company was created when the venerable Hewlett-Packard split into two parts: Hewlett Packard Enterprise (HPE) and HP Inc.

HPE is focused on servers, storage, networking, and consulting for business and HPC markets. HPE is one of the largest players in HPC, with many systems on the Top500 list of the fastest supercomputers in the world, and its systems top the list in total performance share.

While HPE and its predecessor HP have always played a large role in HPC, the company became even more prominent with the acquisition of HPC pioneer Silicon Graphics (SGI) in 2016. But HPE’s biggest move in HPC happened in 2019 when it purchased supercomputing leader Cray. These acquisitions have greatly strengthened HPE’s position in the HPC market while fueling innovative new products and services.

HPE delivered the first and second exascale systems in the US and is working on several more.

For more information on HPE in HPC:

2023 Winter Classic: HPE

2023 HPE Mentor Interview

In our most recent update, “Triumph and Tragedy with HPL/HPCG”, we detailed how our dozen 2023 Winter Classic Invitational cluster competition teams dealt with their Linpack/HPCG module, mentored by HPE.

In this episode of our incredibly popular 2023 Winter Classic Studio Update Show, we interview the mentors behind the event, the folks who readied the systems, trained the students, and fielded their questions during the weeklong challenge.

We want to shine some light on the mentor organizations who are critical to making this competition possible. It’s not an easy job: the mentors have to provide clusters for the teams, give them logins to the boxes, teach them how to use the system, bring them up to speed on the operating environment, train them on the application(s), and, well, lots of other stuff.

HPE did an exemplary job in this, their second year of mentoring students on Linpack and HPCG. They provided a set of Frontier-like training/practice clusters for the students to work out on and shepherded the teams through the entire process.


2023 Winter Classic: Triumph & Tragedy with HPL/HPCG

HPL. HPCG. Bookends. One will show you the best possible performance from your cluster while the other will show you the worst. Running and optimizing these two foundational HPC benchmarks was the task for the twelve 2023 Winter Classic Invitational Student Cluster Competition teams.
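For readers who haven’t run HPL themselves: most of the optimization work comes down to editing its HPL.dat input file, choosing a problem size N big enough to fill memory, a block size NB that suits the BLAS library, and a P × Q grid matching the MPI rank count. A sketch of the top of an HPL.dat file follows; the values shown are illustrative placeholders, not the competition teams’ settings, and the remaining lines (thresholds and algorithm-tuning parameters) are omitted.

```
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out      output file name (if any)
6            device out (6=stdout,7=stderr,file)
1            # of problems sizes (N)
40000        Ns
1            # of NBs
192          NBs
0            PMAP process mapping (0=Row-,1=Column-major)
1            # of process grids (P x Q)
2            Ps
2            Qs
```

Note that P × Q must equal the number of MPI ranks launched, and larger N generally improves efficiency right up until the problem no longer fits in memory.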

The mentor organization, HPE in this case, provided the students with everything they needed to run and optimize the two benchmarks. This included training, access to virtual clusters, and answers to the questions that popped up during the one-week practice period. We’ll be interviewing the HPE mentor team to get a behind-the-scenes look at how this module went.

In this latest version of our Studio Update Show, Dan Olds and Addison Snell take you through the results and how they changed the leaderboard (there were big changes).

2022 Winter Classic: HPE

Justin Hotard, HPE's GM of HPC and AI, joins Dan and Addison on the show to break down the competition and discuss its importance. He also drops a bombshell by more than doubling the Brueckner Award Scholarships, adding $12,000 from HPE and $6,000 PERSONALLY to the scholarship fund. This is an amazing gesture and hugely appreciated!

HPE Mentors 2022 Winter Classic Field

As part of our continuing coverage of the 2022 Winter Classic Student Cluster Competition, we want to shine a light on the first mentor organization in the competition.

Hewlett Packard Enterprise really stepped up to the plate by teaching the twelve student teams how to use an HPE/Cray cluster and how to run and optimize the LINPACK and HPCG benchmarks.

Just to refresh your memory, the Winter Classic competition is a marathon event that exclusively features Historically Black and Hispanic universities competing in a virtual cluster competition spanning eight weeks. In addition to working with HPE on HPL/HPCG in the first week of the competition, student teams will also work with NASA, Oak Ridge National Lab, and AWS in coming weeks.

The goal of the competition is that by the end of it, students will be able to say that they have worked on real-world supercomputers, running real-world applications and benchmarks, and that they know how to optimize them. This should help pave the way for them to work in HPC, which is a great thing.

HPE did one of the best mentoring jobs we've seen in the two years of this competition. Just about every team turned in results for both benchmarks, which doesn't often happen in these competitions. To highlight their contribution, Episode 4 of our increasingly popular "2022 Winter Classic Student Cluster Competition Studio Update Show" featured the HPE team and covers why they got involved with this competition, how they configured the systems, and the training they provided to the students.


2022 Winter Classic: First Results are in!

The first results are in! Student teams turned in their HPL (Linpack) and HPCG benchmark results and we got ’em.

Twelve student teams spent last week under the tutelage of HPE learning about HPC and how to run (then optimize) HPL and HPCG. The results were outstanding. Nearly all the teams completed the task, and some of their numbers were pro-level in terms of, for example, Linpack efficiency.
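For context on what “pro-level Linpack efficiency” means: efficiency is the measured HPL score (Rmax) divided by the machine’s theoretical peak (Rpeak). A minimal sketch of the arithmetic, using hypothetical hardware numbers that are purely illustrative and not the teams’ actual systems:

```python
def theoretical_peak_gflops(cores: int, ghz: float, flops_per_cycle: int) -> float:
    """Rpeak = core count x clock speed x double-precision FLOPs per cycle per core."""
    return cores * ghz * flops_per_cycle

def linpack_efficiency(rmax_gflops: float, rpeak_gflops: float) -> float:
    """Efficiency = measured HPL result (Rmax) / theoretical peak (Rpeak)."""
    return rmax_gflops / rpeak_gflops

# Illustrative two-node example (hypothetical hardware, not competition data):
# 2 nodes x 64 cores each, 2.0 GHz, 16 FP64 FLOPs/cycle (e.g., dual AVX-512 FMA units)
rpeak = theoretical_peak_gflops(cores=2 * 64, ghz=2.0, flops_per_cycle=16)  # 4096 GFLOP/s
rmax = 3277.0  # a hypothetical measured HPL score
print(f"Efficiency: {linpack_efficiency(rmax, rpeak):.1%}")  # → 80.0%
```

Well-tuned HPL runs on CPU clusters routinely land in this high-efficiency range, which is why a team hitting it in one week of practice counts as pro-level work.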

You’ll have to watch (or fast forward) through the video below in order to get all of the results and details. But let me whet your appetite with some tidbits:

  • One of the Texas Tech teams took home the Linpack crown with a two-node score of 6,631 GFLOP/s. It was an extremely close battle. Two other teams were within two points of the leader.
  • The HPCG results were a great story. Tennessee State University took the win through either a highly skilled approach or sheer luck. We’ll ask them when we interview them. Watch the video to see what they did to top the other teams by 30% and more.

Oh yeah, that video I keep referring to? Here’s the link to it: