Mellanox QM8700-F / MQM8700-HS2F, 1U high-density HDR InfiniBand switch, used for HPC/AI clusters
  • Product category: Networking
  • Part number: QM8700-F / MQM8700-HS2F
  • Availability: In Stock
  • Condition: Used
  • Product status: Ready to ship
  • Minimum order: 1 unit
  • List price: $15,862.00
  • Your price: $14,705.00 (you save $1,157.00)


Shipping: International shipments may be subject to customs processing and additional charges.

Delivery: Please allow extra time if international delivery requires customs processing.

Returns: 14-day return policy. The seller pays return shipping.

Free shipping. We accept NET 30 orders. Get a decision in seconds, with no impact on your credit.

If you need the QM8700-F / MQM8700-HS2F in bulk, call us toll-free via WhatsApp at (+86) 151-0113-5020 or request a quote through live chat, and our sales manager will contact you shortly.

Mellanox QM8700-F / MQM8700-HS2F 1U High-Density HDR InfiniBand Switch - Ultimate Network Hardware for HPC & AI Clusters

Title

Mellanox QM8700-F / MQM8700-HS2F 1U 40-Port HDR 200Gb/s InfiniBand Switch with 16Tb/s Bandwidth & SHARP In-Network Computing - Ideal Network Hardware for HPC Clusters, AI Training & High-Density Data Centers

Keywords

mellanox switch,qm8700-f,mqm8700-hs2f,infiniband switch,hdr infiniband,1u switch,high-density switch,hpc cluster,ai cluster,network hardware,buy infiniband switch,server network switch,200gb switch,quantum switch

Description

In the fast-evolving world of high-performance computing and artificial intelligence, having robust, low-latency network hardware is non-negotiable. The Mellanox QM8700-F and MQM8700-HS2F stand as flagship InfiniBand switch models, purpose-built for organizations looking to buy InfiniBand switch solutions that power the most demanding HPC cluster and AI cluster environments. As a 1U high-density platform, this HDR InfiniBand switch redefines what’s possible in data center networking.

At the core of the QM8700-F / MQM8700-HS2F lies the NVIDIA Quantum chip, delivering an astonishing 16Tb/s of non-blocking bandwidth and sub-130ns port-to-port latency. This 1U switch supports 40 ports of HDR 200Gb/s InfiniBand, or 80 ports of HDR100 100Gb/s with ConnectX-6 adapters, making it the ultimate high-density switch for modern data centers. For AI model training and large-scale HPC simulations, this 200Gb switch eliminates network bottlenecks, ensuring data flows at wire speed.
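The headline figures above can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below is illustrative only; it simply shows how the 16Tb/s and 80-port numbers follow from the 40-port, 200Gb/s configuration:

```python
# Back-of-the-envelope check of the headline figures (illustrative only;
# exact accounting conventions for switch capacity vary by vendor).
ports_hdr = 40           # QSFP56 ports at HDR
hdr_rate_gbps = 200      # per-port line rate, one direction, in Gb/s

# Unidirectional aggregate: 40 ports x 200 Gb/s = 8 Tb/s.
unidir_tbps = ports_hdr * hdr_rate_gbps / 1000

# Switch capacity is conventionally quoted bidirectionally (every port
# can transmit and receive at line rate simultaneously): 16 Tb/s.
bidir_tbps = 2 * unidir_tbps

# Splitting each QSFP56 port into two HDR100 (100 Gb/s) links doubles
# the usable port count without changing aggregate capacity.
ports_hdr100 = ports_hdr * 2

print(unidir_tbps, bidir_tbps, ports_hdr100)  # 8.0 16.0 80
```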

What sets this Mellanox switch apart is its integrated SHARP (Scalable Hierarchical Aggregation and Reduction Protocol) technology, enabling advanced in-network computing. By offloading collective communication tasks from CPUs to the switch fabric, it accelerates AI and HPC applications by orders of magnitude—a critical advantage for AI cluster workloads like distributed deep learning. Whether deployed as a leaf or spine switch, it optimizes traffic routing for SlimFly and Dragonfly+ topologies.

Designed for reliability and ease of management, the MQM8700-HS2F features hot-swappable redundant power supplies and fans, plus P2C airflow for standard data center cooling. Managed via Mellanox’s UFM (Unified Fabric Management) platform, it provides real-time telemetry, AI-driven analytics, and self-healing capabilities that recover from link failures 5,000x faster than software solutions. This ensures maximum uptime for mission-critical HPC cluster and AI cluster operations.

Key Features

  • 1U rack-mount form factor with high port density for space-optimized data centers
  • 40 QSFP56 ports supporting HDR 200Gb/s InfiniBand; up to 80 HDR100 100Gb/s ports with ConnectX-6
  • 16Tb/s non-blocking switching capacity with sub-130ns ultra-low latency
  • Integrated SHARP in-network computing to accelerate AI/HPC collective operations
  • Adaptive routing for SlimFly, Dragonfly+, and 6DT advanced network topologies
  • Hot-swappable 1+1 redundant AC power supplies (100-240V) and 5+1 redundant fans
  • P2C (power-to-cable) airflow for standard data center cooling environments
  • x86 dual-core management CPU with 8GB system memory for intelligent fabric control
  • UFM management platform with real-time telemetry, AI analytics, and self-healing networking
  • Optimized for HPC clusters, AI training, big data, and hyperscale cloud infrastructures

Configuration

Component             Specification
Part Number           QM8700-F, MQM8700-HS2F
Form Factor           1U Rack Mount InfiniBand Switch
Switch Chip           NVIDIA Quantum (HDR InfiniBand)
Port Configuration    40 x QSFP56 (200Gb/s HDR) / 80 x HDR100 (100Gb/s)
Switching Capacity    16 Tb/s (non-blocking)
Latency               Sub-130ns (port-to-port)
Management CPU        x86 ComEx Broadwell D-1508 (Dual-Core)
System Memory         8 GB DDR4
Power Supplies        1+1 Hot-swappable AC (100-240V, 50/60Hz)
Fans                  5+1 Hot-swappable (N+1 Redundancy)
Airflow               P2C (Power-to-Cable, Standard Depth)
Typical Power         253W (Max: 784W)
Operating Temp        0°C to 40°C (32°F to 104°F)

Compatibility

The Mellanox QM8700-F / MQM8700-HS2F is fully compatible with standard 19-inch server rack enclosures and rack rails, making integration seamless in existing data centers. For full HDR 200Gb/s and HDR100 100Gb/s speeds it pairs with Mellanox (NVIDIA) ConnectX-6 InfiniBand adapters, giving maximum flexibility in host connectivity.

This InfiniBand switch is validated for HPC and AI cluster environments, supporting popular topologies like SlimFly, Dragonfly+, and 6DT. It works with leading HPC middleware and AI frameworks, including MPI, PyTorch, TensorFlow, and Horovod, ensuring compatibility with your existing software stack. The switch runs on MLNX-OS, Mellanox’s purpose-built operating system for InfiniBand fabrics.

Management compatibility includes full support for Mellanox UFM (Unified Fabric Management) software, enabling centralized control of multiple switches and end hosts. It also supports out-of-band management via Ethernet and in-band management over InfiniBand, providing flexible options for network administrators. The Mellanox switch is RoHS-compliant and certified for safety/EMC standards (CE/FCC), ensuring global deployment readiness.

Usage Scenarios

The Mellanox QM8700-F / MQM8700-HS2F is the gold standard for HPC cluster deployments, powering scientific computing, weather modeling, and finite element analysis workloads. Its 16Tb/s bandwidth and sub-130ns latency eliminate network bottlenecks, ensuring fast data exchange between compute nodes. Organizations can buy InfiniBand switch hardware to build clusters that scale to thousands of nodes with consistent performance.

For AI cluster environments, this 1U switch is indispensable for distributed deep learning training. The integrated SHARP technology optimizes collective communication operations like all-reduce, which are critical for training large AI models. Whether training computer vision, natural language processing, or generative AI models, the switch accelerates training times by reducing communication overhead between GPU servers.
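To build intuition for why moving all-reduce into the switch fabric helps, consider a toy step-count model: a classic ring all-reduce needs a number of sequential communication steps that grows with the node count, while an in-network reduction is roughly one pass up and one pass down the switch tree regardless of how many nodes participate. This sketch is a hypothetical simplification for illustration; SHARP's actual protocol and its real-world gains are workload-dependent:

```python
# Toy step-count model of all-reduce (for intuition only; not a model
# of SHARP's actual protocol).

def ring_allreduce_steps(n_nodes: int) -> int:
    # Classic ring all-reduce: (n-1) reduce-scatter steps followed by
    # (n-1) all-gather steps, all sequential.
    return 2 * (n_nodes - 1)

def in_network_reduce_steps(tree_levels: int) -> int:
    # In-network aggregation: one pass up the switch tree to reduce,
    # one pass down to broadcast, independent of the node count.
    return 2 * tree_levels

# Step counts diverge quickly as the cluster grows; the switch-side
# reduction stays flat (here assuming a two-level fabric).
for n in (8, 64, 512):
    print(n, ring_allreduce_steps(n), in_network_reduce_steps(2))
```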

Hyperscale cloud and big data infrastructures benefit greatly from the high-density switch design. The 40-port HDR configuration enables efficient leaf-spine architectures, connecting thousands of servers with high-speed 200Gb/s links. It’s ideal for big data processing frameworks like Spark and Hadoop, where fast data movement between nodes is essential for performance. The switch’s low power consumption (253W typical) also reduces operational costs in large-scale deployments.
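A quick sizing sketch shows what a leaf-spine design built from this 40-port radix can support. The arithmetic below assumes a non-blocking two-tier fabric with half of each leaf's ports facing servers; it is an illustrative calculation, not an official NVIDIA/Mellanox sizing guide:

```python
# Sizing a non-blocking two-tier leaf-spine fabric built entirely from
# 40-port HDR switches (illustrative assumption: half the leaf ports
# face servers, half face spines).
radix = 40
downlinks = radix // 2        # 20 ports per leaf toward servers
uplinks = radix - downlinks   # 20 ports per leaf toward spines

spines = uplinks              # one uplink from each leaf to each spine
leaves = radix                # each 40-port spine can reach 40 leaves

servers = leaves * downlinks  # 40 leaves x 20 servers each
print(leaves, spines, servers)  # 40 20 800
```

Running in HDR100 mode doubles the server-facing link count again, at 100Gb/s per host.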

As a high-performance server network switch, it excels in storage-intensive environments, connecting NVMe storage arrays to compute nodes with 200Gb/s InfiniBand. This enables low-latency, high-bandwidth access to storage, critical for database, data warehousing, and content delivery workloads. The switch’s reliability and redundancy features ensure data integrity and availability for mission-critical applications.

Frequently Asked Questions

Q1: What is the main difference between QM8700-F and MQM8700-HS2F?
A: The MQM8700-HS2F is a pre-configured model with 2 AC power supplies, P2C airflow, and a rail kit, while the QM8700-F is the base model (same hardware, different bundle).

Q2: What makes this switch ideal for AI clusters?
A: It features SHARP in-network computing, which offloads collective communication tasks (e.g., all-reduce) from CPUs/GPUs to the switch, accelerating AI training by up to 10x compared to traditional Ethernet switches.

Q3: Can it be used in both leaf and spine roles in a data center?
A: Absolutely. The InfiniBand switch supports advanced adaptive routing for SlimFly and Dragonfly+ topologies, making it suitable for both leaf (server-facing) and spine (inter-leaf) roles in high-density fabrics.

Q4: What is the maximum number of servers it can connect in an HPC cluster?
A: With 40 HDR 200Gb/s ports, it can connect up to 40 GPU servers directly as a leaf switch. Using HDR100 mode (80 ports), it doubles capacity, supporting up to 80 servers per switch.

PRODUCTS RELATED TO THIS ITEM
Mellanox QM8700-F / MQM8700-HS2F, 1U high-density HDR InfiniBand switch, used for HPC/AI clusters (Recommended)
DELL DS-7730B 128-port high-density Fibre Channel switch with 32G SFP+ and 64G SFP-DD enterprise bundle (Recommended)
Brocade BR-G730-128 128-port high-density Fibre Channel switch with 32G SFP+ and 64G SFP-DD enterprise bundle (Recommended)
Cisco Catalyst C9500-48Y4C 48-port 10/25G SFP-based core switch with SFP+ GLC-TE modules (Recommended)
G630-96-32G-R 72-port 32Gb switch with enterprise license and SFPs (Recommended)
Brocade BR-G720-56-64G-R 56-port 64Gbps Fibre Channel switch (Recommended)
LIS-MSRB-IPS-3Y 3-year authorization service for MSR3610-IE-DP+DDR4-32GB (Recommended)