QUANTUM INTELLIGENCE - FROM GPU TO QPU?
- Osinto HQ
- Apr 17
Updated: Apr 19
Quantum Processing Units or QPUs hold much computational promise, but will they supersede the GPU any time soon? We take a brief look at the evolution of data centre hardware to find out, before diving into a few examples of quantum computers already deployed in data centres around the world.
Computational evolution: GPU>CPU | SSD>HDD

What's made the most difference to the performance of your computer in recent years? Most likely one (or both) of the following:
Solid State Drives (SSDs) - no spinning disks = 2-5x faster read and write speeds vs old-fashioned Hard Disk Drives (HDDs)
Graphics Processing Units (GPUs) - enjoying fancy graphics or running Large Language Models (LLMs) locally? Thank your GPU for that!
We've all enjoyed the benefits of consumer electronics with SSDs and GPUs for years. But the endless, unseen racks of data centre servers powering the cloud services we all enjoy? They remain, for the most part, stubbornly reliant on general purpose compute hardware - old school HDDs spinning up and down, paired with the computational jack-of-all-trades, the Central Processing Unit (CPU). Interconnect these general purpose servers with myriad switches and endless miles of ethernet cable and you have the (very over-simplified) makings of a cloud platform.
Multi-tenanting CPUs = good business
As CPUs have been developed with ever greater numbers of processor cores, it's become easier for Cloud Service Providers (CSPs) to rent one server out to many customers - so-called multi-tenanting - which has proved to be very good business for tech hyperscalers like AWS, Google, Microsoft et al.
By scaling enormous storage fabrics (combining HDD storage and SSD caching) they've been able to offer a very low cost solution to any company looking to store data. Keeping data ingress fees near zero and egress fees (sometimes opaquely) high, CSPs have successfully engineered high-friction vendor lock-in at staggering scale.
Migrating your data and tech stack represents a high level of business risk. Knowing this, the hyperscalers shower new customers with platform credit incentives and onboarding support - confident that churn rates are low and customer lifetime values (CLVs) are high.
But talk to a tech startup CTO and they'll also warn of the few intensely high margin services that can make remaining hostage to a cloud provider painful. Run machine learning algorithms on more exotic (e.g. GPU-accelerated) hardware and costs can and do run wildly out of control - so much so that cloud cost control is a core function within most tech teams today.
This is the result of both profit taking by the cloud providers and a generational change taking place in the architecture of data centres, driven by GPUs and demand for Artificial Intelligence (AI) applications.
GPU-accelerated server racks require c. 10x as much electrical power as racks of general purpose compute, plus liquid cooling and (preferably) adjacent SSD storage fabrics with similarly high appetites for power and high speed interconnects.
That shift is necessitating different data centre designs, sometimes ill-suited to existing sites. It's such a fundamental shift that the likes of Amazon, Google and Meta have paused and rewritten CAPEX deployment plans in the last two years, remodelling demand and re-designing data centres.
Simply put, the data centre hosting your file storage might not have the power or cooling capacity to host the GPU-accelerated AI servers or SSD-native storage fabrics that power AI workloads. And if it does, the power in particular might be prohibitively expensive - a cost passed on to you, the customer, in the form of eye-wateringly high GPU/hr rental rates.
This mismatch of infrastructure to compute for generative AI applications is what made space in the market for pure-play GPU cloud specialists like CoreWeave and Lambda Labs.
Why GPUs?
The GPU has come of age in the data centre - much to NVIDIA's delight - because GPUs are particularly well suited to running vast numbers of mathematical operations concurrently, such as the matrix multiplications that underpin generative AI models. The deep neural networks behind both the Large Language Models (LLMs) that power popular chatbots and the diffusion models used for image generation services rely heavily on such matrix operations.
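To make that concrete, here's a minimal sketch in PyTorch of the sort of operation a GPU chews through effortlessly (assuming a CUDA-capable GPU is present; the code falls back to the CPU otherwise):

```python
import torch

# Use the GPU if one is available; otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Two large random matrices, of the kind found in a transformer's layers
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

# One matrix multiplication: ~137 billion floating point operations,
# which a GPU spreads across thousands of cores at once
c = a @ b
print(c.shape, "on", device)
```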
Note here that there's surging demand from businesses to train and run these AI models, which is what's driving the push to put GPU-accelerated servers into data centres worldwide - even though they're prodigiously expensive to both buy and run (thanks to their hunger for both power and cooling water) compared with more traditional general purpose compute racks.
QPU > GPU > CPU?
There's good reason for Cloud Service Providers (CSPs) to switch out hardware less frequently than you do your laptop or smartphone; these are expensive, depreciating assets. They need to make their owners money and that means sweating them for as long as possible - around five years is the norm.
Demand for CPU compute and HDD storage isn't going away, but it's no longer the high growth, high margin segment of the market. That mantle has passed (with some caveats) to the AI-driven boom in GPU & SSD cluster deployment.
But just as the GPU+SSD era of cloud computing kicks off in earnest the abstruse spectre of quantum computing lurks ominously on the horizon. Is this a technology that could decimate the forecast returns of even the most informed investors in GPU compute?
Error prone computers
Does anybody want an exotic new breed of computing device that's prone to errors, seems near impossible to scale and is arguably worse at almost any practically useful computational task than 'traditional' silicon chip based computers?
We won't get bogged down in the bewildering detail of competing quantum computing technologies in this post, but will instead look briefly at what constitutes a Quantum Processing Unit - or QPU - so as to gauge the likelihood of this frontier technology gatecrashing NVIDIA's highly profitable GPU party.
We'll then look briefly at how some early quantum computers have already been deployed into data centres around the world, including alongside GPU-accelerated servers in innovative hybrid architectures.
Qubits - quantum bits - offer infinite possibility
The fundamental building block of today's traditional (i.e. silicon based) computers is the binary bit - a unit that is either a '1' or a '0', determined by the presence or absence of electrical current - essentially a switch that is either in an on or off position.
Quantum computers instead build upon quantum bits or 'qubits' - an abstraction used to express the quantum state of a particle. That state is not limited to a binary '0' or '1'.
One way to understand this is to use the analogy of a globe. A binary bit can only have one of two absolute values - a '1' or a '0' - equivalent to the north pole or south pole of the globe.
By contrast a qubit's value shown on the same globe would not be limited to the poles, but could be any position on the surface of the sphere. The latitudinal and longitudinal coordinates of this position equate to the precise quantum state of a given particle. The possible values are infinite for the qubit vs binary for the bit:

In simple terms this theoretically enables a quantum computer to consider many different possibilities simultaneously, at far greater speed than a traditional computing architecture.
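For those who want the textbook version of the globe analogy - known as the Bloch sphere - a single qubit state is conventionally written as follows:

```latex
% A qubit as a point on the Bloch sphere:
% \theta plays the role of latitude, \phi of longitude
\[
  |\psi\rangle \;=\; \cos\frac{\theta}{2}\,|0\rangle
  \;+\; e^{i\phi}\sin\frac{\theta}{2}\,|1\rangle,
  \qquad 0 \le \theta \le \pi,\quad 0 \le \phi < 2\pi
\]
% The poles recover the classical bit (\theta = 0 gives |0\rangle,
% \theta = \pi gives |1\rangle); a register of n qubits carries
% amplitudes over all 2^n basis states simultaneously, which is
% where the 'many possibilities at once' intuition comes from.
```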
Should functioning quantum computers live up to this promise, they could revolutionise cryptography, help financiers and traders allocate capital and resources, or enable the design and discovery of complex new drugs and materials.
But let's not get ahead of ourselves. The problem at present is that quantum computers consist of far too few of these qubit building blocks to perform useful calculations. They are also rather expensive, extremely sensitive to environmental disturbance, almost universally prone to errors and proving very hard to scale up to useful sizes.
Hence in their current form most quantum computers are no better at almost any computational task than traditional, silicon semiconductor based computers.
A $45 billion white elephant?
Just under $45 billion [1] has been invested in quantum computing (and associated technologies) globally to date. Total revenues in 2024? Less than $1.5 billion [2].
Investment has notably come from both the public and private sectors - governments are throwing billions into quantum research alongside venture capitalists. This hints at the potential geopolitical significance of a technology that could render much of today's public-key encryption moot. We'll be diving into the geopolitics of quantum computing in another post.
From qubits to Quantum Processing Units (QPUs)
There's not yet consensus as to the best way to build a quantum computer. A range of technology architectures compete, each taking a wildly different approach to constructing so-called physical qubits. From super-cooled superconducting circuits to photons manipulated by arrays of laser light - the technologies are certainly exotic, novel and specialised.
However a quantum computer encodes its qubits in subatomic particles (e.g. electrons, ions or photons), the physical qubits within it need to be combined at scale to form logical qubits - units capable of remaining coherent long enough for computational logic to be performed (e.g. when integrated into a circuit with logic gates).
A collection of such logical qubits forms a processing unit. These Quantum Processing Units or QPUs - analogous to more familiar processing units such as the CPU and GPU - are the building blocks of quantum computers. The more qubits a quantum computer has, the more powerful it's generally considered to be.
Today the top performing (publicly disclosed) quantum computers have in the region of 1,000 to 10,000 physical qubits.
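Part of the reason raw physical counts flatter real capability is redundancy: many physical qubits must be combined into each error-corrected logical one. By way of a loose classical analogy (our sketch - real quantum error correction, e.g. surface codes, is far more involved, not least because quantum states can't simply be copied), here's a repetition code in Python:

```python
import random
from collections import Counter

def encode(logical_bit: int, n: int = 5) -> list[int]:
    """Spread one 'logical' bit across n redundant 'physical' bits."""
    return [logical_bit] * n

def noisy_channel(bits: list[int], flip_prob: float = 0.2) -> list[int]:
    """Each physical bit independently flips with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits: list[int]) -> int:
    """Majority vote: correct so long as fewer than half the bits flipped."""
    return Counter(bits).most_common(1)[0][0]

received = noisy_channel(encode(1))
print(received, "->", decode(received))  # usually recovers 1
```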
Quantum computers in data centres
Exotic, high performance computing architectures aren't new. There are data centres around the world that already specialise in so-called High Performance Computing (HPC).
Incorporating non-standard hardware into data centres at experimental scale is achievable - after all, a data hall is in essence just a climate controlled box supplied with electrical power and networking. At scale it might be practical for quantum computers to fit into standard 19-inch racks, but it's not essential.
What's more important is that the machines scale to a qubit size where they can perform useful computation, and do so with error rates low enough that the results can be trusted.
Quantum-Computing-as-a-Service - QCaaS
There are already deployments of quantum computers of various types within data centres and research centres around the world. Some enable researchers and companies to pay to remotely access them like any other cloud computing resource - so-called Quantum-Computing-as-a-Service or QCaaS.
In an industry burning prodigious amounts of CAPEX and hungry for early revenue, partnerships with data centre operators and Cloud Service Providers (CSPs) are seen as an appetising route to getting money flowing back into company bank accounts.
A few examples of 'Quantum-Computing-as-a-Service' or QCaaS deployments include:
IBM quantum data centre 🇩🇪 | Superconducting
A specialised 'quantum data centre' was inaugurated in Ehningen, Germany in October 2024, with two superconducting IBM Quantum Eagle systems (127 qubits each) and a Quantum Heron-based system (potentially 156 qubits) to be added at a later date [4,5].
Quandela MosaiQ deployed in OVHcloud data centre 🇫🇷 | Photonic
Parisian startup Quandela first sold a photonic architecture MosaiQ quantum computer to European Cloud Service Provider (CSP) OVHcloud in March 2023, with deployment taking place that October. A further three machines were said to be scheduled for 2024 delivery. The company brand MosaiQ as "the first datacenter ready quantum computer", having established a production facility in Massy (south of Paris) in 2024:
The company have also been offering European researchers remote access to a 6-qubit system (since November 2024) and, as of March 2025, a 12-qubit system based on their forthcoming Lucy [6,7] - said to be "in fabrication", with deployments targeted for Q4 2025:
Oxford Quantum Circuits deployed in the UK 🇬🇧 | Superconducting
Colocation data centre company Cyxtera (now Centersquare) announced in September 2022 that Oxford Quantum Circuits would deploy a superconducting machine at their LHR3 site in Reading, UK. They additionally made the machine accessible to their portfolio of customers [9].
OQC followed this in March 2023 by announcing a deployment with one of the world's largest data centre operators - Equinix - at their TY11 site in Tokyo. Through Equinix's interconnection fabric it too is widely available to third party customers:
GPU-QPU hybrids and control software
The next challenge is to make it easy for researchers to actually use these early quantum computers - how do you interface with an exotic technology platform that may not conform to any of the hardware conventions underpinning the software tools you're accustomed to using?
If the only people able to operate a machine are those familiar with the peculiarities of how the hardware is put together, adoption isn't going to happen at any meaningful scale.
Similar barriers have long existed in the realm of High Performance Computing (HPC), and one of the main drivers of GPU-accelerated compute adoption has been NVIDIA's software moat, decades in the making - Compute Unified Device Architecture, or CUDA.
Competitor AMD's GPU hardware might be theoretically as performant as NVIDIA's in a GPU-accelerated server for example, but their software stack - ROCm - (although laudably open source) is still considered to be years behind NVIDIA's CUDA in functionality, and further still in terms of a conversant user base.
CUDA's quantum equivalent?
There's a race on to build the CUDA of quantum computing - needless to say NVIDIA are in the race - with CUDA-Q and the recently announced NVIDIA Accelerated Quantum Center (NVAQC).
We'll dive into quantum computing software in more detail at a later date, but suffice it to say that - from broad, platform agnostic efforts like IBM's Qiskit to more architecture specific solutions such as Quandela's Perceval - few are keen to simply yield the space to NVIDIA.
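To give a flavour of what these SDKs look like, here's a minimal Bell-state circuit in Qiskit, run on the local Aer simulator (assumes qiskit and qiskit-aer are installed; in principle a QCaaS provider's backend could stand in for the simulator):

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

# A two-qubit Bell state: Hadamard then CNOT entangles the qubits
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

# Run locally on the Aer simulator
sim = AerSimulator()
counts = sim.run(qc, shots=1000).result().get_counts()
print(counts)  # roughly half '00', half '11' - never '01' or '10'
```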
NVIDIA's vision of using quantum computers as system accelerators in their own right is noteworthy, however, positioning the company's existing GPU-accelerated DGX hardware as the backbone of such systems:
Announced partners for the research centre already include Quantinuum (trapped ion), QuEra (neutral atoms) and - perhaps most interestingly - Quantum Machines.
NVIDIA's next Mellanox-esque acquisition?
Israeli startup Quantum Machines specialise not in building quantum computers, but in building the bridges between quantum computers and the classical computers we all know how to use. The company's Hybrid Processing Units (HPUs) are designed to control quantum computers with high efficiency and are already tightly integrated with NVIDIA's DGX hardware, offering low latency communication from QPU to GPU/CPU.
We wouldn't be at all surprised to see NVIDIA acquire the company as they did Israel's Mellanox, folding its control hardware into their own offerings as they did with InfiniBand networking. That said, Quantum Machines have raised over $280 million in funding from investors including Intel Capital and Samsung NEXT, including a $170 million Series C announced in February 2025 [12], so they're no minnow!
Publicly announced customers include France's Alice & Bob (fresh off a monster €100 million fundraise of their own in January 2025) [11] and Australia's Diraq, who are pursuing an approach that aims to build quantum computing hardware within existing semiconductor fabrication facilities.
A step further towards specialised, 'data centre ready' quantum computing has been taken by the UK's Orca Computing. Their photonic 'PT' systems are built specifically for machine learning applications, and can be programmed directly in PyTorch - already widely used by AI researchers and practitioners the world over:
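We haven't verified the specifics of Orca's SDK, but conceptually a quantum layer 'programmed in PyTorch' slots into a model like any other module. A hypothetical sketch - the QuantumLayer below is a classical stand-in; on real hardware its forward pass would dispatch to the QPU:

```python
import torch
import torch.nn as nn

class QuantumLayer(nn.Module):
    """Hypothetical stand-in: on a real system the forward pass would
    encode inputs into photonic states, run the QPU and return samples."""
    def __init__(self, dim: int):
        super().__init__()
        self.dim = dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Placeholder computation standing in for a QPU call
        return torch.tanh(x)

# The quantum layer composes with ordinary PyTorch layers
model = nn.Sequential(nn.Linear(16, 8), QuantumLayer(8), nn.Linear(8, 2))
print(model(torch.randn(4, 16)).shape)  # torch.Size([4, 2])
```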
Sources
[1] State of the Global Quantum Industry Report 2025 [The Quantum Consortium, March 2025]
[2] Global Quantum Industry Revenue Topped $1.45 Billion in 2024 [The Quantum Consortium, March 2025]
[3] What Is a QPU? [NVIDIA, July 2022]
[4] First IBM Quantum Data Center in Europe Opens; Will Include IBM's Most Performant Quantum Systems [IBM, October 2024]
[5] IBM Launches Its Most Advanced Quantum Computers, Fueling New Scientific Value and Progress towards Quantum Advantage [IBM, November 2024]
[6] First Quandela Quantum computer delivered and installed in OVHcloud datacenter [Quandela, October 2023]
[7] EuroQCS-France: remote access to a 12-qubit Quandela system is now available for European users! [Quandela, March 2025]
[8] MosaiQ: The first datacenter ready quantum computer [Quandela, Undated]
[9] OQC Partners with Cyxtera to Improve Accessibility to Quantum Computers [Oxford Quantum Circuits, September 2022]
[10] Oxford Quantum Circuits Installing Quantum Computer in Equinix IBX® Data Center With Plans To Open Access to Businesses Globally [Equinix, March 2023]
[11] Alice & Bob Closes €100M Series B Led by Future French Champions (FFC), AVP and Bpifrance to Advance Towards a Useful Quantum Computer [Alice & Bob, January 2025]
[12] Quantum Machines Raises $170M as Its Customer Base Exceeds 50% of Companies Developing Quantum Computers [Quantum Machines, February 2025]