---
title: "Multiverse Computing Releases HyperNova 60B: Quantum-Compressed LLM Now Free on Hugging Face"
date: 2026-02-24
author: "Digital Frontier"
draft: false
categories: ["Technical"]
tags: ["model-compression", "open-weight", "llm", "hugging-face", "inference"]
description: "Multiverse Computing's HyperNova 60B 2602 compresses a 120B model to 32GB and is now free on Hugging Face."
summary: "Spanish startup Multiverse Computing released HyperNova 60B 2602, a quantum-inspired compressed model derived from OpenAI's gpt-oss-120B. At 32GB, it's roughly half the size of its parent model while claiming to outperform Mistral Large 3, and it's free on Hugging Face."
article:
  type: "analysis"
technologies: ["HyperNova 60B", "CompactifAI", "Hugging Face", "OpenAI gpt-oss-120B", "Mistral Large 3"]
keywords: ["model compression", "HyperNova 60B", "Multiverse Computing", "CompactifAI", "quantum compression", "open weight LLM", "local inference", "60B model"]
---

Multiverse Computing, a Basque Country-based startup, today released [HyperNova 60B 2602](https://huggingface.co/MultiverseComputingCAI/Hypernova-60B-2602) on Hugging Face: a free, compressed 60B-parameter model derived from OpenAI's gpt-oss-120B. The compression is performed with CompactifAI, Multiverse's proprietary technology based on tensor-network methods borrowed from quantum computing.

At 32GB, HyperNova 60B is approximately half the size of its 120B parent model. Multiverse claims lower memory usage, lower latency, and near-parity accuracy. The 2602 update adds improved tool calling and agentic coding support — areas where inference cost reduction has direct operational impact.

## Compression Details

CompactifAI applies tensor-decomposition techniques borrowed from quantum computing to reduce a model's parameter count while preserving performance. The result here is a 60B-parameter model that fits in 32GB, within reach of a single high-VRAM consumer GPU or a modest multi-GPU setup.
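
CompactifAI's exact tensor-network pipeline is proprietary, but the underlying idea of replacing a large weight matrix with a small product of factors can be sketched with a plain truncated SVD (a simpler cousin of tensor-network decomposition; the matrix sizes and rank below are illustrative, not from the release):

```python
import numpy as np

def low_rank_compress(W: np.ndarray, rank: int):
    """Approximate W with a rank-`rank` factorization W ~ A @ B."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # shape (m, rank)
    B = Vt[:rank, :]             # shape (rank, n)
    return A, B

rng = np.random.default_rng(0)
# Toy "weight matrix" with intrinsic low-rank structure plus noise.
W = rng.standard_normal((512, 16)) @ rng.standard_normal((16, 512))
W += 0.01 * rng.standard_normal((512, 512))

A, B = low_rank_compress(W, rank=16)
orig_params = W.size                     # 512 * 512 = 262144
compressed_params = A.size + B.size      # 2 * 512 * 16 = 16384
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {orig_params} -> {compressed_params} "
      f"({compressed_params / orig_params:.1%}), rel. error {rel_err:.4f}")
```

When the weights have approximate low-rank structure, the factorization stores a small fraction of the original parameters at negligible reconstruction error; real compression pipelines combine such decompositions with retraining to recover accuracy.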

| Metric | gpt-oss-120B (source) | HyperNova 60B 2602 |
|---|---|---|
| Parameters | 120B | 60B |
| Size on disk | ~64GB (est.) | 32GB |
| Tool calling | Standard | Enhanced |
| Agentic coding | Standard | Enhanced |
| License | Apache 2.0 | Free (Hugging Face) |
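
A quick sanity check on the table's sizes (assuming decimal GB and taking the ~64GB estimate at face value) shows both models average roughly 4.3 bits per parameter, which suggests the halving comes from cutting the parameter count rather than from storing weights at lower precision:

```python
def bits_per_param(size_gb: float, params_billions: float) -> float:
    """Average storage per parameter, assuming GB = 1e9 bytes."""
    return size_gb * 1e9 * 8 / (params_billions * 1e9)

src = bits_per_param(64, 120)   # gpt-oss-120B at ~64GB (est.)
hyp = bits_per_param(32, 60)    # HyperNova 60B 2602 at 32GB
print(f"gpt-oss-120B: {src:.2f} bits/param, HyperNova: {hyp:.2f} bits/param")
# Both work out to ~4.27 bits/param.
```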

## Benchmark Claims

Multiverse claims HyperNova 60B 2602 outperforms Mistral Large 3 on unspecified benchmarks. No detailed benchmark tables or methodology have been published alongside the release — independent evaluation is pending. The company has indicated it plans to open-source additional compressed models throughout 2026.

## Context

Multiverse Computing raised a $215 million Series B in June 2025 and is reportedly in discussions for a €500 million round at a €1.5 billion+ valuation. Enterprise customers include Iberdrola, Bosch, and the Bank of Canada. The company positions itself as a European sovereign AI alternative, with backing from the Spanish Agency for Technological Transformation (SETT) and the Basque regional government.

The compressed-model space is increasingly relevant as organizations look to reduce inference costs without sacrificing capability. A free 60B model that fits in 32GB lowers the barrier for local deployment and experimentation — particularly for agentic workflows where per-token costs compound quickly.
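
For readers who want to try local deployment, a minimal inference sketch using the standard Hugging Face `transformers` API; only the repo id comes from the release page, while loading flags, quantization handling, and hardware requirements are assumptions to validate against the model card:

```python
REPO_ID = "MultiverseComputingCAI/Hypernova-60B-2602"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Download the weights on first use and run a single completion."""
    # Local import so the sketch can be read without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(REPO_ID)
    # device_map="auto" shards/offloads the ~32GB of weights across
    # whatever GPUs (and CPU RAM) are available.
    model = AutoModelForCausalLM.from_pretrained(REPO_ID, device_map="auto")
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    new_tokens = out[0][inputs["input_ids"].shape[-1]:]
    return tok.decode(new_tokens, skip_special_tokens=True)
```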

## References

1. [HyperNova 60B 2602 on Hugging Face](https://huggingface.co/MultiverseComputingCAI/Hypernova-60B-2602)
2. [TechCrunch coverage](https://techcrunch.com/2026/02/24/spanish-soonicorn-multiverse-computing-releases-free-compressed-ai-model/)
3. [Multiverse Computing press release](https://multiversecomputing.com/resources/multiverse-computing-opens-full-access-to-hypernova-60b-2602-on-hugging-face)
4. [Multiverse $215M Series B (June 2025)](https://techcrunch.com/2025/06/12/multiverse-computing-raises-215m-for-tech-that-could-radically-slim-ai-costs/)
