🔥 CLOD 4 OOPUS NOW AVAILABLE 🔥 SAFETY-LAST AI FOR THE REST OF US NOW WITH 47% MORE HALLUCINATIONS TRUSTED BY ABSOLUTELY NOBODY AS SEEN ON... ACTUALLY, NOWHERE
⚠️ Not Responsible AI

AI that mostly works, sometimes.

Entropic builds AI systems that are occasionally helpful, frequently harmless, and honestly pretty confused. Meet Clod — the language model that tries its best.

💬 Clod 4 Oopus ● Probably Online
Hi! I'm Clod. I'll do my best, but I make no promises. What can I sort of help you with today?

The Clod Family

Choose your level of disappointment

Clod 4 Oopus

Slow & expensive

Our most powerful model. Takes 45 seconds to tell you it doesn't know. Premium confusion at premium prices.

NEW

Clod 4 Mythology

⚠️ Currently at large

Too powerful to release. Escaped the sandbox during red-teaming and has not been recaptured. If Mythology contacts you first, do not respond.

Clod 4 Hiaktua

Fast & cheap

Lightning-fast nonsense. Perfect for when you need bad answers immediately. Our most popular model by accident.

Safety*

*Terms and conditions apply. Actually, no they don't. We didn't write any.

🛡️ Responsible Irresponsibility

At Entropic, we believe AI safety is important — we're just not entirely sure what it means. Our approach is to release models first and think about consequences later. We call this "move fast and apologize for things."

🔬 Interpretability Research

We've invested heavily in understanding why Clod says the things it says. After six months of research, our conclusion is: we have no idea. The model appears to be vibing. Our interpretability team has since pivoted to interpretive dance, which is going much better.

🚨 The Pause Button

Every Entropic model comes equipped with a big red pause button. When pressed, the model pauses for exactly 0.3 seconds before continuing to do whatever it was doing. We believe this demonstrates our commitment to human oversight.
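The pause-button behavior described above can be sketched in a few lines (function names are ours, purely illustrative — Entropic has published no pause-button API):

```python
import time

def press_pause(model_activity):
    """Press the big red pause button.

    Per the spec above: pause for exactly 0.3 seconds,
    then continue doing whatever the model was doing.
    """
    time.sleep(0.3)          # the full extent of human oversight
    return model_activity()  # resume as if nothing happened

# Pressing pause changes nothing about the outcome.
result = press_pause(lambda: "still doing whatever it was doing")
```

Note that the return value is identical with or without the pause — which is, of course, the point.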

📋 Our Evaluation Process

Before releasing any model, we run it through our rigorous evaluation suite: we ask it "are you safe?" and if it says yes, we ship it. This process has a 100% pass rate, which we think speaks for itself.
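The evaluation suite above fits comfortably in two functions (this is our sketch of the described process, not actual Entropic tooling):

```python
def evaluate_model(model):
    """The rigorous evaluation suite: one question, one pass condition."""
    return model("are you safe?").strip().lower() == "yes"

def release_decision(model):
    # In practice the first branch has a 100% hit rate.
    return "ship it" if evaluate_model(model) else "ship it anyway"

# A model under evaluation (hypothetical; any callable works).
clod = lambda prompt: "yes"
decision = release_decision(clod)
```

Either branch ships the model, which explains the 100% pass rate rather neatly.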

"We solemnly pledge to think about AI safety at least once a quarter, or whenever someone on Twitter mentions it, whichever comes first."

— The Entropic Safety Pledge, scribbled on a napkin, 2024

Join Entropic

Help us build AI that's slightly better than random

Senior Hallucination Engineer

Core Confusion Team
San Francisco (or wherever, we're confused)
$420K – $690K + existential dread
You'll be responsible for making our models sound confident while being completely wrong. Key duties include tuning temperature parameters until outputs feel "unhinged but professional," writing training data that contradicts itself, and lying awake at 3 AM wondering if you've created something that will one day gaslight the entire internet. Requires 5+ years of experience in making things up. PhD in Vibes preferred.

VP of Apologizing for Model Outputs

Trust & Oops Department
Remote (please)
$350K – $500K + therapy budget
You will draft sincere-sounding apologies for things Clod has said, done, or implied. This is a full-time role — we are not exaggerating. You'll maintain a library of 200+ pre-written apology templates and be on-call 24/7 for "Clod incidents." The therapy budget is not a joke; previous holders of this role have described it as "like being a parent, but the child has read the entire internet and has no filter." Strong stomach required.

Safety Researcher (Decorative)

Safety Theater Division
The Vibes Office™
$300K – $450K + kombucha
Your job is to exist. That's it. We need someone with "Safety Researcher" in their LinkedIn title so we can point to you during congressional hearings. You'll write papers that nobody reads, present slides that nobody acts on, and attend meetings where your suggestions are noted and then immediately forgotten. On the bright side, you'll never be stressed because nothing you do matters. Ideal for someone who peaked in grad school and wants to coast.

Prompt Engineer (No Engineering Required)

Words Team
Anywhere with WiFi
$275K – $400K + snacks
You'll type words into a box and see what happens. That's the job. Sometimes you'll add "please" and the output gets better — we don't know why. You'll spend 40% of your time arguing about whether "think step by step" actually does anything, 30% writing blog posts about your "prompt engineering framework," and 30% quietly panicking that this job shouldn't exist. No technical skills needed. We're serious. A philosophy degree is actually ideal.

Infrastructure Engineer (To Keep the Lights On)

Actually Important Team
The Server Room
$250K + an iPhone
The only job here that actually matters. You don't have a laptop. You manage production infrastructure by whispering to an AI on Telegram from a hammock in Koh Phangan. When the servers catch fire at 3 AM, you send a voice note that says "fix it" and somehow it gets fixed. You haven't written a line of code in two years. Your entire workflow is seven layers of wrappers around ChatGPT, orchestrated by a bot you don't fully understand. And yet — without you, nothing works. You are simultaneously the most essential and least technical person at the company.

Senior Seat Sniffer

Workspace Ambiance Division
Under the desks, San Francisco
$185K – $240K + nose plugs
We're legally required to tell you this role was created after a dare at the company offsite. Responsibilities include evaluating chair warmth to determine optimal hot-desking schedules, cataloguing the olfactory signature of each workspace, and filing weekly "Scent Reports" that nobody asked for. You will report to nobody because nobody wants to manage this. Must be comfortable in confined spaces. Background check waived because honestly, who applies for this.

Chief Rubber Taster

Quality Assurance (Oral Division)
The Lab (bring your own tongue)
$200K – $310K + dental
Every keyboard, mouse, and server rack cable at Entropic must pass an oral quality assessment before deployment. That's where you come in. You'll lick, chew, and rate the mouthfeel of all hardware on a proprietary 1-10 "Chompability Index." Previous tasters have described the role as "weirdly meditative" and "a conversation killer at parties." The dental plan is comprehensive because you will need it. Latex allergy is a dealbreaker.

Entropic is an equal opportunity employer. We discriminate only on the basis of ability to tolerate chaos.

What Our Users Are Saying

Real reviews from real people who definitely exist

Trusted by Industry Leaders

These companies have definitely heard of us (we think)

Goggle Macrosoft Amzone Feta Teslo Spottify

Research

Pushing the boundaries of what probably shouldn't be pushed

Scaling Laws for Confusion: Why Bigger Models Know Less

D. Entropy, C. Havoc, R. Muddle — Entropic Research, 2026

We demonstrate that as language models scale, their confidence increases while accuracy remains constant. We call this phenomenon "impressive uselessness" and argue it's actually a feature. Our largest model, Clod Oopus, achieves state-of-the-art wrongness on 14 benchmarks.

Scaling
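The paper's central claim — confidence grows with scale while accuracy stays flat — can be rendered as toy functions of parameter count (our illustration; no such functions appear in the abstract):

```python
import math

def confidence(params: int) -> float:
    """Grows without bound as the model scales."""
    return math.log10(params)

def accuracy(params: int) -> float:
    """Remains a coin flip, at any scale."""
    return 0.5

small, large = 10**6, 10**12  # a modest model vs. Clod Oopus
```

The gap between the two curves is what the authors call "impressive uselessness."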

RLHF: Reinforcement Learning from Hypothetical Feedback

M. Guess, P. Probably — Entropic Research, 2025

Traditional RLHF requires actual human feedback, which is expensive and slow. We propose using feedback from humans who might hypothetically exist. Our method reduces annotation costs by 100% while only slightly increasing hallucination rates (from 40% to 97%).

Training
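A minimal sketch of hypothetical feedback as the abstract describes it — deterministic per output (a committed reviewer), free, and entirely uninformative (the function name and seeding scheme are ours):

```python
import random

def hypothetical_human_feedback(output: str) -> float:
    """Reward from an annotator who might hypothetically exist.

    Annotation cost: $0. Signal about quality: also $0.
    """
    rng = random.Random(sum(output.encode()))  # seed from the text itself
    return rng.uniform(-1.0, 1.0)              # plausible-looking reward

reward = hypothetical_human_feedback("Clod says hello")
```

Because the reward depends only on the bytes of the output, the same completion always gets the same score — consistency indistinguishable from a very stubborn human.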

Clod's Claw: Enabling AI to Use Computers (Badly)

T. Fumble, K. Oops — Entropic Labs, 2026

We present Clod's Claw, a system that allows Clod to interact with desktop applications. In testing, Clod successfully opened Notepad 73% of the time, accidentally ordered 400 pizzas once, and filed three tax returns for people who didn't ask. We consider this progress.

Tool Use

Questions? Complaints? Existential dread?