one6
02 · The Meaning
/ the founder

One.

One person, before the team, the tools, the clients. Kept going until there was a company. Has not stopped — and is not planning to.

/ the number

Six zeros.

Six zeros is what turns one into a million. That is the work in between — and the only part that actually matters.

/ the goal

A million jobs.

A million people earning through this platform. Not one-off tasks — steady, remote, human work.

That is the name. The rest is the work of earning it.

03 · The Collision
/ the dream

Before the company.

There was someone who liked helping people — any kind of helping. That instinct narrowed into one question: how do people find real work?

/ the field

The right problem.

AI cannot train without human input. That input is needed forever, from as many people as can do it well.

/ the collision

One 6.

We take that demand and make it a paycheck. Remote by default. Paid on time. Every week.

/ the work

The data behind the models.
The work behind the data.

For Labs · Five disciplines, scoped, QA'd, delivered to your schema.
For Participants · Real tasks in your skill set. Paid every Friday. W-2 work, not gigs.
VX-01 · AR · UR · SW · TL · BN · ID +12

Voice.

Multilingual speech capture, transcription, and labeling.

Labs receive
  • Native-speaker audio across 18+ languages.
  • Transcripts with speaker, accent, emotion tags.
  • QA-reviewed, formatted to your schema (a sample record is sketched below).
You do
  • Read scripted prompts in your native language.
  • Transcribe noisy real-world audio.
  • Label accents, emotions, speaker turns.
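
To make the deliverable concrete, here is a minimal sketch of one delivered voice item. The field names are illustrative, not the actual schema; the real format follows the standard we sign with you.

# Illustrative field names only; the delivered schema is whatever the signed standard specifies.
voice_item = {
    "audio": "clip_00214.wav",
    "language": "sw",                                   # ISO 639-1 code
    "transcript": "Habari za asubuhi.",                 # "Good morning."
    "speaker": {"id": "spk_07", "native": True, "accent": "coastal"},
    "labels": {"emotion": "neutral", "turns": 1},
    "qa": {"reviewed": True, "passed": True},
}
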
CD-02 · Python · TS · Go · Rust · SQL

Code.

Writing, reviewing, and rating programmer output.

Labs receive
  • Human-written solutions with step traces.
  • Rubric-scored ratings on model output.
  • Adversarial tests targeting known failure modes (sketched below).
You do
  • Solve programming tasks with step-by-step traces.
  • Rate model-generated code for correctness & style.
  • Write adversarial test cases against a spec.
function solve(n) {
-  return n * 2;
+  return fib(n);
}
// rating: 4 / 5
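
The snippet above shows a rated fix; adversarial tests are the other half. A minimal sketch in Python, assuming a hypothetical spec where fib(n) returns the n-th Fibonacci number (0-indexed) and rejects negative input:

import pytest
from solution import fib  # hypothetical module under test

def test_base_cases():
    assert fib(0) == 0 and fib(1) == 1

def test_large_n_stays_exact():
    # Targets closed-form implementations that lose precision on large inputs.
    assert fib(80) == 23416728348467685

def test_negative_input_rejected():
    with pytest.raises(ValueError):
        fib(-1)
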
MT-03 · Algebra · Calc · Linear · Discrete

Math.

Step-by-step reasoning and proof grading.

Labs receive
  • Clean chain-of-thought across difficulty tiers.
  • Model output graded for logical validity.
  • Error taxonomies: arithmetic, logic, citation.
You do
  • Solve problems showing every intermediate step.
  • Grade model reasoning for logical errors.
  • Flag hallucinated theorems or arithmetic slips.
n² + 2n + 1 = (n + 1)² · step 3 / 5 · check
LG-04 · 40+ language pairs

Language.

Translation, annotation, and multilingual data.

Labs receive
  • Human translation with style & register notes.
  • Entity, sentiment, intent annotations at scale.
  • Fluency & fidelity scoring on model output.
You do
  • Translate between language pairs with style notes.
  • Annotate entities, sentiment, intent.
  • Grade translation fluency and fidelity.
EN » The river is patient.
AR » النهر صبور.
EV-05 · Prose · Code · Dialogue · Tool-use

Eval.

Comparing and ranking model responses. RLHF at the seam.

Labs receive
  • Preference data with written rationale (a sample record is sketched below).
  • Safety & hallucination flags at scale.
  • Domain-specific rubrics, audited weekly.
You do
  • Rank two model answers by quality.
  • Explain why one response is preferred.
  • Flag unsafe, hallucinated, or off-topic output.
A · B ✓ · rank: 4 / 5
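
To make the deliverable concrete, here is a minimal sketch of one preference record. The field names and scales are illustrative; the real dimensions come from your rubric.

# Illustrative field names only; scales and flags follow the signed rubric.
preference_record = {
    "prompt_id": "dlg_0492",
    "responses": {"A": "...", "B": "..."},
    "preferred": "A",
    "rationale": "A answers the question asked; B cites a source that does not exist.",
    "flags": {"A": [], "B": ["hallucination"]},
    "rank_score": 4,  # out of 5
}
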
THE JOURNEY · IN SEVEN STEPS

From your brief to a graded batch, in seven steps.

It starts with a brief from you. It ends with a batch delivered back to you, with a score printed on it that shows how many items missed the mark. In between, your work moves through seven steps. A standard we both sign. A pool trained against it. A first read. A second read. A final call on the edges. A quiet layer that catches drift. And a published miss rate that ships with every batch. No model touches your data. Every step happens in our environment, with people who have proven they can read your standard. Follow it, step by step.

STEP 01

The standard.

Before a single item is produced, your brief becomes a one-page standard we both sign.

We read your brief, ask what you actually mean by good, and write it back as a one-page rubric any reviewer can act on. It covers examples, edge cases, and a no-go list. You sign it before production. When an edge case resolves in a way that changes a rule, the page is reissued and the changelog ships with your next delivery.

one page · signed before any item is produced
STEP 02

The pool.

A dedicated pool per category, cleared on your standard.

Every category runs its own closed pool: voice, code, math, language, evaluation. Every reviewer in that pool has cleared a live trial against the same standard we signed with you. The trial is real work, not a quiz. Only cleared reviewers are assigned your items.

per category · trained on your standard
STEP 03

The first read.

Every item is graded against your rubric by a trained reviewer, in our environment, with no model involved.

No automation sees your data. Every item is graded in our environment, on our infrastructure, by a reviewer who has cleared the trial for this category. The rubric is tight enough that grading is a decision, not a debate. Clean items move toward delivery. Doubtful ones, plus a sample of the rest, move to a second read.

100% · items graded against your standard
STEP 04

The second read.

Anything doubtful, plus a sample of the rest, reaches a senior reviewer. Review effort follows risk.

A senior reviewer sees everything flagged on the first read, plus a random sample of clean items to keep the first read honest, plus more of both for any producer whose track record is new or drifting. The cost is light where the risk is light, and concentrated where it is not.
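
A minimal sketch of how that weighting can work, in Python. The thresholds and sample rates here are illustrative, not our production values.

import random

def needs_second_read(item, producer):
    # Everything flagged on the first read goes to a senior reviewer.
    if item["flagged"]:
        return True
    # A random slice of clean items keeps the first read honest.
    rate = 0.10
    # New or drifting producers get a heavier slice.
    if producer["items_delivered"] < 50 or producer["recent_miss_rate"] > 0.05:
        rate = 0.30
    return random.random() < rate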

where it matters · more second reads where the risk is higher
STEP 05

The final call.

Real edges escalate to whoever wrote the rule with you. Their decision ships, and it updates the standard.

A small share of items, the genuine edges, do not resolve on the second read. They escalate to one of the people who wrote the standard with you. Their call ships. It also rewrites the relevant line on the standard, so the same case never gets argued twice.

the authors decide · the people who wrote the rule make the call
STEP 06

The known answers.

Items we have already graded ourselves sit in every queue. Drift gets caught the same day.

Known-answer items sit in the queue looking like any other task. A reviewer cannot tell them apart. They tell us, in real time, whether a reviewer's reading of the standard is drifting. We use them to catch problems the day they happen, before they reach your batch.
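
A minimal sketch of the check itself. The agreement threshold is illustrative; the reference grades are the ones we assigned when the items were seeded.

def reading_is_drifting(reviewer_grades, reference_grades, threshold=0.90):
    # reviewer_grades: {item_id: grade} on today's seeded items.
    # reference_grades: {item_id: grade} assigned when the items were seeded.
    if not reviewer_grades:
        return False
    matches = sum(
        1 for item_id, grade in reviewer_grades.items()
        if grade == reference_grades[item_id]
    )
    return matches / len(reviewer_grades) < threshold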

every batch · known answers sit in the queue, mixed with real work
STEP 07

The receipt.

Every delivery publishes its miss rate against the standard. Per batch. Not averaged.

Before a batch leaves us, we audit a slice against the same signed standard. Items that miss are returned, redone, and resolved inside the same cycle. The miss rate ships alongside the delivery, with the items that missed and why. You never get a quiet pass.
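
A minimal sketch of how the receipt is produced. The audit fraction is illustrative; the pass-or-miss call on each item comes from the signed standard.

import random

def batch_receipt(items, meets_standard, audit_fraction=0.15):
    # Audit a random slice of the batch against the signed standard.
    audited = random.sample(items, max(1, int(len(items) * audit_fraction)))
    missed = [item for item in audited if not meets_standard(item)]
    return {
        "audited": len(audited),
        "miss_rate": len(missed) / len(audited),
        "missed_items": missed,  # returned, redone, resolved in the same cycle
    }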

per batch · miss rate and examples ship with the delivery
In short
1 rubric
signed by you and us, before any item is produced
100% human-read
every item graded by a trained reviewer, in our environment
0 models
no model reads your data, at any step
the authors decide
the people who wrote the rule make the call on every edge case
per batch
a miss rate ships with every delivery, with the items that missed
07 · Two Doors

Pick the door that's yours.

01 · Workers · Start application
02 · Labs & clients · Request a conversation
For workers

Reviewers, writers, domain experts. You read carefully and you want that to count. We hire for judgment and we pay for it.

For AI labs & clients

Reasoning, code, writing, domain work. You bring the rubric and the edge cases. We deliver the work, graded, with the misses in the envelope.