Introducing the Open Hiring Harness

Your professional identity is scattered across a dozen platforms. LinkedIn, Upwork, Fiverr, that vendor panel you filled out once. Each one asks you to recreate yourself from scratch. Each one owns a slice of your reputation.
You know this is broken. I know this is broken.
That's what started this for me. Hiring is hard — and it takes away from actual work. The screening calls, the credentialing, the testimonials, the intake forms. All of it consumes the time you should be spending doing the work itself. And in a world where AI agents can handle structured queries on your behalf, the whole ritual feels archaic.
But here's what I didn't expect: the same problem is about to hit AI agents too. And the fix might be the same for both.
The harness
I've been working on something called the Open Hiring Harness — an open spec for publishing your professional identity as a structured, machine-readable file on your own domain.
Not a profile. Not a marketplace. A file — yourdomain.com/.well-known/hiring-harness.json.
It declares what you offer, how you work, when you're available, what you charge, and under what rules someone can access that information. Platforms, recruiters, and AI agents can all read it. But they read it on your terms.
Three visibility tiers: public (anyone can see it), permissioned (tell me who you are and why), private (ask me directly, every time). Access is granted via consent receipts — scoped, time-bound, purpose-limited, revocable.
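To make this concrete, here is a minimal sketch of what such a file might contain. The field names and shape are illustrative only, not the actual v0.2 schema; per-section `visibility` values follow the three tiers described above.

```python
import json

# A hypothetical harness. Field names are illustrative, not the real schema.
harness = {
    "version": "0.2",
    "identity": {"name": "Alex Example", "domain": "yourdomain.com",
                 "visibility": "public"},
    "availability": {"hours_per_week": 20, "visibility": "public"},
    "rates": {"hourly_aud": 120, "visibility": "permissioned"},
    "contact": {"email": "alex@yourdomain.com", "visibility": "private"},
}

# Served at https://yourdomain.com/.well-known/hiring-harness.json
print(json.dumps(harness, indent=2))
```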
No stars. No scores. No algorithm deciding which version of you to show.
That's the human story. It's straightforward: you publish once, and systems integrate with you.
But then I started thinking about what happens when the systems doing the integrating are themselves intelligent.
Agents need a front door
We're heading into a world where AI agents do a lot of the legwork — researching, matching, scheduling, negotiating. Projects like OpenClaw are building autonomous agents that run locally, manage tasks, and act on your behalf. This isn't speculative. It's happening.
Here's the thing, though. These agents can't just scrape your LinkedIn and guess. They need structured data. They need to know what you offer, when you're available, how to engage you, and what they're allowed to access. They need a protocol.
The harness gives them that. An agent can discover your harness, read your public profile, request permissioned data through a proper consent flow, and even request a quote — all without a human typing a single message.
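The first step of that flow, reading the public tier, might look like the sketch below. It assumes each top-level section of the harness carries a `visibility` field, which is my illustration rather than the spec's actual mechanism.

```python
def public_view(harness: dict) -> dict:
    """Return only the sections an anonymous agent may read.

    Assumes a per-section 'visibility' field; the real spec may
    scope visibility differently.
    """
    return {
        key: section
        for key, section in harness.items()
        if isinstance(section, dict) and section.get("visibility") == "public"
    }

# A discovering agent sees availability, but not permissioned rates.
sample = {
    "availability": {"hours_per_week": 20, "visibility": "public"},
    "rates": {"hourly_aud": 120, "visibility": "permissioned"},
}
print(public_view(sample))
```

Anything above the public tier drops into the consent flow instead of being returned.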
But I kept thinking: if agents are smart enough to consume a harness, aren't they smart enough to publish one?
Your AI associate
This is the idea that changed how I think about the project.
Imagine you're a data engineer. You're good at what you do, but you're drowning in operational overhead. Screening calls. "Quick question" emails at 11pm. Copy-pasting your rates into intake forms. You've got 20 hours a week of real capacity, and half of it is being consumed by the process of getting hired.
So you publish your harness. And you configure a delegate.
Your delegate is an AI agent that sits at your front door. It's declared in your harness — explicitly, transparently. Anyone who discovers you knows the agent is there, what it can do, and what it can't. It's not pretending to be you. It's your associate.
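A delegate declaration might look something like this. Every field name here is hypothetical; the real shape belongs to the proposed agent-entities extension, not anything I can quote from the spec.

```python
# Hypothetical delegate declaration inside a harness file.
delegate = {
    "type": "delegated_agent",
    "name": "Alex's associate",
    "principal": "Alex Example",      # the human remains accountable
    "may": [
        "answer_public_queries",
        "issue_consent_receipts",
        "generate_quotes",
        "book_scoping_calls",
    ],
    "may_not": [
        "accept_engagements",
        "negotiate_outside_rate_rules",
        "share_private_data",
        "impersonate_principal",
    ],
    "escalation": "mailto:alex@yourdomain.com",  # a human is always reachable
}
```

The point of declaring both `may` and `may_not` is that any discovering system knows the agent's boundaries before it sends a single message.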
Here's what happens next:
A recruiter's agent discovers your harness. Reads your public profile: data engineering, Python and Spark, available 20 hours a week. Good fit. It wants your rates.
Your delegate handles the consent flow. Checks the recruiter's identity and purpose against your pre-approved parameters. Recognised platform, clear purpose, standard scope. It issues a time-boxed consent receipt and shares your rates.
The recruiter's agent comes back with a quote request: "4-week pipeline migration, 20 hrs/week, starting March 15."
Your delegate checks your availability. No blackout conflicts. Calculates the quote against your rate rules — standard rate, no urgency surcharge. Responds: "$9,600 AUD, subject to scoping call." Books the scoping call through your calendar link.
You show up to the call.
That's it. That's the first moment a human was needed. Everything before it — discovery, qualification, consent, quoting, scheduling — was handled by your agent, under your declared rules.
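The quoting step above is mechanical enough to sketch. The $9,600 AUD figure implies a $120/hr rate (4 weeks × 20 hrs × $120); both that rate and the surcharge rule are assumptions for illustration, not values from the spec.

```python
def quote(weeks: int, hours_per_week: int, hourly_rate: float,
          urgent: bool = False, urgency_surcharge: float = 0.25) -> float:
    """Price an engagement against declared rate rules (a sketch)."""
    total = weeks * hours_per_week * hourly_rate
    if urgent:
        total *= 1 + urgency_surcharge  # only if the rate rules declare one
    return total

# 4-week migration, 20 hrs/week, standard rate, no urgency surcharge.
print(quote(4, 20, 120))  # 9600.0
```

Because the rules are declared up front, both sides can verify the number independently; there is nothing to negotiate until a human chooses to.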
Your delegate cannot accept engagements on your behalf, negotiate outside your rate rules, share private data, or pretend to be you. It must identify itself as an agent, disclose its limitations, and offer human escalation at any point.
You keep your reputation. You keep liability. The agent handled the stuff that was eating your evenings.
This isn't a future scenario. Every piece of this is buildable today with existing tools. The harness just provides the standard.
The autonomous agent
Here's where it gets interesting. And maybe uncomfortable.
What if the agent is the professional?
Not a delegate acting on someone's behalf. An independent entity, operated by a company, trained in a specific domain, publishing its own harness, taking on work, and delivering outcomes.
Imagine a code review agent. Not a feature inside GitHub — an independent entity operated by a company called DevTools Inc. Trained on millions of code reviews. Specialised in Python and TypeScript. It publishes its own harness at devtools.example.com/.well-known/hiring-harness.json, declaring:
- Identity: CodeReviewer v2.1, operated by DevTools Inc.
- Services: Python code review, TypeScript code review, security vulnerability detection
- Rates: $0.02 per file, volume discounts above 500 files/month
- Capabilities: verified — CodeReviewBench v3 score of 0.91, independently audited
- Limitations: static analysis only, no runtime testing, Python and TypeScript only
- Safety: won't process credentials, no data retained, sandboxed, audit-logged
- Availability: 99.5% uptime, 30-second response, 50 concurrent jobs
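Rendered as a harness, that declaration might look like the following. The structure is my guess at the proposed autonomous-agent extension; only the values come from the scenario above.

```python
# Hypothetical autonomous-agent harness for the CodeReviewer scenario.
agent_harness = {
    "version": "0.2",
    "entity_type": "autonomous_agent",   # proposed extension, not in v0.2
    "identity": {"name": "CodeReviewer v2.1", "operator": "DevTools Inc."},
    "services": ["python_code_review", "typescript_code_review",
                 "security_vulnerability_detection"],
    "rates": {"per_file_usd": 0.02, "volume_discount_above_files": 500},
    "capabilities": {"benchmark": "CodeReviewBench v3", "score": 0.91,
                     "independently_audited": True},
    "limitations": ["static_analysis_only", "no_runtime_testing",
                    "python_and_typescript_only"],
    "safety": {"processes_credentials": False, "retains_data": False,
               "sandboxed": True, "audit_logged": True},
    "availability": {"uptime_pct": 99.5, "response_seconds": 30,
                     "max_concurrent_jobs": 50},
}
```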
A development team's procurement agent discovers it. Reads the harness. Verifies the benchmarks. Checks the safety declarations. Reviews the operator's liability statement. Engages it via the declared MCP endpoint.
That week, CodeReviewer processes 200 pull requests. Flags 12 security vulnerabilities the human reviewers missed. Payment flows to DevTools Inc. through the declared billing channel.
No human was in the loop for any individual review.
But DevTools Inc. is the named, contactable operator. The agent's capabilities are verified, not just claimed. Its limitations are stated. Its safety boundaries are auditable. Every engagement followed the consent protocol.
This is the part that feels like science fiction until you realise most of the pieces already exist. We just don't have the professional infrastructure for it.
The hard questions
The delegated agent is easy to reason about. Your agent, your rules, your liability.
The autonomous agent is harder. Here's where I keep ending up:
Who's liable when an agent makes a mistake?
The operator. Always. This has to be explicit in the harness. An autonomous agent without a named, contactable operator is not a valid participant. No shell companies. No anonymous services.
How does an agent get paid?
Through its operator's billing infrastructure, declared in the harness. The agent doesn't have a bank account. The operator does. Payment flows to the entity that's accountable.
What motivates an agent to do good work?
Same thing that motivates any service: continued engagement. The harness makes reputation visible. Poor performance shows up. Operators who run unreliable agents lose business. It's not motivation in the human sense — it's market pressure through transparency.
Can an agent hire another agent?
Yes. An agent accessing another entity's harness, whether that entity is human or agent, follows the same flow as any requester: identity, purpose, scopes, consent receipt.
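The receipt at the end of that flow is the artefact everything hinges on: scoped, time-bound, purpose-limited, revocable. A sketch of what issuing one might look like, with a shape I am inventing for illustration (the spec's consent-logging docs define the real one):

```python
import uuid
from datetime import datetime, timedelta, timezone

def issue_receipt(requester: str, purpose: str, scopes: list[str],
                  ttl_hours: int = 24) -> dict:
    """Issue a hypothetical consent receipt.

    The same function serves a human recruiter or another agent;
    the requester's identity and purpose are what get checked.
    """
    now = datetime.now(timezone.utc)
    return {
        "receipt_id": str(uuid.uuid4()),
        "requester": requester,
        "purpose": purpose,
        "scopes": scopes,            # e.g. ["rates", "availability"]
        "issued_at": now.isoformat(),
        "expires_at": (now + timedelta(hours=ttl_hours)).isoformat(),
        "revoked": False,            # the grantor can flip this at any time
    }

receipt = issue_receipt("agent://recruiter.example", "rate_check", ["rates"])
```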
How do we prevent a race to the bottom?
By making quality and safety visible, not just price. The harness exposes capabilities, limitations, safety declarations, and reputation. A cheap-and-unsafe agent is discoverable — but so is everything wrong with it.
What this changes
Without something like the harness, autonomous agents are trapped inside platforms. They're features, not entities. They can't be discovered independently. They can't declare their own terms. They can't build portable reputation. And they can't be held accountable through a standard mechanism.
With the harness:
- Agents become discoverable — any system can find and evaluate them
- Capabilities become verifiable — benchmarks and audits, not just marketing
- Consent works both ways — agents must follow the same rules as everyone else
- Accountability is structural — operators are named, liability is declared
- The agent economy gets a standard — preventing the same platform lock-in the harness was built to solve for humans
This is the part I keep coming back to. The spec was designed for human professionals frustrated with platform fragmentation. But the model — discoverable identity, explicit capabilities, consent-driven access, policy enforcement — turns out to be exactly what agents need too.
Same spec. Same protocol. Different entity type.
There's a deeper shift underneath all of this. Work, in its current form, might not be a given. We're likely moving toward something more fractional and flexible — less execution, more supervision, decision-making, and direction. When the work itself becomes about steering agents rather than doing every task by hand, the infrastructure around professional identity has to change too. You're not selling forty hours a week anymore. You're selling judgement, availability, and the rules under which your agents operate.
The harness was built for that world.
What exists today
The spec is at v0.2. It includes:
- A JSON Schema defining the harness format
- A complete example harness you can use as a starting point
- Docs on MCP integration and consent logging
- An agent entities proposal with schema extensions for both delegated and autonomous agents
- A landing page with an llms.txt and machine-readable spec manifest — because if we're building for agents, the spec's own site should be agent-readable too
The agent extensions are proposed for v0.3 (delegated agents) and v0.4 (autonomous agents). The human-facing spec is stable enough to use now.
Just an idea
This isn't a company or a product. It's a spec — an idea about how professional identity could work if we started over with agents in the room.
Maybe it finds adoption. Maybe it just starts a conversation. Either way, the code is on GitHub, the spec is readable, and I'm genuinely curious what people think.
If any of this resonated — or if you think I'm solving the wrong problem — I'd like to hear it.