Careers in IBM i. Deep work in AI. We help your team make sense of what's coming out of the AI world — what's real, what's ready, and how any of it applies to the systems actually running your business.
Careers at the architecture, solution-design, and integration level — across financial services, insurance, and government.
Deep work in neuro-symbolic architectures, agent design, and compiler-verified generation. Not theoretical.
We help your team separate AI signal from noise, and apply what's actually ready to the systems you run.
The AI vendors don't really know IBM i. The IBM i specialists aren't building with AI. We've spent careers in IBM i at the architecture and solution-design level — and we went deep into AI specifically to understand how these two worlds actually fit together.
Neuro-symbolic architectures, agent design, formal guardrails, compiler-verified generation. We did the work because our clients' systems are the ones where AI failure isn't acceptable. When your team is trying to figure out which of the hundred things launching each month is worth paying attention to — and how any of it applies to a platform the LLM vendors don't speak — we can help them separate signal from noise.
Platform architecture. High-availability design. Integration strategy. The constraints of systems that can't afford to be wrong.
Applied work in the techniques that make AI safe to run in enterprise environments — not the consumer demos.
Every few months, new "AI-powered" modernization tools appear — each promising to transform your legacy codebase with minimal risk and maximum speed. The cloud giants are aggressive because your workloads are a multi-billion-dollar revenue opportunity. Some of what's out there is genuinely useful. A lot of it isn't. And none of it is built by people who actually understand your platform.
Excessive manual effort. Poor handling of complex legacy code. Slow, brittle, specialist-heavy.
Models invent variable names, drop fields, and produce code that compiles but quietly means the wrong thing.
The developers who built your RPG and COBOL are leaving. The logic lives in their heads — not in documentation.
The cloud platforms and AI vendors don't understand your platform. Your IBM i specialists don't track what's shipping in AI every week. Someone has to sit between those two worlds and help your team decide what's worth adopting — and where the risk is worth the return.
One example of what "understanding both worlds" looks like in practice. Symbolic techniques give deterministic understanding of your code. AI handles the parts that genuinely require judgment. Neither layer does the other's job — and this is the lens we bring to any engagement, not a product we sell.
Haskell · Abstract Syntax Trees · Formal logic
A high-assurance parser reads your RPG and COBOL into a structured map of the program. Control flow, data structures, and dependencies are captured exactly. Anything that can't be resolved with certainty is flagged as Unknown — not guessed.
Fine-tuned models · LLMs · Pattern recognition
AI is brought in only where ambiguity was flagged. A specialist model recognizes patterns (naming conventions, UI cues, common idioms). A generative model produces the artifact the engagement calls for — constrained by the structure the symbolic layer already locked in.
Symbolic constrains what neural can do. Neural extends what symbolic can understand.
The method reads logic directly from the code — not from institutional memory that's about to retire.
We built this to show what "both worlds together" actually looks like in code — not on a slide. It's the engine behind audits, API work, and agent foundations. Symbolic layers bookend the neural layers: a sandwich architecture for systems you can't afford to get wrong.
A Haskell parser reads your RPG and COBOL into a structured map of the program — an Abstract Syntax Tree. Known logic (control flow, data structures, dependencies) is captured deterministically. Anything that can't be resolved with certainty is flagged as Unknown, not guessed.
A solid, deterministic foundation before any AI is allowed near the code. No "garbage in, garbage out."
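To make the principle concrete, here is a minimal Python sketch of "flag, don't guess." The real parser is written in Haskell; the node shapes and names below are hypothetical, chosen only to illustrate how resolved structure and unresolved spans are kept strictly separate:

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical node shapes -- illustrative only, not the real Haskell AST.
@dataclass
class Known:
    kind: str    # e.g. "loop", "field", "call"
    detail: str  # what the parser resolved deterministically

@dataclass
class Unknown:
    source: str  # the raw code the parser could not resolve
    reason: str  # why certainty was impossible

Node = Union[Known, Unknown]

def summarize(ast: list[Node]) -> dict:
    """Count what is certain vs. what must be escalated to the AI layer."""
    return {
        "resolved": sum(isinstance(n, Known) for n in ast),
        "flagged": sum(isinstance(n, Unknown) for n in ast),
    }

ast = [
    Known("loop", "DOW over CUSTFILE"),
    Unknown("EVAL X = *IN44", "indicator meaning not derivable from code alone"),
]
print(summarize(ast))  # {'resolved': 1, 'flagged': 1}
```

Note that every `Unknown` carries a reason: each escalation to the AI layer is auditable, not silent.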
A small, fine-tuned model looks only at what was flagged. It reads intent — naming conventions, UI cues, structural patterns — to resolve ambiguity deliberately. The kind of decision-making that usually needs a developer to review line by line.
Automates the judgment calls that normally depend on a specialist who's been in the code for years — and may be close to retiring.
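A sketch of that routing discipline, with a stand-in classifier. The names (`classify_intent`, the 0.9 threshold, the labels) are hypothetical; the point is that the model sees only the flagged spans, and low-confidence answers stay Unknown for human review rather than being guessed:

```python
def resolve_flagged(flagged, classify_intent, threshold=0.9):
    """Route only unresolved spans to a specialist model. Everything the
    symbolic layer already locked in never reaches the model at all."""
    resolutions = {}
    for span in flagged:
        label, confidence = classify_intent(span)
        # A hesitant answer is kept as UNKNOWN, not silently accepted.
        resolutions[span] = label if confidence >= threshold else "UNKNOWN"
    return resolutions

# Stand-in for a fine-tuned classifier (illustrative heuristics only).
def fake_model(span):
    if "*IN" in span:                      # indicator reference: likely UI cue
        return ("ui-indicator", 0.95)
    return ("business-logic", 0.6)         # plausible, but not certain

print(resolve_flagged(["EVAL X = *IN44", "CHAIN CUSTREC"], fake_model))
# {'EVAL X = *IN44': 'ui-indicator', 'CHAIN CUSTREC': 'UNKNOWN'}
```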
Once the code is fully understood and tagged, an LLM generates whatever the engagement calls for — an API endpoint, a tool signature for an agent, a refactored module, or modern equivalent code. The LLM is working against a strict specification, not improvising.
LLM speed and syntax fluency, applied only after the logic is strictly defined. Errors drop, variables are preserved.
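One way to picture "working against a strict specification": the prompt itself is assembled from the structure the symbolic layer locked in, so the model has no room to rename or drop fields. The spec shape and field names below are hypothetical:

```python
def build_generation_prompt(spec):
    """The LLM supplies syntax and fluency; it does not choose names,
    fields, or signatures -- those come from the symbolic layer."""
    lines = [
        f"Produce a {spec['artifact']} named {spec['name']}.",
        "Use exactly these fields. No additions, no renames, no omissions:",
    ]
    lines += [f"  - {f['name']}: {f['type']}" for f in spec["fields"]]
    return "\n".join(lines)

# Hypothetical spec, as the symbolic layer might emit it.
spec = {
    "artifact": "REST endpoint",
    "name": "getCustomerBalance",
    "fields": [
        {"name": "CUSTID", "type": "packed(7,0)"},
        {"name": "CUSTBAL", "type": "packed(11,2)"},
    ],
}
prompt = build_generation_prompt(spec)
print("CUSTBAL" in prompt)  # True
```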
The compiler judges every output. If compilation fails, the error is fed back to the model for self-correction — reinforcement learning from compiler feedback (RLCF). The loop runs until the output is syntactically correct and every original variable is accounted for.
Automated self-healing. Final deliverables are complete and compile — not "probably correct."
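The loop can be sketched in a few lines, with stand-ins throughout: in reality the generator is an LLM and the judge is the target-language compiler; here a trivial fake generator and Python's own `compile()` play those roles:

```python
def generate_until_valid(spec, generate, compile_check, max_rounds=5):
    """Feed compiler errors (and missing-variable reports) back to the
    generator until the artifact compiles and every original variable
    named in the spec is accounted for."""
    feedback = ""
    for _ in range(max_rounds):
        candidate = generate(spec, feedback)
        ok, error = compile_check(candidate)
        missing = [v for v in spec["variables"] if v not in candidate]
        if ok and not missing:
            return candidate
        feedback = error or f"missing variables: {missing}"
    raise RuntimeError("no verified artifact within the round budget")

# Stand-in generator: the first draft drops a variable, then
# self-corrects once the feedback names it.
def fake_generate(spec, feedback):
    if "cust_balance" in feedback:
        return "def handler(cust_id, cust_balance):\n    return cust_balance"
    return "def handler(cust_id):\n    return cust_id"

# Stand-in judge: Python's own compiler.
def fake_compile(code):
    try:
        compile(code, "<candidate>", "exec")
        return True, ""
    except SyntaxError as e:
        return False, str(e)

spec = {"variables": ["cust_id", "cust_balance"]}
result = generate_until_valid(spec, fake_generate, fake_compile)
print("cust_balance" in result)  # True
```

The compiler's verdict, not the model's confidence, decides when the loop stops.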
A high-assurance language manages the workflow. Stability at the core of the system.
Business logic is automatically distinguished from UI code — one of the harder problems in IBM i modernization.
The compiler checks every variable. AI can't quietly drop fields or invent names.
Take them independently, or in sequence. The right starting point depends on where your team is today and where the pressure's coming from.
We use the pipeline to map your RPG and COBOL programs and identify where AI would deliver practical value — claims review, pricing analysis, customer-service summarization, risk scoring. You leave with a prioritized plan your team can act on, plus a documented read of what's actually in the code.
APIs built without the experts in the room. Your RPG and COBOL runs as it always has. We read it, wrap it, and layer AI on the response — so modern applications receive finished insights, and the institutional knowledge lives in the API layer instead of in a few people's heads.
We structure your IBM i logic so it can serve as a reliable foundation for enterprise AI agents. Formal guardrails prevent hallucination in production. Your team governs and extends the architecture over time — with the method, not a vendor, at the core.
Your business rules are parsed, documented, and exposed. No longer trapped inside green-screen layers or the heads of a shrinking pool of specialists.
Compiler-verified. Variable-accurate. Formally constrained. Your governance and audit teams get artifacts that hold up to review.
The logic encoded in your IBM i is captured, documented, and exposed as code — not as tribal knowledge that leaves when a senior developer does.
Changes to your IBM i environment are understood, tested, and documented before they go in. Continuity risk comes down over time, not up.
Share a bit about your environment and the problems you're trying to solve. We'll come back with a candid view of what's realistic and where we'd focus. No obligation, no sales process.