Vertex
Capability Catalogue · 2026
A catalogue of what Vertex does

Making operations run better — end to end.

From understanding how work actually happens today, to designing how it should happen tomorrow, to building the lightweight systems and tools that close the gap between the two.

00 · Overview

What Vertex does

The work sits in three overlapping areas — designed to connect, sequenced to compound.

[Diagram] 01 Analyse (Understand) → 02 Engineer (Connect) → 03 Build (Capture) → Operations that run better
A typical engagement moves left to right — but any competency can be engaged on its own.
01 · Illustrative example

Sunrise Retail Corporation

A fictional company that makes the catalogue concrete.

The engagement pattern described below reflects real work Vertex has delivered. Sunrise Retail Corporation is a fictional company used to illustrate how the competencies combine in practice.

Sunrise Retail Corporation operates 7 branches across Metro Manila. Month-end reporting took their finance team nine working days. Branch supervisors emailed spreadsheets to a central coordinator, who consolidated them manually; errors were common, and by the time leadership saw the numbers, the month was nearly two weeks gone.

  1. Step 01 · Process Documentation

    Mapped how data actually moved.

    Interviews and observation showed the same data was being re-keyed three times across the chain — once at the branch POS, once in a branch-level spreadsheet, once in the consolidation workbook.

    Activity 1.1
  2. Step 02 · Gap & Risk Analysis

    Found the single highest-leverage fix.

    One coordinator was a single point of failure, and the re-keying introduced an average of 40 errors per month that had to be hunted down during close. That was the step to eliminate first.

    Activity 1.2
  3. Step 03 · To-Be Design + Requirements

    Designed a direct path to the dashboard.

    A direct flow from POS exports to a central data store, with branch supervisors reviewing a dashboard instead of filling a spreadsheet. The Odoo implementer received a specification for the POS-to-Odoo sync with data ownership clearly defined.

    Activities 1.3 & 1.4
  4. Step 04 · Data Engineering

    Automated the coordinator's manual job.

    Built the ingestion and transformation pipeline for branches whose POS data needed parsing before it could reach Odoo, plus the serving layer that powered the new dashboard — so finance worked from automated numbers instead of a consolidation workbook.

    Activities 2.1–2.4
  5. Step 05 · Digital Workflow Tool

    Captured the last-mile exceptions cleanly.

    A small web form replaced the old spreadsheet for the handful of exceptions the system couldn't auto-capture. Validated entries, audit trail, direct write to the central store — so even the manual data entered the system clean.

    Activities 3.1–3.2
9 → 2 days to close
~40 monthly errors eliminated
Weekly numbers, not monthly
The consolidation role was redeployed to analysis work. Sunrise's leadership got weekly numbers instead of monthly ones.
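The parsing-before-Odoo step automated in Step 04 can be sketched in miniature. This is an illustrative fragment only: the field names and the branch export format below are hypothetical, not Sunrise's actual schema, and the real pipeline would run inside the orchestration layer rather than as a standalone function.

```python
import csv
import io

# Illustrative only: these field names and the export layout are assumed,
# not an actual Sunrise Retail POS schema.
EXPECTED_FIELDS = {"branch_id", "txn_date", "sku", "qty", "amount"}

def parse_pos_export(raw_csv: str) -> tuple[list[dict], list[str]]:
    """Parse one branch's POS export, separating clean rows from rejects.

    Rejected rows are returned with a reason instead of being silently
    dropped -- the same errors the coordinator used to hunt down by hand.
    """
    rows, rejects = [], []
    reader = csv.DictReader(io.StringIO(raw_csv))
    if set(reader.fieldnames or []) != EXPECTED_FIELDS:
        return [], [f"unexpected header: {reader.fieldnames}"]
    for line_no, row in enumerate(reader, start=2):  # line 1 is the header
        try:
            rows.append({
                "branch_id": row["branch_id"].strip(),
                "txn_date": row["txn_date"].strip(),
                "sku": row["sku"].strip(),
                "qty": int(row["qty"]),
                "amount": round(float(row["amount"]), 2),
            })
        except ValueError as exc:
            rejects.append(f"line {line_no}: {exc}")
    return rows, rejects
```

The point of the sketch is the shape, not the code: every bad row becomes a visible reject with a line number, rather than a wrong number discovered during close.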
02 · Competency One

Process Improvement & Business Requirements

The foundation — understanding how work currently happens, where it breaks down, and what "better" should look like before any system or tool is chosen.

1.1
Process Documentation
When teams describe the same process differently, or leadership can't confidently answer "how do we actually do X?"
When to engage this
Teams describe the same process differently, onboarding new staff takes unusually long, leadership can't confidently answer "how do we actually do X?", or a system implementation is being planned and nobody has written down how the business currently works.
Description
A detailed walkthrough of an existing business process to capture the tasks, actors, systems, inputs, and outputs involved. Conducted through interviews, observation, and artifact review (forms, spreadsheets, reports already in use). The goal is to produce a shared, accurate picture of how work actually happens today — not how it's supposed to happen on paper.
Expected outputs
  • Level 3 and Level 4 process maps (swimlane format)
  • Actor and system inventory per process
  • List of manual handoffs, workarounds, and known pain points
  • Process narrative document for stakeholder sign-off
Potential benefits
Leadership and operations teams can make decisions from the same map of reality. Implementation partners receive a brief they can actually build against. Risks and inefficiencies that were previously invisible become addressable, and the documentation itself becomes a training asset that survives staff turnover.
1.2
Process Gap & Risk Analysis
When the organization knows things could run better but doesn't know where to start.
When to engage this
The organization knows things could run better but doesn't know where to start, improvement initiatives have stalled because everyone has a different priority, or leadership needs a defensible basis for sequencing investment.
Description
A structured review of documented processes to identify inefficiencies, control gaps, duplication, and failure points. Findings are categorized by impact (cost, risk, customer experience, scalability) and effort to address, producing a prioritized view of where intervention matters most.
Expected outputs
  • Gap register with severity and frequency ratings
  • Risk matrix (likelihood × impact)
  • Prioritized opportunity list with indicative effort estimates
  • Quick-win recommendations separated from longer-term redesign candidates
Potential benefits
A clear, evidence-backed answer to "what should we fix first?" Investment decisions become easier to justify to boards and owners. Quick wins build momentum and credibility for the larger improvement program.
1.3
To-Be Process Design
When a system is about to be implemented and the business design needs to be settled before configuration starts.
When to engage this
An ERP, POS, or workflow system is about to be implemented and the client wants the business design settled before configuration starts; or a known pain point has been prioritized and the organization is ready to commit to a redesign.
Description
Design of target-state processes that address the gaps identified, incorporating automation opportunities, clearer ownership, and built-in controls. Produced collaboratively with process owners so the design is realistic and has buy-in before implementation begins.
Expected outputs
  • Target-state process maps (Level 3 and 4)
  • Change impact summary (what's different, who's affected)
  • Control points and exception-handling paths
  • Transition considerations (data, training, cutover)
Potential benefits
Minimizes rework during implementation caused by misaligned expectations between the business and the delivery team. Change management becomes easier because affected staff were part of the design. The business enters implementation with alignment already achieved on how things should work, rather than surfacing disagreements mid-build.
1.4
Business Requirements Gathering
Before signing an implementation SOW, or when a previous implementation underdelivered because requirements were vague.
When to engage this
Before signing an implementation SOW; when a previous implementation underdelivered because requirements were vague; or when a vendor is asking for clarification faster than the business can provide it.
Description
Elicitation and documentation of functional and non-functional requirements in a form suitable for implementation partners (Odoo, custom development, integration vendors). Translates business intent into structured requirements with acceptance criteria, avoiding the common failure mode where implementers build the wrong thing from a vague brief.
Expected outputs
  • Business Requirements Document (BRD)
  • Functional requirements with acceptance criteria
  • Data requirements and master data definitions
  • Reporting and KPI requirements
  • Integration and interface requirements
Potential benefits
Fewer scope disputes during implementation. Shorter clarification loops between business and technical teams. Requirements survive the people who wrote them, so handover and later phases don't start from zero.
1.5
Process Framework Development
For multi-branch operations expanding, or experiencing inconsistent performance across locations.
When to engage this
The business is expanding locations, preparing for a system implementation across multiple branches or entities, or experiencing inconsistent customer experience and operational performance across branches.
Description
For multi-branch or multi-entity operations, development of a standardized process framework that defines how a family of processes (e.g., branch-level inventory, order fulfillment, cash handling) should operate consistently across locations. Includes process owners, triggers, step-by-step tables, exceptions, and the controls that keep variation in check.
Expected outputs
  • Process framework document covering all in-scope processes
  • Standard operating procedures (SOPs) per process
  • RACI matrix across roles
  • Exception handling and escalation paths
Potential benefits
Branches operate consistently, which makes performance comparable and coaching easier. New locations launch faster because the playbook already exists. Any future system rollout has a clean operational foundation to configure against.
03 · Competency Two

Data Engineering

Building the pipelines and data infrastructure that turn scattered, inconsistent source data into trustworthy information the business can actually run on.

2.1
Data Source Assessment & Ingestion Design
When data is scattered across systems and pulling it together for reporting is painful or manual.
When to engage this
Data is spread across systems (POS, ERP, email attachments, shared drives, spreadsheets) and pulling it together for reporting is painful or manual; a new system is being introduced and existing data needs to flow into or out of it; or reporting cycles are slow because data collection itself is slow.
Description
A review of where the business's operational data is generated and how it currently moves — or fails to move — toward the places where decisions get made. Produces a design for how each source will be reliably ingested, including the mechanics (API, file drop, email parsing, database sync), the frequency, and the handling of failures and duplicates.
Expected outputs
  • Source system inventory with data ownership and access method per source
  • Ingestion design document per source (mechanism, frequency, error handling)
  • Data contracts defining what each source is expected to deliver and in what shape
  • Deduplication, retry, and dead-letter handling approach
Potential benefits
Data arrives where it's needed on a predictable schedule without human babysitting. Failures surface as alerts rather than as missing numbers discovered during close. The business stops losing hours each week to copy-paste work and chasing attachments.
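The "data contracts" output above can be made concrete as a small check that runs on every delivery before the data enters the pipeline. The sketch below is a minimal illustration with assumed field names and an assumed dedup key; a real contract would be defined per source during the assessment.

```python
# Hypothetical contract for one source: the required fields, their types,
# and the dedup key are illustrative assumptions, not a client schema.
CONTRACT = {
    "required_fields": {"order_id": str, "branch": str, "total": float},
    "dedup_key": ("order_id",),
}

def check_delivery(records: list[dict], contract=CONTRACT) -> dict:
    """Validate a batch against its data contract on arrival.

    Returns a summary the ingestion layer can act on: reject the batch,
    route duplicates aside, or raise an alert -- instead of letting bad
    data flow quietly downstream.
    """
    violations, seen, duplicates = [], set(), 0
    for n, rec in enumerate(records):
        for field, ftype in contract["required_fields"].items():
            if field not in rec:
                violations.append(f"record {n}: missing {field}")
            elif not isinstance(rec[field], ftype):
                violations.append(f"record {n}: {field} is not {ftype.__name__}")
        key = tuple(rec.get(k) for k in contract["dedup_key"])
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"ok": not violations, "violations": violations, "duplicates": duplicates}
```

In practice this kind of check is what turns "failures surface as alerts" from a promise into a mechanism.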
2.2
Data Pipeline Development
When existing pipelines are fragile, undocumented, or break every time a source system changes.
When to engage this
The ingestion design exists (from 2.1 or from an implementation partner's architecture) and the pipelines now need to be built; or existing pipelines are fragile, undocumented, or break every time a source system changes.
Description
Implementation of the pipelines that move data from source systems through transformation into a clean, queryable state. Built with modern, Python-native tooling and deployed on managed platforms so the infrastructure stays lightweight and maintainable. Includes logging, monitoring, and an audit layer so every record can be traced from source to serving.
Expected outputs
  • Working data pipelines for each in-scope source
  • Transformation logic documented alongside the code
  • Audit table capturing load history, record counts, and failures
  • Monitoring and alerting for pipeline health
  • Deployment on a managed platform with version control
Potential benefits
Reporting stops depending on one person remembering to run a script. Data lineage is traceable, so when a number looks wrong, finding the cause takes minutes instead of days. The pipelines survive staff changes because they're documented and version-controlled.
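The audit layer described above can be sketched as a thin wrapper around each pipeline step. The real pipelines run on the orchestration tooling named in the appendix; this pure-Python sketch, with an in-memory list standing in for the audit table, shows only the pattern of recording counts and failures per run.

```python
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []  # stand-in for the audit table in the data store

def run_load(source: str, extract, transform, load) -> dict:
    """Run one pipeline step, recording its outcome in the audit log.

    `extract`, `transform`, and `load` are callables supplied by the
    pipeline definition; this wrapper adds only the audit layer, so every
    run leaves a row with counts and any failure reason.
    """
    entry = {
        "source": source,
        "started_at": datetime.now(timezone.utc).isoformat(),
        "status": "running", "rows_in": 0, "rows_out": 0, "error": None,
    }
    try:
        raw = extract()
        entry["rows_in"] = len(raw)
        clean = [transform(r) for r in raw]
        load(clean)
        entry["rows_out"] = len(clean)
        entry["status"] = "success"
    except Exception as exc:  # failures land in the audit trail, not in silence
        entry["status"] = "failed"
        entry["error"] = str(exc)
    AUDIT_LOG.append(entry)
    return entry
```

When a number looks wrong, the audit rows are what let the cause be found in minutes: which load, how many records, and where it stopped.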
2.3
Data Serving & Dashboard Development
When data is flowing but stakeholders still can't easily see what it's telling them.
When to engage this
Data is now flowing into a central store but stakeholders still can't easily see what it's telling them; leadership is asking for numbers that exist but aren't surfaced; or existing dashboards are built on fragile spreadsheets that break when the underlying data shifts.
Description
Development of the serving layer that makes engineered data accessible to its consumers — whether that's a finance team, branch managers, or leadership. Built with tools matched to the audience and the question being answered: custom interactive tools for internal workflows, standard BI dashboards for recurring reporting. Emphasis is on clarity and trustworthiness of the view, not on feature count.
Expected outputs
  • Dashboards or reporting views aligned to specific decision-making needs
  • Documentation of what each metric means and where it comes from
  • User access and permission setup
  • Handover session so internal staff can maintain simple changes themselves
Potential benefits
Leadership sees the numbers they need when they need them, not days later. Decisions get made from the same data everyone is looking at. Dashboards become trusted enough that they actually get used, rather than quietly ignored in favor of a familiar spreadsheet.
2.4
System Integration
When two or more systems need to share data but currently rely on exports, imports, or manual re-entry.
When to engage this
Two or more systems need to share data (e.g., POS to ERP, ERP to a marketing platform, e-commerce to inventory) but currently rely on exports, imports, or manual re-entry; a new system is being implemented and its data needs to reach or be reached from existing tools.
Description
Design and build of integrations between systems so that data flows automatically, in the right direction, with the right transformations, and with visibility into what succeeded and what failed. Scope includes defining what data is authoritative in which system, how conflicts are resolved, and how the integration behaves when either side is unavailable.
Expected outputs
  • Integration architecture document (source, target, direction, frequency, trigger)
  • Working integrations with error handling and retry logic
  • Master data governance rules (which system owns what)
  • Monitoring and alerting on integration health
Potential benefits
Eliminates entire categories of manual work (exports, re-entry, reconciliation). Reduces the errors that come with human copying. Lets each system do what it's best at while the business sees a unified operational picture.
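The "how the integration behaves when either side is unavailable" question above usually resolves to retry-with-backoff plus a dead-letter path. A minimal sketch, with the `push` callable standing in for whatever API client performs the actual write:

```python
import time

def sync_with_retry(push, record: dict, max_attempts: int = 4,
                    base_delay: float = 1.0, sleep=time.sleep) -> dict:
    """Push one record to the target system, retrying transient failures.

    `push` is an assumed stand-in for the real client call. The delay
    doubles on each attempt; records that exhaust their retries are
    flagged for a dead-letter queue rather than lost.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            push(record)
            return {"status": "synced", "attempts": attempt}
        except ConnectionError:
            if attempt == max_attempts:
                break  # give up: route to dead-letter for human review
            sleep(base_delay * 2 ** (attempt - 1))
    return {"status": "dead_letter", "attempts": max_attempts}
```

Injecting `sleep` keeps the backoff testable; in production the defaults apply and the dead-letter outcome feeds the monitoring and alerting listed above.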
04 · Competency Three

Digital Workflow Tool Development

Building small, purpose-built web applications that replace manual, error-prone data entry with validated, system-captured workflows — closing the last-mile gap where spreadsheets and paper forms still live.

3.1
Workflow Tool Scoping & Design
When a spreadsheet-based process is causing errors, but a full software solution would be overkill.
When to engage this
A business process still depends on spreadsheets, paper forms, or email-based submissions that are causing errors or delays; a full software solution would be overkill but the status quo is costing time and accuracy; or a data pipeline is being built and a clean way to capture the remaining manual inputs is needed.
Description
A focused scoping exercise to define what a small web tool needs to do, who uses it, what data it captures, and how it fits into the surrounding processes and data systems. The deliberate intent is to keep scope small and purpose clear — these are not platforms, they are precision tools. Produces a specification concrete enough to build against without becoming an IT project.
Expected outputs
  • Tool purpose statement (what it replaces, who uses it, what success looks like)
  • User flow and screen-level specification
  • Data model and validation rules
  • Integration points (where the captured data goes next)
Potential benefits
Keeps the tool focused on the one job that justifies its existence. Prevents scope drift that turns a two-week build into a six-month project. Gives the client a clear view of what they're getting before any code is written.
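The "data model and validation rules" output above is small enough to show whole. The model below is hypothetical — the fields and rules for a real tool come out of the scoping exercise — but it illustrates the level of precision a specification needs to be buildable without becoming an IT project.

```python
from dataclasses import dataclass

# Hypothetical exception-entry model: field names and allowed reasons are
# illustrative; the real set is defined with the process owner in scoping.
VALID_REASONS = {"refund", "void", "price_override"}

@dataclass
class ExceptionEntry:
    branch_id: str
    amount: float
    reason: str

    def validation_errors(self) -> list[str]:
        """Return every rule violation, so the form can show them all at once."""
        errors = []
        if not self.branch_id:
            errors.append("branch_id is required")
        if self.amount <= 0:
            errors.append("amount must be positive")
        if self.reason not in VALID_REASONS:
            errors.append(f"reason must be one of {sorted(VALID_REASONS)}")
        return errors
```

Rules this explicit are exactly what a spreadsheet cell cannot enforce — and they double as acceptance criteria for the build.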
3.2
Tool Development & Deployment
When a scoped tool is ready to be built and the business wants it in hand within weeks, not months.
When to engage this
A scoped tool is ready to be built; or an existing spreadsheet-based workflow has been identified as a high-value candidate for digitization and the business wants a working tool in hand within weeks rather than months.
Description
Development and deployment of the web application as specified. Built on a pragmatic stack (lightweight Python web framework with a managed backing data store, deployed on a managed platform) to keep infrastructure overhead minimal. Includes validation, authentication, and an audit trail — the kind of controls that a spreadsheet cannot offer.
Expected outputs
  • Deployed web application accessible to intended users
  • Source code in a version-controlled repository
  • User access and authentication setup
  • Basic user guide or walkthrough for first-time users
  • Handover documentation
Potential benefits
Manual data entry becomes structured, validated, and auditable. The data flows directly into the broader data system rather than sitting in a spreadsheet someone has to remember to send. Users get a tool that fits their job instead of fighting a spreadsheet that wasn't designed for it.
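The audit trail mentioned above can be as simple as an append-only table of who did what, when. The sketch below goes one optional step further and hash-chains each event to the previous one so that edits to history are detectable — a plain append-only table is often enough, so treat the chaining as one illustrative design choice, not the standard build.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_event(trail: list[dict], user: str, action: str,
                       payload: dict) -> dict:
    """Append one event to an append-only audit trail.

    Each event carries the hash of the previous event, so tampering with
    earlier history breaks the chain -- a property no shared spreadsheet
    offers. `trail` stands in for the audit table in the backing store.
    """
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    event = {
        "at": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "payload": payload,
        "prev_hash": prev_hash,
    }
    body = json.dumps(event, sort_keys=True).encode()
    event["hash"] = hashlib.sha256(body).hexdigest()
    trail.append(event)
    return event
```

The value for the client is mundane but real: when a number is questioned during close, the trail answers who entered it and when, instead of an argument over spreadsheet versions.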
3.3
Iteration & Handover
When a tool is in use and real-world feedback has surfaced refinements — or the client is ready to take maintenance in-house.
When to engage this
A tool is in use and real-world feedback has surfaced rough edges, missing fields, or new use cases; or the client is ready to take ongoing maintenance in-house and needs a clean handover.
Description
A short, time-boxed iteration phase after initial deployment to refine the tool based on how it's actually being used, followed by a structured handover. The iteration window matters because spreadsheet-replacement tools almost always reveal needs that weren't visible until real users were working with real data.
Expected outputs
  • Refined tool addressing post-deployment findings
  • Updated documentation reflecting the final state
  • Handover session with whoever will maintain the tool going forward
  • Recommended next steps (if any)
Potential benefits
The tool reaches the point where it's genuinely trusted by its users, not just deployed. Clients gain the ability to maintain or extend the tool on their own terms, rather than being locked into ongoing dependency.
05 · Appendix

Technology & tooling reference

The tools below are the current default stack. Choices are pragmatic and reviewed per engagement — the goal is always to match the tool to the problem, not the other way around.

View the stack
Orchestration & pipelines
Prefect for orchestration (chosen over Airflow for its Python-native ergonomics), dlt for lightweight ingestion, Python as the primary language.
Storage & serving
Supabase as the primary backing store for application and analytical data. Streamlit for custom internal tools and analyst-facing dashboards. Metabase or Evidence.dev for standard BI reporting.
Integration & automation
Google Cloud Pub/Sub for event-driven ingestion from Gmail, Google Drive as a shared intake layer where appropriate, custom Python for parsing and transformation.
Deployment & ops
GitHub for source control, Railway or Render for managed deployment of pipelines and small applications, with monitoring and alerting built into each deployment.
Working methods
AI-assisted development (Claude Code and related tooling) is used throughout to accelerate delivery — particularly for integration work, data transformation logic, and tool scaffolding. This compresses timelines and reduces cost without compromising the underlying engineering discipline.