
Reference engagement

Five weeks. Four printers. Sixteen-to-eighty labels per minute.

LBS Distribution, West Sacramento. The reference engagement Phenominal was built on. LBS leadership granted me permission to talk publicly about this work. Engagement dates: 2026-03-23 through 2026-04-21.

At a glance

Engagement outcomes.

Sustained label rate

Before: 16–17 labels/min, with frequent communication errors and print-limit failures.
After: 80 labels/min sustained by a single operator, with ~$10K/yr of dependencies that can now be retired from the critical path.

Spooling time on a 3,700-label job

Before: 5+ hours through Acumatica → BarTender → hosted server → printer.
After: under one minute via direct ZPL over LAN.

Error recovery time per stoppage

Before: 30–45 minutes, with operators burning physical labels for visual validation.
After: ~30 seconds, with the failure mode visible in the operator's browser before recovery starts.

Simultaneous machine count

Before: never achieved — license caps and spooling ceilings kept the fleet partially idle.
After: all four labeling machines running production simultaneously from April 7 onward.

Operator footprint of the new pipeline

Before: 0 workstations on the new pipeline.
After: 7 workstations, every operator role and every printer line covered, by end of engagement (April 21).

Context

What was happening when I arrived.

Production was stalled. Pre-engagement run-log baselines show known but unresolved errors — E1000 (“print signal sent too close together”), E1005 (“print limit exceeded”), E4010, “communication loss” — treated as printer-side faults and chased through Videojet support without resolution. Spooling speed had been blamed on disk space, then SQL Express, then BarTender configuration, then Acumatica throttling, then the hosted print server. Each fix was tried; each landed; the floor stayed at roughly sixteen labels per minute.

Operators had developed a workaround — burning physical labels for visual validation before committing to a full run — because the system had no pre-print preview surface. The workaround had become normal. It was wasting product, but more importantly, it was masking the real failure: nobody had visibility into the field-map state on a per-run basis.

The integration vendors were each insisting their component was working as designed. They were each correct. The problem lived between them, in handoffs no single party owned.

Method

What I did, in order.

  1. Day 1 (March 23) — floor walkthrough. Traced the Acumatica → BarTender → DataFlex chain end-to-end with the operator. Found a wrong port number in the integration config that same morning: the vendor's deployment had passed its checklists but was pointing at port 8082, the wrong port. Same day: tied E1000 to a timing mismatch between the conveyor belt and the laser detection and got it adjusted on the spot. First stable Orange Tree run hit 3,539 labels at 20/min, three times the previous best.
  2. Days 2–3 — trace failures to root cause, document for vendors. Sustained an 80–100/min single-operator rate on the Orange Tree remainder once the buffer was established. Traced the 8,000-label failure pattern to a 1024 MB SQL Express ceiling on BarTender Pro: default verbose logging was eating ~90% of the database before label data started loading. Escalated to the print-server vendor with the evidence they needed to fix what was theirs. Built a custom bulk-uncheck script to preserve accounting integrity on failed mid-batch runs.
  3. Days 4–7 — root cause forced into view. Spooling regressed to 16–17/min on March 26. Vendors blamed each other. On the April 1 perception-vs-reality blind test, the operations team reported that Acumatica had “slowed down” because of spooling; spooling had in fact been off for an hour. On the April 3 joint vendor call, I drove the room to the math: the Acumatica-to-BarTender integration sends one HTTP request per label, and at ~3.5 seconds per sequential round trip that is 16–17 labels per minute, exactly the rate the floor was stuck at, and nearly five hours for a 5,000-label job (the arithmetic is sketched after this list). Every prior fix had been correct in scope and irrelevant to the bottleneck.
  4. Days 8–14 — build only what had to be built. Stopped waiting for the integration vendor's batch-dispatch redesign. Built a Chrome extension augmenting Acumatica with pre-print preview and direct-to-printer dispatch. April 6: a 3,700-label job that previously took 5+ hours spooled in under a minute. April 7: discovered that the DataFlex 6330 printers expose ZPL Emulation on TCP port 9001, which means raw ZPL over the LAN with no print-server middleware required. April 9: end-to-end direct ZPL confirmed on all four printers.
  5. Days 15–30 — warehouse rollout, fleet tuning, and SOPs. April 15: live in production on three of the four printers; first full operator-driven production day. April 16–17: per-printer workstation queue architecture across four Dell Latitudes, recovered from server-room inventory after a departed managed-service provider had left LBS locked out of them. April 20–21: a fleet-wide CLARiTY config diff caught uniform drift on RecordBufferMaximum, sitting at the factory default of 1000 against a schema ceiling of 10,000; raised it across the fleet. SOPs 52, 53, and 54 formalized from tribal knowledge. End of engagement: all four printers running at full capacity, zero errors, seven workstations on the new pipeline.
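
The April 3 math, restated as a few lines anyone can rerun. The only input is the ~3.5-second round trip observed on the call; everything else follows from it.

```typescript
// One HTTP request per label, dispatched sequentially: throughput is bounded by round-trip time.
const roundTripSeconds = 3.5;                   // observed Acumatica -> BarTender round trip
const labelsPerMinute = 60 / roundTripSeconds;  // ~17.1, the 16-17/min floor the fleet was stuck at
const jobLabels = 5_000;
const jobHours = (jobLabels * roundTripSeconds) / 3600; // ~4.9 hours for a 5,000-label job

console.log(labelsPerMinute.toFixed(1), jobHours.toFixed(1)); // "17.1" "4.9"
```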

Deliverables

The technical work, named honestly.

  • Bacumatica — Chrome extension augmenting Acumatica with a pre-print preview surface and a direct-to-printer dispatch path. Manifest V3 service worker, isolated and main-world content scripts, JSON template format replacing six existing .btw templates.
  • Direct-to-printer ZPL pipeline over TCP port 9001 (DataFlex ZPL Emulation), bypassing the BarTender server and the hosted print-server dependency entirely. QZ Tray (open-source, LGPL) as the browser-to-printer transport. A minimal sketch of the raw-ZPL path follows this list.
  • Per-printer workstation routing across 4 Dell Latitudes, hard-bound per station. Recovered from server-room inventory; three wiped and reimaged, one set aside with a shorted board. $229 USB WiFi adapter receipt invoiced separately.
  • CLARiTY tuning across the Videojet DataFlex 6330 fleet. Per-printer drift caught (JobUpdateQueue at 1 vs. tuned 20 on a sister printer); fleet-wide drift caught (RecordBufferMaximum at 1000 vs. schema-recommended 10,000). The fleet-wide check is sketched after this list.
  • ZPL renderer absorbing seven distinct DataFlex quirks: ^FN accumulation across stored-format recalls (resolved by emitting full ZPL per label), ^BY persistence (forced reset before every barcode), required ^PQ1,0,1,Y on every label (otherwise the last label becomes a standing job), ^GB rounded-corner silent ignore (rendered to bitmap), font cap-height scaling on TTF substitution, border-thickness mismatch between editor and ZPL, and the printer trimming the bounding box to ink extent (handled with invisible black corner pixels). Three of these quirks appear in the renderer sketch after this list.
  • SOPs 52, 53, and 54 — SOP-52 escalation matrix for the new pipeline; SOP-53 row-number recovery procedure; SOP-54 printhead-freeze recovery for Peter Griffin. All formalized from on-floor tribal knowledge.
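
The raw-ZPL path that retired the middleware is simple enough to sketch. The snippet below is an illustration under assumptions, not the Bacumatica dispatch code: it opens a TCP connection to the printer's ZPL Emulation port and writes one self-contained label, with a placeholder host address and label content. In the deliverable itself the browser hands the ZPL to QZ Tray, which owns the connection to the printer.

```typescript
import * as net from "node:net";

// Minimal raw-ZPL dispatch: one complete label, straight to the printer, no print-server middleware.
// Host and label content are placeholders; port 9001 is the DataFlex ZPL Emulation port noted above.
function sendLabel(host: string, zpl: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const socket = net.createConnection({ host, port: 9001 }, () => {
      socket.end(zpl, "ascii"); // write the label, then close the connection
    });
    socket.on("close", () => resolve());
    socket.on("error", reject);
  });
}

// A self-contained label: ^XA/^XZ wrap the format, and ^PQ1,0,1,Y prints exactly one copy and
// closes the job out so the last label never lingers as a standing job (see the quirk list above).
const zpl = "^XA^FO50,50^A0N,40,40^FDLOT 20260415^FS^PQ1,0,1,Y^XZ";
sendLabel("192.0.2.50", zpl).catch(console.error);
```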
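
Three of those quirks are enough to show the renderer's emit strategy. The sketch below is hypothetical: the field names and template shape are mine, not the Bacumatica JSON template format. It only illustrates emitting full ZPL per label so ^FN values cannot accumulate, forcing a ^BY reset before every barcode, and closing each label with ^PQ1,0,1,Y.

```typescript
// Hypothetical per-label emit; field names and layout are illustrative only.
interface LabelFields {
  sku: string;
  lot: string;
  barcode: string;
}

function buildLabel(f: LabelFields): string {
  return [
    "^XA",
    // Full format per label: no stored-format recall, so ^FN values have nothing to accumulate into.
    `^FO40,40^A0N,36,36^FD${f.sku}^FS`,
    `^FO40,90^A0N,30,30^FDLOT ${f.lot}^FS`,
    // ^BY settings persist on the printer between labels, so reset module width/ratio/height
    // immediately before every barcode.
    "^BY3,2,120",
    `^FO40,140^BCN,120,Y,N,N^FD${f.barcode}^FS`,
    // One copy, then close the job out so the last label does not remain resident on the printer.
    "^PQ1,0,1,Y",
    "^XZ",
  ].join("\n");
}
```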
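
The fleet-wide CLARiTY check is conceptually small. The sketch below assumes each printer's parameters have already been exported and parsed into a flat name-to-value map; the export and parsing are out of scope, and the function name is mine, not a CLARiTY term. What it encodes is the distinction that mattered on April 20–21: a per-printer diff catches the one printer that disagrees with its sisters, while uniform drift, where every printer agrees and all of them sit at the wrong value, only shows up against the schema.

```typescript
// Hypothetical drift check over already-parsed printer settings.
type Fleet = Record<string, Record<string, number>>; // printer name -> parameter -> value

function findUniformDrift(fleet: Fleet, schema: Record<string, number>): string[] {
  const printers = Object.values(fleet);
  return Object.keys(schema).filter((param) => {
    const values = printers.map((p) => p[param]);
    const allEqual = values.every((v) => v === values[0]);
    // Uniform drift: the whole fleet agrees with itself and disagrees with the schema value,
    // which is exactly the case a printer-to-printer diff can never flag.
    return allEqual && values[0] !== schema[param];
  });
}

findUniformDrift(
  { line1: { RecordBufferMaximum: 1000, JobUpdateQueue: 20 },
    line2: { RecordBufferMaximum: 1000, JobUpdateQueue: 20 } },
  { RecordBufferMaximum: 10000, JobUpdateQueue: 20 },
); // -> ["RecordBufferMaximum"]
```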

What this proves at the practice level

The diagnostic process is reproducible. The work followed a structured sequence — observe, document for vendors, give vendors the chance to own their gap, build only when waiting becomes more expensive than building — that is portable to any distributor with the same regulatory exposure. The April 3 root-cause moment didn't require deep cannabis-industry expertise. It required end-to-end visibility across four vendors and the willingness to do arithmetic on the floor.

The cross-vendor architecture work is genuinely scarce. It requires software, hardware, and operator-on-floor capability in one person. Most consultancies pick one of those three. Most software vendors can't speak to operators. Most operators can't write a Chrome extension that talks to a Zebra over QZ Tray, or read a CLARiTY schema closely enough to spot a fleet-wide drift that a per-printer diff missed. The gap between those three skill sets is where I work.

Schedule a 30-minute call

Thirty minutes. No slides. If an engagement does not make sense, I will tell you on the call.