Aptora’s legacy platform was built around a large, mature VB6 codebase and a product ecosystem where multiple applications interacted directly with shared data and duplicated business logic. As the company started planning its next-generation direction, I took on a dual role as Lead Engineer and Scrum Master to help create a scalable technical path forward and a delivery system capable of executing it.
“The goal was not a rewrite for its own sake. The goal was to reduce risk, eliminate duplicated logic, and make it possible to ship new products without fear.”
Scope of the Modernization Effort
The Aptora 360 initiative was not a single rewrite or feature delivery. It was a coordinated effort to modernize a long-lived platform across architecture, delivery process, security, and team structure.
- Transitioned a tightly coupled VB6 ecosystem toward an API-first architecture to eliminate duplicated business logic and reduce cross-product risk.
- Introduced shared services so core rules could be implemented once and safely consumed by multiple products.
- Established consistent authentication and authorization using an IdentityServer-based approach, enabling SSO, 2FA, and secure API access.
- Created a clear path for future integrations, including the ability for external developers and partner systems to build on top of the API.
- Prototyped and validated a shift from desktop-first UI delivery to a Blazor-based web front end, significantly reducing infrastructure costs tied to VM-based deployments.
Leadership supported this direction because it directly addressed the core problem: making it safer and faster to evolve the product line without unintended side effects.
Starting Point: High Coupling and Duplicate Logic
Before Aptora 360, the product ecosystem evolved over time into a pattern that made change riskier than it needed to be. Multiple applications were interacting directly with shared data, and key business rules were being re-implemented in more than one place. The system worked, but the cost of change kept increasing.
What “high coupling” looked like in practice
- Multiple products performed direct database reads and writes, which meant small schema changes or edge-case bugs could impact multiple apps at once.
- The same business rules existed in duplicate implementations across products, so “fixing a bug” often meant fixing it in more than one place and hoping nothing got missed.
- Stakeholders would request changes in one product that quietly diverged from how another product behaved, creating inconsistency and support burden over time.
- The blast radius of changes was hard to predict, so teams became understandably cautious. That caution slowed delivery and made refactoring feel dangerous.
The core issue was not “legacy code is bad.” The core issue was that the platform did not have a clean way to express business rules once and reuse them everywhere. That was the main reason I pushed for an API-first approach. It created a single place for critical logic, reduced duplication, and made changes safer because products stopped reaching directly into shared data with their own interpretations of the rules.
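The "implement once, consume everywhere" idea can be sketched in a few lines. This is an illustrative example only: the rule, field names, and rate are hypothetical, not Aptora's actual domain model, and Python stands in for the platform's real stack.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    subtotal: float
    tax_exempt: bool

# Before the API-first shift, each client app re-implemented pricing logic
# against shared tables, and the copies could silently drift apart.
#
# After the shift, one service-owned function is the single source of truth,
# and every product calls it through the API instead of computing the rule
# locally.
def compute_total(invoice: Invoice, tax_rate: float = 0.08) -> float:
    """Single authoritative implementation of the pricing rule."""
    if invoice.tax_exempt:
        return round(invoice.subtotal, 2)
    return round(invoice.subtotal * (1 + tax_rate), 2)
```

Fixing a bug in `compute_total` now fixes it for every consumer at once, which is exactly the duplication problem the bullets above describe.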
Inflection Point: When a Straight Conversion Was Not Enough
Early in the modernization discussion, one proposed path was to contract a third party to perform a direct conversion of the existing VB6 codebase into a more modern desktop framework. On paper, this appeared to be a lower-risk, faster option than rethinking the system.
When a sample of the converted code was delivered for evaluation, it quickly became clear that this approach would not solve the underlying problems. The sample faithfully reproduced the legacy structure, but it also carried forward the same tight coupling and complexity that made the original system hard to evolve.
What the conversion sample revealed
- The resulting code was difficult to reason about, with heavy interdependencies carried over from the legacy implementation.
- Errors and inconsistencies were already present in the sample, despite it covering only a small portion of the overall system.
- The converted structure made it clear that future changes would still require touching many unrelated areas of the codebase.
- Rather than reducing risk, the approach would lock the company into a new version of the same problems.
This review became an important decision point. A straight conversion might have delivered something that compiled and ran, but it would not have meaningfully improved maintainability, safety, or long-term velocity.
I recommended stepping back and addressing the core architectural issues instead of copying them forward. That meant introducing clear boundaries, centralizing business logic behind APIs, and treating modernization as an opportunity to reduce coupling rather than preserve it. Leadership agreed with this assessment, and it set the direction for the Aptora 360 effort that followed.
API-First Architecture: One Source of Truth
Once we stepped away from a straight code conversion, the core architectural goal became clear: business rules should be implemented once and consumed everywhere. An API-first approach provided a clean way to enforce that principle while reducing risk across the product line.
Prior to this shift, multiple applications interpreted the same rules in their own ways by accessing shared data directly. Even small changes could produce unintended side effects in products that were not actively being worked on. APIs created a clear boundary where behavior could be defined, validated, and evolved deliberately.
What the API layer standardized
- Core business operations exposed through well-defined REST endpoints rather than ad-hoc database access.
- Consistent request and response models so products interpreted data the same way.
- Centralized validation and rule enforcement, ensuring behavior remained consistent regardless of which product initiated an action.
- A clear ownership model: changes to rules happened in the service, not scattered across multiple client implementations.
This structure immediately reduced duplication and made the system safer to modify. Instead of asking “what else might this break,” teams could reason about changes in terms of a single service boundary.
Designing for growth, not sprawl
- Services were organized around cohesive domains, not individual screens or features.
- APIs were designed with forward compatibility in mind to avoid breaking consumers unnecessarily.
- Shared patterns for error handling and validation made failures easier to diagnose across products.
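The shared validation and error-handling pattern above can be sketched as a consistent response envelope. This is a minimal, hypothetical illustration in Python (field names and the envelope shape are assumptions, not the platform's actual contract): every endpoint validates at the boundary and reports failures in one machine-readable shape, so every client product handles errors with one code path.

```python
def validate_customer(payload: dict) -> dict:
    """Validate a request body and return a consistent success/error envelope."""
    errors = []
    if not payload.get("name"):
        errors.append({"field": "name", "message": "Name is required."})
    if "email" in payload and "@" not in payload["email"]:
        errors.append({"field": "email", "message": "Email is not valid."})
    if errors:
        # The same envelope for every endpoint: a stable error code plus
        # per-field details clients can render without special-casing.
        return {"ok": False,
                "error": {"code": "validation_failed", "details": errors}}
    return {"ok": True, "data": payload}
```

Because the shape is uniform, a failure from any service is diagnosable the same way, which is what made cross-product debugging easier.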
Beyond internal use, this approach also created a foundation for future integrations. External systems and partner developers could interact with the platform through the same APIs used internally, rather than requiring special-case logic or direct database access.
From a leadership perspective, the biggest win was confidence. Centralizing logic behind APIs significantly reduced fear around change. Teams could move faster because they understood where behavior lived and how it was consumed, and leadership could support new initiatives without worrying about cascading side effects across the product suite.
Identity and Access: A Consistent Security Model
As the platform moved toward an API-first architecture, it became clear that authentication and authorization needed to be treated as a shared platform concern, not something each product handled independently.
Previously, authentication logic varied by application, which made it harder to reason about access, user identity, and security boundaries across the ecosystem. Introducing a centralized identity solution created a single, consistent model for how users and systems interacted with the platform.
What the identity layer enabled
- Single sign-on across products, reducing friction for users moving between applications.
- Support for two-factor authentication to improve account security without requiring each product to implement it independently.
- A unified approach to API authentication, so services and clients followed the same security model.
- Clear separation between user identity, permissions, and application logic.
From an architectural perspective, this meant that APIs no longer needed to make assumptions about where requests originated. Each request arrived with a well-defined identity and set of permissions, allowing services to focus on enforcing business rules rather than re-implementing security concerns.
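Enforcement at the service boundary can be sketched as a declarative scope check. This is an illustrative Python sketch, not IdentityServer itself: the scope name, request shape, and decorator are hypothetical, but the pattern matches the description above, where token verification happens upstream and handlers only check the permissions the request arrived with.

```python
from functools import wraps

class Forbidden(Exception):
    """Raised when a caller's token does not grant the required scope."""

def require_scope(scope: str):
    """Decorator: reject the call unless the caller's token grants `scope`."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(request, *args, **kwargs):
            # The identity layer has already verified the token; the service
            # only enforces the permission, not the authentication mechanics.
            if scope not in request.get("scopes", []):
                raise Forbidden(f"missing scope: {scope}")
            return handler(request, *args, **kwargs)
        return wrapper
    return decorator

@require_scope("invoices:write")
def create_invoice(request, invoice):
    # Business logic only -- no per-product security assumptions.
    return {"created": invoice}
```

Because every service enforces access the same way, adding a new consumer (internal or external) means granting scopes, not writing new security code.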
Designing for internal and external consumers
- Internal products and future external integrations authenticated using the same mechanisms, reducing special cases.
- Authorization scopes and roles provided a clean way to limit access without tightly coupling APIs to specific products.
- The identity model supported gradual expansion of the platform to partners and third-party developers.
This consistency simplified both development and operations. New services inherited a proven authentication model by default, and changes to security behavior could be managed centrally instead of being rolled out piecemeal across multiple applications.
More importantly, it reinforced trust in the platform. Stakeholders could support opening the system to new products and integrations knowing that access control was intentional, auditable, and designed as part of the platform rather than an afterthought.
Front-End Direction: From Desktop to Blazor
Alongside the API-first shift, we faced an important question: should the next-generation UI remain desktop-first, or should we move toward a web front end that could scale more easily and lower operational overhead? I built a rapid proof-of-concept to validate a Blazor-based approach and give stakeholders something concrete to evaluate.
The legacy ecosystem relied heavily on desktop applications, and in some cases required users to run the software on hosted virtual machines. That approach was functional, but it was costly and operationally heavy. Moving toward a web UI dramatically improved the deployment story and reduced infrastructure complexity.
Why web was the right direction
- Lower infrastructure costs by reducing reliance on VM-based desktop delivery.
- Simpler deployments and faster iteration cycles compared to shipping and supporting desktop builds.
- Broader accessibility: users could work from more environments without complex installation and setup.
- A cleaner separation between UI and business logic by consuming the same APIs used across the platform.
The proof-of-concept was intentionally scoped to answer the real questions stakeholders had: feasibility, performance, user experience, and how the UI would interact with the new service layer. Once those concerns were resolved with working software, it became much easier to align on the direction and commit to it.
Front-end architecture patterns I established
- A strategy for shared UI components so common patterns were implemented once and reused consistently.
- A data service pattern for API consumption, keeping HTTP concerns, serialization, and error handling out of UI components.
- Clear conventions for state, validation, and component boundaries to keep the UI maintainable as more developers contributed.
- A clean separation between presentation and domain behavior so the UI remained testable and easier to evolve.
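The data service pattern from the list above can be sketched briefly. The example is in Python for compactness (the real front end used Blazor), and the class, endpoint path, and fields are hypothetical: the point is that UI components depend on a small service interface while HTTP, serialization, and error normalization live behind it.

```python
import json

class CustomerService:
    """Owns transport concerns so UI components never touch HTTP directly."""

    def __init__(self, http_get):
        # The transport is injected, which keeps the service testable
        # without a running server.
        self._http_get = http_get

    def get_customer(self, customer_id: int) -> dict:
        status, body = self._http_get(f"/api/customers/{customer_id}")
        if status != 200:
            # Normalize failures here so components handle one error shape.
            raise RuntimeError(f"request failed with status {status}")
        return json.loads(body)

# A fake transport stands in for the real HTTP client during tests.
def fake_http_get(url):
    return 200, json.dumps({"id": 7, "name": "Acme"})
```

Swapping `fake_http_get` for a real client changes nothing in the UI layer, which is what kept components testable as the team grew.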
Beyond the technical benefits, this change helped align the modernization effort with the business. The platform could evolve faster, onboarding new functionality became simpler, and the operational cost of supporting the system decreased compared to the VM-heavy desktop approach.
I also mentored other developers on the new patterns and built additional targeted proofs of concept to answer feasibility questions quickly, reducing uncertainty and keeping momentum high.
Team and Delivery System: Making Modernization Executable
The scale of the Aptora 360 effort made it clear that architecture alone would not be enough. The way work flowed through the team needed to change so modernization could happen predictably without constant interruption or shifting priorities.
I became the team’s official Scrum Master and earned my PSM I certification so I could lead the delivery transformation alongside my role as Lead Engineer. The goal was not “process for process’s sake,” but a system that reduced chaos and allowed engineers to focus on building.
From ad-hoc work to a clear operating model
- Established sprint planning, backlog refinement, sprint reviews, retrospectives, and daily standups.
- Created a single product backlog in Azure DevOps, owned and prioritized by the Product Owner.
- Defined a clear interface for stakeholders: requests flowed through the Product Owner instead of directly interrupting developers.
- Introduced estimation and capacity planning using planning poker to make delivery more predictable.
Prior to this change, work was highly siloed and reactive. Developers were often pulled in different directions, priorities shifted frequently, and it was difficult to tell what would actually be delivered in a given time frame. Over the course of several months, the team transitioned to a more stable cadence with clearer ownership and expectations.
Quality gates, CI/CD, and environments
- Introduced pull request requirements with at least one senior engineer approval.
- Integrated unit testing into the pipeline and blocked merges when tests failed.
- Set up CI/CD pipelines in Azure DevOps to standardize builds and deployments.
- Established dev, staging, and production environments with clearer promotion and approval flow.
- Defined a shared Definition of Done so quality expectations were explicit.
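A merge-blocking test gate like the one described above can be expressed as a minimal Azure Pipelines definition. This is an illustrative sketch, not the team's actual pipeline: project globs, pool image, and step layout are assumptions, and the PR approval requirement itself lives in branch policies rather than the YAML.

```yaml
# Sketch of azure-pipelines.yml: build the solution and run unit tests;
# a failed test step fails the pipeline, which (combined with a branch
# policy requiring a passing build) blocks the merge.
trigger:
  branches:
    include: [main]

pool:
  vmImage: 'windows-latest'

steps:
  - task: DotNetCoreCLI@2
    displayName: Build
    inputs:
      command: build
      projects: '**/*.sln'

  - task: DotNetCoreCLI@2
    displayName: Unit tests (merge-blocking)
    inputs:
      command: test
      projects: '**/*Tests.csproj'
```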
QA was previously inconsistent and often came in late. I worked to embed QA directly into the Scrum team so testing happened continuously instead of at the end. QA also contributed documentation that supported story creation and reduced ambiguity during implementation.
Team changes, hiring, and mentorship
- Helped rebuild capacity after team attrition by hiring offshore developers and a new senior engineer.
- Mentored developers on the new architecture, tooling, and delivery expectations.
- Created scaffolding and templates to enforce architectural consistency and speed up onboarding.
- Used code reviews and pairing to maintain quality across a distributed team.
The transition took time and required buy-in, but resistance gradually softened as the benefits became visible. Productivity increased, quality improved, and the team shifted from reacting to requests to delivering against a clear plan.
This delivery system made it possible to take on a large, multi-phase modernization effort without burning out the team or sacrificing quality. It also created a foundation the organization could continue to build on after the initial platform work was complete.
Scaling the Team: Hiring, Mentorship, and Leverage
The Aptora 360 effort did not start with a fully staffed or stable team. Over time, we experienced attrition through retirement, departures, and performance-based exits. That meant modernization work had to continue while the team itself was being rebuilt.
For a period, the core execution group consisted primarily of myself and one other senior engineer, supported by offshore developers. This made leverage, clarity, and consistency critical. Scaling output required more than adding people. It required systems that helped new contributors be effective quickly.
Hiring and onboarding
- Helped source and hire offshore developers through Upwork to restore capacity during the modernization effort.
- Participated in hiring a new senior engineer to help lead execution and share architectural ownership.
- Established clear expectations around code quality, pull requests, testing, and delivery cadence from day one.
Onboarding focused less on explaining every part of the system and more on teaching patterns. Developers learned where logic lived, how APIs were structured, and how changes flowed through the platform. This reduced the need for constant supervision and allowed contributors to work independently sooner.
Mentorship and consistency
- Mentored developers on API design, data access patterns, and front-end integration.
- Used pull requests and code reviews as teaching tools, not just approval gates.
- Created scaffolding that generated consistent API structure so new services followed established architectural patterns by default.
- Focused developer effort on business logic and data modeling rather than repetitive setup work.
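The scaffolding idea above can be illustrated with a toy generator. This Python sketch is hypothetical (the template and class shape are invented for illustration): the point is that new services are stamped from a shared template, so architectural conventions are the default rather than something each developer must remember.

```python
# A shared template encodes the team's conventions once.
SERVICE_TEMPLATE = """\
class {name}Service:
    def list(self):
        raise NotImplementedError

    def get(self, item_id):
        raise NotImplementedError
"""

def scaffold_service(name: str) -> str:
    """Stamp out a new service skeleton that follows the shared conventions."""
    return SERVICE_TEMPLATE.format(name=name)
```

A new contributor starts from a generated skeleton and fills in business logic, which is how setup work was kept off developers' plates.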
This approach allowed a relatively small team to make steady progress on a large modernization effort. By combining hiring, mentorship, and tooling, we reduced variability in output and made it easier for new contributors to align with the platform’s direction.
More importantly, it ensured that architectural decisions did not live only in my head. Patterns, conventions, and expectations were encoded in code, pipelines, and documentation so the team could continue building even as people changed.
What This Work Represents
Because Aptora 360 functioned as both a field service platform and a financial system similar to QuickBooks, correctness, auditability, and controlled change were treated as first-class concerns throughout the modernization.
The Aptora 360 modernization was not a single rewrite or technology swap. It was an effort to make a mature, revenue-critical platform safer to evolve by reducing coupling, clarifying ownership, and building systems that could scale beyond any one person.
From an engineering perspective, the focus was on creating clear boundaries: APIs instead of shared database access, centralized identity instead of ad-hoc authentication, and shared front-end patterns instead of duplicated UI logic. These decisions reduced risk and made it possible to move faster without fear of unintended side effects.
From a delivery perspective, the work required establishing a predictable operating model. Introducing Scrum, CI/CD pipelines, embedded QA, and clear quality gates gave the team the structure needed to take on a large, multi-phase modernization effort while continuing to support the existing business.
Just as importantly, this project reinforced a lesson I carry forward: meaningful technical change requires buy-in. Architecture alone is not enough. Progress comes from explaining tradeoffs clearly, building trust through working proofs of concept, and helping people see how change improves their day-to-day work.
If you’d like to talk through any part of this effort — architectural decisions, delivery tradeoffs, or what I would refine with hindsight — I’m happy to go deeper.