GE Renewable Energy · UX Design & Systems
Centralizing Design at GE to Move Faster and Smarter
What happens when every team builds their own version of the same button? You spend more time fixing inconsistencies than shipping product.
Role
UX Design Intern
Platform
Platform & Web
Timeline
12 weeks
The Problem
Multiple teams. Zero shared source of truth. Design debt compounding daily.
GE Renewable Energy had multiple teams designing and building across different platforms — Digital Wind Farm, MTM Mobile, and more — but no unified design language connecting any of it. Designers duplicated work constantly. Developers recreated the same components with slight variations across codebases. Even basic decisions like button styles, font weights, and icon usage differed wildly depending on which team you talked to.
The result was accumulating design debt, slower build times, and a product ecosystem that felt inconsistent to the people using it. My goal as a UX Design Intern was to research, scope, and build a centralized design system — the GE REDS Design System — that could serve as a single source of truth for design and engineering teams across the organization.
Discovery & Research
Before building anything, I needed to understand what we actually had.
I started by auditing our existing Sketch libraries — reviewing every page, modal, menu, and interaction element across our current products. What I found confirmed the problem: dozens of inconsistent button styles, misused font weights, one-off icons with no documentation, and redundant naming conventions that meant the same thing in different ways depending on which team created them.
Beyond the audit, I went hands-on with our actual product UI. I annotated existing components with pointed questions — "What does this do for the user?", "Why is this card glowing?", "What's the purpose of this chart?" — and brought those annotations into conversations with dev and product teams. This wasn't just about finding inconsistencies. It was about forcing alignment on decisions that had never been explicitly made.

Annotated UI components from existing GE products, surfacing questions about purpose, behavior, and visual logic that had never been formally documented. These became the foundation for cross-team alignment conversations.
Key Insights
Three root causes behind the inconsistency.
01
No shared language
Teams were making the same design decisions independently, with no mechanism to share outcomes. The same component could exist in three forms across three codebases with no single version considered canonical.
02
Undocumented decisions
Most UI components had no rationale attached to them. Nobody could answer why a pattern worked the way it did — which made it impossible to build on consistently or hand off clearly to engineering.
03
No tool for sharing at scale
Even if teams wanted to reuse each other's work, there was no platform to make that practical. Sketch libraries were siloed, versioning was manual, and there was no approval or contribution process in place.
Platform Selection
I didn't just pick a tool. I built the case for it.
Choosing the right platform was a real decision with real stakes: the wrong tool would create adoption friction, slow down engineering handoff, or get blocked by GE's approval process entirely. I evaluated four candidates (InVision DSM, Knapsack, Zeroheight, and Storybook) against criteria that mattered to our actual situation: Sketch integration with live sync, the ability to document both design and code in one place, approval-free collaboration for a cross-functional team, and GE corporate approval status.
InVision DSM was the only platform that checked all four. It offered real-time Sketch sync, combined design and code documentation, easy collaboration without gated approval flows, and was already cleared by GE's internal security and procurement process. I presented this analysis directly to stakeholders before a single component was built, which meant the platform decision had full buy-in before we committed to it.

The competitive analysis matrix used to evaluate and select InVision DSM. InVision was the only platform to satisfy all four criteria — Sketch sync, design + code documentation, collaboration, and GE approval.
Scoping the MVP
100+ components. One intern. Twelve weeks. I needed a plan.
Before building, I scoped the full MVP component list across atoms, molecules, organisms, templates, and pages — covering everything from color tokens and typography to data visualization charts, modals, and full page layouts for both Desktop and Mobile. This wasn't just a wishlist. It was a prioritized roadmap that I used to align engineering and product on what was shipping in the initial release and what came after.
I also defined the versioning strategy upfront — semantic versioning (Major.Minor.Patch) — so teams knew exactly what a version bump meant and when to expect breaking changes. Major for large-scale redesigns touching all tokens, Minor for targeted token changes like color or typography updates, Patch for bug corrections. This gave the system a product-level maturity from day one.
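That versioning policy can be sketched as a small helper. This is an illustrative example only, not part of the actual REDS tooling; the change categories and the `bumpVersion` function are names I've made up to show how the Major.Minor.Patch rules from the strategy map to version bumps.

```typescript
// Illustrative sketch of the REDS semantic-versioning policy (Major.Minor.Patch).
// Change categories mirror the strategy: full redesigns bump Major,
// targeted token updates bump Minor, bug corrections bump Patch.

type Change = "full-redesign" | "token-update" | "bug-fix";

function bumpVersion(version: string, change: Change): string {
  const [major, minor, patch] = version.split(".").map(Number);
  if (change === "full-redesign") {
    // Large-scale redesign touching all tokens: reset Minor and Patch.
    return `${major + 1}.0.0`;
  }
  if (change === "token-update") {
    // Targeted token change (e.g. color or typography): reset Patch.
    return `${major}.${minor + 1}.0`;
  }
  // Bug correction: smallest possible bump.
  return `${major}.${minor}.${patch + 1}`;
}
```

For example, a color-token update to a hypothetical release 1.2.3 would ship as 1.3.0, signaling to consuming teams that nothing breaking changed.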
GE REDS Design System — MVP scope
Desktop + Mobile · Mid-October release
Atoms: 8 · Molecules: 19 · Organisms: 16 · Templates: 4 · Pages: 4
Total components: 100+
Platforms: Desktop + Mobile
Release: Mid-October
Information Architecture
A system nobody can navigate is a system nobody uses.
Once I had the component list scoped, I built a full information architecture for the system — mapping every section, subsection, and component across Desktop and Mobile platforms. Rather than defaulting to strict atomic methodology (Atoms → Molecules → Organisms), I organized by usage and visual similarity. In an enterprise environment with dozens of contributors and varying design maturity levels, intuitive hierarchy mattered more than methodological purity.
The IA covered Style Guide, Iconography, Templates, Navigation, and Components for Desktop — and a parallel structure for Mobile. Every section was mapped before any component was built in DSM, which meant the team always knew where things lived and how the system would grow.

The full information architecture for the GE REDS Design System, covering Desktop and Mobile platforms across Style Guide, Iconography, Templates, Navigation, and Components.
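A usage-first IA like this can be sketched as a simple tree. The top-level sections below come straight from the case study; the leaf entries and the `listSections` helper are illustrative assumptions, not the real REDS structure.

```typescript
// Illustrative sketch of a usage-first IA for one platform (Desktop).
// Section names follow the case study; leaf entries are examples only.

type IA = { [section: string]: string[] };

const redsDesktopIA: IA = {
  "Style Guide": ["Color", "Typography"],     // example leaves
  "Iconography": ["System icons"],
  "Templates": ["Dashboard layout"],
  "Navigation": ["Top bar", "Side nav"],
  "Components": ["Buttons", "Cards", "Charts"],
};

// With the whole map defined up front, "where does this live?"
// becomes a lookup rather than a search through siloed libraries.
function listSections(ia: IA): string[] {
  return Object.keys(ia);
}
```

A parallel `redsMobileIA` object would mirror the same sections, which is what keeps the two platforms navigable in the same way.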
Implementation
Three calls that shaped how the system actually got built.
01
Usage-first IA over atomic methodology
Atomic design is elegant in theory, but in a large enterprise with varying levels of design literacy, asking engineers to search for "molecules" creates friction. I organized the system by how people actually think about components, by their purpose and context, which drove faster adoption and fewer "where do I find this?" questions.
02
Engineering alignment before handoff, not after
I sat in on engineering sprint meetings to understand how the device data was structured, what constraints existed around component state, and what the dev team actually needed from documentation. This meant component specs included real use cases, edge case handling, and code snippets — not just visual references. The result was a 40% drop in design-related support tickets post-launch.
03
The system as a product, not a deliverable
Most intern projects get handed off and forgotten. I treated REDS like a living product — with version notes, a contribution model, weekly update communications, and feedback sessions with developers to identify friction. I presented the system to design, product, and engineering leadership and positioned it as something that required ongoing ownership, not just a one-time build.
Outcomes
Numbers that actually moved.
100+
Components shipped in the initial REDS library across Desktop and Mobile
~40%
Reduction in design- and dev-related support tickets through standardization
100%
Elimination of duplicate components — one source of truth across all GE Renewables platforms
Reflection
What I'd do differently.
If I could go back, I'd push harder to establish a formal contribution model earlier. Teams eventually wanted to add their own components to the system, but there was no defined process for submitting, reviewing, or approving community contributions. That gap created some ambiguity around system ownership that I'd want to close from day one with a clear governance framework.
I'd also instrument the system earlier. We had anecdotal evidence of adoption — fewer tickets, positive feedback in reviews — but I didn't have usage analytics tracking which components were being pulled most, or which were being skipped in favor of one-offs. That data would have made the case for ongoing investment much stronger and shaped where I spent time in the later weeks.
