Building Software as a Blind Developer
In late 2024, I lost my vision. Here is what my development workflow actually looks like now, and why losing sight made me a better engineer.
In late 2024, I lost my vision.
Not gradually, not as a slow fade I could prepare for. It happened fast enough that I had to rebuild how I worked nearly from scratch. I had been building software professionally for years — writing code, reviewing pull requests, navigating terminals and IDEs. All of that relied on sight in ways I had never thought to question.
This is not an inspirational story about overcoming adversity. It is a practical account of what changed, what I had to learn, and what that experience permanently altered in how I think about building software.
What the First Weeks Actually Looked Like
The immediate problem wasn't motivation or mindset — it was tooling. Every tool I used assumed I could see. My code editor. My terminal. My browser. My GitHub workflow. All of it.
VoiceOver on macOS became my primary interface. If you haven't used a screen reader, the mental model is roughly this: instead of scanning visually, you navigate sequentially. You hear one thing at a time. Navigation becomes deliberate. You build a mental map of interfaces because you cannot glance at them.
The first thing I discovered is that most software is built by people who have never used a screen reader for more than five minutes. Unlabeled buttons everywhere. Modals that trap focus. Keyboard navigation that works in theory but breaks in practice. Dropdowns that screen readers announce as "button" and nothing else.
I had shipped software with these exact problems. The realization was uncomfortable.
My Current Workflow
Here is what a typical development session looks like now.
VoiceOver + Terminal: I work primarily in the terminal. It is inherently text-based and navigable. Bash, git, npm — all of it works well with a screen reader when you understand how to configure verbosity and output.
VS Code with accessibility mode: VS Code has made significant investments in screen reader accessibility. I use it in accessibility mode, which changes how the editor announces content and how you navigate. It is not perfect, but it is workable. I use vim-style keybindings heavily — keyboard-first navigation is non-negotiable.
AI-assisted development: This is where things changed most substantially. I use Claude Code as my primary pair programmer. Not because I couldn't write code without it, but because the feedback loop with a screen reader is slower. Having an AI that can read a file, explain what it does, propose a change, and execute it — that removes a significant amount of navigation overhead. I review the diffs. I make the calls. But the mechanical work of finding and editing specific lines is faster with AI assistance.
Browser testing: I use Safari for development testing because VoiceOver integration is tightest on Apple's own browser. When I'm building UI, I test it with VoiceOver before I consider it done. Not as a checkbox — because I use the thing I'm building.
What Changed About How I Build
Accessibility is no longer a feature I consider during a polish pass. It is baked into the first decision.
When I design a component, I think about keyboard navigation before I think about hover states. When I write a button, I think about its accessible label before I think about its color. When I build a form, I think about error announcements before I think about visual validation indicators.
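The label-before-color rule can be made concrete. As a minimal sketch (the names here are hypothetical, and this deliberately simplifies the W3C accessible-name computation down to one precedence idea): an explicit `aria-label` wins over visible text, and if neither exists, a screen reader has nothing to announce but the role — the "button and nothing else" failure mode.

```typescript
// Hypothetical, simplified model of accessible-name precedence.
// The real algorithm is the W3C "accname" specification; this keeps
// only the core idea: an explicit label beats visible text content.
type ButtonProps = { ariaLabel?: string; textContent?: string };

function accessibleName(props: ButtonProps): string | null {
  if (props.ariaLabel?.trim()) return props.ariaLabel.trim();   // explicit label wins
  if (props.textContent?.trim()) return props.textContent.trim(); // fall back to visible text
  return null; // nothing to announce: VoiceOver says just "button"
}
```

Writing the label-resolution step first forces the question "what does this control announce?" before any visual decision gets made.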
This is not altruism. It is self-interest. I use the software I ship.
The practical effect: every product I build now has better keyboard navigation, better screen reader compatibility, and better focus management than anything I shipped before 2024. Not because I am a better person, but because I am a user of my own tools in a way that enforces those standards.
The Broader Point
There is a version of this post that makes a grand argument for accessibility as a moral imperative. That argument is true, and plenty of people have made it better than I can.
The argument I want to make is simpler: building accessible software is just building software correctly.
An interface that relies on mouse hover to reveal information is broken — not for blind users specifically, but for anyone on a keyboard, anyone on a touch device, anyone using a screen magnifier. An unlabeled icon button is a usability failure for everyone who doesn't already know what it does. Focus management that traps you in a modal is a bug, not a feature gap.
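The modal focus problem is mostly small, testable logic. A minimal sketch of one piece of it, assuming a dialog that has already collected its focusable elements into an ordered list (the function name is mine, not from any library): Tab should cycle within the open dialog, wrapping at both ends — and the trap only stays a feature rather than a bug if Escape (not shown here) always provides the way out.

```typescript
// Hypothetical sketch: compute where focus moves on Tab / Shift+Tab
// inside an open dialog, wrapping at the edges so the cycle stays
// within the dialog's focusable elements.
function nextFocusIndex(current: number, count: number, shiftKey: boolean): number {
  if (count === 0) return -1;              // nothing focusable in the dialog
  const step = shiftKey ? -1 : 1;          // Shift+Tab moves backwards
  return (current + step + count) % count; // wrap around both ends
}
```

Pure functions like this are also trivial to unit test, which is how focus bugs stop recurring.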
The WCAG guidelines are not a compliance checklist. They are a specification for software that works. When I read them now, they read like engineering requirements — clear, testable, motivated by real failure modes.
What I Build With
For anyone curious about the actual toolchain:
- VoiceOver (macOS) — primary screen reader
- VS Code — editor, accessibility mode enabled
- Claude Code — AI pair programmer for development
- Safari — primary browser for testing VoiceOver compatibility
- Playwright — automated accessibility testing in CI
- axe-core — accessibility rule engine for component testing
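To give a feel for how the last two entries fit together in CI: axe-core reports each violation with an impact level, and one reasonable policy is to fail the build on the serious ones. This is a sketch under that assumption — the function name is mine, and the `Violation` shape is a simplified stand-in for the fuller objects axe-core actually returns.

```typescript
// Hypothetical CI helper: keep only the violations that should fail
// the build. Impact levels mirror axe-core's four-level scale.
type Impact = "minor" | "moderate" | "serious" | "critical";
interface Violation { id: string; impact: Impact }

function blockingViolations(violations: Violation[]): Violation[] {
  const severe: Impact[] = ["serious", "critical"];
  return violations.filter(v => severe.includes(v.impact));
}
```

In a Playwright run, the violations array from an axe scan of the page would be fed through a gate like this, and the test asserts the result is empty.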
I still write code every day. The stack changed. The output didn't — if anything, it's better.
If you're building software and you haven't spent thirty minutes navigating your own product with a screen reader, I'd encourage you to try it. Not to feel bad about what you find, but because what you find will make you a better engineer.
The problems are fixable. Most of them are obvious once you know to look.