Your feasibility study should be a design tool

Team Arcol

How long does a feasibility study take at your firm? Two weeks? Four? Six, if the team is juggling multiple projects?

The real question is not how long it takes. It's how many times you can afford to run one. For most firms, the answer is once. Maybe twice if the client pushes back on the first result. And that's the problem.

A feasibility study should not just be a deliverable. It should be a tool. Something that evolves with the design instead of freezing it in place. The firms that figure this out will make better decisions, win more work, and stop betting entire projects on a single scenario.

The feasibility study as a one-time event

The way most firms run feasibility is expensive by design.

The team assembles the data. The massing is modeled in one tool. The zoning is checked manually or through a separate compliance review. The project economics are run in a spreadsheet. The whole thing gets compiled into a PDF, presented to the client, and treated as the answer.

It takes weeks because the process is fragmented. Each layer of intelligence lives in a different tool, managed by a different person, on a different timeline.

Because it's expensive to produce, it's precious. Because it's precious, firms do it once and often commit to whatever it shows.

This is the structural problem. The feasibility study is treated as a milestone, a discrete deliverable that marks the end of a phase. Not as an ongoing process of inquiry that should continue throughout early design.

What you lose when feasibility is a one-shot exercise

One feasibility study tests one scenario. One massing, one unit mix, one structural approach, one set of assumptions. If it pencils, the team moves forward. If it doesn't, maybe they adjust and run it again.

But the scenario they tested is almost never the best scenario. It's the first one they had time to study.

What about the massing that's two floors shorter but uses a wood-frame hybrid instead of concrete? What about the unit mix that sacrifices five units but adds ground-floor retail that changes the financing? What about the site orientation that costs more to build but reduces the energy load enough to qualify for a different incentive structure?

Those scenarios go untested. Not because anyone decided they weren't worth exploring. Because the feasibility process is too slow and too expensive to explore them. The firm tested one scenario deeply instead of testing 10 scenarios quickly. They committed to the first answer that penciled instead of finding the best answer among several that could pencil.

McKinsey's research on capital project performance consistently points to early-stage decision quality as the highest-leverage variable. The decisions made in feasibility and schematic design lock in 80% of a project's cost trajectory. When firms test one scenario and commit, they're locking in a cost trajectory based on the first viable option, not the optimal one.

What changes when feasibility is fast

Imagine a different workflow. The designer opens a site. Models a massing in minutes, not days. The tool already knows the zoning envelope, so the massing respects FAR, setbacks, and height limits as it's drawn. Cost intelligence is embedded in the geometry, so the designer sees a cost-per-square-foot estimate updating as the floor plate takes shape. The unit mix is adjustable, and the pro forma responds to every change.

This is not hypothetical efficiency. This is what happens when the design environment carries cost, zoning, and structural intelligence natively. When the feasibility study is not a separate exercise assembled across five tools and three teams, but a property of the design session itself.

Model a massing, see if it pencils. Adjust the unit mix, see the economics shift. Change the structural approach, see the cost per square foot respond. The study is not something you produce. It's something you do, continuously, as the design evolves.
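To make the "pencil" test concrete, here is a minimal sketch of the kind of check that runs behind such a workflow. Every name and number in it is a hypothetical placeholder (site area, FAR limit, cost and rent assumptions, a flat NOI margin, a yield-on-cost hurdle), not Arcol's actual API or anyone's real pro forma:

```python
# Illustrative feasibility ("pencil") check. All figures are invented
# placeholders for the sketch, not real project data.
from dataclasses import dataclass

SITE_SF = 20_000        # assumed site area
MAX_FAR = 4.0           # assumed zoning FAR limit
TARGET_YIELD = 0.065    # assumed required yield on cost
NOI_MARGIN = 0.55       # assumed share of gross rent that survives as NOI

@dataclass
class Scenario:
    name: str
    floors: int
    floor_plate_sf: float     # gross area per floor
    cost_per_sf: float        # construction cost assumption
    rent_per_sf_year: float   # stabilized rent assumption

def pencils(s: Scenario) -> bool:
    gross_sf = s.floors * s.floor_plate_sf
    if gross_sf / SITE_SF > MAX_FAR:          # zoning envelope check
        return False
    cost = gross_sf * s.cost_per_sf           # total construction cost
    noi = gross_sf * s.rent_per_sf_year * NOI_MARGIN
    return noi / cost >= TARGET_YIELD         # yield-on-cost test

# Testing many scenarios is just a loop once the check is this cheap.
scenarios = [
    Scenario("concrete tower", 8, 10_000, 420, 48),
    Scenario("wood-frame hybrid", 6, 12_000, 310, 44),
]
for s in scenarios:
    print(s.name, "pencils" if pencils(s) else "does not pencil")
```

The point of the sketch is the shape, not the numbers: when zoning, cost, and revenue live in one function of the geometry, trying ten scenarios costs a loop, not ten weeks.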

The compounding advantage

The difference between testing one scenario and testing 10 is not a 10x improvement in efficiency. It's a categorical difference in the quality of decisions.

Firms that iterate on feasibility quickly don't just produce better studies. They produce better project outcomes. They explore scenarios that other firms miss because those firms only had time for one. They find the massing that nobody would have guessed at the outset. They discover that the unconventional structural approach actually pencils better. They identify the unit mix that makes the financing work for the developer and produces a better building for the residents.

The tool has to match the ambition

The reason firms don't run feasibility 10 times is not lack of desire. It's that the tools make it impossible. When cost, structure, and zoning live in separate tools, each requiring manual input and expert interpretation, the cycle time for one feasibility pass is measured in weeks. Telling a firm to "iterate faster" without changing the tool environment is telling them to run faster in quicksand.

Connected Constructible Design collapses this problem. In Arcol, the design environment is the feasibility environment. Zoning intelligence is embedded in the model. Cost updates as geometry changes. Structural feasibility evaluates in real time. The designer is not assembling a study from parts. They're designing, and the study assembles itself around every move they make.

That's the shift. The feasibility study stops being a deliverable that takes weeks and starts being a continuous process embedded in every design session. The firm doesn't schedule a feasibility phase. They make feasibility a property of how they work.

The firms that iterate will win

The feasibility study should not be a document you produce at the end of a phase. It should be something you do every time you make a design decision. Every massing adjustment, every unit mix change, every structural exploration should carry the same intelligence that a traditional feasibility study provides.

That only works when cost, zoning, and structural data are native to the design environment. Not imported. Not estimated after the fact. Present in the model, updating as the model changes.

The firms that adopt this approach will explore more, commit smarter, and present to clients with a depth of analysis that one-shot feasibility can never match. The rest will keep spending weeks to test a single scenario and hoping it was the right one.

Better decisions come from better iteration. And better iteration starts with tools that are fast enough to make it possible.