A Trace in the Sand

by Ruth Malan





February 2019


2019 O'Reilly Software Architecture Conference NYC: Visual Design and Architecture

[in process] Annotated Presentation Slides

Visual Design and Architecture Cover Slide


Part III: Visual Design of System (Internals)

Note: This is a work-in-progress. More to come. If this is useful, saying so will help give impetus to this "continuous delivery" writing project :-)

System design is about coherence

First, to set context, let's level set on some terms. System design is about coherence and integrity. Recall: when we're talking about integrity we mean: structural integrity (robustness and resilience); organizational integrity and ethics; and design integrity and coherence and fit. Fit to context and to purpose; internal fit and conceptual integrity.

Architecture is elements and relations

We've talked about software architecture as "the significant design decisions that shape a system, where significant is determined by cost of change" (Grady Booch). Another influential characterization comes from the team at the SEI: "Software architecture refers to the high level structures of a software system and the discipline of creating such structures and systems. Each structure comprises software elements, relations among them, and properties of both elements and relations." (Clements et al.)

What Agile is Not

What is agile? One way to understand or characterize what something is, is to characterize what it is not. Not agile is unresponsive or unchanging in the face of change. It is rigid, and resistant to changes in direction.

Not good architecture: entangled

Similarly, to characterize good architecture, we consider what characterizes a not good architecture: a "big ball of mud" springs to mind. When you have a "big ball of mud," as Michael Stahl vividly put it, "you reach for the banana, and get the entire gorilla." Entanglement mires and makes a system hard to change.

“If you think good architecture is expensive, try bad architecture.” – Brian Foote and Joseph Yoder

Dijkstra: keep it disentangled



A modular structure reduces cost of change by (and to the extent that it achieves) isolating change, shielding the rest of the system from cascading change.

In a modular approach, parts of the system that are unstable, due to uncertainty and experimentation to resolve that uncertainty, can be shielded from other, better understood and more stable parts of the system. 

“Uncertainty is not a license to guess. It is a directive to decouple.” — Sandi Metz

Parts can be plugged in, but removed if they don't work out, making for reversibility of decisions that don't pan out. They can be replaced with new or alternative parts, with minimal effect on other parts of the system, enabling responsiveness to emerging requirements or adaptation to different contexts. 

Further, it's a mechanism to cope with, and hence harness, complexity. Partitioning the system reduces how much complexity must be dealt with at once, allowing focus within the parts, with reduced demand to attend (within the part) to complexity elsewhere in the system. We give a powerful programmatic affordance a handle, so we can invoke it with minimal understanding, and can selectively ignore (not all the time, but for a time), or may never even need to know (it depends), its internals.
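To make the "handle" idea concrete, here is a minimal sketch (illustrative only, not from the talk): a small cache whose get/put interface is the handle, while the eviction bookkeeping stays hidden inside.

```python
class Cache:
    """Callers only need get/put; the LRU bookkeeping is internal."""

    def __init__(self, capacity=2):
        self._capacity = capacity
        self._items = {}  # dict insertion order doubles as recency order

    def get(self, key):
        if key not in self._items:
            return None
        # Re-insert to mark this key as most recently used.
        self._items[key] = self._items.pop(key)
        return self._items[key]

    def put(self, key, value):
        self._items.pop(key, None)
        self._items[key] = value
        if len(self._items) > self._capacity:
            # Evict the least recently used entry (first in the dict).
            self._items.pop(next(iter(self._items)))
```

A caller can use the cache without ever knowing an LRU policy is in play; swapping the eviction strategy later would not ripple outward.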

“There was a wall. It did not look important. It was built of uncut rocks roughly mortared. An adult could look right over it, and even a child could climb it. Where it crossed the roadway, instead of having a gate it degenerated into mere geometry, a line, an idea of boundary. But the idea was real. It was important. For seven generations there had been nothing in the world more important than that wall.
Like all walls it was ambiguous, two-faced. What was inside it and what was outside it depended upon which side of it you were on.” — Ursula K. Le Guin 
“Encapsulation is important, but the reason why it is important is more important. Encapsulation helps us reason about our code.” — Michael Feathers

Architect's SCARS

In talks on software architecture, Grady Booch points [minute 43+] to what he characterizes as "the fundamentals that remain fundamental":

  • clear Separation of Concerns
  • crisp and resilient Abstractions
  • balanced Responsibilities
  • Simplicity

which I resequenced as shown above, to give us the mnemonic, SCARS* (see The Architect's Clue Bucket, slide 16). Which is appropriate, since SCARS are what we get from experience.

Follow natural structure

Taking the first of the SCARS (Separation of Concerns), here's a story from the Tao. It is the story of the dextrous butcher. In one translation, the master cook tells us: 

“It goes according to natural laws, Striking apart large gaps, Moving toward large openings, Following its natural structure.
A good cook goes through a knife in a year, Because he cuts. An average cook goes through a knife in a month, Because he hacks.
I have used this knife for nineteen years. It has butchered thousands of oxen, But the blade is still like it’s newly sharpened.
The joints have openings, And the knife’s blade has no thickness. Apply this lack of thickness into the openings, And the moving blade swishes through, With room to spare!
That’s why after nineteen years, The blade is still like it’s newly sharpened.

Nevertheless, every time I come across joints, I see its tricky parts, I pay attention and use caution, My vision concentrates, My movement slows down.”

What do we extract, that helps guide us in architecting? Two heuristics jump out:

  • follow the natural structure
  • when we come to the tricky parts, slow down

This story is used as a teaching story in diverse situations. And yet here it is, yielding heuristics for architecting, which is, after all, at least in part about (de)composing a coherent system. We might challenge the heuristic with: what, in so conceptual a matter as a system formed of thought, written as code, is "natural structure"?

One place to go, in looking for the natural topology, to find the natural shape to follow in creating internal system boundaries, is the "problem" domain — that is, the domain(s) being served by the system(s) we're evolve-building. This brings us to bounded contexts and heuristics and guidance in Domain-Driven Design to suggest system and component or microservice boundaries. And to business capabilities in Enterprise Architecture/Business Architecture approaches, that seek to understand the topology, the shape and relatedness, of business capabilities.

Another place to go is back to 1989 and Ward Cunningham and Kent Beck's CRC cards, but repurposed for components, along with a heuristic that takes the form of a neat antimetabole (thanks to Jan van Til's paraphrase of a Tom Graves point): "The responsibility of architecture is the architecture of responsibility." Which points us in the direction of identifying responsibilities related to system capabilities and properties, and the arrangement of responsibilities. Separating responsibilities along the lines of concerns. [This is not the only way we use the term "separation of concerns" in software design, of course.]
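A CRC card repurposed for components can be sketched as a simple record: name, responsibilities, collaborators. The component name and responsibilities below are hypothetical stand-ins in the spirit of the CaringCircles example, not the actual cards.

```python
from dataclasses import dataclass, field


@dataclass
class ComponentCard:
    """A CRC-style card, repurposed from classes to components."""

    name: str
    responsibilities: list = field(default_factory=list)
    collaborators: list = field(default_factory=list)


# Hypothetical card for illustration:
request_service = ComponentCard(
    name="Request Service",
    responsibilities=["Enter request", "Update request"],
    collaborators=["Matching Service"],
)
```

Keeping such cards as plain, editable records suits the "guess, then improve" style: responsibilities can be moved between cards as the factoring evolves.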


We use abstraction in different ways — levels of abstraction, abstractions as elements of code we work with, the activity of abstracting. Edsger W. Dijkstra (The Humble Programmer, 1972) noted:

“The purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise.”

The abstractions we use to give our system internal form, must, as Michael Feathers points out, be invented. They are conceits. In every sense of the word, perhaps. But they begin as conceptions.

Finding abstractions

How do we come up with these abstractions, these conceits?

"To find the right abstraction, guess. If it exhibits the right properties, stop." — Jessica Kerr

We're looking for heuristics for identifying architectural elements (or components, or microservices if those are the architectural elements of the system in question). A crisp abstraction has a clear, unifying purpose or cohesive identity; it has, in effect,

a single responsibility at the level of abstraction of the abstraction

This suggests a focus on responsibilities as an avenue to system (de)composition. We identify responsibilities and assign them to components, working in either (and both!) directions: start with a first-cut notion of components the system will need, and identify and allocate responsibilities to them; or start with responsibilities evident from what the system needs to do, and factor to find components.

Starting point for guesses? System capabilities

So, back to our CaringCircles system.

Components guess

Here we have our first guess. We might note that the hand-drawn, handwritten nature of it is a feature, not a bug. For hand-drawn conveys incomplete, not done, still in progress and changeable. It is humble, in the best sense. But given my handwriting, we can't read it, so...

Guess, redrawn


The dual to separation of concerns is a coalescence or cohesion of concerns?

"Things that are cohesive, [..] naturally stick to each other because they are of like kind, or because they fit so well together. [..] the pieces all seem to be related, they seem to belong together, and it would feel somewhat unnatural (it would result in tight coupling!) to pull them apart." — Glenn Vanderburg

That is, not only are we looking for what to pull apart, but what to keep together to form crisp and resilient abstractions.

Gaps and improvements

Here we have an artificially simplistic clustering (to illustrate the point), putting things to do with requests in one service, and offers in another. However, users entering and updating requests are Requestors (people asking for help, or their proxies helping them do so), while the users for View requests and Claim requests (i.e., signing up to fulfill them) are Caring Circle members. The latter is a search-and-make-a-manual-match feature. Does it make sense to move it to the Matching Service?

Next guess

So we update the responsibilities lists accordingly. Further, checking the use case diagram, we notice that fulfillment is missing, and add that. We'll keep iterating as we explore behavior and properties. And likely move Manual match out of the Matching Service. The important thing is that these lists of responsibilities help us spot problems and explore ideas; they are an important (re)factoring aid, and we need to keep them up to date as we learn more about what the system needs to be(come) and how to support that in our code (structure and mechanisms).


We take a guess as a starting point, and improve on it: Run thought experiments and model (use cases or user stories; focus on one property, then another, etc.) to flush out responsibilities we overlooked in our initial guess.

These lists of responsibilities are a powerful and largely overlooked/underused tool in the architect's toolbelt. If the responsibilities don't cohere within an overarching responsibility, or purpose, that should trip the architect's boundary bleed detectors. We may need to refactor responsibilities, and discover new components in the process. We think that "naming things" is the big problem of software development, but "places to put things" is the proximate corollary.

“Be deliberate and deliberate all the things” — Dawn Ahukanna

Parnas: start with decisions that are hard or likely to change

As we do this exploration with the aid of models (just as we do when doing design in the medium of code), we're applying heuristics we've developed through experience, and exposure to other people's work (books, and such). Heuristics don't take away the need to think, to reason and try things out. They help us identify what to think about, as we do so, and may suggest how to go about it (better).

In another of the foundational classics of our field, David Parnas (On the Criteria To Be Used in Decomposing Systems into Modules, 1972) proposes:

"that one begins with a list of difficult design decisions or design decisions which are likely to change. Each module is then designed to hide such a decision from the others."

That's two heuristics, and thanks to Robert Martin, the second has pride of place in the SOLID set of principles that have been a touchstone of software design, and object-oriented design in particular:

Single Responsibility Principle (SRP): a class should have one and only one reason to change
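Parnas's heuristic, and the SRP, can be sketched in a few lines (a hypothetical example; the class and method names are mine, not from the talk): the persistence format is a decision likely to change, so it is hidden behind a small save/load interface, and callers never touch the JSON choice directly.

```python
import json


class ReservationStore:
    """Module interface: the storage format is a hidden design decision."""

    def __init__(self):
        self._blob = "{}"  # today the hidden decision is JSON text

    def save(self, reservations):
        # Only this module knows (and can change) the serialization choice.
        self._blob = json.dumps(reservations)

    def load(self):
        return json.loads(self._blob)
```

If the format later changes (to a database, say), only this module has a reason to change — one responsibility, one reason to change.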

Stewart Brand, writing about pace or shearing layers in building architecture in How Buildings Learn, observed that different parts of a structure change at different rates (with the building site, and then the structure, being least likely to change, given the cost and difficulty of doing so; and the stuff inside being the most likely to change). This observation gives rise to the heuristic:

keep things that change together, together

Thus, one thing we're seeking to do with a modular structure and crisp abstractions is to isolate change, shielding, as best we can, the rest of the system from cascading change. We can check how we're doing against the heuristic by running anticipated changes across the architecture, informally mapping a new use case onto the architecture, to see if responsibilities that are missing illuminate a different clustering/factoring and resulting abstractions. We're not trying to build anticipatory capabilities we're not sure we'll need. We're simply stress testing the resilience of our abstractions, to see if we've missed a more resilient basis for separating and coalescing responsibilities.

Let's return to Parnas; the other heuristic in his criteria for decomposing (already quoted above) is:

"[begin] with a list of difficult design decisions [..] Each module is then designed to hide such a decision from the others."

interactions with external systems susceptible to change

Here we return to the TechTribes example and consider in particular the interactions involved in retrieving content from other systems (Twitter, GitHub, newsfeeds).

And the components

Turning to the Component Diagram, we see components (Twitter Connector, Github Connector and Newsfeed Connector, respectively) serving to protect the core of the system from interactions with the outside.

Cockburn's hexagonal

Which brings to mind Alistair Cockburn's hexagonal architecture pattern. Here, adapters at the system boundary, shield the (core) application (code) from interactions with the outside, keeping business logic uncontaminated by the peculiarities of external agents or systems and their states, interactions and interaction paradigms, and keeping business logic from leaching into user, or other system, interface code. Moreover, ports and adapters are a first mechanism of address for cost of change, partitioning the system boundary into plug-and-play access points.
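A minimal sketch of a port and an adapter in this style, loosely echoing the connectors above (the class and method names here are illustrative assumptions, not the actual TechTribes code):

```python
from abc import ABC, abstractmethod


class ContentSource(ABC):
    """The port: the only shape of 'outside content' the core knows."""

    @abstractmethod
    def fetch_recent(self):
        ...


class TwitterConnector(ContentSource):
    """An adapter at the boundary; real code would call the Twitter API
    and translate its peculiarities into the port's shape."""

    def fetch_recent(self):
        return [{"source": "twitter", "text": "..."}]


def collect_content(sources):
    """Core logic: depends only on the port, never on an external API."""
    items = []
    for source in sources:
        items.extend(source.fetch_recent())
    return items
```

Swapping in a GithubConnector, or a stub for testing, touches nothing in the core — that plug-and-play boundary is the cost-of-change payoff.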

In the founding classic of system architecture, Eberhardt Rechtin presents heuristics gleaned from his amazing career as a system architect in aerospace, and master teacher of system architects in the seminal program he created at USC. One of these heuristics (a "turtles all the way down" sort of thing, though it applies also at the system level) is:

"Design things to make their performance as insensitive to the unknown or uncontrollable external influence as practical."

Relatedly, Postel’s Law (also called The Robustness Principle) states:

"be conservative in what you do, be liberal in what you accept from others"

I nickname it the "play well with others" principle. At any rate, concerns at the boundary encompass responsibilities to not just plug and play, but to play well.
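A small illustration of "liberal in what you accept, conservative in what you emit" at a boundary (purely illustrative; the function is mine): accept several spellings of a flag from the outside, but always hand the core a strict bool.

```python
def parse_flag(raw):
    """Accept varied inputs liberally; emit a strict bool (or raise)."""
    if isinstance(raw, bool):
        return raw
    # Be liberal: tolerate case, whitespace, and common spellings.
    text = str(raw).strip().lower()
    if text in ("true", "yes", "y", "1", "on"):
        return True
    if text in ("false", "no", "n", "0", "off"):
        return False
    # Be conservative: never pass ambiguity inward.
    raise ValueError(f"cannot interpret {raw!r} as a flag")
```

The liberal acceptance lives at the boundary; everything inward sees only the conservative, normalized form.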

So we have this interplay between patterns and heuristics and modeling, where patterns suggest design elements and heuristics help guide our attention and design choices, and notice what to be wary of and so forth.

Steve Jobs: Design across

This heuristic (and admonition, really) from Steve Jobs ("Design is not just what it looks and feels like. Design is how it works.") is not just relevant to design across "the skin and the guts" but within the guts: it reminds us to design across structure and behavior.

Posit structure, and play over the behavior

Design of the system "guts" takes a different kind of attention from design of the system capabilities: we're reasoning about structures (components, interfaces, relationships) and looking for opportunities to reduce dependencies, by exploring options for what we bring together, and what we split apart. That is, we're looking for natural "seams" in the system. We may start with DDD and domain models and bounded contexts. Or with guesses at components arrived at by considering system capabilities (possibly represented as use cases) and responsibilities.

But we're -- importantly -- reasoning about the interaction among architectural elements to yield capabilities (and related value to stakeholders) with desired properties (impacting user-, operations-, and developer-experience, and impacting system integrity and sustainability), addressing the inherent challenges and tradeoffs that arise as we do so. Which means that we're exploring behavior, to direct discovery as we're exploring how to build the system, not just its parts. But this impacts how we conceive of and design the parts.

So we're positing structure, and asking "what is this system made of?," but also -- and soon -- exploring behavior, asking "how will this work?" and "how does this approach contribute to or inhibit the desired system properties (including reversibility and adaptability through modularity and decoupling) and yield needed system behaviors?"

As we do this, we're going to learn more about the responsibilities of the components or architectural elements. And we need to be disciplined about updating the responsibilities and refactoring components/responsibilities to try out alternative structurings, or to improve the current one.

"Structure determines what behaviors are latent in the system." -- Donella Meadows

Iterating Across Structure and Behavior Views

Design across structure and behavior, and as we do so, interfaces start to be flushed out

As we iterate across structure (what, or elements and relationships) and behavior (how it works), we're improving the Conceptual Architecture -- we find responsibilities or relationships we missed, we might refactor, etc. [If you're looking to map this discussion to Simon Brown's C4 architecture model, we're improving the Component Model] We're also identifying (possible) interfaces (Logical Architecture).

We generally advocate using just enough UML (or SysML) to get the job done. Where just enough may be UML-ish, if that's good enough. UML is a modeling language, and like other languages, its expressiveness is more than we generally need to support and convey our reasoning. At any rate, we postulate an initial structure, and use sequence diagrams or other behavior diagrams to explore not just components and responsibilities, but to start to identify interfaces.

[The diagram above shows a sequence diagram, but we could use systems dynamics models (e.g., Causal loop diagrams, Stock and flow diagrams) and system diagrams (e.g., Senge's systems diagrams; Checkland and Scholes' Rich Pictures) or other behavior diagrams from UML, some of which are explored in what follows, to illustrate.]

Use case and components

In this example, we have the use case diagram and a fragment of the conceptual architecture diagram for a reservation system.

And with behavior

And here we're playing the MakeReservation use case over the involved components, and updating the initial interfaces for the components in question. We might notice that with a Communication Diagram, we overlay behavior on the topology of the components, allowing us to keep the topology as a constant across Communication Diagrams, so it's a stable point of reference.

Sequence diagram

A Sequence Diagram, by contrast, emphasizes timing (well, as indicated by the name, sequence). We find this quite intuitive to create and read. In either case (Sequence or Communication), we're exploring "how it works" -- or could work, and if it becomes part of the design, then how it should work. Activity Diagrams can also be used to explore behavior (components are swimlanes, activities reveal responsibilities). If we're designing an event driven system, we'd want to use Statechart Diagrams. The important thing, especially early, is that we can think through interactions that give rise to relationships (and interfaces) and explore alternatives quickly. Looking across these diagrams, we can check that we have captured the responsibilities and relationships indicated by the messages on the Conceptual Architecture Diagram.

We can use these diagrams to illustrate what we mean or intend. Or to show what the system does, to explain it to new team members or those who will use our APIs.

Whether we're using UML diagrams (so that we don't have to invent notation and document syntax and semantics) or ad hoc diagrams (to just go ahead and express the design thinking as it occurs to us to represent it), we're doing the important work of making our individual and team thinking visible. And exploring design ideas for the overall system structure and dynamics, or key elements such as the design of architectural(ly significant) mechanisms.

Deployment diagram

While I encourage teams to start with Conceptual Architecture (key abstractions and relationships), we want to soon, and then along the way, capture the constraints of the physical environment in a topology diagram, and investigate implications for our design as we think about distribution and implications of physical boundaries for communication and performance and latency, and other properties being driven by the deployment environment.


Iterative and messy

Iterating across structure and behavior, or what the system (or an architectural mechanism) is made of and how it works, is a key idea, and it is hard to do with just code as the externalized thinking medium. Alternatively put, this is exactly where visuals have their strength, in that they allow us to see, and hence notice what we're missing: relationships and interactions, or states and transitions. So that we can reason about properties and capabilities and develop our "theory of the program" at system design level. It gives us the material to reason more visibly, and sure, we're fallible, but that only means we need to try different means to make our gaps and weaknesses in the design evident as early as we can. Explore alternatives, improve the design. Dive into details where we need to. Write code and build up the system, learning more about the design as we evolve it. But this is a messy process. It is messy so that the end result is less so!

As we draft views of the architecture, and work across them, and as we're building increments and making discoveries, we need to be willing and disciplined about learning even though it means backtracking, and revisiting and revising the architecture. Indeed, in his seminal book on systems architecting, Eb Rechtin listed the following among his essential characteristics of system architects:

“High tolerance for ambiguity”

“The willingness to backtrack, to seek multiple solutions”

At any rate, it serves us well to

  • develop an architecture decision record;
  • document the essential structure of the system, with explanations and rationale (tradeoffs made, connecting dots to desired outcomes, etc.); and
  • document key architectural mechanisms (using 2-page spreads such as might be inspired by Thing Explainer (without the simple-words constraint), or something like Martin Fowler's explanation of the LMAX architecture). These are nice illustrations of ad hoc visualizations that powerfully help communicate the key design ideas -- and the role that textual description plays.

By advocating for visual design, we're by no means suggesting we shouldn't also be disciplined about the conversations and the textual descriptions. We need to make the architecture vivid and compelling. For even though we work on the architecture together, in the team (or part of the team, for large teams), we're capturing the reasoning and design expression for our future selves, and new team members.

Feeding Learning Back into the Design

Intention and reflection

So we iterate across system views, evolving the architecture. And as we build and deploy more and more of the system, we continue to learn and evolve the design.

Essentially, when we start out, we haven't decided what our elements are, even conceptually. We're just mucking about with ideas for how the system might be organized, toying with ideas for structures and mechanisms, and playing out how different alternatives might work, in as fuzzy and fudgy a way as suits the moment. But we start to tighten down our thinking as we make choices, and as our design thinking matures (meaning we make more decisions and the gel starts to be more viscous) we may start to use more of the modeling power and more support from the tooling.

As we build out system capabilities, it is useful to have support (in tooling like Structure101, Lattix, and Codescene) to show us the shape of the system, and show us "hot spots" of complexity and change, or where the design in the code is departing from the architectural design so we can probe why and if this is a good thing, and so forth. Our tools must not become our masters. But they are useful to see the system, to observe and learn from the design as built -- from the perspective of users of the software, and from the perspective of developers of the code, as well as operations engineers. And we should never forget the usefulness of the pencil, or marker, to draw out our "theory of the system," its realized mechanisms, and how they work, and what the key structures are.

And so, as we evolve the system, we're moving between intention, writing code informed by that intention, but doing further design in the medium of code, using tools and modeling to learn from what we did and how the system behaves, and transferring that learning to evolve the design as intent. Keeping the (minimal necessary; judgment applies) design expressions in pace with the evolving code.

Still, I want to emphasize the learning opportunity we have, when what we have is just design expressions.

Get feedback

We Model: To Test

Models help us try out or test our ideas -- in an exploratory way when they are just sketches, and thought experiments, where we "animate" the models in mind and in conversation. Just sketches, so less is invested. Less ego. Less time.

We sketch-prototype alternatives to try them out in the cheapest medium that fits what we're trying to understand and improve. We seek to probe, to learn, to verify the efficacy of the design elements we're considering, under multiple simultaneous demands. We acknowledge we can misperceive and deceive ourselves, and hold our work to scrutiny, seeing it from different perspectives, from different vantage points but also with different demands in mind. We consider and reconsider our design for fit to context, and to purpose. We evolve the design. We factor and refactor; we reify and elaborate. We test and evolve. We make trade-offs and judgment calls. We bring in others with fresh perspective to help us find flaws. We simulate. We figure out what to probe further, what to build and instrument. We bring what we can to bear, to best enable our judgment, given the concerns we're dealing with.

We humans are amazing; we invented and built all the tech we extend our capabilities with! And fallible; many failures, often costly, got us here, and we're still learning of and from unanticipated side-effects and consequences. Software is a highly cognitive substance with which to build systems on which people and organizations depend. So. We design-test our way, with different media and mediums to support, enhance, stress and reveal flaws in our thinking. Yes in code. But not only in code.

In the Cheapest Medium that Fits the Moment

Along the way -- early, and then more as fits the moment -- we're "mob modeling" or "model storming," "out loud," "in pairs" or in groups. And all that agile stuff. Just in the cheapest medium for the moment, because we need to explore options quick and dirty. Hack them -- with sketches/models, with mock-ups, with just-enough prototypes. Not just upfront, but whenever we have some exploring to do, that we can do more cheaply than running experiments by building out the ideas in code. We do that too. Of course. But! We have the option to use diagrams and models to see what we can't, or what is hard to see and reason about, with (just) code. Enough early. And enough along the way so that we course correct when we need to. So that we anticipate enough. So that we direct our attention to what is important to delve into and probe/test further.

Whether we are seeking to learn more about users and needs, or more about design options, it is just as well to bear the following in mind (so we try other routes, if they will yield the learning we need, more cheaply):

"Building software is an expensive way to learn" -- Alistair Cockburn

Self-repairing egos

Architecture decisions entail tradeoffs. We try for "and" type solutions, that give us more of more. More of the outcomes we seek, across multiple outcomes. Still, there are tradeoffs -- "[architecting] strives for fit, balance and compromise among the tensions of client needs and resources, technology, and multiple stakeholder interests" (Rechtin and Maier).

Compromises mean not everyone is getting what they want (for themselves, or the stakeholders they see themselves beholden to serving) to the extent they want. Seeing the options and good ideas to resolve (only) the forces from the perspective of (just) a part, may.... lead to questioning design decisions ("throwing darts", in the terms of the cartoon) made from the perspective of the system. It can be hard to see why anyone would give up a good thing at a local level to benefit another part of the system, or to avoid precluding a future strategic option or direction. Sometimes those questions lead to a better resolution. Sometimes they just mean compromise.

Further, complex systems are, well, complex. Many, many! parts, in motion. In dynamic, and changing, contexts (of use, operations, evolutionary development). So there's uncertainty. And ambiguity. And lots of room for imperfect understanding. And mistakes. Will be made. Wrong turns taken. Reversed. Recovered from. We're fallible. So. More darts. Which is humbling -- more so, if we're not humble enough to stay fleet, to try to learn earlier and respond to what we're learning.

All of which means we need to notice what is hard to notice from inside the tunnel of our own vision -- where what we're paying attention to, shapes what we perceive and pay attention to.

Donella Meadows: expose your mental models to the open air

I frequently quote the first of these heuristics ("expose your mental models to the open air"), but they are all important. We might summarize with "be humble." To me, confidence means being willing to act on my judgment, and humility means being willing to find myself wrong. Actionably willing, as in what Donella says -- holding our models loosely; they're just models, and they invite (kind, but discerning) scrutiny and alternatives.

Change your PoV

"A change of perspective is worth 80 IQ points" (Alan Kay) reminds us to take a different vantage point, to see from a different perspective. Consider the system from different points of view; use the lens of various views. This can play out in multiple ways, but includes considering the design (structure, dynamics and implications) from the perspective of security, from the perspective of responsiveness to load spikes, and so on.

Another way to ensure a change of perspective is to get another person's perspective. Invite peers working on other systems, say, to share what's been learned, and to seek out opportunities and weaknesses -- things we missed. Our team can miss the gorilla, so to speak, when our attention is focused on the design issues of the moment. Fresh perspective, and even just naive questions about what the design means, can nudge an assumption or weakness into view. And merely telling the story, unfolding the narrative arc of the architecture to fit this person or audience, then that, gets us to adopt their point of reference, across more perspectives -- in anticipation, and when we listen, really listen, to their response and questions.

We can understand the system as code, building up mental models of the structure and "how it works." And we can understand the system (as we envision it, and as we build-evolve it) through sketches or visual models with reasoned arguments, explaining, exploring and defending the design. And we can understand the system through visualization of its behavior, or through simulation or instrumentation (of prototypes, earlier, and of the system in production, soon and then later, as the system is evolved). We can understand the system as perceived by someone new to it, and as perceived by someone comfortable with the mental models we've imbued it with. We can understand the system directly and through the lens of an analogy (or a hybrid blend of analogies). Understand it as loss and as gain. As a system we break down into parts, and as a system, or whole.

To change what we see, change where we see from, and what we see through: use different models, and aids to reasoning and seeing our thinking. State charts. Decision tables. Design structure matrices. Impact tables.
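To make one of these aids concrete: a decision table can be expressed directly as data in code, which lets us enumerate the cases and check the table for gaps. This is a hypothetical illustration, not from the talk; the conditions, actions, and rules are invented for the sake of the example.

```python
from itertools import product

# A small decision table for an invented request-handling policy.
# Each rule maps a combination of conditions to an action.
rules = {
    # (authenticated, rate_limited, payload_valid): action
    (True,  False, True):  "process",
    (True,  False, False): "reject_payload",
    (True,  True,  True):  "throttle",
    (True,  True,  False): "throttle",
    (False, False, True):  "require_login",
    (False, False, False): "require_login",
    (False, True,  True):  "require_login",
    (False, True,  False): "require_login",
}

def find_gaps(rules, n_conditions=3):
    """A complete decision table covers every combination of conditions."""
    return [combo for combo in product([True, False], repeat=n_conditions)
            if combo not in rules]

assert find_gaps(rules) == []  # no uncovered cases
print(rules[(True, False, True)])  # -> process
```

Part of the value is exactly this kind of mechanical scrutiny: writing the table out (on paper or as data) surfaces the combinations we hadn't thought about.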

Jerry Weinberg: three alternatives

Fred Brooks wrote "Plan to throw one away. You will, anyway." I'd say: that too, but plan to throw several away -- on paper. It's quick and cheap. Use rich pictures, use case and component diagrams, and play over behavior of interest -- repeat. Use other views. Do this at system level early, to clarify a direction worth taking/starting out in. But continue to do this (with just enough sketches and modeling, focused on the concerns at hand) as challenges present themselves; some of these arise in the use context; some are internal "life sustaining" mechanisms that the system needs for it to be/become the kind of system it is (meet its persistence and dynamic data delivery/consistency needs; meet its scaling demands for spikes and growth; etc.). At any rate, "plan to throw some away" needs to include sketch prototypes. We need to try out alternatives in the cheapest medium we can learn more in; sometimes that's code, but not if a sketch will do. We don't learn at the granularity we do when we learn in the medium of code, but we at least start to try ideas out, and explore and bat at them, investigate how they could work, in sketch-driven dialog.

Three possibilities? For everything? That smacks of BDUF FUD (fear, uncertainty and doubt)?? Can't we just YAGNI that? Well, remember, these are make or break decisions. Game shapers and game changers. Still. We have more wisdom to call into play, this time Bucky Fuller's, and what Dana Bredemeyer calls the "extraordinary moment principle":

"What, at this extraordinary moment, is the most important thing for me to be thinking about?" -- Buckminster Fuller

Architectural judgment factors.

“If you haven’t considered more than one alternative, you’re not designing” — @stuarthalloway


When we, as a field, for the most part turned away from BDUF (big design upfront) toward Agile methods, we tended, unfortunately, to turn away from architecture visualization and modeling too. We've argued here that sketching and modeling are indeed a way to be agile -- when they allow us to learn with the cheapest means that will uncover key issues and alternatives, and give us a better handle on our design approach. Not as Big Modeling Upfront, but by having and applying our modeling skills early, to understand and shape strategic opportunity, and to set design direction for the system. And by returning to modeling whenever that is the cheapest way to explore a design idea, just enough, just in time, to get a better sense of which design bets are worth investing more in -- by building the ideas out and deploying, at least to some of our user base, to test more robustly.

To return to a point we made earlier: in Agile, we're advancing both faces of design as we iterate, deliver working increments of value, get feedback, respond, and adapt. We're not just gaining a clearer understanding of the system; the system is evolving. Users adapt to the system, shifting work to the system or doing new things the system makes possible for them, and new possibilities for the system come into view. And using Agile practices like Test Driven Development and refactoring, we advance the design of the code. We evolve the system, and we evolve the design -- in code, where that is good enough, and with diagrams and dialog and writing where that helps us create better designs, and communicate and evolve them.

Remembering that while the code-as-design-expression is sufficient for the compiler, it is not sufficient for its other purpose — evolution, at least if we want to slow the decay of the system as (essential) complexity is added. For disciplined evolution, we need to support the mental models of the humans collaborating on the system — humans, with our fallible memory, our partially shared theories of the "problem" and the "solution" and the mapping between, and our limited ability to see "across" and "in relation" and to see behavior, without the aid of externalized visual expressions to support our thinking and collaborating. Our code, and our other design expressions (documents, but also diagrams we draw to support dialog, say), are a form of (advanced) stigmergy.

Agile is about learning, sensing and adapting, and it is worth again emphasizing the learning opportunity we have even when what we have is just design expressions (sketches, models, annotated diagrams, decision records). This may be early, before we've built the system, and along the way, as we develop new system capabilities and the mechanisms that address growing demands for complexity, scale, resilience, and so on. Alternately put: we don't have to have code to begin to learn. We must have code to deliver value, and to ultimately test our design ideas. But we can use diagrams to support reasoning, to explore alternatives, and to figure (literally, even) out promising approaches we then test in code.

Further, we can use sketches and annotations as a way to observe, to study, the system and its design elements. To, as da Vinci showed us, drive our own knowledge and understanding of the design principles that relate structure, behavior and properties. And, again as da Vinci demonstrated, to use that understanding to drive our innovative capability. To give us a stock of design ideas from which to draw, to figure out new kinds of problems we can solve with novel combinations of design ideas we've already tested, along with design ideas we have a good enough sense of to try out. We can use this capture of understanding to increase our own base of heuristics and design fragments, and pass it along to others we support in the day-to-day coaching work we do as architects.

The end

This talk might have been subtitled "bringing visual design back to the agile table" -- or "just enough, just in time visual design." Where just enough varies by context, and it's a judgment call (many judgment calls!). And the judgment needs to be that of the designers (including architects), not of "thought leaders" outside the context... That's the upside of "just enough" -- it has to be context-aware. The downside, in our rush-rush world, is that we might err on the side of not enough...

To recap, then, we've established -- okay, we've argued -- that software architecture is that part of design that entails decisions that are structurally and strategically significant. That is, the integrity of the system depends on them. And more: they shape possibility for the business. They enable and constrain in make or break, business-defining ways -- at the level of scope of impact of the system in question, of course. These are decisions we need to probe more carefully. Probe in the sense of explore and understand, and probe in the sense of instrument. Visual design helps us expose our design ideas -- so we can see what we mean, run thought experiments across them, and collaborate to improve them. Still, we need to write code to fully probe, instrument, and live-test the design in contexts of use and in operation. And we need to instrument our code, to see how well it is doing from the perspective of code qualities.



to be continued...


PDF here

Thank you to everyone for the mentions, likes and retweets and reshares on the various social media.




About My Work


I also write at:

- Bredemeyer Resources for Architects



Copyright © 2019 by Ruth Malan
Page Created: February 9, 2019
Last Modified: August 29, 2020