The Distributed Data Model

Regardless of how tidy and organized your Grasshopper script is, managing its growing complexity becomes increasingly challenging, especially when your model has many distinct parts.

Even if you manage to keep your script somewhat tidy, the sheer number of operations can quickly render it barely readable and difficult to manage. It's a natural, unavoidable part of a large project's lifecycle for Grasshopper scripts to grow in size.

However, there is a workaround: using several smaller Rhino files and Grasshopper scripts instead of a single gigantic file. This approach is known as the distributed data model, and it is one of the most effective ways of working on large-scale projects.

The distributed data model is similar to the “small functions” best practice in the programming world, where you write many small functions instead of one large, monolithic function. The idea is that smaller functions are easier to read, manage and edit than larger ones. The same is true for Grasshopper scripts.
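The same idea is easy to see in ordinary code. As a sketch (the function names and the panel-area example are invented for illustration, not part of any Grasshopper API), compare one monolithic function with the same logic split into single-purpose pieces:

```python
# Monolithic: generates widths, filters them, and sums areas all in one place.
def monolithic_panel_areas(count, min_width):
    widths = [w * 0.5 for w in range(count)]      # generate candidate widths
    kept = [w for w in widths if w >= min_width]  # drop undersized panels
    return sum(w * w for w in kept)               # total area of square panels


# The same logic as three small, single-responsibility functions.
def generate_widths(count):
    return [w * 0.5 for w in range(count)]


def filter_widths(widths, min_width):
    return [w for w in widths if w >= min_width]


def total_area(widths):
    return sum(w * w for w in widths)


def composed_panel_areas(count, min_width):
    return total_area(filter_widths(generate_widths(count), min_width))
```

Each small function can be read, tested and swapped out on its own, which is exactly what splitting one giant Grasshopper script into several smaller ones buys you.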

Furthermore, this data model uses Rhino files as the medium for transferring data between these smaller Grasshopper scripts. This approach not only enhances script organization but also improves performance, because it reduces the amount of data that Rhino needs to handle at any given time. So even though the overall amount of data increases, the data per file is lower.
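Outside of Rhino, the same pattern can be sketched in plain Python: each “script” is a small program that reads its input file, does one job, and writes a new file for the next stage. Everything here (the file names, the JSON format, the point data) is an invented stand-in for Rhino files, purely to illustrate the flow:

```python
import json
import os
import tempfile


def stage_generate(out_path):
    """Stage 1: produce raw data and write it to disk (the 'bake' step)."""
    data = {"points": [[x, x * 2] for x in range(5)]}
    with open(out_path, "w") as f:
        json.dump(data, f)


def stage_transform(in_path, out_path):
    """Stage 2: read the previous stage's file, transform it, write the result."""
    with open(in_path) as f:
        data = json.load(f)
    data["points"] = [[x, y + 1] for x, y in data["points"]]
    with open(out_path, "w") as f:
        json.dump(data, f)


# Each stage only ever holds one file's worth of data in memory,
# just as each Grasshopper script only loads the Rhino file it needs.
workdir = tempfile.mkdtemp()
raw = os.path.join(workdir, "raw.json")
final = os.path.join(workdir, "final.json")
stage_generate(raw)
stage_transform(raw, final)
```

Because the stages only communicate through files, they can be run, re-run, and edited independently, and different people can own different stages.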

Let’s walk through the other benefits of the distributed data model.

The Single Responsibility Principle

The distributed data model in Grasshopper adheres to another programming best practice known as the single responsibility principle, which represents the 'S' in the SOLID Principles of good programming.

When working with a multi-faceted model that has a lot of complex parts, it's easy to become overwhelmed by the number of operations involved. By dedicating a Grasshopper script to each part of the model, you can simplify and clarify the entire process.

Not to mention, with multiple “single-responsibility” Grasshopper scripts, you can make changes to a specific part of the model without causing too many disruptions to the rest of the model.


Collaboration

Another significant benefit of splitting the modelling process into multiple Grasshopper scripts is that you can collaborate with others on the same model. This distributed approach lets multiple people work simultaneously on different parts of the model without conflicts or file clashes.

Additionally, from a project risk perspective, having multiple people working on the model offers several advantages over relying on a single person. Distributing the workload helps to mitigate risk by diffusing responsibility and promoting internal checks.

A Bespoke System

I am a big fan of the flexibility that Grasshopper offers, but I believe that to fully harness its potential, introducing some structure is essential. This is where the distributed data model comes into play, providing much-needed structure.

By breaking down the daunting task of modelling into smaller, more manageable models, the distributed data model lets you plan and structure the interactions, making the entire process more robust. It gives rise to a unique Grasshopper/Rhino system tailored specifically to your model and modelling process.

I’ll dive deeper into this data model later because there is a lot more to cover. For now, here are some tools to get you started.


As Rhino serves as both the “input” and “output” for the various Grasshopper scripts, using “baking” and “layer reading” components can greatly simplify and streamline the process.

With that in mind, I would like to recommend two tools that focus specifically on these aspects, though many other tools available online can help with this.


Elefront

Elefront is a plugin that focuses on the data interaction between Grasshopper and Rhino, letting you seamlessly bake geometry to Rhino, read from Rhino layers, and more.

In fact, this plugin featured in a significant showcase for the distributed data model, when the architecture firm Zaha Hadid Architects used this approach to design the Morpheus Hotel. You can read more about it here.

Elefront is one of the most useful plugins in the distributed data model because it lets you programmatically read and write to Rhino.


Work Sessions

“Work sessions” are a feature that comes natively with Rhino 7 (Windows). A work session allows you to work with multiple Rhino files in one Rhino instance; it essentially lets you “attach” multiple Rhino files onto the active one.

This is useful because you only need to attach the Rhino files relevant to the Grasshopper script at hand. You can also save work session files (.rws), which remember all the Rhino files you had attached. To access the work session panel, just type “Worksession” into the command line in Rhino.

Final Thoughts

The distributed data model in Grasshopper presents a compelling solution to the challenges posed by using a single large Grasshopper and Rhino file for complex multi-faceted models.

By following programming best practices like the single responsibility principle, it breaks down the large modelling task into smaller, more manageable components, bringing structure and collaboration to the forefront. Additionally, the use of smaller Rhino and Grasshopper files also enhances the performance and robustness of the entire modelling process.

There is a lot more to be said about the distributed data model because I believe that it is one of the most crucial methodologies when it comes to working on a complex model with a team. So, I would like to explore it more in the future.

Thanks for reading