4th July 2023

The 3 ways to exchange Grasshopper data

I have written about Grasshopper's distributed data model (DDM) and how it employs Rhino files as a means to exchange data between Grasshopper files. However, Rhino files aren't always the right format as they are only good at storing geometry and some metadata.

So, if you have a sizeable amount of raw data to transfer between Grasshopper files, there are other methods to do so, which we will get into today. But, as we explore the methods, you'll find that some understanding of programming (particularly databases and serializers) becomes increasingly crucial. Nevertheless, I still think it's important to know all the ways you can transfer data between files so that you pick the most appropriate one for your project.

Using Rhino Files

To start off, one of the easiest ways to transfer data is by using plain Rhino files. This is my preferred method because it's easy to distribute, and you don't have to worry about formats, versions, or any additional files.

Pasted image 20230718170358.png

You can read and write data the manual way with baking and referencing, but my favourite way of dealing with Rhino files is through the eleFront Grasshopper plugin, which lets you programmatically exchange data with Rhino files. You can also attach metadata to your geometry, and I use that a lot to store my tree paths so that I can preserve the order of my data. I wrote extensively about the plugin here. This is especially useful when implementing the DDM in your projects.
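eleFront handles this through its own components, but to show the idea, here is a minimal GhPython sketch that stamps each piece of geometry with its branch path using RhinoCommon user strings. The `tree_path` key name is my own choice for this example, not an eleFront convention:

```python
# Minimal GhPython sketch (runs inside a GhPython component).
# Input: geo (Geometry, Tree access). Output: out_geo.
import Grasshopper

out_geo = Grasshopper.DataTree[object]()
for i in range(geo.BranchCount):
    path = geo.Paths[i]
    for g in geo.Branch(i):
        dup = g.Duplicate()
        # "tree_path" is an arbitrary key name; any string key works.
        dup.SetUserString("tree_path", str(path))  # e.g. "{0;1}"
        out_geo.Add(dup, path)
```

After a round trip through a Rhino file, a downstream script can read the same user string back and rebuild the original branch structure.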

However, as I mentioned before, Rhino files aren't the best format if you have a lot of metadata: too much of it drastically slows performance and makes your files take longer to load. It also slows down your Grasshopper files, since you have less working memory available overall.

Using Text Files

A good workaround is to separate out your geometry data and your metadata. By letting Rhino store the geometry (which is what it's best at) and using text files to store your metadata, you can enjoy the best of both worlds. I like this a lot more because you are playing to the strengths of each tool: Rhino for geometry, other formats for the metadata.

So, depending on the type of your metadata and how you want to use it, you might need some understanding of serializers (just enough to know what .json files are and how to use them). But we will start small and look at a simple format first.

CSV Files

CSV, or Comma Separated Values, files are row-and-column-based files very similar to what you see in Excel. In fact, you can open CSV files in Excel and export any Excel file as a .CSV file, but one key layout difference is that you cannot have merged cells in a CSV file; it has to be a pure matrix of rows and columns.

Pasted image 20230718170408.png

This is a simple format to use because all you have to do is specify the columns of your data. Not to mention, it is easy to read and write .CSV files in Grasshopper: you can do it with a bit of data-tree manipulation or use a plugin like LunchBox to help you with it.
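If you prefer scripting it, Python's built-in csv module does the job in a few lines. This is just a sketch; the file path and column names are placeholders I made up for the example:

```python
import csv

# Hypothetical metadata: one row per panel, columns chosen for this example.
rows = [
    {"panel_id": "P-001", "area_m2": 2.45, "tree_path": "{0;0}"},
    {"panel_id": "P-002", "area_m2": 3.10, "tree_path": "{0;1}"},
]

# Write: the header row doubles as the data definition.
with open("panels.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["panel_id", "area_m2", "tree_path"])
    writer.writeheader()
    writer.writerows(rows)

# Read it back in another script.
with open("panels.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["panel_id"], float(row["area_m2"]))
```

Because the header row spells out what each column means, anyone who opens the file can make sense of it without extra documentation.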

But what happens if you need to transfer more complicated data? For example, what if you wanted to store your model's colours, reference position, grids, etc.? Then you need a format that has more features, like the .JSON format.

JSON Files

If you find the format of CSV files too restricting, it typically means you have a tree-like data structure. For this, JSON (JavaScript Object Notation) is one of the most widely used formats for data transfer on the web. It's still text-based, so it remains quite readable when opened in Notepad or (preferably) a JSON viewer.

Pasted image 20230718170529.png

While JSON files seem to be the better choice, they come with their own set of caveats. Two of these are particularly significant. Firstly, any system using the JSON file must know its definition. Secondly, the same system must also know how to decipher this definition (this is known as de-serializing). It's similar to dealing with a locked file: everyone involved needs to have access to the same file (the JSON definition) and the key to unlock that file (the de-serializing logic).

The good news here is that Grasshopper plugins like jSwan and LunchBox can help manage JSON definitions. I use jSwan a lot because it's very flexible and lets you create and modify the different keys and values in a JSON file. For more complex operations, I prefer to just use code to handle it because it gives me the most control.
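For example, here is a minimal Python sketch of that round trip; the keys and values are invented for illustration:

```python
import json

# A tree-like record that wouldn't fit comfortably in a CSV matrix.
model = {
    "name": "facade_option_A",
    "colours": {"mullion": [40, 40, 40], "glass": [180, 210, 230]},
    "reference_position": [12.5, 0.0, 3.2],
    "grids": {"x": [0, 3, 6, 9], "y": [0, 4, 8]},
}

# Serialize: write the structure to disk as text.
with open("model.json", "w") as f:
    json.dump(model, f, indent=2)

# De-serialize: the reading script must know these keys exist --
# that shared knowledge is the "definition" the two caveats refer to.
with open("model.json") as f:
    data = json.load(f)
print(data["grids"]["x"])
```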

But the two caveats I mentioned earlier are enough reason for me to favour CSV files over JSON, unless I really need a tree-like data structure. I would even prefer using 2-3 CSV files over a single JSON file, because the burden of managing the definition and the serializing logic outweighs JSON's benefits in most scenarios.

So far, what I have shown is centered around transferring large volumes of data between Grasshopper scripts. But what if the scenario shifts, and you only need to exchange a small subset of data while retaining the larger whole? This is where databases come in.

Using Databases

When you read and write text files, you are forced to load all of their contents into Grasshopper's memory. If you only need a subset of that data, text files impose a substantial overhead on Grasshopper because of all that unnecessary data.

Although databases might sound scary, they are actually quite similar to the CSV and JSON formats we looked at earlier. However, you do need to know a database's query language to extract the relevant data. These languages are straightforward to learn, but they are another prerequisite.

Databases do come with a lot more functionality than just being tables. There is a whole field of database science covering how to store, extract, and modify data. The common theme is that databases are very good at accessing information.

There are two types of databases: structured and unstructured. Structured databases resemble .CSV files, where data is in table form with rows and columns. Unstructured databases are like JSON files and just use text to store the data. If you need to extract data from a larger dataset, you need a structured database.

Note: There are more than two types of databases, but for simplicity I am only going to talk about these two.

Structured Databases

SQL databases are one of the most common types of structured databases. They store data in a series of relational tables that can be queried, which is similar to having multiple linked CSV files.

Pasted image 20230718170539.png

Grasshopper plugins like Slingshot! let you write SQL queries in Grasshopper to extract data from a SQL table. Furthermore, if you use something like SQLite, which lets you write SQL tables into a single file, you get the best of both databases and a file-based system.
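To make that concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and column names are invented for the example:

```python
import sqlite3

# One file on disk acts as the whole database.
conn = sqlite3.connect("project.db")
cur = conn.cursor()

cur.execute("""CREATE TABLE IF NOT EXISTS panels (
    panel_id TEXT PRIMARY KEY,
    area_m2  REAL,
    level    INTEGER
)""")
cur.executemany(
    "INSERT OR REPLACE INTO panels VALUES (?, ?, ?)",
    [("P-001", 2.45, 1), ("P-002", 3.10, 1), ("P-101", 2.45, 2)],
)
conn.commit()

# The payoff: pull only the subset you need,
# instead of loading the whole file into memory.
for row in cur.execute(
    "SELECT panel_id, area_m2 FROM panels WHERE level = ?", (2,)
):
    print(row)

conn.close()
```

The SELECT at the end is the whole point: you query for just the rows you need, rather than parsing an entire text file.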

If you invest the time into learning about databases, they are a very useful tool to have. They can act as centralised storage for your data: host the database on a server and have everyone connect to it. You can also implement an object-relational mapping (ORM) layer to keep track of changing data definitions. Databases give you a lot of control over your data, and you can make them as complicated or as simple as you need them to be.

Final Thoughts

Throughout this article, we've discussed various ways to exchange data between Grasshopper files, from Rhino files to text files to databases. Each method has its unique strengths and weaknesses, and your choice will depend on the type of data you're dealing with, its volume, and your comfort level with programming concepts like serializers and databases.