Understanding Enterspeed: Navigating new concepts through 5 topics
You might regard Enterspeed as something new and different – but our concepts and tooling are actually well known. However, we realise we’re treading new paths and that the promised agility through composable architecture is driving new complexity.
In this blog post we’ll explore 5 topics that often raise questions when we introduce Enterspeed and composable architecture. Let’s begin on the conceptual level and then move down the stack.
1: Composable concepts drive complexity
Transitioning to a composable architecture is a significant shift from the traditional monolith, and in some ways it introduces more complexity than the "good old" monolith. Managing this complexity is a skill you’ll need to learn and master.
In a monolith, you have fewer codebases and systems to manage and operate. A composable setup, on the other hand, involves managing at least three systems, or four if the website is a commerce solution:
- CMS (e.g., Umbraco, Contentstack)
- Search (e.g., Algolia, Typesense)
- Frontend build/hosting (e.g., Netlify, Vercel)
- Commerce (e.g., commercetools, Commerce Layer)
If user profiles are needed, an authentication service (e.g., Auth0, IdentityServer) is also essential. Additionally, an image and media hosting/processing component (e.g., Cloudinary) might be useful.
In both monolithic and composable setups, you might also have a PIM, ERP, and other various back-office systems in the stack. It almost goes without saying that the more systems and services you have in your IT landscape, the more complexity you encounter.
We see Enterspeed as a tool that helps manage this complexity. Nonetheless, becoming comfortable with more systems and managing complexity is a skill you will need to develop with Enterspeed and in a composable environment.
2: More glue
With more systems needing to work together, you'll require more code that glues these different systems together. No single system can do everything well, which is why the “best of breed” movement has gained significant ground. However, this approach can lead to rough edges where different systems are layered on top of each other.
So, let’s delve into a concrete example with Enterspeed.
Imagine you have a commerce site and one of the business requirements is that out-of-stock products are not eligible for purchase. In this scenario, consider a server-side rendered frontend.
In our example, Enterspeed is used to deliver data to the product detail page. The stock information needs to be fetched in real time, combining data from the ERP system and the commerce system.
Our recommendation is to develop a custom Backend for Frontend (BFF) layer that integrates the ERP and commerce systems, fetching the stock information client-side. We recommend a client-side approach for stock information requests to ensure that server responses are as fast as possible – ideally under 300–500 ms. If your ERP and commerce system can deliver that kind of performance, you could instead build a single BFF layer that integrates Enterspeed, ERP, and commerce, giving the frontend one endpoint.
Writing glue applications often becomes a requirement in the composable world. Some of our integration partners use Azure Functions and Netlify/Vercel functions to integrate the discrete services and backend systems.
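To make the glue concrete, here is a minimal sketch of such a function, assuming hypothetical ERP and commerce endpoints and response shapes (the URLs, field names, and the `purchasable` flag are illustrative assumptions, not real APIs):

```typescript
// Hypothetical BFF glue: combine stock data from an ERP and a commerce
// system into one answer for the frontend. URLs and shapes are assumed.
type StockInfo = { sku: string; inStock: boolean; quantity: number };

// Pure merge logic: a product is only purchasable if both systems agree.
function mergeStock(
  sku: string,
  erp: { quantity: number },
  commerce: { purchasable: boolean }
): StockInfo {
  return {
    sku,
    quantity: erp.quantity,
    inStock: erp.quantity > 0 && commerce.purchasable,
  };
}

// The BFF handler fetches both systems in parallel to keep latency low.
async function getStock(sku: string): Promise<StockInfo> {
  const [erpRes, commerceRes] = await Promise.all([
    fetch(`https://erp.example.com/stock/${sku}`),        // hypothetical ERP
    fetch(`https://commerce.example.com/products/${sku}`), // hypothetical commerce
  ]);
  return mergeStock(sku, await erpRes.json(), await commerceRes.json());
}
```

Keeping the merge logic pure makes the business rule ("out of stock means not purchasable") easy to test independently of the two backends.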
In the next section, we’ll move further down the stack and zoom in on Enterspeed.
3: Writing schemas and processing source entities
When working with Enterspeed it’s essential to understand how you write schemas. Schemas define how data should appear when requested by the frontend. They are small scripts triggered each time a new version of a source entity is uploaded into Enterspeed. A source entity in Enterspeed is equivalent to a document, or what is referred to as a node in some CMSs.
We've already mentioned the concept of triggering a schema, which determines the source entity types the schema should run for. Next is routing, where you define the key for retrieving the pre-processed data. We also have actions, which involve using destinations and reprocessing other schemas.
A full Enterspeed tutorial is beyond the scope of this post, but these are the basic concepts you need to understand about schemas and processing source entities.
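To illustrate how the three concepts fit together – triggering, routing, and shaping the data – here is a conceptual TypeScript sketch. Note that this is not Enterspeed's actual schema format (see the docs for that); the shape and property names are illustrative only:

```typescript
// Conceptual sketch only – NOT Enterspeed's exact schema format.
// A schema pairs a trigger, a route, and a transform of the source entity.
type SourceEntity = { type: string; properties: Record<string, any> };

const blogPostSchema = {
  // Triggering: which source entity types this schema runs for.
  triggers: ["blogPost"],
  // Routing: the key used to retrieve the pre-processed view later.
  route: (entity: SourceEntity) => `/blog/${entity.properties.slug}`,
  // The transform: how the data should appear to the frontend.
  properties: (entity: SourceEntity) => ({
    headline: entity.properties.headline,
    body: entity.properties.body,
  }),
};
```

The key idea is that the transform runs when a new version of the source entity is ingested, not when the frontend asks for the view.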
You can, however, check out our docs for our 3-step process 👉 Overview | Enterspeed Docs
4: Asynchronous processing explained
A crucial concept to grasp in Enterspeed is asynchronous processing. In Enterspeed, data is prepared before the user requests the page. This is a departure from typical headless APIs, which often require a round trip to a database, executing a query to return relevant page data. In Enterspeed, this query is executed at what we call processing time or, from the CMS user's perspective, publishing time. In some ways, Enterspeed has inverted the typical data fetching flow.
Check out more on this in our patterns 👉 Asynchronous Processing
When learning to use Enterspeed, it's important to understand that you can't simply add a new parameter in the request to Enterspeed when you want the response to behave differently. Since Enterspeed doesn't perform database queries when the user requests content, the response must be prepared ahead of time. This approach requires tackling some problems with a different mindset.
Let's consider a simple example. Imagine you have a blog where each post includes the author's name. A common structure in a CMS is to have the post and the author as separate documents. In a synchronous approach, you would have a query on the blog post API request that fetches the current name of the author. In the asynchronous approach used by Enterspeed, this works differently.
Remember, in Enterspeed, querying is not possible when you request the API to fetch the blog post. Instead, the query is executed when the blog post is updated in the CMS and ingested into Enterspeed – that is, at publishing time.
But what if the author's name changes? We solve this using references. References work somewhat like HTML's iframes or img-tags. When the author source entity is updated, a specific view for the author source entity is processed and made ready for the Enterspeed Delivery API. The final response for the blog post fetches the already prepared author view at request time.
With asynchronous processing, the two parts of the final response are prepared in individual processing jobs. This leads us to the concept of eventual consistency. Due to the asynchronous nature of processing, Enterspeed doesn't guarantee that the referenced view will be included immediately. If the blog post is ingested into Enterspeed before the author page, Enterspeed will respond with the result without the author information.
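The reference mechanic and its eventual consistency can be sketched in plain TypeScript. This is not Enterspeed's actual API – just an illustration of views prepared in separate jobs being stitched together at request time:

```typescript
// Illustrative sketch of reference resolution – not Enterspeed's API.
// Each view is prepared in its own processing job at publishing time;
// references are resolved when the frontend requests the view.
type View = Record<string, any>;

const viewStore = new Map<string, View>();

// Processing time: each source entity gets its own prepared view.
viewStore.set("author-42", { name: "Jane Doe" });
viewStore.set("post-1", { headline: "Hello", author: { $ref: "author-42" } });

// Request time: inline the already-prepared views behind each reference.
function resolve(view: View): View {
  const out: View = {};
  for (const [key, value] of Object.entries(view)) {
    // If the referenced view hasn't been processed yet, it is simply
    // missing from the response – this is the eventual consistency.
    out[key] = value?.$ref ? viewStore.get(value.$ref) : value;
  }
  return out;
}
```

If `post-1` is ingested before `author-42` has been processed, `resolve` returns the post without the author – exactly the trade-off described above.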
5: Deployment considerations
Finally, we want to comment briefly on deployment. Deployment is sometimes an overlooked aspect, yet an efficient deployment pipeline is a key indicator of business value creation. With Enterspeed, we provide a CLI tool that can be integrated into your deployment pipeline. As with other tools, familiarizing yourself with the CLI is essential.
The process of deploying Enterspeed schemas is somewhat unique. While deploying code typically results in immediate effects post-deployment, deploying an Enterspeed Schema initiates the pre-processing stage. To manage this delay, one of two strategies can be employed, both familiar in the context of API versioning.
The first strategy is a staged rollout. Here, Enterspeed schemas can be viewed as an API contract. If the data model changes, the contract might break. Initially, make changes to the schemas that are backward compatible with the frontend code. Next, deploy new frontend code that understands these changes. Finally, update the Enterspeed schemas to remove now-obsolete properties.
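As a hedged illustration of the middle step, the frontend can be written to understand both the old and the new contract while the rollout is in flight (the property names here are hypothetical):

```typescript
// Hypothetical staged-rollout shim: during the rollout, the delivered
// view may still carry the old `authorName` string, or already the new
// `author` object. The frontend tolerates both until the old schema
// properties are removed in the final step.
type OldView = { headline: string; authorName: string };
type NewView = { headline: string; author: { name: string } };

function getAuthorName(view: OldView | NewView): string {
  // Prefer the new shape, fall back to the old one.
  return "author" in view ? view.author.name : view.authorName;
}
```

Once the updated schemas have finished pre-processing, the fallback branch becomes dead code and can be deleted.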
The second strategy follows the blue/green deployment pattern. This approach involves creating two Enterspeed environments. You wait for the pre-processing to complete in one environment before deploying the updated version of the frontend in the other.
Why should I?
But we haven't yet tackled perhaps the most important question: Why? Why should you embrace new concepts and manage more complexity? For us, the answer lies in increased business agility and improved user experience through better performance.
Composability is a significant trend, promising greater business agility. Delivering top-tier performance requires a different mindset, and by turning the typical data fetching process on its head, we provide the performance that drives the best possible user experience.
Check out more on composable and how to manage it 👉 Content Federation: How to manage your composable stack