From Enterprise Decentralization, to Tokenization, and Beyond!
Note: This is a post I co-authored with Don Gossman; it is cross-posted to the Ocean Protocol site.
We are putting together a blog series that tackles the lowly subject of enterprise transformation. Of particular importance will be data decentralization and how it will drastically change the way in which we interact and leverage data assets in the not-too-distant future (teaser: see Ocean Protocol). Simple topic, we know.
In this first post, we will focus on the pitfalls of centralizing and consolidating IT capabilities, and what alternatives exist.
First Off, Quit Centralizing
Tech centralization, in many organizational and technical contexts, is quickly becoming a footnote in the annals of IT history. There are times and places where it is the best approach, and of course the pendulum has swung from centralized to decentralized and back again numerous times over the years (recall the days of PCs on every desk, which came after mainframes and dumb terminals). During the reign of monolithic vendors in the late '90s and aughts, the arguments in favor of centralization were clear: standardize and consolidate the technology stacks across the organization to increase economies of scale and encourage reuse. Once complete, then set to work on consolidating the resource pool in order to limit redundancy and under-utilization.
This tactic used to make a lot of sense. But in a digital world, the approach often gets lost in translation.
There are at least three main problems with centralization as it exists now:
- It creates bottlenecks for solution building blocks like tin, compute, storage, and personnel, because you need to go through centralized services that are servicing the demands of the entire enterprise (i.e. “Did you raise a ticket? Good. Now get in line.”);
- It limits deployment capabilities through the standardization of toolsets and associated skillsets (i.e. “We’d love to build that, but our enterprise stack doesn’t support the functionality, and the tech you’ve requested isn’t approved.”); and,
- It places an undue burden on IT as teams are under pressure to rein in costs (“Hey IT, you’re doing a great job! So great, in fact, that we’re going to reduce your OpEx budget by 15% next year!”).
The first two problems can be remedied by changing the approach from one of standardization and consolidation to one of agility and “best-of-breed”. The third is a product of perception. IT is still viewed as a sink across most enterprises, and, as such, is constantly under pressure to reduce its costs. Generally, this cost-cutting manifests in one of two ways:
- Reducing IT headcount; or,
- Shifting to fixed-cost models that leverage contractors or Service Providers (SPs) whose primary function is staff augmentation.
The result? IT hitting their annual targets, and a market that's a boon for contractors and SPs alike. Sweet! Done and dusted! Everyone can go home now.
Not so fast! This approach actually ends up creating a race to the bottom between IT, procurement, and Service Providers as they all do whatever it takes to lower costs. The net effect is that project allocation favors those willing to pull their proverbial pants down the farthest, instead of those that can actually get the job done.
Centralization also tends to favor a distinct delineation between development and BAU (Business as Usual) teams. Traditionally, organizations kept their developer teams separate from their operational support teams. This siloed pattern is the antithesis of Agile and, more specifically, the DevOps approach. Solutions should be developed and supported by the same core teams, not thrown over the fence to an unsuspecting engineering or operations team with constrained bandwidth and possibly little-to-no skillsets for supporting a new solution.
Ultimately, centralization as an exercise in maximizing efficiency can lead to prioritizing "keeping the lights on" initiatives over everything else, especially in severely constrained environments. This goes for both operations and new product initiatives. What's the best way to ensure the system doesn't go down? Never deploy new things! The result is little to no innovation because the capabilities and appetite simply do not exist. A strategy that served the enterprise well for decades has, unfortunately, passed its "best before" date. Companies must now be able to iterate their services and respond to demands as quickly as possible, or face diminishing returns and possible demise.
Instead, Decentralize
“What’s the alternative?” you say. We’re glad you asked!
It’s (drum roll please) Decentralization!
The good news is, the decentralization bug is catching on. We see this through enterprise adoption of the cloud, as well as through the adoption of FOSS (Free and Open Source Software). Even five years ago, the mere suggestion of leveraging the cloud and/or FOSS to build enterprise-grade solutions was enough to get you shunned! Now, the cloud and FOSS are everywhere, from the databases used to host data, to the compute engines used to process data, to the APIs used to connect to data, as well as the portals used to consume it.
By decentralizing, we can remove the shackles of conformity and consolidation. We can scale compute through the power of MPP (Massively Parallel Processing) on low-cost, commodity hardware. We can reduce storage costs by storing data in the cloud, and we can tap into skillsets from resources outside the traditional pools or partnerships mandated in the past.
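To make the MPP idea concrete, here's a minimal Python sketch of the pattern: partition the data, fan the partitions out to cheap parallel workers, and merge the results. It runs on a single machine for brevity; on a real cluster the workers would be commodity nodes rather than local processes, and all names here are illustrative.

```python
# Single-machine stand-in for the MPP pattern: partition, map, merge.
from multiprocessing import Pool

def partial_sum(chunk):
    """Work done independently on one partition (the 'map' step)."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    # Split the dataset into one partition per worker.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(partial_sum, chunks)  # fan out to workers
    total = sum(partials)                         # merge ('reduce' step)
    print(total)
```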
But these primitives are just the beginning, and only provide the figurative springboard in our journey towards full decentralization. In reality, current "decentralization" techniques are, in many ways, limited and simply shift the centralization and single point of failure from one place to another. Consider Big Data ecosystems confined to single data centers, or datasets deployed to single cloud providers (i.e. AWS vs. Azure vs. GCP. So many choices, yet so few answers!). We're heading in the right direction, but the fact is, we've only just left the starting blocks.
In order to jump start the migration to full decentralization, there are two things we can do.
Productize
Once decentralization begins, the next step is to productize. And no, we don’t mean adopting a product-centric macro approach over a customer-centric one. What we mean is, technology solutions should be developed as unique “products”, in which a single product team made up of business AND IT owns the solution and all of its capabilities from end-to-end. In the years to come, IT infrastructure will become increasingly commoditized, but for now we must continue to consider both components.
Let us elaborate.
You may have seen consulting collateral identifying different levels of data or analytic maturity within an organization. Adam has a great take on this sophistication model, leveraging the Kardashev Scale but spinning it for the data domain to create the aptly named “Kardashev Scale for Data”. Basically, there are three types or levels of data sophistication within any given organization:
- Type 1 - Essential needs and legal requirements.
- Type 2 - Operational reporting, KPIs, DWH, Fragmentation.
- Type 3 - Data for the future, BI for decision-support, self-service and democratization, data org. focused on products and hard problems. No SQL monkeys.
It's the last level on the journey to data maturity that's of interest to us, as this is when the focus shifts from being reactionary to being progressive. It's the point at which attention turns to products, not just solutions cobbled together. This is significant because it means the institution is mature enough for the interests of both business and IT to align around the creation of unique value propositions. It also signifies that IT is no longer viewed as subservient to business, but as a partner. The entire value proposition is the result of a cohesive unit working together in concert, defining the solution and implementation plan, and overseeing delivery and operations, from soup to nuts.
In other words, productization requires Business and IT, development and operations, working together to deliver unique solutions. There's no sharing of resources across projects or programmes. It's a bootstrapped approach with a start-up-like mentality where if you want more, you go out and get it and do it yourself.
How does this work? Please allow us to explain.
In this paradigm, business should always be the driver for new products and services, though IT can act as a catalyst. The reason? Because it’s the business’ job, plain and simple. IT can act as a facilitator, but it’s the LoB’s responsibility to understand the market and come up with solutions. The paradigm ultimately relies on pitching the product concept from end-to-end, perhaps to an internal innovation group, and having the business, not IT, fight for budget. If a solution is deemed relevant, let business come up with the money to build it. Once the budget is secured, it then becomes IT’s responsibility to actualize the solution. It must also be understood that this is a team effort in which business and IT are responsible for the overall success of the product. If the product fails, then only the product team is accountable with little to no finger pointing.
This type of approach generally allows for more innovative ideas to be tested and deployed than its centralization counterpart. There are a multitude of reasons for this. First, it predicates the inclusion of both the business and IT within the product team at inception in order to spec out the solution, determine resource requirements, etc. As a result, you don’t just have business tossing IT a list of requirements saying, “Here, go build this. Let us know when you’re done.”
Second, only the product team is responsible for staffing and cost allocation. That means the team can acquire who they want based on skillsets and qualifications, rather than who is available from the internal IT pool, or from the lowest cost service provider. This alone drastically improves a product’s overall chances of success by bringing in the best available talent.
Third, the approach encourages product evolution. Let’s be honest - hubris and ego are fantastic motivators. If you’re a product owner and a competing solution comes out, either internally or externally, there’s a good chance that you will want to iterate a new version of your solution to one-up the competition. Actually, if you’re a responsible product owner, this scenario never happens because you would constantly iterate your solution in order to maintain relevance.
Fourth, productization lends itself well to Agile methodology. We are firm believers that Agile is superior to Waterfall in almost all instances of delivery. We also abide by an MVP (Minimum Viable Product) approach, which means getting the smallest, relevant component of any given solution into production as quickly as possible, even if it's a hack job. This establishes a foothold in production (often a win in and of itself in large multinationals) that can become a beacon around which the team can rally its efforts.
Lastly, productization necessitates senior level sponsorship. Good luck receiving proper funding without executive stakeholders advocating the solution. Once budget is obtained, these stakeholders become benefactors overseeing the success of the project, helping the team navigate institutional bureaucracy and breaking down barriers when required.
Then Tokenize
So you've built a decentralized product from soup-to-nuts, and now your team effectively "owns" the solution platform - the compute, the storage/persistence, the UX and UI, the APIs, the data - along with the resources that built the solution and now support it. What do you do now? Well, you could sit back and reap the rewards of your killer app or service, or you can keep on evolving.
Ask yourself this: Is there any dormant capability in your solution? Do you have extra storage capacity, or latent compute? Did your team write some amazing code that could easily be refactored and applied to other use cases, or a framework that could be leveraged to improve processes and solutions elsewhere? Or maybe you created the holy grail of datasets that wraps aggregated transactional and channel feedback data in a warm CRM blanket for full-featured customer insights and analytics? If the answer to any of these questions is yes, then boy do we have a proposition for you! (/s😉)
What if we told you there was a way to reduce the friction of commoditizing your assets? It’s not just the app or service you created that offers value, but also the individual capabilities that make up the solution. Through tokenization, you can realize the value of the unique solution capabilities by trading access to those capabilities within a network of like-minded participants.
Let’s explore this opportunity a bit more.
In an ideal world, your platform will hum along at fully optimized levels at all times. In reality, however, there could be times when resources are under-utilized (relative to established solution benchmarks). For instance, there could be periods of idle computation, when extra CPU/GPU/TPU bandwidth is available for use. What if these components were tokenized such that you could “sell” their usage to the highest bidder? This is the supply-side of your network.
Now let’s say there’s a team that needs massive compute capabilities, but only for a limited time, and only sporadically. A good example would be Finance departments in large, institutional banks that are required to do monthly, quarterly, and annual regulatory reporting to governing bodies. Creating a compute platform just for Finance might be overkill. Centralizing the compute is an option, but because it’s communal, you’re probably subject to “first come, first served” dynamics. However, if Finance has tokens, they could effectively pay to prioritize their position, or even reserve the compute, when they need it. This is the demand side of your network.
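To make the matching concrete, here's a toy Python sketch of the dynamic just described. To be clear, this is not Ocean's actual protocol, and every name in it is hypothetical; it simply shows how token bids could prioritize access to idle compute slots, with the highest bidders served first.

```python
# Toy compute-token market: suppliers list idle slots, consumers bid
# tokens, and allocation favors the highest bid. Illustrative only.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Bid:
    # heapq is a min-heap, so store the negated bid to pop the
    # highest bidder first.
    neg_tokens: int
    team: str = field(compare=False)

class ComputeMarket:
    def __init__(self, idle_slots: int):
        self.idle_slots = idle_slots   # supply side: spare capacity
        self.bids = []                 # demand side: token bids

    def bid(self, team: str, tokens: int) -> None:
        heapq.heappush(self.bids, Bid(-tokens, team))

    def allocate(self):
        """Grant slots to the highest bidders until supply runs out."""
        grants = []
        while self.idle_slots > 0 and self.bids:
            b = heapq.heappop(self.bids)
            grants.append((b.team, -b.neg_tokens))
            self.idle_slots -= 1
        return grants

market = ComputeMarket(idle_slots=2)
market.bid("finance", tokens=50)    # month-end regulatory crunch
market.bid("marketing", tokens=10)
market.bid("risk", tokens=30)
print(market.allocate())            # [('finance', 50), ('risk', 30)]
```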
The last piece of the puzzle is the trustful transfer of tokens across the network from the demand side to the supply side. The good news is, there already exists a crypto-economic mechanism to handle this handshake without the need for expensive intermediary processing. That solution? Blockchain!
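For a back-of-the-napkin feel for why a hash-linked ledger suits this handshake, here's a deliberately tiny Python sketch (a toy, not a real blockchain): every token transfer embeds the hash of the record before it, so tampering with any past entry breaks the chain, and usage history becomes verifiable without a central clearinghouse.

```python
# Toy hash-chained ledger of token transfers. Illustrates the
# tamper-evidence principle only; not a production chain.
import hashlib, json, time

class Ledger:
    def __init__(self):
        self.blocks = [{"prev": "0" * 64, "tx": "genesis"}]

    def transfer(self, sender: str, receiver: str, tokens: int) -> None:
        prev_hash = self._hash(self.blocks[-1])
        self.blocks.append({
            "prev": prev_hash,  # link to the prior block's hash
            "tx": {"from": sender, "to": receiver,
                   "tokens": tokens, "ts": time.time()},
        })

    @staticmethod
    def _hash(block) -> str:
        return hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()

    def verify(self) -> bool:
        """Recompute the chain; any edited record breaks the links."""
        return all(
            self.blocks[i]["prev"] == self._hash(self.blocks[i - 1])
            for i in range(1, len(self.blocks)))

ledger = Ledger()
ledger.transfer("finance", "product-team", tokens=50)
print(ledger.verify())  # True; alter any past record and this is False
```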
Bringing all these components together is the network protocol. The beauty is, these networks could exist in the wide-open yonder of the public domain, or controlled behind a firewall in an enterprise setting. The substrates remain the same for both; only the user base changes.
Please take a moment to digest this: we now have a mechanism that promotes the tokenization of capabilities at their most granular level, a mechanism that inherently stores the usage metrics of those capabilities as immutable records, combined with a network that connects capability producers with consumers.
What we're talking about here is the potential future of computer networking. This is also, in a nutshell, what we want to do with Ocean. With Ocean, we are creating a platform that links data producers to data consumers, with all of the requisite data security, privacy, and provenance capabilities baked in through blockchain. And because the network has blockchain at its core, Ocean has a foundation that underpins the network's tokens, providing the ability to monetize assets all the way down, and all the way up! Not only that, but because the system is decentralized by design, we can avoid many of the classic issues with centralizing, well, anything!
(Boom! Mind.Blown.)
Conclusion
The potential of decentralization in conjunction with blockchain is astonishing. We are talking about establishing the ability to distribute capabilities that maximize performance and minimize cost, while providing a mechanism that facilitates maximum resource utilization with the transparent quantification and monetization of usage. This paradigm shift has the potential to disrupt not just the way we distribute and leverage data, but the way we fundamentally interact.
Over the course of the next few blog posts, we will delve deeper into the world of enterprise tokenization, and what this means for the future of data platforms, as well as enterprises themselves.
TL;DR
Centralization = Baaaaad (not really, but sort of.)
Decentralization = Goooood (#thumbsup)
Productization = Even Better (now we’re talking!)
Tokenization = Data + Compute + Storage + Network + Tokens = $$$$$$ (what the what?!?)