There’s more to true mining data integration than meets the eye… Eclipse sat down with its Head of Strategic Alliances, Barry Henderson, to gain his insights.
Data integration is a hot topic in mining at the moment. The benefits on offer include increased systems interoperability, less siloed ways of working, and wider deployment of advanced mining technologies such as automation and analytics. However, these technologies can only reach their full potential, and be applied sustainably, if the data is truly integrated.
Is bundling APIs the same as integrating data?
While more and more mining software providers now claim to have ‘vendor-neutral’ or ‘vendor-agnostic’ platforms, in reality this often amounts to bundling application programming interfaces (APIs) so that their programs can integrate vertically, horizontally or point-to-point with those from other companies.
Barry Henderson, Head of Strategic Alliances at Eclipse Mining Technologies, explains: “The change from promoting a software product to a platform is primarily driven by the desire to create multiple revenue streams. This is often reactive to a stagnant or declining single revenue stream from a legacy product.”
Henderson is a well-known and respected executive within the mining software realm with over 34 years’ experience under his belt. A former CEO at Maptek, Henderson also led RPM Global and Mine Vision Systems before joining Eclipse in 2019.
According to him, nearly all of the mining software products built in the past few decades have been standalone desktop applications. ‘Integration’ of data between competing products has historically taken place using read-only APIs exchanged between certain vendors. The legacy nature of these products has made the integration of disparate systems from different vendors difficult at best for miners.
“Traditionally, data integration in the mining industry has been a back-end, retrospective process in which systems are vertically integrated, which maintains siloing across the value chain,” says Henderson. “True data integration involves the consolidation of data from multiple sources into a single dataset, allowing for consistent business intelligence and/or use of analytics.”
Today, customers’ focus has shifted from retrospective analysis to real-time (or near real-time) processing. Yet the transfer of data from one mining software program or platform to another still very often relies on manual, time-consuming processes.
Henderson adds: “When most mining software vendors say that they can integrate other vendors’ data, what they are really saying is that they are able to read data from other vendors’ software. While it’s technically possible for a ‘legacy’ software company to fully recreate its existing products to be truly vendor-agnostic, it would require a move towards more open data-format ecosystems combined with substantial investment in time and money.”
Most legacy software companies view their proprietary data formats as intellectual property, which has made the move towards open data formats difficult or impossible. Furthermore, from an internal perspective, legacy systems’ proprietary data formats have traditionally been valued as a barrier to customers’ use of competing products.
Many vendors are keenly focused on maintaining their user base, and current users do not expect to pay for the modernization of a software product. The result is a hesitancy among vendors to even consider recreating their products to enable them to integrate easily.
“Customers are by far the biggest winners when a truly vendor-agnostic or neutral data ecosystem is implemented organization-wide,” explains Henderson. “Having a data ecosystem where all previously unrelated systems integrate seamlessly allows the customer to realize improved data accessibility and accuracy, greater automation and streamlining of most processes, and increased efficiencies across the entire value chain.”
But what is true data integration? What does it involve and why can’t it be achieved by all software platforms?
First, it’s important to understand that there are different types or levels of data integration. Kinza Yazar and Tim Ehrens describe these and what a customer can expect with each type in the TechTarget Customer Experience 2022 report: “There are a variety of methods for achieving connectivity between unrelated systems,” they write. “Based on the type of usage and business requirements, there are four common integration methods.”
The authors outline these as follows:
1. VERTICAL INTEGRATION: This strategy enables an organization to integrate unrelated subsystems into one functional unit by creating silos based on their functionalities. Each layer or element in vertical integration works upward and the process of integration is expedited by using only a handful of vendors, partners and developers. Considered to be the quickest method of integration, it can also be the riskiest, as it requires a significant capital investment.
2. HORIZONTAL INTEGRATION: The horizontal integration method, also known as the enterprise service bus (ESB), assigns a specialized subsystem to communicate with other subsystems. It reduces the number of interfaces connecting directly to the ESB to just one, decreasing integration costs and providing flexibility. It’s also considered a business expansion strategy, where one company might acquire another one from the same business line or value chain.
3. POINT-TO-POINT INTEGRATION: Also commonly known as star integration or spaghetti integration, this method interconnects the remaining subsystems. Once connected, these interconnected systems resemble a star polyhedron. Most companies segment their processes during point-to-point integration. For example, a separate accounting system could track finances; a web analytics system could manage website traffic; and a customer relationship management (CRM) system such as Salesforce could manage customer data. Depending on the organizational needs, data from each system could be pulled and combined.
4. COMMON DATA FORMAT: The common data format helps businesses by providing data translation and promoting automation. This method was created to avoid having an adapter for every process of converting data to or from other formats of applications. For integration to take place, enterprise application integration is used, which enables the transformation of the data format of one system to be accepted by another system. A popular example of a common data format is the conversion of zip codes to city names by merging objects from one application or database to another.
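The advantage of the fourth method can be sketched in a few lines of code. The example below is purely illustrative (it is not Eclipse's or any vendor's actual implementation, and the vendor formats and field names are invented): with a common data format, each source system needs only one translator into the canonical schema, rather than a separate adapter for every pair of systems.

```python
from dataclasses import dataclass

@dataclass
class CanonicalSample:
    """Hypothetical common data format for a drill-hole assay record."""
    hole_id: str
    depth_m: float    # depth in metres
    grade_pct: float  # grade in percent

def from_vendor_a(record: dict) -> CanonicalSample:
    # Hypothetical vendor A already reports depth in metres, grade in percent.
    return CanonicalSample(record["hole"], record["depth"], record["grade"])

def from_vendor_b(record: dict) -> CanonicalSample:
    # Hypothetical vendor B reports depth in feet and grade in ppm,
    # so its translator also converts units into the canonical ones.
    return CanonicalSample(
        record["id"],
        record["depth_ft"] * 0.3048,   # feet -> metres
        record["ppm"] / 10_000,        # ppm -> percent
    )

# Consolidating both sources into a single, consistent dataset:
dataset = [
    from_vendor_a({"hole": "DH-01", "depth": 12.5, "grade": 1.2}),
    from_vendor_b({"id": "DH-02", "depth_ft": 100.0, "ppm": 8_500}),
]
```

With N systems, point-to-point integration can require on the order of N×(N−1) adapters, while a common data format needs only N translators, which is why the latter scales better as more systems are added.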
Enabling True Data Integration
Eclipse Mining Technologies’ SourceOne® uses a common data format and is therefore truly vendor neutral.
Henderson extols the benefits: “Not only is reliable data integration the key to completely understanding mining operations and improving their productivity, but it also prepares them for technology that is right around the corner, including advanced analytical software that uses artificial intelligence and machine learning,” he said. “There is no question that proper data integration is essential for efficient mining.”
SourceOne integrates and centralizes mine data, allowing leaders at every level of the organization to inspect operational data, including its history and context, and gain valuable insights.
“Every mine strives to reduce needless efforts, shorten completion times, and improve quality output,” added Henderson. “The SourceOne Enterprise Knowledge Performance System is the industry’s first dedicated data hub with a vendor-neutral system that can track processes continuously, helping to identify, analyze, and implement changes to enrich the existing process. The results are fewer bottlenecks and risks, and significant improvements in productivity.”
In Part 2, we sit down with our Director of Product Development, Sean Hunter, to delve deeper into how SourceOne EKPS handles working with a common data format.
To learn more about SourceOne contact Barry Henderson: firstname.lastname@example.org or visit eclipsemining.com