Unidata Master Data Management
The Unidata Master Data Management (MDM) product is designed to manage an enterprise's master data, consolidated into "golden records" from various sources: data on customers, counterparties, individuals, legal entities, contracts, material and technical resources, reference data (NSI), and any other data that can be identified as master data. As a result, the company gets a single point of access to clean, harmonized, high-quality data.
Master data management. The core of the product stores all data about golden records together with their change history. The product can be embedded into a heterogeneous information landscape of any complexity, and it loads, validates, and enriches data from various sources. Thanks to a powerful modeling system, it can manage multiple information domains (a minimal golden-record sketch follows the list below):
- Centralized collection of master data (inventory and resource accounting)
- Standardization of information (normalization and enrichment)
- Current and historical information accounting (record version control, data validity periods)
- Data quality and maintaining statistics
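A golden record can be pictured as the consolidated view of one entity assembled from several source records, together with its change history. The following is a minimal sketch of that idea in Java; all class, field, and source-system names (SourceValue, GoldenRecord, CRM, ERP) are our own illustrative assumptions, not the product's actual API.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GoldenRecordDemo {

    /** One attribute value as delivered by a specific source system. */
    record SourceValue(String sourceSystem, String attribute, String value, Instant loadedAt) {}

    /** A consolidated "golden record": winning attribute values plus a change history. */
    static class GoldenRecord {
        final String entityId;
        final Map<String, String> attributes = new LinkedHashMap<>();
        final List<String> history = new ArrayList<>();

        GoldenRecord(String entityId) { this.entityId = entityId; }

        /** Accept a value from a source; in this toy version the latest load wins. */
        void apply(SourceValue v) {
            attributes.put(v.attribute(), v.value());
            history.add(v.loadedAt() + " " + v.attribute() + " <- " + v.sourceSystem());
        }
    }

    public static void main(String[] args) {
        GoldenRecord customer = new GoldenRecord("customer-42");
        customer.apply(new SourceValue("CRM", "name", "ACME Ltd.", Instant.parse("2023-01-10T00:00:00Z")));
        customer.apply(new SourceValue("ERP", "name", "ACME Ltd", Instant.parse("2023-03-01T00:00:00Z")));
        customer.apply(new SourceValue("CRM", "email", "info@acme.example", Instant.parse("2023-01-10T00:00:00Z")));
        System.out.println(customer.attributes); // {name=ACME Ltd, email=info@acme.example}
        customer.history.forEach(System.out::println);
    }
}
```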
Data processing is an integral part of the product and includes the following phases (a pipeline sketch follows the list):
- Data filtering
- Data validation
- Data cleaning
- Consistency checks
- Enrichment from internal and external sources
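A minimal sketch of such a phased pipeline, under the assumption that each phase can be expressed as a small filter or transformation over a record stream; the phase implementations and the CustomerRecord type are invented for illustration and say nothing about how the product implements its pipeline.

```java
import java.util.List;
import java.util.function.UnaryOperator;

public class PipelineDemo {

    /** A record flows through the phases as a simple attribute holder. */
    record CustomerRecord(String name, String phone) {}

    public static void main(String[] args) {
        // Each phase is a small function; a real pipeline would report rejections instead of dropping silently.
        UnaryOperator<CustomerRecord> clean = r ->
                new CustomerRecord(r.name().trim(), r.phone().replaceAll("[^0-9+]", ""));
        UnaryOperator<CustomerRecord> enrich = r ->
                new CustomerRecord(r.name().toUpperCase(), r.phone());

        List<CustomerRecord> input = List.of(
                new CustomerRecord("  acme ltd ", "+7 (812) 555-01-02"),
                new CustomerRecord("", "n/a"));            // will be filtered out

        List<CustomerRecord> golden = input.stream()
                .filter(r -> !r.name().isBlank())          // 1. filtering
                .filter(r -> r.phone().matches(".*\\d.*")) // 2. validation: phone must contain digits
                .map(clean)                                // 3. cleaning
                .filter(r -> r.phone().length() >= 7)      // 4. consistency check
                .map(enrich)                               // 5. enrichment
                .toList();

        golden.forEach(System.out::println);
    }
}
```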
Security:
- Activity logs
- Access control
- Confidential data storage
- Reliable communication channels
Duplicate search (a matching sketch follows the list):
- Exact matching that tolerates noise
- Fuzzy matching that tolerates discrepancies
- Phonetic algorithms
- Attribute-aware search (address, full name, etc.)
- Hierarchies of rules and rule groups
- Custom strategies
- Manual and automatic modes
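The difference between exact, fuzzy, and phonetic matching can be shown with two textbook algorithms: Levenshtein edit distance for fuzzy comparison and a simplified Soundex code for phonetic comparison. This is a generic illustration; the product's actual matching algorithms are not shown here and may differ considerably.

```java
public class MatchDemo {

    /** Classic Levenshtein edit distance: the fewer edits, the closer the strings. */
    static int levenshtein(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1), prev[j - 1] + cost);
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[b.length()];
    }

    /** Simplified Soundex: the same code means "sounds alike" despite spelling differences. */
    static String soundex(String s) {
        String u = s.toUpperCase().replaceAll("[^A-Z]", "");
        if (u.isEmpty()) return "0000";
        String codes = "01230120022455012623010202"; // one digit per letter A..Z
        StringBuilder out = new StringBuilder().append(u.charAt(0));
        char last = codes.charAt(u.charAt(0) - 'A');
        for (int i = 1; i < u.length() && out.length() < 4; i++) {
            char c = codes.charAt(u.charAt(i) - 'A');
            if (c != '0' && c != last) out.append(c);
            last = c;
        }
        while (out.length() < 4) out.append('0');
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(levenshtein("Smith", "Smyth"));               // 1 -> fuzzy match
        System.out.println(soundex("Robert") + " " + soundex("Rupert")); // R163 R163 -> phonetic match
    }
}
```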
Record consolidation (a merge sketch follows the list):
- Support for data management regulations
- Merge strategies: the whole record or individual attributes
- Extension strategy
- Trust levels (data quality)
- Business rules (BRMS)
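Per-attribute consolidation with trust levels can be sketched as follows: every incoming value carries the trust level of its source, and for each attribute the most trusted value wins. The Candidate type, source names, and trust weights below are made up for illustration; the product's strategies (whole-record merge, extension, business rules) go well beyond this.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TrustMergeDemo {

    /** A candidate attribute value together with the trust level of its source. */
    record Candidate(String sourceSystem, String attribute, String value, int trust) {}

    /** Per-attribute merge: for every attribute, keep the value from the most trusted source. */
    static Map<String, String> merge(List<Candidate> candidates) {
        Map<String, Candidate> best = new LinkedHashMap<>();
        for (Candidate c : candidates) {
            best.merge(c.attribute(), c,
                    (old, neu) -> neu.trust() > old.trust() ? neu : old);
        }
        Map<String, String> golden = new LinkedHashMap<>();
        best.forEach((attr, c) -> golden.put(attr, c.value()));
        return golden;
    }

    public static void main(String[] args) {
        List<Candidate> candidates = List.of(
                new Candidate("CRM",     "name",  "ACME Ltd",        80),
                new Candidate("Billing", "name",  "ACME Limited",    95), // wins: higher trust
                new Candidate("CRM",     "phone", "+7 812 555 0102", 80));
        System.out.println(merge(candidates));
        // {name=ACME Limited, phone=+7 812 555 0102}
    }
}
```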
User interface for data modeling:
- Attribute data model
- Quality and cleaning rules
- Duplicate search / consolidation
- Data management regulations
Data model lifecycle management:
- Transfer between environments
- Building a model based on existing data sources
- Public APIs for model management
- Model drafts
- Checking the model for integrity and consistency (see the sketch after this list)
- Adding existing data to the model
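As an illustration of what an integrity and consistency check does, the sketch below validates a hypothetical draft model: attribute names must be unique within an entity, and every lookup reference must point to an entity that exists. The Entity and Attribute types are our own assumptions, not the product's model API.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ModelCheckDemo {

    record Attribute(String name, String lookupEntity) {}  // lookupEntity may be null
    record Entity(String name, List<Attribute> attributes) {}

    /** Returns human-readable problems; an empty list means the draft can be published. */
    static List<String> check(List<Entity> model) {
        List<String> problems = new ArrayList<>();
        Set<String> known = new HashSet<>();
        model.forEach(e -> known.add(e.name()));
        for (Entity e : model) {
            Set<String> seen = new HashSet<>();
            for (Attribute a : e.attributes()) {
                if (!seen.add(a.name()))
                    problems.add(e.name() + ": duplicate attribute '" + a.name() + "'");
                if (a.lookupEntity() != null && !known.contains(a.lookupEntity()))
                    problems.add(e.name() + "." + a.name()
                            + ": references unknown entity '" + a.lookupEntity() + "'");
            }
        }
        return problems;
    }

    public static void main(String[] args) {
        List<Entity> draft = List.of(
                new Entity("Customer", List.of(
                        new Attribute("name", null),
                        new Attribute("name", null),             // duplicate
                        new Attribute("country", "Country"))));  // unknown entity
        check(draft).forEach(System.out::println);
    }
}
```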
Standard edition. It offers all the features of modern MDM-class products: loading data from various sources, consolidation, deduplication, harmonization, classification, and quality assurance. It provides both administrative and user interfaces, as well as everything needed to embed the product into an enterprise IT landscape of any complexity, including integration with security systems. This edition allows you to create and maintain a data model of any complexity, extend functionality through extension points, and maintain high performance.
Enterprise edition. In addition to the broad standard features of the platform and the standard edition, it offers extra capabilities in data quality assurance and deduplication: more sophisticated search algorithms and the ability to model your own composite quality rules in the built-in editor, including quality rules that take classifier fields into account. It also adds subscribing to and publishing data update events via the PubSub module (see the sketch below). This edition has the maximum set of modules and suits an IT landscape of any complexity.
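The event flow enabled by such a module can be pictured with a tiny in-memory publish/subscribe broker: consumers subscribe to a topic and receive every data update event published to it. The Broker and UpdateEvent types below are illustrative assumptions, not the PubSub module's actual API.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class PubSubDemo {

    /** A data update event: which record changed and how. */
    record UpdateEvent(String entityId, String attribute, String newValue) {}

    /** Tiny in-memory broker: subscribers register per topic, publishers fan events out. */
    static class Broker {
        private final Map<String, List<Consumer<UpdateEvent>>> topics = new ConcurrentHashMap<>();

        void subscribe(String topic, Consumer<UpdateEvent> handler) {
            topics.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
        }

        void publish(String topic, UpdateEvent event) {
            topics.getOrDefault(topic, List.of()).forEach(h -> h.accept(event));
        }
    }

    public static void main(String[] args) {
        Broker broker = new Broker();
        broker.subscribe("customer.updated",
                e -> System.out.println("sync to CRM: " + e));
        broker.publish("customer.updated",
                new UpdateEvent("customer-42", "email", "new@acme.example"));
    }
}
```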
A further edition targets installations with not just large data volumes but also special requirements for performance and fault tolerance. To achieve high performance, it relies on our own developments for cluster management and for sharding the main database (illustrated below). Its main modules for consolidation, quality management, and deduplication also differ significantly from those in the standard and enterprise editions. Technical support requirements (SLA) are at their maximum to ensure fault-tolerant 24/7 operation.
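The routing idea behind database sharding can be shown in a few lines: each record key is hashed to a stable shard number, so data and load spread across the cluster. This is a generic illustration, not the product's cluster-management or sharding code; a production system would typically use consistent hashing or range partitioning so that shards can be added without rehashing everything.

```java
public class ShardingDemo {

    /** Route a record to one of N shards by a stable hash of its key. */
    static int shardFor(String recordKey, int shardCount) {
        // floorMod keeps the result non-negative even for negative hash codes
        return Math.floorMod(recordKey.hashCode(), shardCount);
    }

    public static void main(String[] args) {
        int shards = 4;
        for (String key : new String[] {"customer-1", "customer-2", "customer-3"}) {
            System.out.println(key + " -> shard " + shardFor(key, shards));
        }
    }
}
```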