ML Model - Container Resource


The EnOS Enterprise Analytics Platform’s MI Hub provides smart asset distribution centers for model users and data scientists, and offers a complete model registration process and hosting services for model developers.


After a model is developed in an authoring laboratory or in a third-party system, the model developer can deploy it to a production environment to serve prediction tasks. Depending on model update requirements, users can deploy multiple models to the production environment at the same time through canary or blue/green deployment. The MI Hub also provides model developers with effective version management tools, so models can be shared with end users and other collaborators in a safe and controlled environment, and smart assets that other developers have explored or created can be reused.
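The difference between the two deployment strategies can be sketched as a simple traffic router. This is an illustrative sketch only; the class, version names, and weight parameter are assumptions for explanation, not part of the MI Hub API:

```python
import random

class ModelRouter:
    """Routes prediction requests between two deployed model versions.

    Illustrative sketch -- the MI Hub manages routing internally;
    all names and weights here are assumptions.
    """

    def __init__(self, stable_version, candidate_version, canary_weight=0.0):
        # canary_weight is the fraction of traffic sent to the candidate:
        #   0.0 < weight < 1.0  -> canary deployment (gradual rollout)
        #   weight == 1.0       -> blue/green cutover (all traffic switches)
        if not 0.0 <= canary_weight <= 1.0:
            raise ValueError("canary_weight must be between 0 and 1")
        self.stable = stable_version
        self.candidate = candidate_version
        self.canary_weight = canary_weight

    def pick_version(self):
        # Route each request to the candidate with probability canary_weight.
        if random.random() < self.canary_weight:
            return self.candidate
        return self.stable

# Canary: send 10% of requests to v2 while v1 keeps serving the rest.
router = ModelRouter("model-v1", "model-v2", canary_weight=0.1)

# Blue/green: switch all traffic to v2 at once.
router.canary_weight = 1.0
```

In a canary rollout the weight is increased gradually while monitoring the new version; in a blue/green rollout both versions run side by side and the weight flips from 0 to 1 in one step.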


For more information, see MI Hub Overview.

Resource Application Scenario

Before using MI Hub to deploy machine learning models, you need to apply for the ML Model - Container resource.

Note

The maximum number of resource instances that can be applied for under each OU is 6.

Resource Specification

When deploying and using the Enterprise Analytics Platform, you need to apply for the corresponding resources to support the production functions. The resources requested here serve as resource pools in the Enterprise Analytics Platform.

Specification Description

  • Resource Name: The resource name must be unique within the OU.
  • Resource Type: Select Primary Partitions or Subpartitions.
      • A maximum of 1 primary partition can be applied for.
      • A maximum of 5 subpartitions can be applied for.
  • CPU Request: The amount of CPU requested. The CPU request cannot exceed the CPU limit.
  • CPU Limit: The maximum amount of CPU that can be used.
  • Memory Request: The amount of memory requested. The memory request cannot exceed the memory limit.
  • Memory Limit: The maximum amount of memory that can be used.
  • Storage: Available options are 5 - 2,000 GB by default.
  • Permissions: When enabled, grants read access to HDFS and the Data Warehouse.
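The constraints in the specification above can be expressed as a small validation sketch. The field names and the helper function below are illustrative assumptions, not part of the EnOS API:

```python
def validate_resource_spec(spec, existing_primary=0, existing_sub=0):
    """Check an ML Model - Container resource request against the
    documented limits. Field names are illustrative assumptions."""
    errors = []

    # Requests must not exceed the corresponding limits.
    if spec["cpu_request"] > spec["cpu_limit"]:
        errors.append("CPU request cannot exceed the CPU limit")
    if spec["memory_request"] > spec["memory_limit"]:
        errors.append("Memory request cannot exceed the memory limit")

    # Storage must stay within the default 5 - 2,000 GB range.
    if not 5 <= spec["storage_gb"] <= 2000:
        errors.append("Storage must be between 5 and 2,000 GB")

    # At most 1 primary partition and 5 subpartitions per OU.
    if spec["resource_type"] == "primary" and existing_primary >= 1:
        errors.append("Only 1 primary partition can be applied for")
    if spec["resource_type"] == "sub" and existing_sub >= 5:
        errors.append("At most 5 subpartitions can be applied for")

    return errors

# A subpartition request that satisfies every constraint.
spec = {
    "resource_type": "sub",
    "cpu_request": 2, "cpu_limit": 4,        # cores
    "memory_request": 4, "memory_limit": 8,  # GB
    "storage_gb": 100,
}
assert validate_resource_spec(spec, existing_primary=1, existing_sub=2) == []
```

Note that the partition limits (1 primary + 5 subpartitions) are consistent with the maximum of 6 resource instances per OU stated earlier.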

Note

The ML Model - Container resource can only be installed on one of the requested resource pools under the same OU; the Dev Console can then operate on the other resource pools under that OU. The ML Model - Container resource splits the resource schema into primary partitions and subpartitions, and the EAP Dev Console must be installed under the namespace associated with the primary partition.