While AWS does allow more hands-on control of the model stack, our "Custom Models" are really just different sets of weights trained with the exact same methodology. In other words, each customer who creates a custom model plugs into our existing framework, just with a different configuration.
Because of this, GCP's AI Platform lets us take a more microservices-style approach to interacting with the ML models themselves, as opposed to our previous deployment strategy on AWS, which bundled all of the models together on every instance that was serving requests.
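To make that contrast concrete, here's a minimal Python sketch of the two deployment shapes. All names, URLs, and registries here are hypothetical stand-ins, not our actual services:

```python
# Old monolithic shape: every serving instance loads every model in-process,
# and routing a request is just a dictionary lookup against local weights.
MONOLITH_MODELS = {
    "customer_a": {"weights": "s3://models-bucket/customer_a.bin"},
    "customer_b": {"weights": "s3://models-bucket/customer_b.bin"},
}

def serve_monolith(customer: str, payload: dict) -> str:
    # Every instance must hold all weights, even for customers it rarely serves.
    model = MONOLITH_MODELS[customer]
    return f"prediction from {model['weights']} for {payload}"

# Microservices-style shape: each custom model sits behind its own endpoint,
# so the router only needs per-customer configuration, not the weights.
ENDPOINTS = {
    "customer_a": "https://ml.example.internal/models/customer-a:predict",
    "customer_b": "https://ml.example.internal/models/customer-b:predict",
}

def route_request(customer: str) -> str:
    # In production this would be an HTTP call to the model's own service;
    # here we just return the URL the request would be forwarded to.
    return ENDPOINTS[customer]
```

The upside of the second shape is that each model scales and deploys independently; adding a new customer is a configuration change rather than a redeploy of every serving instance.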
Hope that answered your question!