Machine Learning in Production: Developing and Optimizing Data Science Workflows and Applications by Andrew & Adam Kelleher (2019).
At the outset, I find myself curious about what the data primitives in an ML system are. The first time I built an ML system, it was a “cluster” of Jupyter notebooks that I would run manually over a simple Postgres database to make some predictions. The notebook cells were all over the map (could have used some serious refactoring), and running notebooks didn’t really seem like a production system, but it got the job done. I bought this book at some point during COVID winter so that I might bone up on how it’s really supposed to go down.
So, besides a series of notebooks with little Postgres drivers, how are we really supposed to play around with models, pin them down, get them test-covered, and approve them for a production system? Is this just something you have to pick up on the job, or are there emerging design principles?
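One emerging pattern that answers part of that question is treating a trained model as a versioned artifact: serialize it, pin it by a content hash, and write plain unit tests that assert its behavior on known inputs before anything gets promoted. The sketch below is my own illustration, not from the book; `TinyLinearModel` and the helper names are hypothetical, and a real system would swap in scikit-learn plus a model registry for the pickle-and-hash scheme.

```python
import hashlib
import os
import pickle
import tempfile


# Hypothetical stand-in model: least-squares slope/intercept in pure Python,
# so the pinning/testing pattern is visible without any ML library.
class TinyLinearModel:
    def fit(self, xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var = sum((x - mx) ** 2 for x in xs)
        self.slope = cov / var
        self.intercept = my - self.slope * mx
        return self

    def predict(self, xs):
        return [self.slope * x + self.intercept for x in xs]


def save_model(model, path):
    """Serialize the model; return a content hash that 'pins' this exact artifact."""
    blob = pickle.dumps(model)
    with open(path, "wb") as f:
        f.write(blob)
    return hashlib.sha256(blob).hexdigest()


def load_model(path, expected_hash):
    """Refuse to load an artifact that doesn't match the pinned hash."""
    with open(path, "rb") as f:
        blob = f.read()
    if hashlib.sha256(blob).hexdigest() != expected_hash:
        raise ValueError("model artifact does not match pinned hash")
    return pickle.loads(blob)


# "Test coverage" for a model: assert behavior on known data before promotion.
model = TinyLinearModel().fit([0, 1, 2, 3], [1, 3, 5, 7])  # underlying line: y = 2x + 1
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
pinned_hash = save_model(model, path)
restored = load_model(path, pinned_hash)
preds = restored.predict([10])
print(round(preds[0]))  # prints 21
```

The point is less the math than the workflow: the hash ties a deployment to one exact artifact, and the assertion on a known input is the kind of check a CI pipeline could run to approve a model, instead of eyeballing notebook output.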