May 22, 10:30 PM
Deep Learning Hands-On Series - Data Exploration and API First Design
By IBM Developer
In this tutorial we will go through how to analyze data, and how to tell when you can trust your results and when you can't. This is a crucial first step: if your assumptions about your data are wrong, there is nothing to build on. Putting something into production that doesn't make sense or hold up to scientific rigor is useless. Exploratory analysis ensures that what you are trying to build will actually work, before any engineering time is spent on it.
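As a minimal illustration of the kind of check we mean (our own sketch, not the workshop's exact code), the snippet below assumes a toy DataFrame with one numeric feature and a binary label, and uses descriptive statistics plus a two-sample t-test to ask whether there is any signal worth modeling at all.

```python
# Minimal exploratory-analysis sketch (illustrative; the dataset is synthetic).
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "feature": np.concatenate([rng.normal(0.0, 1.0, 500),
                               rng.normal(0.5, 1.0, 500)]),
    "label": np.array([0] * 500 + [1] * 500),
})

# Descriptive statistics: do the two classes even look different?
print(df.groupby("label")["feature"].describe())

# Hypothesis-based test: two-sample t-test on the feature split by label.
group_a = df.loc[df["label"] == 0, "feature"]
group_b = df.loc[df["label"] == 1, "feature"]
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# A large p-value suggests there may be no learnable signal in this feature,
# and building a system around it would waste engineering time.
```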
Once you are sure there is a pattern that can be learned, the next step is to bring in engineering resources so you can start building the system as quickly as possible. For most machine learning systems there is a lot of engineering work that does not depend on the science. Designing the API first and then figuring out everything else is important: it avoids wasted time and lets the science and engineering work proceed in parallel.
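To make "API first" concrete, here is a minimal sketch of a prediction endpoint that engineering can build and test against before the real model exists. The use of Flask and the request/response shapes here are our own assumptions for illustration, not necessarily what the workshop uses.

```python
# Minimal API-first sketch using Flask (framework choice is an assumption).
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    """Stand-in predictor so engineering can integrate against the API now.
    The real model replaces this function later without changing the contract."""
    return {"score": 0.5, "model_version": "stub"}

@app.route("/predict", methods=["POST"])
def predict_endpoint():
    payload = request.get_json(force=True)
    # The request and response shapes are the contract both teams agree on up front.
    result = predict(payload.get("features", {}))
    return jsonify(result)

if __name__ == "__main__":
    app.run(port=5000)
```

Because the contract is fixed first, the data scientist can keep iterating in parallel and only the body of `predict` changes when the real model ships.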
What you will learn
- Experiment specification and testing: descriptive statistics, hypothesis-based testing, model specification
- A practical introduction to deep learning: forward propagation and back propagation, batch versus mini-batch, the bottleneck principle, learning rates, activation functions (sigmoid and tanh; exploding and vanishing gradients; ELU, ReLU, leaky ReLU, and others), loss functions (see the forward-propagation sketch after this list)
- API First Design: what is an API? What does engineering need to know to get started?
- Clean Code: naming, functions, doc strings, classes, inheritance and composition, making modules, code structure
- Unit Testing: what is a test? test flavors, testing in machine learning and what you can test (see the testing sketch after this list)
- CI/CD: CI basics, CD basics
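As a taste of the deep learning material, the sketch below (our own illustration, not workshop code) shows a single forward-propagation step through one hidden layer in NumPy, with sigmoid, tanh, and ReLU activations side by side.

```python
# Forward propagation through one hidden layer in NumPy (illustrative sketch).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))         # mini-batch of 4 examples, 3 features
W1 = rng.normal(size=(3, 5)) * 0.1  # weights for a 5-unit hidden layer
b1 = np.zeros(5)

z1 = x @ W1 + b1                    # linear step
print("sigmoid:", sigmoid(z1)[0])   # squashes to (0, 1); can saturate (vanishing gradients)
print("tanh:   ", np.tanh(z1)[0])   # squashes to (-1, 1)
print("relu:   ", relu(z1)[0])      # piecewise linear; one remedy for vanishing gradients
```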
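And as one answer to "what can you test" in machine learning, here is a short pytest-style sketch (again our own illustration) that checks properties of a model's output (shape, valid range, determinism) rather than exact values.

```python
# Pytest-style sketch of what you can unit test in an ML system (illustrative).
import numpy as np

def predict_probabilities(X, weights):
    """Toy logistic model used as the system under test."""
    return 1.0 / (1.0 + np.exp(-(X @ weights)))

def test_output_shape_and_range():
    X = np.zeros((8, 3))
    weights = np.ones(3)
    probs = predict_probabilities(X, weights)
    assert probs.shape == (8,)                  # one probability per example
    assert np.all((probs >= 0) & (probs <= 1))  # probabilities stay in [0, 1]

def test_prediction_is_deterministic():
    X = np.arange(6, dtype=float).reshape(2, 3)
    weights = np.array([0.1, -0.2, 0.3])
    assert np.allclose(predict_probabilities(X, weights),
                       predict_probabilities(X, weights))
```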
Who should attend
This workshop is for folks who are somewhat new to engineering but deep in statistics and machine learning, and for people looking to make the transition from Jupyter notebooks to building systems. Some statistical knowledge will be assumed.
Prerequisites
Some knowledge of:
- Python
- Statistics
- Machine Learning
Speaker
Eric Schles is a senior data scientist with six years of full-time experience. During his time in industry he has worked in the anti-human-trafficking, cancer research, government, and big tech spaces. At Microsoft he worked as a consulting engineer building production systems for Fortune 500 and Fortune 100 clients all over the world. In government he worked for the Federal Reserve in San Francisco, the White House, and the General Services Administration, bringing data science into the procurement process, health systems, and human resource systems, and serving in various interagency consulting capacities. In addition, he worked on strategic cross-federal initiatives such as the White House data council, and on internal research to bring data science to federal agencies and to assess their readiness for it.
Host
Upkar Lidder, IBM Data Science and AI Developer Advocate, https://www.linkedin.com/in/lidderupk/