Introduction to Python for Data Science
In this bootcamp, students will learn core Python concepts
and understand how to use the language for data science.
This Python bootcamp focuses specifically on data science. Throughout the course, you will practice the basics, learn more advanced techniques, and finish with a project that puts your newfound knowledge to use. In five days (5–6 hours per day), you can build and polish your Python skills for data science and analysis.
You will earn a data science certificate with 7 Continuing Education Units through
The University of New Mexico.
What you will learn
This module focuses on gathering data from different sources. We will explore commonly used Python data structures such as DataFrames and dictionaries, and learn to work with data stored in different formats.
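As a small taste of what this module covers, here is a sketch of loading tabular data into a pandas DataFrame and converting it to a dictionary. The CSV text and column names are made up for illustration:

```python
import io

import pandas as pd

# Sample CSV text standing in for a file on disk (made-up data)
csv_text = "name,score\nAda,91\nGrace,88\n"

# Load the tabular data into a DataFrame
df = pd.read_csv(io.StringIO(csv_text))

# Convert the DataFrame to a plain dictionary keyed by column
as_dict = df.to_dict(orient="list")
print(as_dict["name"])  # ['Ada', 'Grace']
```

In practice you would pass a file path (or URL) to `pd.read_csv` instead of an in-memory string.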
In this module, we'll look at Python's powerful data handling techniques. You'll be introduced to methods for extracting the information you need from data stored in complex data structures.
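For example, pulling fields out of a nested structure often comes down to comprehensions and built-ins like `sum`. The records below are invented for illustration:

```python
# A hypothetical nested structure: a list of records with nested dicts and lists
records = [
    {"user": {"name": "Ada", "city": "Albuquerque"}, "visits": [3, 5]},
    {"user": {"name": "Grace", "city": "Santa Fe"}, "visits": [2]},
]

# Pull one nested field out of every record with a list comprehension
names = [r["user"]["name"] for r in records]

# Aggregate across records: total visits over the whole list
total_visits = sum(sum(r["visits"]) for r in records)

print(names)         # ['Ada', 'Grace']
print(total_visits)  # 10
```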
Once we've loaded our data into the Python environment, we'll learn how to process it before it can be used for further analysis. This includes cleaning the data, reshaping it, merging data from multiple sources or data structures, and applying transformations.
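A minimal sketch of two of those steps, using pandas and two small made-up tables: filling a missing value (cleaning) and then joining the tables on a shared key (merging):

```python
import pandas as pd

# Two small, made-up tables that share an "id" column
sales = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, None, 7.5]})
customers = pd.DataFrame({"id": [1, 2, 3], "name": ["Ada", "Grace", "Alan"]})

# Clean: replace the missing amount with 0
sales["amount"] = sales["amount"].fillna(0.0)

# Merge: combine the two sources on the shared key
merged = pd.merge(sales, customers, on="id")
print(merged)
```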
In this module, we will learn about Python's powerful data aggregation and grouping techniques and how to use them to extract useful insights from our data.
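The core pattern here is pandas' split-apply-combine via `groupby`. A sketch with invented sales figures:

```python
import pandas as pd

# Made-up sales records, one row per transaction
df = pd.DataFrame({
    "region": ["east", "west", "east", "west"],
    "amount": [10, 20, 30, 40],
})

# Group rows by region, then aggregate each group with a sum
totals = df.groupby("region")["amount"].sum()
print(totals)
```

Swapping `.sum()` for `.mean()`, `.count()`, or `.agg(...)` gives other per-group summaries.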
In this module, we will focus on understanding how to dissect and explore our data to uncover useful insights. We will also look at different visualization techniques to help present our data and findings and get acquainted with popular visualization packages in Python.
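One of the popular packages referred to here is matplotlib; a minimal sketch of a labeled line plot follows (the values are made up, and the non-interactive `Agg` backend is set so the script runs without a display):

```python
import matplotlib

matplotlib.use("Agg")  # non-interactive backend: render to a file, not a window
import matplotlib.pyplot as plt

# Made-up values to plot
values = [1, 4, 2, 8]

fig, ax = plt.subplots()
ax.plot(values, marker="o")
ax.set_title("A simple line plot")
ax.set_xlabel("index")
ax.set_ylabel("value")
fig.savefig("plot.png")  # write the figure to disk
```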
This module focuses on RESTful APIs: how they work and how they can be used to communicate with sites like Twitter and Reddit, which expose endpoints for retrieving data.
Once we're comfortable with RESTful APIs, we'll look at how they can be used to get data from websites. We'll combine the data we collect with the data loading, parsing, cleaning, and exploration techniques we learned previously to build an end-to-end data pipeline.
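REST endpoints typically return JSON, so a core step in such a pipeline is parsing the response body. The payload and field names below are invented; in the course the text would come from an HTTP call (e.g. `requests.get(url).text`) rather than a string literal:

```python
import json

# A JSON body like one a REST endpoint might return (made-up data).
# Over HTTP this would come from something like:
#   payload = requests.get("https://api.example.com/posts").text
payload = '{"posts": [{"title": "hello", "ups": 10}, {"title": "world", "ups": 3}]}'

# Parse the JSON text into Python data structures
data = json.loads(payload)

# Extract just the fields the pipeline needs
titles = [post["title"] for post in data["posts"]]
print(titles)  # ['hello', 'world']
```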