Cloudera Developer Training for Spark and Hadoop I (CDTSH1)
This four-day hands-on training course delivers the key concepts and expertise participants need to ingest and process data on a Hadoop cluster using the most up-to-date tools and techniques. Employing Hadoop ecosystem projects such as Spark, Hive, Flume, Sqoop, and Impala, this course is the best preparation for the real-world challenges faced by Hadoop developers. Participants learn to identify which tool is the right one to use in a given situation and gain hands-on experience developing with those tools.
This course is excellent preparation for the CCP: Data Engineer certification. Although we recommend further training and experience before attempting the exam (we recommend Developer Training for Spark and Hadoop II: Advanced Techniques), this course covers many of the subjects tested in the CCP: Data Engineer exam. CCP: Data Engineer lets you demonstrate your skills in a rigorous hands-on exam and promote those skills to potential and current employers.
This course is designed for developers and engineers who have programming experience. Apache Spark examples and hands-on exercises are presented in Scala and Python, so the ability to program in one of those languages is required. Basic familiarity with the Linux command line is assumed. Basic knowledge of SQL is helpful; prior knowledge of Hadoop is not required.
After registration, students will have access to two exclusive and free online preparatory courses that teach Python and Scala language and syntax basics. These courses are designed for those who do not yet know either language, or who wish to take a refresher before the course begins.
Through instructor-led discussion and interactive, hands-on exercises, participants will learn Apache Spark and how it integrates with the entire Hadoop ecosystem, including:
- How data is distributed, stored, and processed in a Hadoop cluster
- How to use Sqoop and Flume to ingest data
- How to process distributed data with Apache Spark
- How to model structured data as tables in Impala and Hive
- How to choose the best data storage format for different data usage patterns
- Best practices for data storage
Duration: 4 days
Price (excl. VAT)
- United Kingdom: £2,195