Senior Data Engineer (P4026) Engineering - Cincinnati, OH at Geebo

Senior Data Engineer (P4026)

As a Senior Data Engineer, you will have the opportunity to build solutions that ingest, transform, store, and distribute our big data to be consumed by data scientists and our products.
Our data engineers use PySpark/Python, Databricks, Hadoop, Hive, and other data engineering technologies and visualization tools to deliver data capabilities and services to our scientists, products, and tools.
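The ingest → transform → store pattern described above can be sketched in plain Python. This is a minimal, hypothetical illustration of the pipeline shape, not the company's actual code; a production pipeline at this scale would use PySpark/Databricks rather than in-memory lists.

```python
# Hypothetical sketch of an ingest -> transform -> store pipeline.
# All names and data are illustrative assumptions, not 84.51's systems.

def ingest(raw_rows):
    """Parse raw CSV-like records into dictionaries."""
    fields = ("item", "units", "price")
    return [dict(zip(fields, row.split(","))) for row in raw_rows]

def transform(rows):
    """Cast types and derive a revenue column."""
    out = []
    for row in rows:
        units, price = int(row["units"]), float(row["price"])
        out.append({"item": row["item"], "units": units, "revenue": units * price})
    return out

def store(rows, table):
    """Append transformed rows to an in-memory stand-in for a table."""
    table.extend(rows)
    return table

table = []
raw = ["apples,3,0.50", "bread,2,2.25"]
store(transform(ingest(raw)), table)
```

In a Spark-based stack the same three stages typically map to reading a source into a DataFrame, applying column transformations, and writing to a managed table.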
84.51 has a hybrid work schedule where employees come into the office 3 days per week, Monday through Wednesday.
Candidates must live in either Cincinnati, OH or Chicago, IL for this position.
Responsibilities

Take ownership of features and drive them to completion through all phases of the entire 84.51 SDLC. This includes internal- and external-facing applications as well as process improvement activities:
Participate in the design and development of Databricks and cloud-based solutions.
Implement automated unit and integration testing.
Collaborate with architecture and lead engineers to ensure consistent development practices.
Provide mentoring to junior engineers.
Participate in retrospective reviews.
Participate in the estimation process for new work and releases.
Collaborate with other engineers to solve and bring new perspectives to complex problems.
Drive improvements in data engineering practices, procedures, and ways of working.
Embrace new technologies and an ever-changing environment.
Requirements

  • 4 years of professional data development experience
  • 3 years developing with Databricks or Hadoop/HDFS
  • 3 years of experience with PySpark/Spark
  • 3 years of experience with SQL
  • 3 years of experience developing with Python, Java, or Scala
  • Full understanding of ETL and data warehousing concepts
  • Experience with CI/CD
  • Experience with version control software
  • Strong understanding of Agile principles (Scrum)
  • Bachelor's degree (Computer Science, Management Information Systems, Mathematics, Business Analytics, or STEM)

Bonus points for experience with the following:

  • Azure
  • Databricks Delta Tables, Delta Lake, Delta Live Tables
  • Relational data modeling
  • Python library development
  • Structured Streaming (Spark or otherwise)
  • Kafka and/or Azure Event Hub
  • GitHub SaaS / GitHub Actions
  • Snowflake
  • BI tooling (Tableau, Power BI, Cognos, etc.)

Recommended Skills: Agile Methodology, Apache Hadoop, Apache Hive, Apache Kafka, Apache Spark, Architecture

Estimated Salary: $20 to $28 per hour based on qualifications.
