IBM Bulgaria
Business Park Sofia, Building 5B, 1766 Mladost 4, Sofia

Cloud Data Engineer

Job description

Introduction
At IBM, work is more than a job – it’s a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you’ve never thought possible. Are you ready to lead in this new era of technology and solve some of the world’s most challenging problems? If so, let’s talk.

Requirements for the position

Your Role and Responsibilities

Our teams of Data Engineers work directly with customers, so each of us must be able to explain technical content to people who are less familiar with the technology. You will take part in regular remote meetings and visit clients on-site when needed to discuss the specific solutions you are going to implement on the client’s side. We discover the data-driven opportunities that affect our clients’ business and validate them with stakeholders to mitigate business and technology risk. Our business is to build data platforms and intelligent applications that help clients leverage the full potential of their data. Together with a team of Data Engineers, you will implement a production-grade data platform based on IoT data lakes and machine-learning-based analytics.

Responsibilities:

– Creating innovative new applications and building AI solutions for complex, distributed tasks previously thought too difficult for computers;
– Building systems for the ingestion, transformation and storage of vast amounts of data;
– Designing, architecting and implementing modern cloud-based pipelines for our customers;
– Making data accessible and usable for a new wave of data-powered apps and services.

Required Technical and Professional Expertise

– 3 years of professional programming experience and hands-on experience in building modern data platforms/pipelines;
– Programming languages: Python, Java, Scala;
– Frameworks: Spark/PySpark, Hadoop, Hive, Pig, Kafka;
– Tools: Jupyter Notebooks, BI Tools;
– Databases: NoSQL, relational databases (columnar & row-oriented), graph databases;
– Data pipelining: ETL, ELT;
– Consultancy experience;
– Previous experience gained in mid-size/large, international companies;
– Fluent English – both written and verbal.
