16 Spark jobs in Oman
Big Data Developer
Posted today
Job Description
Company: RiDiK CLPS (a subsidiary of CLPS Incorporation, Nasdaq: CLPS)
Job Description:
We are seeking a Senior Big Data Engineer with deep expertise in on-premises Big Data platforms, specifically the Cloudera ecosystem, and a strong background in the telecom domain. The ideal candidate will have 12–15 years of experience building and managing large-scale data lakes, developing batch and real-time pipelines, and working with distributed data systems in telecom environments.
While the primary focus is on on-premises Cloudera-based data lakes, familiarity with cloud data services (AWS, Azure, or GCP) is considered an added advantage.
Key Responsibilities:
- Design, build, and maintain on-premises data pipelines using Apache Spark, Hive, and Python on Cloudera (an illustrative pipeline sketch follows this list).
- Develop real-time data ingestion workflows using Kafka, tailored for telecom datasets (e.g., usage logs, CDRs).
- Manage job orchestration and scheduling using Oozie, with data access enabled via Hue and secured through Ranger.
- Implement and manage data security policies (Kerberos, Ranger) to ensure compliance and controlled access.
- Develop and expose REST APIs for downstream integration and data access.
- Ensure performance tuning, resource optimization, and high availability of the Cloudera platform.
- Collaborate with data architects, engineers, and business stakeholders to deliver end-to-end solutions in a telecom context.
- Support data migration or integration efforts with cloud platforms where applicable (added advantage).
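By way of illustration, here is a minimal PySpark sketch of the kind of batch CDR pipeline the first responsibility describes: reading raw files landed on HDFS, applying light cleansing, and writing to a partitioned Hive table. All paths, table names, and column names are hypothetical placeholders, and a real Cloudera deployment would add Kerberos-aware configuration and YARN resource settings.

```python
# Minimal PySpark sketch of a batch CDR pipeline on a Cloudera-style cluster.
# Paths, table names, and schema fields are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("cdr-daily-batch")          # hypothetical job name
    .enableHiveSupport()                 # write results into Hive-managed tables
    .getOrCreate()
)

# Read raw CDRs landed on HDFS (CSV here; real feeds may be delimited text or Avro)
raw = (
    spark.read
    .option("header", "true")
    .csv("hdfs:///data/landing/cdr/2025-01-01/")   # hypothetical landing path
)

# Basic cleansing and enrichment
cdr = (
    raw.withColumn("call_ts", F.to_timestamp("call_start", "yyyy-MM-dd HH:mm:ss"))
       .withColumn("duration_sec", F.col("duration_sec").cast("int"))
       .filter(F.col("duration_sec") >= 0)
       .withColumn("call_date", F.to_date("call_ts"))
)

# Write to a partitioned table for downstream Hive/Impala queries
(
    cdr.write
       .mode("append")
       .format("parquet")
       .partitionBy("call_date")
       .saveAsTable("telecom_dl.cdr_daily")         # hypothetical database.table
)
```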
Required Skills & Experience:
- 12–15 years of experience in Big Data engineering, with hands-on focus on on-premises data lake environments.
- Extensive Telecom domain knowledge – including data models and pipelines related to CDRs, BSS/OSS, customer and network data.
- Strong practical experience with:
- Cloudera (CDH/CDP) components: Spark, Hive, HDFS, HBase, Impala
- Kafka: configuration, topic management, producer/consumer setup (see the Kafka sketch after this list)
- Python for data transformations and automation
- Access control and metadata management using Ranger
- Proficient in performance tuning, resource management, and security hardening of Big Data platforms.
- Experience with API development and integration for data services.
- Optional/Added Advantage: Exposure to cloud platforms (AWS EMR, Azure HDInsight, GCP Dataproc), hybrid architecture understanding.
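To make the Kafka bullet above concrete, here is a hedged sketch of topic creation and a producer/consumer pair using the kafka-python client. Broker addresses, the topic name, partition counts, and the payload are all hypothetical, and a secured Cloudera cluster would additionally require SASL/Kerberos settings on each client.

```python
# Hedged sketch of Kafka topic setup and a producer/consumer pair (kafka-python).
# Brokers, topic name, and payload are hypothetical placeholders.
import json
from kafka import KafkaProducer, KafkaConsumer
from kafka.admin import KafkaAdminClient, NewTopic

BROKERS = ["broker1:9092", "broker2:9092"]   # hypothetical bootstrap servers
TOPIC = "cdr-events"                          # hypothetical topic

# Topic management: size partitions/replication for the expected event volume
admin = KafkaAdminClient(bootstrap_servers=BROKERS)
admin.create_topics([NewTopic(name=TOPIC, num_partitions=12, replication_factor=3)])

# Producer: publish one CDR-like event as JSON
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"msisdn": "96890000000", "duration_sec": 42})
producer.flush()

# Consumer: read events as part of a named consumer group
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    group_id="cdr-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)   # hand off to the ingestion pipeline in a real job
```

Note that `create_topics` raises an error if the topic already exists; a production setup would check existing topics first and manage configuration through the platform's standard tooling.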
Preferred Qualities:
- Strong problem-solving and troubleshooting skills in distributed data systems.
- Ability to work independently in a fast-paced project environment.
- Effective communicator with both technical and business teams.
- Experience in mentoring junior engineers or leading small technical teams is a plus.
About CLPS RiDiK
RiDiK is a global technology solutions provider and a subsidiary of CLPS Incorporation (NASDAQ: CLPS), delivering cutting-edge end-to-end services across banking, wealth management, and e-commerce. With deep expertise in AI, cloud, big data, and blockchain, we support clients across Asia, North America, and the Middle East in driving digital transformation and achieving sustainable growth. Operating from regional hubs in 10 countries and backed by a global delivery network, we combine local insight with technical excellence to deliver real, measurable impact. Join RiDiK and be part of an innovative, fast-growing team shaping the future of technology across industries.
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology
- Industries: Telecommunications
Big Data Specialist
Posted 2 days ago
Job Description
Big Data Expert
Job Summary
We’re looking for a Big Data Expert to support our team in Muscat, Oman. This role offers the opportunity to work on meaningful projects, collaborate with talented colleagues, and contribute to the success of a growing company. If you’re someone who takes initiative, values continuous learning, and thrives in a collaborative setting, we’d love to hear from you.
Role Description
Job Description:
We are seeking a Senior Big Data Engineer with deep expertise in on-premises Big Data platforms, specifically the Cloudera ecosystem, and a strong background in the telecom domain. The ideal candidate will have 12–15 years of experience building and managing large-scale data lakes, developing batch and real-time pipelines, and working with distributed data systems in telecom environments.
While the primary focus is on on-premises Cloudera-based data lakes, familiarity with cloud data services (AWS, Azure, or GCP) is considered an added advantage.
Key Responsibilities:
- Design, build, and maintain on-premises data pipelines using Apache Spark, Hive, and Python on Cloudera.
- Develop real-time data ingestion workflows using Kafka, tailored for telecom datasets (e.g., usage logs, CDRs); a streaming-ingestion sketch follows this list.
- Manage job orchestration and scheduling using Oozie, with data access enabled via Hue and secured through Ranger.
- Implement and manage data security policies (Kerberos, Ranger) to ensure compliance and controlled access.
- Develop and expose REST APIs for downstream integration and data access.
- Ensure performance tuning, resource optimization, and high availability of the Cloudera platform.
- Collaborate with data architects, engineers, and business stakeholders to deliver end-to-end solutions in a telecom context.
- Support data migration or integration efforts with cloud platforms where applicable (added advantage).
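As referenced in the real-time ingestion bullet above, the sketch below shows one common pattern: Spark Structured Streaming consuming usage-log events from Kafka and landing them on HDFS as Parquet. It assumes the spark-sql-kafka connector package is available on the cluster; brokers, topic, schema, and paths are hypothetical placeholders.

```python
# Hedged sketch of real-time ingestion with Spark Structured Streaming from Kafka.
# Brokers, topic, checkpoint path, and schema are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.appName("usage-log-stream").getOrCreate()

event_schema = StructType([
    StructField("msisdn", StringType()),
    StructField("cell_id", StringType()),
    StructField("bytes_used", IntegerType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")  # hypothetical
    .option("subscribe", "usage-logs")                               # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Land the parsed stream on HDFS as Parquet; the checkpoint dir makes it restartable
query = (
    events.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/streams/usage_logs/")              # hypothetical
    .option("checkpointLocation", "hdfs:///checkpoints/usage_logs/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```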
Required Skills & Experience:
- 12–15 years of experience in Big Data engineering, with hands-on focus on on-premises data lake environments.
- Extensive Telecom domain knowledge – including data models and pipelines related to CDRs, BSS/OSS, customer and network data.
- Strong practical experience with:
- Cloudera (CDH/CDP) components: Spark, Hive, HDFS, HBase, Impala
- Kafka: configuration, topic management, producer/consumer setup
- Python for data transformations and automation
- Job orchestration via Oozie
- Access control and metadata management using Ranger
- Proficient in performance tuning, resource management, and security hardening of Big Data platforms.
- Experience with API development and integration for data services.
- Optional/Added Advantage: Exposure to cloud platforms (AWS EMR, Azure HDInsight, GCP Dataproc), hybrid architecture understanding.
Please send your updated CV along with the following information to
If you are not interested, please refer this opportunity to your friends.
Full Name:
Current Location:
Visa Status:
Total years of experience:
Relevant years of experience:
Current salary:
Expected Salary:
Notice period:
Reason for leaving:
About CLPS RiDiK
RiDiK is a global technology solutions provider and a subsidiary of CLPS Incorporation (NASDAQ: CLPS), delivering cutting-edge end-to-end services across banking, wealth management, and e-commerce. With deep expertise in AI, cloud, big data, and blockchain, we support clients across Asia, North America, and the Middle East in driving digital transformation and achieving sustainable growth. Operating from regional hubs in 10 countries and backed by a global delivery network, we combine local insight with technical excellence to deliver real, measurable impact. Join RiDiK and be part of an innovative, fast-growing team shaping the future of technology across industries.
Thanks & Regards,
Rituparna Das
IT Recruiter
CLPS Inc.
India HP/WhatsApp: +91
India Office: +65 68178695
Big Data Expert – Cloudera & Cloud Platforms
Posted 2 days ago
Job Description
Big Data Expert – Cloudera & Cloud Platforms
Company: RiDiK CLPS
Job Description:
We are looking for a Senior Big Data Engineer with 12–15 years of experience in building and managing large-scale on-premises data lakes using the Cloudera ecosystem, particularly within the telecom domain. The role involves developing batch and real-time pipelines, optimizing distributed data systems, and ensuring secure, high-performance data infrastructure.
Key Responsibilities:
- Design and maintain pipelines using Spark, Hive, Python on Cloudera.
- Develop real-time ingestion workflows with Kafka (e.g., CDRs, usage logs).
- Manage orchestration with Oozie, access control with Ranger, and APIs for downstream integration.
- Ensure security compliance (Kerberos, Ranger), performance tuning, and resource optimization.
- Collaborate with cross-functional teams to deliver telecom-focused data solutions.
- Exposure to cloud migration/integration (AWS, Azure, GCP) is a plus.
Requirements:
- 12–15 years in Big Data Engineering with strong Cloudera expertise.
- Deep telecom domain knowledge (CDRs, OSS/BSS, network data).
- Hands-on with Kafka, Python, Hive, HDFS, HBase, Oozie, Impala.
- Skilled in data security, orchestration, and performance optimization.
- Cloud experience is an added advantage.
About CLPS RiDiK:
RiDiK is a global technology solutions provider and a subsidiary of CLPS Incorporation (NASDAQ: CLPS), delivering cutting-edge end-to-end services across banking, wealth management, and e-commerce. With deep expertise in AI, cloud, big data, and blockchain, we support clients across Asia, North America, and the Middle East in driving digital transformation and achieving sustainable growth. Operating from regional hubs in 10 countries and backed by a global delivery network, we combine local insight with technical excellence to deliver real, measurable impact. Join RiDiK and be part of an innovative, fast-growing team shaping the future of technology across industries.
Data Engineer
Posted 5 days ago
Job Description
Central Bank of Oman is seeking to Recruit Competent, Committed, Self-motivated and Enthusiastic Omani Candidate as Data Engineer.
Job Purpose:
To design, build, and maintain robust data pipelines and infrastructure that enable the efficient acquisition, processing, and dissemination of high-quality data to support Central Bank of Oman’s regulatory, supervisory, and analytical functions. The Data Engineer will play a critical role in enabling data-driven decision-making through establishing and developing scalable solutions aligned with best practices in data governance, analytics, security, and compliance.
Key duties and Tasks:
- Design, develop, and maintain ETL/ELT pipelines to ingest, transform, and store structured and unstructured data from diverse internal and external sources (a simplified sketch follows this list).
- Collaborate with data scientists, analysts, and business units to understand data needs and implement appropriate solutions.
- Ensure data quality, integrity, and security across all data engineering processes and systems.
- Implement and optimize data warehousing and lake-house solutions for high-performance querying and analytics.
- Develop APIs or interfaces for secure data access and integration.
- Automate data workflows and monitor system performance and data pipeline health.
- Contribute to CBO’s data governance framework, including metadata management, data cataloguing, and lineage tracking.
- Document data architecture, pipelines, and procedures in accordance with CBO policies and compliance requirements.
- Stay abreast of emerging technologies, trends, and best practices in data engineering and financial regulation.
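As a simplified illustration of the ETL and data-quality duties above (not CBO's actual tooling), the sketch below extracts a CSV feed with pandas, applies basic quality checks, and loads the curated result as Parquet. File paths, column names, and validation rules are hypothetical.

```python
# Hedged pandas sketch of a small ETL step with basic data-quality checks.
# File paths, column names, and validation rules are hypothetical placeholders.
import pandas as pd

REQUIRED_COLUMNS = ["report_date", "institution_id", "total_assets"]

def extract(path: str) -> pd.DataFrame:
    # Extract: read a raw submission file
    return pd.read_csv(path, parse_dates=["report_date"])

def validate(df: pd.DataFrame) -> pd.DataFrame:
    # Quality checks: required columns present, no missing keys, no negative balances
    missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"missing required columns: {missing}")
    if df["institution_id"].isna().any():
        raise ValueError("null institution_id found")
    return df[df["total_assets"] >= 0]

def load(df: pd.DataFrame, out_path: str) -> None:
    # Load: persist the curated dataset for analytics (Parquet output needs pyarrow)
    df.to_parquet(out_path, index=False)

if __name__ == "__main__":
    frame = extract("raw/regulatory_returns.csv")          # hypothetical input
    frame = validate(frame)
    load(frame, "curated/regulatory_returns.parquet")       # hypothetical output
```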
Minimum Qualifications and Experience:
- Bachelor’s degree in Computer Science, Information Systems, Software Engineering, Data Science, or a related field.
- 3 to 5 years of professional experience in a data engineering, data warehousing, or similar technical role.
Desirable Qualifications and Experience:
- Master’s degree in a relevant field (e.g., Data Engineering, Data Science, Financial Technology).
- Industry certifications such as:
- Google Cloud Professional Data Engineer or AWS Certified Data Analytics – Specialty or Microsoft Azure Data Engineer Associate
- Certified Data Management Professional (CDMP)
- Experience working in a financial institution, or regulatory agency.
- Exposure to financial data standards, reporting taxonomies (e.g., XBRL), and regulatory technology (RegTech and Subtech) solutions.
- Experience implementing data governance frameworks and working with data stewardship programs.
- Familiarity with business intelligence tools (e.g., Power BI, Tableau) for data consumption.
- Participation in cross-agency data sharing initiatives or regulatory reporting automation projects.
Please submit your application before 23 July 2025.
Senior Data Engineer
Posted 2 days ago
Job Description
Position - Senior Data Engineer
Location - Muscat, Oman
Experience - 6+ Years
Job Description -
We are seeking a Senior Data Engineer with strong expertise in Informatica (DEI, EDC, CDC), API development, and cloud platforms to support scalable, secure data integration in a telecom environment.
- 5+ years of data engineering experience.
- Hands-on expertise with Informatica DEI, EDC, CDC.
- Strong knowledge of cloud data services and ETL/ELT best practices.
- Proficient in SQL, REST APIs, and scripting (Python or Shell).
Key Responsibilities:
- Design and develop data pipelines using Informatica DEI, EDC, and CDC.
- Build and manage RESTful APIs for data integration and consumption (see the API sketch after this list).
- Deploy and optimize ETL workflows in cloud environments (AWS, Azure, or GCP).
- Collaborate with data architects and analysts to deliver high-quality, reliable data solutions.
- Ensure data governance and lineage using Informatica tools.
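As referenced in the RESTful API bullet above, a minimal Flask sketch of a data-consumption endpoint is shown below. It is illustrative only: the route, token check, and in-memory "dataset" are hypothetical, and a production service would sit behind the organisation's API gateway and authentication rather than a hard-coded token.

```python
# Minimal Flask sketch of a read-only data API with a simple token check.
# The route, token, and in-memory dataset are hypothetical placeholders.
from flask import Flask, jsonify, request, abort

app = Flask(__name__)
API_TOKEN = "change-me"          # in practice, issued/validated by an API gateway

# Stand-in for a curated table exposed to downstream consumers
SUBSCRIBERS = [
    {"msisdn": "96890000001", "plan": "prepaid", "status": "active"},
    {"msisdn": "96890000002", "plan": "postpaid", "status": "suspended"},
]

@app.route("/api/v1/subscribers", methods=["GET"])
def list_subscribers():
    # Reject requests without the expected bearer token
    if request.headers.get("Authorization") != f"Bearer {API_TOKEN}":
        abort(401)
    status = request.args.get("status")          # optional filter parameter
    rows = [s for s in SUBSCRIBERS if status is None or s["status"] == status]
    return jsonify(rows)

if __name__ == "__main__":
    app.run(port=8080)
```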
About CLPS RiDiK
RiDiK is a global technology solutions provider and a subsidiary of CLPS Incorporation (NASDAQ: CLPS), delivering cutting-edge end-to-end services across banking, wealth management, and e-commerce. With deep expertise in AI, cloud, big data, and blockchain, we support clients across Asia, North America, and the Middle East in driving digital transformation and achieving sustainable growth. Operating from regional hubs in 10 countries and backed by a global delivery network, we combine local insight with technical excellence to deliver real, measurable impact. Join RiDiK and be part of an innovative, fast-growing team shaping the future of technology across industries.
Senior Data Engineer – Informatica & Cloud Specialist
Posted today
Job Description
Senior Data Engineer – Informatica & Cloud Specialist
Company: RiDiK CLPS
Skills : Informatica (DEI, EDC, CDC), API development, and cloud platforms
Job Description:
We are seeking a Senior Data Engineer with strong expertise in Informatica (DEI, EDC, CDC), API development, and cloud platforms to support scalable, secure data integration in a telecom environment.
Key Responsibilities:
- Design and develop data pipelines using Informatica DEI, EDC, and CDC.
- Build and manage RESTful APIs for data integration and consumption.
- Deploy and optimize ETL workflows in cloud environments (AWS, Azure, or GCP).
- Collaborate with data architects and analysts to deliver high-quality, reliable data solutions.
- Ensure data governance and lineage using Informatica tools.
Required Skills & Experience:
- 7+ years of data engineering experience.
- Hands-on expertise with Informatica DEI, EDC, CDC.
- Strong knowledge of cloud data services and ETL/ELT best practices.
- Proficient in SQL, REST APIs, and scripting (Python or Shell).
About CLPS RiDiK:
RiDiK is a global technology solutions provider and a subsidiary of CLPS Incorporation (NASDAQ: CLPS), delivering cutting-edge end-to-end services across banking, wealth management, and e-commerce. With deep expertise in AI, cloud, big data, and blockchain, we support clients across Asia, North America, and the Middle East in driving digital transformation and achieving sustainable growth. Operating from regional hubs in 10 countries and backed by a global delivery network, we combine local insight with technical excellence to deliver real, measurable impact. Join RiDiK and be part of an innovative, fast-growing team shaping the future of technology across industries.
Data Infrastructure Engineer
Posted 8 days ago
Job Description
This role is open exclusively to Omani nationals.
We are looking for a talented and experienced Data Infrastructure Engineer to join our team. This role focuses on building, deploying, maintaining, and optimizing data infrastructure that supports our data-driven operations. You will work with large-scale data processing systems, ensuring they are robust, scalable, and high-performing in production environments. The ideal candidate has strong technical skills in distributed data systems, particularly in deploying and managing these systems, and is passionate about creating efficient data workflows that drive actionable insights.
Key responsibilities
- Design, develop, and maintain scalable data infrastructure solutions to support large-scale data processing and analytics.
- Deploy and configure distributed data systems, including data storage (e.g., HDFS, cloud storage) and data processing frameworks (e.g., Hadoop, Spark), ensuring they are resilient, optimised, and production-ready.
- Build and automate ETL workflows, managing data extraction, transformation, and loading processes to ensure data quality, consistency, and availability.
- Monitor the health and performance of data infrastructure, proactively troubleshooting and resolving performance bottlenecks and operational issues to maintain system stability.
- Optimise and scale infrastructure, leveraging containerisation (e.g., Docker) and orchestration (e.g., Kubernetes) to manage resources efficiently and support growing data volumes.
- Implement data governance practices and enforce best practices for data handling, quality, security, and compliance.
- Collaborate with data engineers, analysts, and cross-functional teams to understand data requirements and support data accessibility across the organisation.
- Stay updated on the latest trends and tools in data infrastructure, evaluating and recommending new technologies to enhance our capabilities.
Requirements
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 2+ years of experience in building, deploying, and managing data infrastructure, particularly in distributed data systems.
- Proficiency in big data technologies, including Hadoop, Spark, Hive, or related frameworks.
- Strong programming skills in languages such as Python, Java, or Scala.
- Hands-on experience with cloud platforms (e.g., Google Cloud, AWS, Azure) and cloud storage solutions.
- Knowledge of data formats such as Parquet, Avro, or ORC, and data querying tools like HiveQL.
- Familiarity with data pipeline orchestration tools, such as Apache Airflow or Luigi (an example DAG sketch follows this list).
- Excellent problem-solving skills and the ability to work collaboratively within a team environment.
- Experience with additional data engineering tools like Apache Kafka, Apache NiFi, or Flume.
- Telecom industry experience, particularly in building data solutions for network performance or customer analytics.
- Familiarity with DevOps practices and CI/CD pipelines in a data engineering environment.
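As mentioned in the orchestration bullet above, a minimal Apache Airflow DAG sketch is shown below (Airflow 2.x style). The DAG id, schedule, and task bodies are hypothetical placeholders; real tasks would trigger Spark jobs, ingestion scripts, or quality checks rather than print statements.

```python
# Hedged sketch of a daily ETL orchestration DAG using Apache Airflow (2.x).
# DAG id, task callables, and schedule are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # pull raw files from a source system (placeholder)
    print("extracting raw usage data")

def transform(**context):
    # cleanse and aggregate (placeholder)
    print("transforming usage data")

def load(**context):
    # load curated data into the warehouse/lake (placeholder)
    print("loading curated data")

with DAG(
    dag_id="daily_usage_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",        # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```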
What we offer
- Competitive Compensation: A comprehensive salary package with performance incentives.
- Career Development: Access to ongoing training, certifications, and professional development resources.
- Health and Wellness: Full health insurance, retirement plans, and wellness programs.
- Innovative Projects: Opportunities to work on high-impact data solutions with a talented, driven team.
- Cutting-Edge Environment: A collaborative culture that values creativity, learning, and innovation.
- Shape the future of digital ecosystems: Be part of a team that's redefining digital ecosystems management to make it intelligent, adaptive, and capable of supporting future demands.
- Innovate for impact: Work on cutting-edge technologies like AI, IoT, and data analytics to address real-world challenges in infrastructure.
- Empower smart cities: Contribute to building the foundation for cognitive cities - urban environments that are resilient, efficient, and adaptable.
- Grow with us: Join a dynamic, mission-driven team that values collaboration, innovation, and growth. We are committed to creating a workplace where you can thrive, learn, and make a meaningful impact.
Data Science Engineer
Posted 8 days ago
Job Description
- Bachelor's degree in Computer Science or a related field.
- 1-2 years of experience in data analysis or data engineering.
Skills / Knowledge:
- Familiarity with programming languages like Python, R, or SQL.
- Solid understanding of statistical and machine learning concepts.
- Strong problem-solving and analytical skills.
- Excellent data visualization and presentation skills.
- Excellent communication and technical documentation abilities.
- Strong leadership and mentoring abilities.
- Ability to work collaboratively and learn quickly.
- Well-developed interpersonal skills and excellent communication skills in English.
- Respect & Integrity
- Problem Solving & Decision Making
The Junior Data Science Engineer supports data analysis projects by building and maintaining models, pipelines, and tools under the guidance of senior team members.
Key Accountabilities & Responsibilities
- Assist in designing, developing, and implementing advanced data models and predictive analytics to solve business problems and drive decision-making.
- Build, maintain, and optimize scalable and efficient data pipelines to facilitate seamless data integration and processing across the organization.
- Conduct comprehensive exploratory data analysis to uncover insights, trends, and patterns, and present findings in a clear and actionable manner.
- Support the implementation and optimization of machine learning algorithms, ensuring they meet performance, accuracy, and scalability requirements (a minimal training sketch follows this list).
- Work closely with cross-functional teams, including business stakeholders and senior data scientists, to understand requirements and deliver tailored data-driven solutions.
- Develop and maintain thorough documentation of data processes, models, and methodologies to ensure reproducibility and transparency.
- Prepare detailed reports and dashboards to communicate insights and progress to stakeholders.
- Stay up-to-date with emerging trends, tools, and methodologies in data science and analytics.
- Actively contribute to improving teamwork flows and adopting innovative approaches to solving challenges.
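To illustrate the modelling responsibilities above, here is a hedged scikit-learn sketch of a basic train/evaluate loop on a synthetic dataset. The features, model choice, and metric are placeholders; a real project would use governed data, richer validation, and documented experiment tracking.

```python
# Hedged scikit-learn sketch: train and evaluate a simple classifier.
# The synthetic data, model, and metric are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a curated, governed feature table
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, preds):.3f}")
```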