AI Data Engineering Lead
The position is described below. If you want to apply, click the Apply Now button at the top or bottom of this page. After you click Apply Now and complete your application, you'll be invited to create a profile, which will let you see your application status and any communications. If you already have a profile with us, you can log in to check your status.

Need Help? If you have a disability and need assistance with the application, you can request a reasonable accommodation. Send an email to Accessibility (accommodation requests only; other inquiries won't receive a response).

Regular or Temporary: Regular
Language Fluency: English (Required)
Work Shift: 1st shift (United States of America)

Please review the following job description:

The AI Data Engineering Lead designs, builds, and maintains the data pipelines, ingestion frameworks, transformation logic, and governed data services that power AI-enabled applications, agentic systems, analytics workflows, and enterprise reporting at The Forge. This is a hands-on engineering role focused on building reliable, scalable, and observable data infrastructure. The engineer works across the ingestion, transformation, storage, retrieval, and delivery layers, ensuring that the right data reaches the right systems in a governed, auditable, and production-ready state. Daily work includes building and maintaining ETL/ELT pipelines, integrating enterprise data sources, implementing data quality and validation logic, supporting AI and agentic retrieval patterns, managing data contracts and schemas, and partnering with engineering, product, and analytics teams to deliver data that is clean, current, trustworthy, and useful.

ESSENTIAL DUTIES AND RESPONSIBILITIES
The following is a summary of the essential functions for this job. Other duties, both major and minor, may be performed that are not mentioned below, and specific activities may change from time to time.
- Design, build, and maintain data pipelines, ingestion workflows, and transformation logic that deliver clean, governed, and reliable data to AI systems, analytics tools, and enterprise consumers.
- Integrate enterprise data sources, including structured databases, APIs, event streams, file systems, and cloud data services, into the Forge data ecosystem using approved patterns.
- Implement data quality checks, validation logic, schema enforcement, and lineage tracking to ensure that data entering AI and analytics systems is accurate, complete, and auditable.
- Build and maintain data models, transformation layers, and serving structures that support AI grounding, retrieval-augmented generation (RAG), agent memory, vector indexing, and analytics delivery.
- Support the design and implementation of data contracts between upstream producers and downstream consumers, including AI agents, applications, dashboards, and reporting tools.
- Optimize pipeline performance, reliability, and cost across batch, streaming, and event-driven data movement patterns.
- Instrument data pipelines with observability, alerting, and monitoring so that data failures, quality degradations, and schema drift are detected and resolved quickly.
- Partner with agentic engineering, application, platform, security, and QA teams to ensure data is delivered in the formats, cadences, and access patterns that production AI workflows require.
- Maintain documentation for pipelines, data models, integration specifications, data dictionaries, and operational runbooks.
- Continuously improve data engineering practices, tooling, and automation as Forge AI capabilities and data volumes scale.
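To make the data quality, schema enforcement, and drift detection duties above concrete, here is a minimal Python sketch of a row-level validation check against a data contract. All names (the `SCHEMA` contract, the `customers` feed, `validate_rows`) are hypothetical illustrations, not systems or tools named in this posting:

```python
from dataclasses import dataclass, field

# Hypothetical data contract: expected fields and types for an
# ingested "customers" feed (illustrative only).
SCHEMA = {"customer_id": int, "email": str, "balance": float}

@dataclass
class QualityReport:
    total: int = 0
    passed: int = 0
    errors: list = field(default_factory=list)

def validate_rows(rows, schema=SCHEMA):
    """Check each row against the contract: the field set must match
    exactly (catching schema drift), and every value must have the
    declared type. Returns a report suitable for alerting."""
    report = QualityReport(total=len(rows))
    for i, row in enumerate(rows):
        row_errors = []
        if set(row) != set(schema):  # schema drift: fields added or missing
            row_errors.append(f"row {i}: fields {sorted(row)} do not match contract")
        else:
            for name, expected in schema.items():
                if not isinstance(row[name], expected):
                    row_errors.append(f"row {i}: {name} is not {expected.__name__}")
        if row_errors:
            report.errors.extend(row_errors)
        else:
            report.passed += 1
    return report
```

In practice this kind of check would run inside a pipeline step, with the resulting report feeding the observability and alerting layer described above rather than being inspected by hand.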
QUALIFICATIONS

Required Qualifications:
The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

1. Bachelor's degree with a minimum of ten years of prior relevant experience in the IT field, including cybersecurity
2. Direct experience with financial services institutions, as well as demonstrable experience protecting the relevant data types (PII, PHI) and meeting the applicable legal requirements (HIPAA, etc.)
3. Ability to evaluate the cyber risk of technical solutions through the analysis of architectural documents
4. Ability to relate business requirements and risks to technical controls, systems, and processes
5. Highly adaptable to a constantly changing business and technology environment
6. Strategic thinker with a big-picture perspective and a broad understanding of information security, risk management, and their direct applications to business processes
7. Excellent leadership skills, with the ability to leverage cross-functional teams to meet defined objectives
8. Outstanding executive presentation and communication skills
9. Extraordinary thought leadership, influencing, and problem-resolution skills
Preferred Qualifications:
- 3+ years of data engineering experience building and supporting production data pipelines, ETL/ELT workflows, and enterprise data services
- Strong programming ability in Python and SQL, with working knowledge of modern data transformation and orchestration frameworks
- Experience integrating structured and unstructured data sources, including relational databases, APIs, cloud storage, and event streaming platforms
- Experience implementing data quality, validation, schema management, and lineage tracking for production data systems
- Familiarity with cloud-native data platforms, managed data services, and modern data warehouse or lakehouse architectures
- Experience with CI/CD-aligned data delivery, version control, and engineering best practices for data infrastructure
- Ability to work across data producers and consumers, including AI systems, analytics platforms, and business applications
- Strong communication skills and the ability to work effectively in cross-functional enterprise delivery teams
- Experience building data infrastructure that supports AI/ML pipelines, vector retrieval, RAG patterns, or agent memory systems
- Experience with Microsoft Fabric, Azure Data Factory, Azure Synapse, Azure AI Search, or comparable enterprise data platforms
- Experience with streaming data patterns using event-driven architectures or platforms such as Kafka, Event Hubs, or equivalent
- Experience in financial services, cybersecurity, or other regulated enterprise environments with strong data governance and audit requirements
- Familiarity with data mesh, data contract, or federated data ownership patterns in large enterprise organizations
- Experience with observability tooling, pipeline monitoring, and data quality frameworks for production data systems
General Description of Available Benefits for Eligible Employees of Truist Financial Corporation: All regular teammates (not temporary or contingent workers) working 20 hours or more per week are eligible for benefits, though eligibility for specific benefits may be determined by the division of Truist offering the position. Truist offers medical, dental, vision, life insurance, disability, accidental death and dismemberment, tax-preferred savings accounts, and a 401(k) plan to teammates. Teammates also receive no less than 10 days of vacation (prorated based on date of hire and by full-time or part-time status) during their first year of employment, along with 10 sick days (also prorated) and paid holidays. For more details on Truist's generous benefit plans, please visit our Benefits site. Depending on the position and division, this job may also be eligible for Truist's defined benefit pension plan, restricted stock units, and/or a deferred compensation plan. As you advance through the hiring process, you will learn more about the specific benefits available for any non-temporary position for which you apply, based on full-time or part-time status, position, and division of work.

Truist is an Equal Opportunity Employer that does not discriminate on the basis of race, gender, color, religion, citizenship or national origin, age, sexual orientation, gender identity, disability, veteran status, or other classification protected by law. Truist is a Drug Free Workplace.

EEO is the Law | E-Verify | IER Right to Work